00:00:00.000 Started by upstream project "autotest-per-patch" build number 132584
00:00:00.000 originally caused by:
00:00:00.001 Started by user sys_sgci
00:00:00.133 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy
00:00:00.134 The recommended git tool is: git
00:00:00.134 using credential 00000000-0000-0000-0000-000000000002
00:00:00.137 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.174 Fetching changes from the remote Git repository
00:00:00.178 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.208 Using shallow fetch with depth 1
00:00:00.208 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.208 > git --version # timeout=10
00:00:00.238 > git --version # 'git version 2.39.2'
00:00:00.238 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.259 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.259 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:07.332 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:07.344 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:07.355 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD)
00:00:07.355 > git config core.sparsecheckout # timeout=10
00:00:07.366 > git read-tree -mu HEAD # timeout=10
00:00:07.383 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5
00:00:07.410 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag"
00:00:07.410 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10
00:00:07.557 [Pipeline] Start of Pipeline
00:00:07.573 [Pipeline] library
00:00:07.575 Loading library shm_lib@master
00:00:07.575 Library shm_lib@master is cached. Copying from home.
00:00:07.594 [Pipeline] node
00:00:07.605 Running on WFP8 in /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:00:07.608 [Pipeline] {
00:00:07.622 [Pipeline] catchError
00:00:07.623 [Pipeline] {
00:00:07.638 [Pipeline] wrap
00:00:07.646 [Pipeline] {
00:00:07.654 [Pipeline] stage
00:00:07.656 [Pipeline] { (Prologue)
00:00:07.998 [Pipeline] sh
00:00:08.289 + logger -p user.info -t JENKINS-CI
00:00:08.304 [Pipeline] echo
00:00:08.305 Node: WFP8
00:00:08.312 [Pipeline] sh
00:00:08.609 [Pipeline] setCustomBuildProperty
00:00:08.624 [Pipeline] echo
00:00:08.626 Cleanup processes
00:00:08.634 [Pipeline] sh
00:00:08.920 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:08.920 1086978 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:08.933 [Pipeline] sh
00:00:09.218 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:09.218 ++ grep -v 'sudo pgrep'
00:00:09.218 ++ awk '{print $1}'
00:00:09.218 + sudo kill -9
00:00:09.218 + true
00:00:09.232 [Pipeline] cleanWs
00:00:09.241 [WS-CLEANUP] Deleting project workspace...
00:00:09.241 [WS-CLEANUP] Deferred wipeout is used...
00:00:09.248 [WS-CLEANUP] done
00:00:09.252 [Pipeline] setCustomBuildProperty
00:00:09.267 [Pipeline] sh
00:00:09.554 + sudo git config --global --replace-all safe.directory '*'
00:00:09.682 [Pipeline] httpRequest
00:00:11.325 [Pipeline] echo
00:00:11.327 Sorcerer 10.211.164.101 is alive
00:00:11.338 [Pipeline] retry
00:00:11.340 [Pipeline] {
00:00:11.355 [Pipeline] httpRequest
00:00:11.359 HttpMethod: GET
00:00:11.360 URL: http://10.211.164.101/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:11.360 Sending request to url: http://10.211.164.101/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:11.386 Response Code: HTTP/1.1 200 OK
00:00:11.386 Success: Status code 200 is in the accepted range: 200,404
00:00:11.387 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:37.425 [Pipeline] }
00:00:37.436 [Pipeline] // retry
00:00:37.442 [Pipeline] sh
00:00:37.724 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:37.743 [Pipeline] httpRequest
00:00:38.923 [Pipeline] echo
00:00:38.925 Sorcerer 10.211.164.101 is alive
00:00:38.935 [Pipeline] retry
00:00:38.936 [Pipeline] {
00:00:38.951 [Pipeline] httpRequest
00:00:38.955 HttpMethod: GET
00:00:38.956 URL: http://10.211.164.101/packages/spdk_27aaaa748bf3f42a0ddd13671ec1d2832ed80239.tar.gz
00:00:38.956 Sending request to url: http://10.211.164.101/packages/spdk_27aaaa748bf3f42a0ddd13671ec1d2832ed80239.tar.gz
00:00:38.967 Response Code: HTTP/1.1 200 OK
00:00:38.967 Success: Status code 200 is in the accepted range: 200,404
00:00:38.967 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_27aaaa748bf3f42a0ddd13671ec1d2832ed80239.tar.gz
00:02:33.262 [Pipeline] }
00:02:33.278 [Pipeline] // retry
00:02:33.284 [Pipeline] sh
00:02:33.571 + tar --no-same-owner -xf spdk_27aaaa748bf3f42a0ddd13671ec1d2832ed80239.tar.gz
00:02:36.124 [Pipeline] sh
00:02:36.408 + git -C spdk log --oneline -n5
00:02:36.408 27aaaa748 lib/reduce: Delete logic of persisting old chunk map
00:02:36.408 37db29af3 lib/reduce: Fix an incorrect chunk map index
00:02:36.409 35cd3e84d bdev/part: Pass through dif_check_flags via dif_check_flags_exclude_mask
00:02:36.409 01a2c4855 bdev/passthru: Pass through dif_check_flags via dif_check_flags_exclude_mask
00:02:36.409 9094b9600 bdev: Assert to check if I/O pass dif_check_flags not enabled by bdev
00:02:36.419 [Pipeline] }
00:02:36.436 [Pipeline] // stage
00:02:36.444 [Pipeline] stage
00:02:36.447 [Pipeline] { (Prepare)
00:02:36.463 [Pipeline] writeFile
00:02:36.479 [Pipeline] sh
00:02:36.762 + logger -p user.info -t JENKINS-CI
00:02:36.776 [Pipeline] sh
00:02:37.061 + logger -p user.info -t JENKINS-CI
00:02:37.074 [Pipeline] sh
00:02:37.359 + cat autorun-spdk.conf
00:02:37.359 SPDK_RUN_FUNCTIONAL_TEST=1
00:02:37.359 SPDK_TEST_NVMF=1
00:02:37.359 SPDK_TEST_NVME_CLI=1
00:02:37.359 SPDK_TEST_NVMF_TRANSPORT=tcp
00:02:37.359 SPDK_TEST_NVMF_NICS=e810
00:02:37.359 SPDK_TEST_VFIOUSER=1
00:02:37.359 SPDK_RUN_UBSAN=1
00:02:37.359 NET_TYPE=phy
00:02:37.367 RUN_NIGHTLY=0
00:02:37.371 [Pipeline] readFile
00:02:37.396 [Pipeline] withEnv
00:02:37.399 [Pipeline] {
00:02:37.411 [Pipeline] sh
00:02:37.698 + set -ex
00:02:37.698 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]]
00:02:37.698 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:02:37.698 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:02:37.698 ++ SPDK_TEST_NVMF=1
00:02:37.698 ++ SPDK_TEST_NVME_CLI=1
00:02:37.698 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:02:37.698 ++ SPDK_TEST_NVMF_NICS=e810
00:02:37.698 ++ SPDK_TEST_VFIOUSER=1
00:02:37.698 ++ SPDK_RUN_UBSAN=1
00:02:37.698 ++ NET_TYPE=phy
00:02:37.698 ++ RUN_NIGHTLY=0
00:02:37.698 + case $SPDK_TEST_NVMF_NICS in
00:02:37.698 + DRIVERS=ice
00:02:37.698 + [[ tcp == \r\d\m\a ]]
00:02:37.698 + [[ -n ice ]]
00:02:37.698 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4
00:02:37.698 rmmod: ERROR: Module mlx4_ib is not currently loaded
00:02:40.992 rmmod: ERROR: Module irdma is not currently loaded
00:02:40.992 rmmod: ERROR: Module i40iw is not currently loaded
00:02:40.992 rmmod: ERROR: Module iw_cxgb4 is not currently loaded
00:02:40.992 + true
00:02:40.992 + for D in $DRIVERS
00:02:40.992 + sudo modprobe ice
00:02:40.992 + exit 0
00:02:41.001 [Pipeline] }
00:02:41.014 [Pipeline] // withEnv
00:02:41.019 [Pipeline] }
00:02:41.030 [Pipeline] // stage
00:02:41.039 [Pipeline] catchError
00:02:41.041 [Pipeline] {
00:02:41.055 [Pipeline] timeout
00:02:41.055 Timeout set to expire in 1 hr 0 min
00:02:41.057 [Pipeline] {
00:02:41.071 [Pipeline] stage
00:02:41.073 [Pipeline] { (Tests)
00:02:41.088 [Pipeline] sh
00:02:41.374 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:02:41.374 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:02:41.374 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest
00:02:41.374 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]]
00:02:41.374 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:02:41.374 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:02:41.374 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]]
00:02:41.374 + [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:02:41.374 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:02:41.374 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:02:41.374 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]]
00:02:41.374 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:02:41.374 + source /etc/os-release
00:02:41.374 ++ NAME='Fedora Linux'
00:02:41.374 ++ VERSION='39 (Cloud Edition)'
00:02:41.374 ++ ID=fedora
00:02:41.374 ++ VERSION_ID=39
00:02:41.374 ++ VERSION_CODENAME=
00:02:41.374 ++ PLATFORM_ID=platform:f39
00:02:41.374 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)'
00:02:41.374 ++ ANSI_COLOR='0;38;2;60;110;180'
00:02:41.374 ++ LOGO=fedora-logo-icon
00:02:41.374 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39
00:02:41.374 ++ HOME_URL=https://fedoraproject.org/
00:02:41.374 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/
00:02:41.374 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:02:41.374 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:02:41.374 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:02:41.374 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39
00:02:41.374 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:02:41.374 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39
00:02:41.374 ++ SUPPORT_END=2024-11-12
00:02:41.374 ++ VARIANT='Cloud Edition'
00:02:41.374 ++ VARIANT_ID=cloud
00:02:41.374 + uname -a
00:02:41.374 Linux spdk-wfp-08 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux
00:02:41.374 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status
00:02:43.912 Hugepages
00:02:43.912 node hugesize free / total
00:02:43.912 node0 1048576kB 0 / 0
00:02:43.912 node0 2048kB 1024 / 1024
00:02:43.912 node1 1048576kB 0 / 0
00:02:43.912 node1 2048kB 1024 / 1024
00:02:43.912
00:02:43.912 Type BDF Vendor Device NUMA Driver Device Block devices
00:02:43.912 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - -
00:02:43.912 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - -
00:02:43.912 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - -
00:02:43.912 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - -
00:02:43.912 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - -
00:02:43.912 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - -
00:02:43.912 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - -
00:02:43.912 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - -
00:02:43.912 NVMe 0000:5e:00.0 8086 0a54 0 nvme nvme0 nvme0n1
00:02:43.912 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - -
00:02:43.912 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - -
00:02:43.912 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - -
00:02:43.912 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - -
00:02:43.912 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - -
00:02:43.912 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - -
00:02:43.912 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - -
00:02:43.912 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - -
00:02:43.912 + rm -f /tmp/spdk-ld-path
00:02:43.912 + source autorun-spdk.conf
00:02:43.912 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:02:43.912 ++ SPDK_TEST_NVMF=1
00:02:43.912 ++ SPDK_TEST_NVME_CLI=1
00:02:43.912 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:02:43.912 ++ SPDK_TEST_NVMF_NICS=e810
00:02:43.912 ++ SPDK_TEST_VFIOUSER=1
00:02:43.912 ++ SPDK_RUN_UBSAN=1
00:02:43.912 ++ NET_TYPE=phy
00:02:43.912 ++ RUN_NIGHTLY=0
00:02:43.912 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:02:43.912 + [[ -n '' ]]
00:02:43.912 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:02:43.912 + for M in /var/spdk/build-*-manifest.txt
00:02:43.912 + [[ -f /var/spdk/build-kernel-manifest.txt ]]
00:02:43.912 + cp /var/spdk/build-kernel-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:02:43.912 + for M in /var/spdk/build-*-manifest.txt
00:02:43.912 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:02:43.912 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:02:43.912 + for M in /var/spdk/build-*-manifest.txt
00:02:43.912 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:02:43.912 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:02:43.912 ++ uname
00:02:43.912 + [[ Linux == \L\i\n\u\x ]]
00:02:43.912 + sudo dmesg -T
00:02:43.912 + sudo dmesg --clear
00:02:43.912 + dmesg_pid=1088450
00:02:43.912 + sudo dmesg -Tw
00:02:43.912 + [[ Fedora Linux == FreeBSD ]]
00:02:43.912 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:02:43.912 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:02:43.912 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:02:43.912 + [[ -x /usr/src/fio-static/fio ]]
00:02:43.912 + export FIO_BIN=/usr/src/fio-static/fio
00:02:43.912 + FIO_BIN=/usr/src/fio-static/fio
00:02:43.912 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]]
00:02:43.912 + [[ ! -v VFIO_QEMU_BIN ]]
00:02:43.912 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:02:43.912 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:02:43.912 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:02:43.912 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:02:43.912 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:02:43.912 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:02:43.912 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:02:44.172 08:01:26 -- common/autotest_common.sh@1692 -- $ [[ n == y ]]
00:02:44.172 08:01:26 -- spdk/autorun.sh@20 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:02:44.172 08:01:26 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1
00:02:44.172 08:01:26 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@2 -- $ SPDK_TEST_NVMF=1
00:02:44.172 08:01:26 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@3 -- $ SPDK_TEST_NVME_CLI=1
00:02:44.172 08:01:26 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@4 -- $ SPDK_TEST_NVMF_TRANSPORT=tcp
00:02:44.172 08:01:26 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@5 -- $ SPDK_TEST_NVMF_NICS=e810
00:02:44.172 08:01:26 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@6 -- $ SPDK_TEST_VFIOUSER=1
00:02:44.172 08:01:26 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@7 -- $ SPDK_RUN_UBSAN=1
00:02:44.172 08:01:26 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@8 -- $ NET_TYPE=phy
00:02:44.172 08:01:26 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@9 -- $ RUN_NIGHTLY=0
00:02:44.172 08:01:26 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT
00:02:44.172 08:01:26 -- spdk/autorun.sh@25 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autobuild.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:02:44.172 08:01:26 -- common/autotest_common.sh@1692 -- $ [[ n == y ]]
00:02:44.172 08:01:26 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:02:44.172 08:01:26 -- scripts/common.sh@15 -- $ shopt -s extglob
00:02:44.172 08:01:26 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]]
00:02:44.172 08:01:26 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:02:44.172 08:01:26 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:02:44.172 08:01:26 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:44.172 08:01:26 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:44.172 08:01:26 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:44.172 08:01:26 -- paths/export.sh@5 -- $ export PATH
00:02:44.172 08:01:26 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:44.172 08:01:26 -- common/autobuild_common.sh@492 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output
00:02:44.172 08:01:26 -- common/autobuild_common.sh@493 -- $ date +%s
00:02:44.172 08:01:26 -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1732777286.XXXXXX
00:02:44.172 08:01:26 -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1732777286.bViHxy
00:02:44.172 08:01:26 -- common/autobuild_common.sh@495 -- $ [[ -n '' ]]
00:02:44.172 08:01:26 -- common/autobuild_common.sh@499 -- $ '[' -n '' ']'
00:02:44.172 08:01:26 -- common/autobuild_common.sh@502 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/'
00:02:44.172 08:01:26 -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp'
00:02:44.172 08:01:26 -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs'
00:02:44.172 08:01:26 -- common/autobuild_common.sh@509 -- $ get_config_params
00:02:44.172 08:01:26 -- common/autotest_common.sh@409 -- $ xtrace_disable
00:02:44.172 08:01:26 -- common/autotest_common.sh@10 -- $ set +x
00:02:44.172 08:01:26 -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user'
00:02:44.172 08:01:26 -- common/autobuild_common.sh@511 -- $ start_monitor_resources
00:02:44.172 08:01:26 -- pm/common@17 -- $ local monitor
00:02:44.172 08:01:26 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:02:44.172 08:01:26 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:02:44.172 08:01:26 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:02:44.172 08:01:26 -- pm/common@21 -- $ date +%s
00:02:44.172 08:01:26 -- pm/common@21 -- $ date +%s
00:02:44.172 08:01:26 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:02:44.172 08:01:26 -- pm/common@25 -- $ sleep 1
00:02:44.172 08:01:26 -- pm/common@21 -- $ date +%s
00:02:44.172 08:01:26 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1732777286
00:02:44.172 08:01:26 -- pm/common@21 -- $ date +%s
00:02:44.172 08:01:26 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1732777286
00:02:44.173 08:01:26 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1732777286
00:02:44.173 08:01:26 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1732777286
00:02:44.173 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1732777286_collect-vmstat.pm.log
00:02:44.173 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1732777286_collect-cpu-load.pm.log
00:02:44.173 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1732777286_collect-cpu-temp.pm.log
00:02:44.173 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1732777286_collect-bmc-pm.bmc.pm.log
00:02:45.113 08:01:27 -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT
00:02:45.113 08:01:27 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:02:45.113 08:01:27 -- spdk/autobuild.sh@12 -- $ umask 022
00:02:45.113 08:01:27 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:02:45.113 08:01:27 -- spdk/autobuild.sh@16 -- $ date -u
00:02:45.113 Thu Nov 28 07:01:27 AM UTC 2024
00:02:45.113 08:01:27 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:02:45.113 v25.01-pre-278-g27aaaa748
00:02:45.113 08:01:27 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']'
00:02:45.113 08:01:27 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:02:45.113 08:01:27 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:02:45.113 08:01:27 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:02:45.113 08:01:27 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:02:45.113 08:01:27 -- common/autotest_common.sh@10 -- $ set +x
00:02:45.373 ************************************
00:02:45.373 START TEST ubsan
00:02:45.373 ************************************
00:02:45.373 08:01:27 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan'
00:02:45.373 using ubsan
00:02:45.373
00:02:45.373 real 0m0.000s
00:02:45.373 user 0m0.000s
00:02:45.373 sys 0m0.000s
00:02:45.373 08:01:27 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable
00:02:45.373 08:01:27 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:02:45.373 ************************************
00:02:45.373 END TEST ubsan
00:02:45.373 ************************************
00:02:45.373 08:01:27 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
00:02:45.373 08:01:27 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
00:02:45.373 08:01:27 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
00:02:45.373 08:01:27 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
00:02:45.373 08:01:27 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
00:02:45.373 08:01:27 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
00:02:45.373 08:01:27 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
00:02:45.373 08:01:27 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
00:02:45.373 08:01:27 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared
00:02:45.373 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk
00:02:45.373 Using default DPDK in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build
00:02:45.632 Using 'verbs' RDMA provider
00:02:58.782 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done.
00:03:11.004 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done.
00:03:11.004 Creating mk/config.mk...done.
00:03:11.004 Creating mk/cc.flags.mk...done.
00:03:11.004 Type 'make' to build.
00:03:11.004 08:01:52 -- spdk/autobuild.sh@70 -- $ run_test make make -j96
00:03:11.004 08:01:52 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:03:11.004 08:01:52 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:03:11.004 08:01:52 -- common/autotest_common.sh@10 -- $ set +x
00:03:11.004 ************************************
00:03:11.004 START TEST make
00:03:11.004 ************************************
00:03:11.004 08:01:52 make -- common/autotest_common.sh@1129 -- $ make -j96
00:03:11.004 make[1]: Nothing to be done for 'all'.
00:03:11.956 The Meson build system
00:03:11.956 Version: 1.5.0
00:03:11.956 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user
00:03:11.956 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:03:11.956 Build type: native build
00:03:11.956 Project name: libvfio-user
00:03:11.956 Project version: 0.0.1
00:03:11.957 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:03:11.957 C linker for the host machine: cc ld.bfd 2.40-14
00:03:11.957 Host machine cpu family: x86_64
00:03:11.957 Host machine cpu: x86_64
00:03:11.957 Run-time dependency threads found: YES
00:03:11.957 Library dl found: YES
00:03:11.957 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:03:11.957 Run-time dependency json-c found: YES 0.17
00:03:11.957 Run-time dependency cmocka found: YES 1.1.7
00:03:11.957 Program pytest-3 found: NO
00:03:11.957 Program flake8 found: NO
00:03:11.957 Program misspell-fixer found: NO
00:03:11.957 Program restructuredtext-lint found: NO
00:03:11.957 Program valgrind found: YES (/usr/bin/valgrind)
00:03:11.957 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:03:11.957 Compiler for C supports arguments -Wmissing-declarations: YES
00:03:11.957 Compiler for C supports arguments -Wwrite-strings: YES
00:03:11.957 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup.
00:03:11.957 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh)
00:03:11.957 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh)
00:03:11.957 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup.
00:03:11.957 Build targets in project: 8
00:03:11.957 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions:
00:03:11.957 * 0.57.0: {'exclude_suites arg in add_test_setup'}
00:03:11.957
00:03:11.957 libvfio-user 0.0.1
00:03:11.957
00:03:11.957 User defined options
00:03:11.957 buildtype : debug
00:03:11.957 default_library: shared
00:03:11.957 libdir : /usr/local/lib
00:03:11.957
00:03:11.957 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:03:12.524 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug'
00:03:12.783 [1/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o
00:03:12.783 [2/37] Compiling C object samples/client.p/.._lib_migration.c.o
00:03:12.783 [3/37] Compiling C object samples/null.p/null.c.o
00:03:12.783 [4/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o
00:03:12.783 [5/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o
00:03:12.783 [6/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o
00:03:12.783 [7/37] Compiling C object samples/lspci.p/lspci.c.o
00:03:12.783 [8/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o
00:03:12.783 [9/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o
00:03:12.783 [10/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o
00:03:12.783 [11/37] Compiling C object samples/client.p/.._lib_tran.c.o
00:03:12.783 [12/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o
00:03:12.783 [13/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o
00:03:12.783 [14/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o
00:03:12.783 [15/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o
00:03:12.783 [16/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o
00:03:12.783 [17/37] Compiling C object test/unit_tests.p/mocks.c.o
00:03:12.783 [18/37] Compiling C object test/unit_tests.p/unit-tests.c.o
00:03:12.783 [19/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o
00:03:12.783 [20/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o
00:03:12.783 [21/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o
00:03:12.783 [22/37] Compiling C object samples/server.p/server.c.o
00:03:12.783 [23/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o
00:03:12.783 [24/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o
00:03:12.783 [25/37] Compiling C object samples/client.p/client.c.o
00:03:12.783 [26/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o
00:03:12.783 [27/37] Linking target samples/client
00:03:12.783 [28/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o
00:03:12.783 [29/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o
00:03:12.783 [30/37] Linking target test/unit_tests
00:03:12.783 [31/37] Linking target lib/libvfio-user.so.0.0.1
00:03:13.041 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols
00:03:13.041 [33/37] Linking target samples/server
00:03:13.041 [34/37] Linking target samples/null
00:03:13.041 [35/37] Linking target samples/shadow_ioeventfd_server
00:03:13.041 [36/37] Linking target samples/lspci
00:03:13.041 [37/37] Linking target samples/gpio-pci-idio-16
00:03:13.041 INFO: autodetecting backend as ninja
00:03:13.041 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:03:13.609 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug'
00:03:13.609 ninja: no work to do.
00:03:18.886 The Meson build system
00:03:18.886 Version: 1.5.0
00:03:18.886 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk
00:03:18.886 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp
00:03:18.886 Build type: native build
00:03:18.886 Program cat found: YES (/usr/bin/cat)
00:03:18.886 Project name: DPDK
00:03:18.886 Project version: 24.03.0
00:03:18.886 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:03:18.886 C linker for the host machine: cc ld.bfd 2.40-14
00:03:18.886 Host machine cpu family: x86_64
00:03:18.886 Host machine cpu: x86_64
00:03:18.886 Message: ## Building in Developer Mode ##
00:03:18.886 Program pkg-config found: YES (/usr/bin/pkg-config)
00:03:18.886 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh)
00:03:18.886 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh)
00:03:18.886 Program python3 found: YES (/usr/bin/python3)
00:03:18.886 Program cat found: YES (/usr/bin/cat)
00:03:18.886 Compiler for C supports arguments -march=native: YES
00:03:18.886 Checking for size of "void *" : 8
00:03:18.886 Checking for size of "void *" : 8 (cached)
00:03:18.886 Compiler for C supports link arguments -Wl,--undefined-version: YES
00:03:18.886 Library m found: YES
00:03:18.886 Library numa found: YES
00:03:18.886 Has header "numaif.h" : YES
00:03:18.886 Library fdt found: NO
00:03:18.886 Library execinfo found: NO
00:03:18.886 Has header "execinfo.h" : YES
00:03:18.886 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:03:18.886 Run-time dependency libarchive found: NO (tried pkgconfig)
00:03:18.886 Run-time dependency libbsd found: NO (tried pkgconfig)
00:03:18.886 Run-time dependency jansson found: NO (tried pkgconfig)
00:03:18.886 Run-time dependency openssl found: YES 3.1.1
00:03:18.886 Run-time dependency libpcap found: YES 1.10.4
00:03:18.886 Has header "pcap.h" with dependency libpcap: YES
00:03:18.886 Compiler for C supports arguments -Wcast-qual: YES
00:03:18.886 Compiler for C supports arguments -Wdeprecated: YES
00:03:18.886 Compiler for C supports arguments -Wformat: YES
00:03:18.886 Compiler for C supports arguments -Wformat-nonliteral: NO
00:03:18.886 Compiler for C supports arguments -Wformat-security: NO
00:03:18.886 Compiler for C supports arguments -Wmissing-declarations: YES
00:03:18.886 Compiler for C supports arguments -Wmissing-prototypes: YES
00:03:18.886 Compiler for C supports arguments -Wnested-externs: YES
00:03:18.886 Compiler for C supports arguments -Wold-style-definition: YES
00:03:18.886 Compiler for C supports arguments -Wpointer-arith: YES
00:03:18.886 Compiler for C supports arguments -Wsign-compare: YES
00:03:18.886 Compiler for C supports arguments -Wstrict-prototypes: YES
00:03:18.886 Compiler for C supports arguments -Wundef: YES
00:03:18.886 Compiler for C supports arguments -Wwrite-strings: YES
00:03:18.886 Compiler for C supports arguments -Wno-address-of-packed-member: YES
00:03:18.886 Compiler for C supports arguments -Wno-packed-not-aligned: YES
00:03:18.886 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:03:18.886 Compiler for C supports arguments -Wno-zero-length-bounds: YES
00:03:18.886 Program objdump found: YES (/usr/bin/objdump)
00:03:18.886 Compiler for C supports arguments -mavx512f: YES
00:03:18.886 Checking if "AVX512 checking" compiles: YES
00:03:18.886 Fetching value of define "__SSE4_2__" : 1
00:03:18.886 Fetching value of define "__AES__" : 1
00:03:18.886 Fetching value of define "__AVX__" : 1
00:03:18.886 Fetching value of define "__AVX2__" : 1
00:03:18.886 Fetching value of define "__AVX512BW__" : 1
00:03:18.886 Fetching value of define "__AVX512CD__" : 1
00:03:18.886 Fetching value of define "__AVX512DQ__" : 1
00:03:18.886 Fetching value of define "__AVX512F__" : 1
00:03:18.886 Fetching value of define "__AVX512VL__" : 1
00:03:18.886 Fetching value of define "__PCLMUL__" : 1
00:03:18.886 Fetching value of define "__RDRND__" : 1
00:03:18.886 Fetching value of define "__RDSEED__" : 1
00:03:18.886 Fetching value of define "__VPCLMULQDQ__" : (undefined)
00:03:18.886 Fetching value of define "__znver1__" : (undefined)
00:03:18.886 Fetching value of define "__znver2__" : (undefined)
00:03:18.886 Fetching value of define "__znver3__" : (undefined)
00:03:18.886 Fetching value of define "__znver4__" : (undefined)
00:03:18.886 Compiler for C supports arguments -Wno-format-truncation: YES
00:03:18.886 Message: lib/log: Defining dependency "log"
00:03:18.886 Message: lib/kvargs: Defining dependency "kvargs"
00:03:18.886 Message: lib/telemetry: Defining dependency "telemetry"
00:03:18.886 Checking for function "getentropy" : NO
00:03:18.886 Message: lib/eal: Defining dependency "eal"
00:03:18.886 Message: lib/ring: Defining dependency "ring"
00:03:18.886 Message: lib/rcu: Defining dependency "rcu"
00:03:18.886 Message: lib/mempool: Defining dependency "mempool"
00:03:18.886 Message: lib/mbuf: Defining dependency "mbuf"
00:03:18.886 Fetching value of define "__PCLMUL__" : 1 (cached)
00:03:18.886 Fetching value of define "__AVX512F__" : 1 (cached)
00:03:18.886 Fetching value of define "__AVX512BW__" : 1 (cached)
00:03:18.886 Fetching value of define "__AVX512DQ__" : 1 (cached)
00:03:18.886 Fetching value of define "__AVX512VL__" : 1 (cached)
00:03:18.886 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached)
00:03:18.886 Compiler for C supports arguments -mpclmul: YES
00:03:18.886 Compiler for C supports arguments -maes: YES
00:03:18.886 Compiler for C supports arguments -mavx512f: YES (cached)
00:03:18.886 Compiler for C supports arguments -mavx512bw: YES
00:03:18.886 Compiler for C supports arguments -mavx512dq: YES
00:03:18.886 Compiler for C supports arguments -mavx512vl: YES
00:03:18.886 Compiler for C supports arguments
-mvpclmulqdq: YES 00:03:18.886 Compiler for C supports arguments -mavx2: YES 00:03:18.886 Compiler for C supports arguments -mavx: YES 00:03:18.886 Message: lib/net: Defining dependency "net" 00:03:18.887 Message: lib/meter: Defining dependency "meter" 00:03:18.887 Message: lib/ethdev: Defining dependency "ethdev" 00:03:18.887 Message: lib/pci: Defining dependency "pci" 00:03:18.887 Message: lib/cmdline: Defining dependency "cmdline" 00:03:18.887 Message: lib/hash: Defining dependency "hash" 00:03:18.887 Message: lib/timer: Defining dependency "timer" 00:03:18.887 Message: lib/compressdev: Defining dependency "compressdev" 00:03:18.887 Message: lib/cryptodev: Defining dependency "cryptodev" 00:03:18.887 Message: lib/dmadev: Defining dependency "dmadev" 00:03:18.887 Compiler for C supports arguments -Wno-cast-qual: YES 00:03:18.887 Message: lib/power: Defining dependency "power" 00:03:18.887 Message: lib/reorder: Defining dependency "reorder" 00:03:18.887 Message: lib/security: Defining dependency "security" 00:03:18.887 Has header "linux/userfaultfd.h" : YES 00:03:18.887 Has header "linux/vduse.h" : YES 00:03:18.887 Message: lib/vhost: Defining dependency "vhost" 00:03:18.887 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:03:18.887 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:03:18.887 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:03:18.887 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:03:18.887 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:03:18.887 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:03:18.887 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:03:18.887 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:03:18.887 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:03:18.887 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 
00:03:18.887 Program doxygen found: YES (/usr/local/bin/doxygen) 00:03:18.887 Configuring doxy-api-html.conf using configuration 00:03:18.887 Configuring doxy-api-man.conf using configuration 00:03:18.887 Program mandb found: YES (/usr/bin/mandb) 00:03:18.887 Program sphinx-build found: NO 00:03:18.887 Configuring rte_build_config.h using configuration 00:03:18.887 Message: 00:03:18.887 ================= 00:03:18.887 Applications Enabled 00:03:18.887 ================= 00:03:18.887 00:03:18.887 apps: 00:03:18.887 00:03:18.887 00:03:18.887 Message: 00:03:18.887 ================= 00:03:18.887 Libraries Enabled 00:03:18.887 ================= 00:03:18.887 00:03:18.887 libs: 00:03:18.887 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:03:18.887 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:03:18.887 cryptodev, dmadev, power, reorder, security, vhost, 00:03:18.887 00:03:18.887 Message: 00:03:18.887 =============== 00:03:18.887 Drivers Enabled 00:03:18.887 =============== 00:03:18.887 00:03:18.887 common: 00:03:18.887 00:03:18.887 bus: 00:03:18.887 pci, vdev, 00:03:18.887 mempool: 00:03:18.887 ring, 00:03:18.887 dma: 00:03:18.887 00:03:18.887 net: 00:03:18.887 00:03:18.887 crypto: 00:03:18.887 00:03:18.887 compress: 00:03:18.887 00:03:18.887 vdpa: 00:03:18.887 00:03:18.887 00:03:18.887 Message: 00:03:18.887 ================= 00:03:18.887 Content Skipped 00:03:18.887 ================= 00:03:18.887 00:03:18.887 apps: 00:03:18.887 dumpcap: explicitly disabled via build config 00:03:18.887 graph: explicitly disabled via build config 00:03:18.887 pdump: explicitly disabled via build config 00:03:18.887 proc-info: explicitly disabled via build config 00:03:18.887 test-acl: explicitly disabled via build config 00:03:18.887 test-bbdev: explicitly disabled via build config 00:03:18.887 test-cmdline: explicitly disabled via build config 00:03:18.887 test-compress-perf: explicitly disabled via build config 00:03:18.887 test-crypto-perf: explicitly disabled 
via build config 00:03:18.887 test-dma-perf: explicitly disabled via build config 00:03:18.887 test-eventdev: explicitly disabled via build config 00:03:18.887 test-fib: explicitly disabled via build config 00:03:18.887 test-flow-perf: explicitly disabled via build config 00:03:18.887 test-gpudev: explicitly disabled via build config 00:03:18.887 test-mldev: explicitly disabled via build config 00:03:18.887 test-pipeline: explicitly disabled via build config 00:03:18.887 test-pmd: explicitly disabled via build config 00:03:18.887 test-regex: explicitly disabled via build config 00:03:18.887 test-sad: explicitly disabled via build config 00:03:18.887 test-security-perf: explicitly disabled via build config 00:03:18.887 00:03:18.887 libs: 00:03:18.887 argparse: explicitly disabled via build config 00:03:18.887 metrics: explicitly disabled via build config 00:03:18.887 acl: explicitly disabled via build config 00:03:18.887 bbdev: explicitly disabled via build config 00:03:18.887 bitratestats: explicitly disabled via build config 00:03:18.887 bpf: explicitly disabled via build config 00:03:18.887 cfgfile: explicitly disabled via build config 00:03:18.887 distributor: explicitly disabled via build config 00:03:18.887 efd: explicitly disabled via build config 00:03:18.887 eventdev: explicitly disabled via build config 00:03:18.887 dispatcher: explicitly disabled via build config 00:03:18.887 gpudev: explicitly disabled via build config 00:03:18.887 gro: explicitly disabled via build config 00:03:18.887 gso: explicitly disabled via build config 00:03:18.887 ip_frag: explicitly disabled via build config 00:03:18.887 jobstats: explicitly disabled via build config 00:03:18.887 latencystats: explicitly disabled via build config 00:03:18.887 lpm: explicitly disabled via build config 00:03:18.887 member: explicitly disabled via build config 00:03:18.887 pcapng: explicitly disabled via build config 00:03:18.887 rawdev: explicitly disabled via build config 00:03:18.887 regexdev: 
explicitly disabled via build config 00:03:18.887 mldev: explicitly disabled via build config 00:03:18.887 rib: explicitly disabled via build config 00:03:18.887 sched: explicitly disabled via build config 00:03:18.887 stack: explicitly disabled via build config 00:03:18.887 ipsec: explicitly disabled via build config 00:03:18.887 pdcp: explicitly disabled via build config 00:03:18.887 fib: explicitly disabled via build config 00:03:18.887 port: explicitly disabled via build config 00:03:18.887 pdump: explicitly disabled via build config 00:03:18.887 table: explicitly disabled via build config 00:03:18.887 pipeline: explicitly disabled via build config 00:03:18.887 graph: explicitly disabled via build config 00:03:18.887 node: explicitly disabled via build config 00:03:18.887 00:03:18.887 drivers: 00:03:18.887 common/cpt: not in enabled drivers build config 00:03:18.887 common/dpaax: not in enabled drivers build config 00:03:18.887 common/iavf: not in enabled drivers build config 00:03:18.887 common/idpf: not in enabled drivers build config 00:03:18.887 common/ionic: not in enabled drivers build config 00:03:18.887 common/mvep: not in enabled drivers build config 00:03:18.887 common/octeontx: not in enabled drivers build config 00:03:18.887 bus/auxiliary: not in enabled drivers build config 00:03:18.887 bus/cdx: not in enabled drivers build config 00:03:18.887 bus/dpaa: not in enabled drivers build config 00:03:18.887 bus/fslmc: not in enabled drivers build config 00:03:18.887 bus/ifpga: not in enabled drivers build config 00:03:18.887 bus/platform: not in enabled drivers build config 00:03:18.887 bus/uacce: not in enabled drivers build config 00:03:18.887 bus/vmbus: not in enabled drivers build config 00:03:18.887 common/cnxk: not in enabled drivers build config 00:03:18.887 common/mlx5: not in enabled drivers build config 00:03:18.887 common/nfp: not in enabled drivers build config 00:03:18.887 common/nitrox: not in enabled drivers build config 00:03:18.887 
common/qat: not in enabled drivers build config 00:03:18.887 common/sfc_efx: not in enabled drivers build config 00:03:18.887 mempool/bucket: not in enabled drivers build config 00:03:18.887 mempool/cnxk: not in enabled drivers build config 00:03:18.887 mempool/dpaa: not in enabled drivers build config 00:03:18.887 mempool/dpaa2: not in enabled drivers build config 00:03:18.887 mempool/octeontx: not in enabled drivers build config 00:03:18.887 mempool/stack: not in enabled drivers build config 00:03:18.887 dma/cnxk: not in enabled drivers build config 00:03:18.887 dma/dpaa: not in enabled drivers build config 00:03:18.887 dma/dpaa2: not in enabled drivers build config 00:03:18.887 dma/hisilicon: not in enabled drivers build config 00:03:18.887 dma/idxd: not in enabled drivers build config 00:03:18.887 dma/ioat: not in enabled drivers build config 00:03:18.887 dma/skeleton: not in enabled drivers build config 00:03:18.887 net/af_packet: not in enabled drivers build config 00:03:18.887 net/af_xdp: not in enabled drivers build config 00:03:18.887 net/ark: not in enabled drivers build config 00:03:18.887 net/atlantic: not in enabled drivers build config 00:03:18.887 net/avp: not in enabled drivers build config 00:03:18.887 net/axgbe: not in enabled drivers build config 00:03:18.887 net/bnx2x: not in enabled drivers build config 00:03:18.887 net/bnxt: not in enabled drivers build config 00:03:18.887 net/bonding: not in enabled drivers build config 00:03:18.887 net/cnxk: not in enabled drivers build config 00:03:18.887 net/cpfl: not in enabled drivers build config 00:03:18.888 net/cxgbe: not in enabled drivers build config 00:03:18.888 net/dpaa: not in enabled drivers build config 00:03:18.888 net/dpaa2: not in enabled drivers build config 00:03:18.888 net/e1000: not in enabled drivers build config 00:03:18.888 net/ena: not in enabled drivers build config 00:03:18.888 net/enetc: not in enabled drivers build config 00:03:18.888 net/enetfec: not in enabled drivers build 
config 00:03:18.888 net/enic: not in enabled drivers build config 00:03:18.888 net/failsafe: not in enabled drivers build config 00:03:18.888 net/fm10k: not in enabled drivers build config 00:03:18.888 net/gve: not in enabled drivers build config 00:03:18.888 net/hinic: not in enabled drivers build config 00:03:18.888 net/hns3: not in enabled drivers build config 00:03:18.888 net/i40e: not in enabled drivers build config 00:03:18.888 net/iavf: not in enabled drivers build config 00:03:18.888 net/ice: not in enabled drivers build config 00:03:18.888 net/idpf: not in enabled drivers build config 00:03:18.888 net/igc: not in enabled drivers build config 00:03:18.888 net/ionic: not in enabled drivers build config 00:03:18.888 net/ipn3ke: not in enabled drivers build config 00:03:18.888 net/ixgbe: not in enabled drivers build config 00:03:18.888 net/mana: not in enabled drivers build config 00:03:18.888 net/memif: not in enabled drivers build config 00:03:18.888 net/mlx4: not in enabled drivers build config 00:03:18.888 net/mlx5: not in enabled drivers build config 00:03:18.888 net/mvneta: not in enabled drivers build config 00:03:18.888 net/mvpp2: not in enabled drivers build config 00:03:18.888 net/netvsc: not in enabled drivers build config 00:03:18.888 net/nfb: not in enabled drivers build config 00:03:18.888 net/nfp: not in enabled drivers build config 00:03:18.888 net/ngbe: not in enabled drivers build config 00:03:18.888 net/null: not in enabled drivers build config 00:03:18.888 net/octeontx: not in enabled drivers build config 00:03:18.888 net/octeon_ep: not in enabled drivers build config 00:03:18.888 net/pcap: not in enabled drivers build config 00:03:18.888 net/pfe: not in enabled drivers build config 00:03:18.888 net/qede: not in enabled drivers build config 00:03:18.888 net/ring: not in enabled drivers build config 00:03:18.888 net/sfc: not in enabled drivers build config 00:03:18.888 net/softnic: not in enabled drivers build config 00:03:18.888 net/tap: 
not in enabled drivers build config 00:03:18.888 net/thunderx: not in enabled drivers build config 00:03:18.888 net/txgbe: not in enabled drivers build config 00:03:18.888 net/vdev_netvsc: not in enabled drivers build config 00:03:18.888 net/vhost: not in enabled drivers build config 00:03:18.888 net/virtio: not in enabled drivers build config 00:03:18.888 net/vmxnet3: not in enabled drivers build config 00:03:18.888 raw/*: missing internal dependency, "rawdev" 00:03:18.888 crypto/armv8: not in enabled drivers build config 00:03:18.888 crypto/bcmfs: not in enabled drivers build config 00:03:18.888 crypto/caam_jr: not in enabled drivers build config 00:03:18.888 crypto/ccp: not in enabled drivers build config 00:03:18.888 crypto/cnxk: not in enabled drivers build config 00:03:18.888 crypto/dpaa_sec: not in enabled drivers build config 00:03:18.888 crypto/dpaa2_sec: not in enabled drivers build config 00:03:18.888 crypto/ipsec_mb: not in enabled drivers build config 00:03:18.888 crypto/mlx5: not in enabled drivers build config 00:03:18.888 crypto/mvsam: not in enabled drivers build config 00:03:18.888 crypto/nitrox: not in enabled drivers build config 00:03:18.888 crypto/null: not in enabled drivers build config 00:03:18.888 crypto/octeontx: not in enabled drivers build config 00:03:18.888 crypto/openssl: not in enabled drivers build config 00:03:18.888 crypto/scheduler: not in enabled drivers build config 00:03:18.888 crypto/uadk: not in enabled drivers build config 00:03:18.888 crypto/virtio: not in enabled drivers build config 00:03:18.888 compress/isal: not in enabled drivers build config 00:03:18.888 compress/mlx5: not in enabled drivers build config 00:03:18.888 compress/nitrox: not in enabled drivers build config 00:03:18.888 compress/octeontx: not in enabled drivers build config 00:03:18.888 compress/zlib: not in enabled drivers build config 00:03:18.888 regex/*: missing internal dependency, "regexdev" 00:03:18.888 ml/*: missing internal dependency, "mldev" 
00:03:18.888 vdpa/ifc: not in enabled drivers build config 00:03:18.888 vdpa/mlx5: not in enabled drivers build config 00:03:18.888 vdpa/nfp: not in enabled drivers build config 00:03:18.888 vdpa/sfc: not in enabled drivers build config 00:03:18.888 event/*: missing internal dependency, "eventdev" 00:03:18.888 baseband/*: missing internal dependency, "bbdev" 00:03:18.888 gpu/*: missing internal dependency, "gpudev" 00:03:18.888 00:03:18.888 00:03:18.888 Build targets in project: 85 00:03:18.888 00:03:18.888 DPDK 24.03.0 00:03:18.888 00:03:18.888 User defined options 00:03:18.888 buildtype : debug 00:03:18.888 default_library : shared 00:03:18.888 libdir : lib 00:03:18.888 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:03:18.888 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:03:18.888 c_link_args : 00:03:18.888 cpu_instruction_set: native 00:03:18.888 disable_apps : test-dma-perf,test,test-sad,test-acl,test-pmd,test-mldev,test-compress-perf,test-cmdline,test-regex,test-fib,graph,test-bbdev,dumpcap,test-gpudev,proc-info,test-pipeline,test-flow-perf,test-crypto-perf,pdump,test-eventdev,test-security-perf 00:03:18.888 disable_libs : port,lpm,ipsec,regexdev,dispatcher,argparse,bitratestats,rawdev,stack,graph,acl,bbdev,pipeline,member,sched,pcapng,mldev,eventdev,efd,metrics,latencystats,cfgfile,ip_frag,jobstats,pdump,pdcp,rib,node,fib,distributor,gso,table,bpf,gpudev,gro 00:03:18.888 enable_docs : false 00:03:18.888 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm 00:03:18.888 enable_kmods : false 00:03:18.888 max_lcores : 128 00:03:18.888 tests : false 00:03:18.888 00:03:18.888 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:03:19.460 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp' 00:03:19.460 [1/268] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:03:19.460 [2/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:03:19.460 [3/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:03:19.460 [4/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:03:19.460 [5/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:03:19.460 [6/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:03:19.460 [7/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:03:19.460 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:03:19.460 [9/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:03:19.460 [10/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:03:19.460 [11/268] Linking static target lib/librte_kvargs.a 00:03:19.460 [12/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:03:19.460 [13/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:03:19.460 [14/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:03:19.460 [15/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:03:19.460 [16/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:03:19.460 [17/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:03:19.460 [18/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:03:19.717 [19/268] Linking static target lib/librte_log.a 00:03:19.717 [20/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:03:19.717 [21/268] Linking static target lib/librte_pci.a 00:03:19.717 [22/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:03:19.717 [23/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:03:19.717 [24/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:03:19.717 [25/268] 
Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:03:19.717 [26/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:03:19.978 [27/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:03:19.978 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:03:19.978 [29/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:03:19.978 [30/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:03:19.978 [31/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:03:19.978 [32/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:03:19.978 [33/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:03:19.978 [34/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:03:19.978 [35/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:03:19.978 [36/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:03:19.978 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:03:19.978 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:03:19.978 [39/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:03:19.978 [40/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:03:19.978 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:03:19.978 [42/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:03:19.978 [43/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:03:19.978 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:03:19.978 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:03:19.978 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 
00:03:19.978 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:03:19.978 [48/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:03:19.978 [49/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:03:19.978 [50/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:03:19.978 [51/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:03:19.978 [52/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:03:19.978 [53/268] Linking static target lib/librte_meter.a 00:03:19.978 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:03:19.978 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:03:19.978 [56/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:03:19.978 [57/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:03:19.978 [58/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:03:19.978 [59/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:03:19.978 [60/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:03:19.978 [61/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:03:19.978 [62/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:03:19.978 [63/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:03:19.978 [64/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:03:19.978 [65/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:03:19.978 [66/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:03:19.978 [67/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:03:19.978 [68/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:03:19.978 [69/268] Compiling C object 
lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:03:19.978 [70/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:03:19.978 [71/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:03:19.978 [72/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:03:19.978 [73/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:03:19.978 [74/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:03:19.978 [75/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:03:19.978 [76/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:03:19.978 [77/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:03:19.978 [78/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:03:19.978 [79/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:03:19.978 [80/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:03:19.978 [81/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:03:19.978 [82/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:03:19.978 [83/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:03:19.978 [84/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:03:19.978 [85/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:03:19.978 [86/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:03:19.978 [87/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:03:19.978 [88/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:03:19.978 [89/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:03:19.978 [90/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:03:19.978 [91/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 
00:03:19.978 [92/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:03:19.978 [93/268] Linking static target lib/librte_ring.a 00:03:20.238 [94/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:03:20.238 [95/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:03:20.238 [96/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:03:20.238 [97/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:03:20.238 [98/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:03:20.238 [99/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:03:20.238 [100/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:03:20.238 [101/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:03:20.238 [102/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:03:20.238 [103/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:03:20.238 [104/268] Linking static target lib/librte_telemetry.a 00:03:20.238 [105/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:03:20.238 [106/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:03:20.238 [107/268] Linking static target lib/librte_net.a 00:03:20.238 [108/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:03:20.238 [109/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:03:20.238 [110/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:03:20.238 [111/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:03:20.238 [112/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:03:20.238 [113/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:03:20.238 [114/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:03:20.238 [115/268] Compiling C object 
lib/librte_vhost.a.p/vhost_fd_man.c.o 00:03:20.238 [116/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:03:20.238 [117/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:03:20.238 [118/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:03:20.238 [119/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:03:20.238 [120/268] Linking static target lib/librte_mempool.a 00:03:20.238 [121/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:03:20.238 [122/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:03:20.238 [123/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:03:20.238 [124/268] Linking static target lib/librte_rcu.a 00:03:20.238 [125/268] Linking static target lib/librte_eal.a 00:03:20.238 [126/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:03:20.238 [127/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:03:20.238 [128/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:03:20.238 [129/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:03:20.238 [130/268] Linking static target lib/librte_cmdline.a 00:03:20.238 [131/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:03:20.238 [132/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:03:20.238 [133/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:03:20.238 [134/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:03:20.238 [135/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:03:20.238 [136/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:03:20.238 [137/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:03:20.238 [138/268] Linking target 
lib/librte_log.so.24.1 00:03:20.238 [139/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:03:20.238 [140/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:03:20.238 [141/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:03:20.238 [142/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:03:20.496 [143/268] Linking static target lib/librte_mbuf.a 00:03:20.496 [144/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:03:20.496 [145/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:03:20.496 [146/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:03:20.496 [147/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:03:20.496 [148/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:03:20.496 [149/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:03:20.496 [150/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:03:20.496 [151/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:03:20.496 [152/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:03:20.496 [153/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:03:20.496 [154/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:03:20.496 [155/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:03:20.496 [156/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:03:20.496 [157/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:03:20.496 [158/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:03:20.496 [159/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:03:20.496 [160/268] Linking 
static target drivers/libtmp_rte_bus_vdev.a 00:03:20.496 [161/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:03:20.496 [162/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:03:20.496 [163/268] Linking static target lib/librte_compressdev.a 00:03:20.496 [164/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:03:20.496 [165/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:03:20.496 [166/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:03:20.496 [167/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:03:20.496 [168/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:03:20.496 [169/268] Linking static target lib/librte_timer.a 00:03:20.496 [170/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:03:20.496 [171/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:03:20.496 [172/268] Linking target lib/librte_kvargs.so.24.1 00:03:20.496 [173/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:03:20.496 [174/268] Linking target lib/librte_telemetry.so.24.1 00:03:20.496 [175/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:03:20.496 [176/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:03:20.496 [177/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:03:20.496 [178/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:03:20.496 [179/268] Linking static target lib/librte_power.a 00:03:20.496 [180/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:03:20.496 [181/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:03:20.496 [182/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:03:20.496 [183/268] 
Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:03:20.496 [184/268] Linking static target lib/librte_dmadev.a 00:03:20.496 [185/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:03:20.755 [186/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:03:20.755 [187/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:03:20.755 [188/268] Linking static target lib/librte_reorder.a 00:03:20.755 [189/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:03:20.755 [190/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:03:20.755 [191/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:03:20.755 [192/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:03:20.755 [193/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:03:20.755 [194/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:20.755 [195/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:20.755 [196/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:03:20.755 [197/268] Linking static target drivers/librte_bus_vdev.a 00:03:20.755 [198/268] Linking static target lib/librte_security.a 00:03:20.755 [199/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:03:20.755 [200/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:03:20.755 [201/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:03:20.755 [202/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:20.755 [203/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:03:20.755 [204/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:20.755 [205/268] Linking static target 
lib/librte_hash.a 00:03:20.755 [206/268] Linking static target drivers/librte_mempool_ring.a 00:03:20.755 [207/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:20.755 [208/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:20.755 [209/268] Linking static target drivers/librte_bus_pci.a 00:03:20.755 [210/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:03:20.755 [211/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:03:20.755 [212/268] Linking static target lib/librte_cryptodev.a 00:03:21.013 [213/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:03:21.013 [214/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:03:21.013 [215/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:21.013 [216/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:03:21.013 [217/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:03:21.272 [218/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:03:21.272 [219/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:21.272 [220/268] Linking static target lib/librte_ethdev.a 00:03:21.272 [221/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:21.272 [222/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:03:21.272 [223/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:03:21.532 [224/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:03:21.532 [225/268] Compiling C object 
lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:03:21.532 [226/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:03:21.791 [227/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:03:22.360 [228/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:03:22.360 [229/268] Linking static target lib/librte_vhost.a 00:03:22.620 [230/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:24.524 [231/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:03:28.723 [232/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:30.101 [233/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:03:30.101 [234/268] Linking target lib/librte_eal.so.24.1 00:03:30.101 [235/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:03:30.101 [236/268] Linking target lib/librte_ring.so.24.1 00:03:30.101 [237/268] Linking target lib/librte_pci.so.24.1 00:03:30.101 [238/268] Linking target drivers/librte_bus_vdev.so.24.1 00:03:30.101 [239/268] Linking target lib/librte_dmadev.so.24.1 00:03:30.101 [240/268] Linking target lib/librte_timer.so.24.1 00:03:30.101 [241/268] Linking target lib/librte_meter.so.24.1 00:03:30.101 [242/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:03:30.101 [243/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:03:30.101 [244/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:03:30.101 [245/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:03:30.101 [246/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:03:30.101 [247/268] Linking target lib/librte_mempool.so.24.1 00:03:30.101 
[248/268] Linking target lib/librte_rcu.so.24.1 00:03:30.101 [249/268] Linking target drivers/librte_bus_pci.so.24.1 00:03:30.360 [250/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:03:30.360 [251/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:03:30.360 [252/268] Linking target drivers/librte_mempool_ring.so.24.1 00:03:30.360 [253/268] Linking target lib/librte_mbuf.so.24.1 00:03:30.360 [254/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:03:30.620 [255/268] Linking target lib/librte_compressdev.so.24.1 00:03:30.620 [256/268] Linking target lib/librte_reorder.so.24.1 00:03:30.620 [257/268] Linking target lib/librte_cryptodev.so.24.1 00:03:30.620 [258/268] Linking target lib/librte_net.so.24.1 00:03:30.620 [259/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:03:30.620 [260/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:03:30.620 [261/268] Linking target lib/librte_hash.so.24.1 00:03:30.620 [262/268] Linking target lib/librte_cmdline.so.24.1 00:03:30.620 [263/268] Linking target lib/librte_security.so.24.1 00:03:30.620 [264/268] Linking target lib/librte_ethdev.so.24.1 00:03:30.880 [265/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:03:30.880 [266/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:03:30.880 [267/268] Linking target lib/librte_power.so.24.1 00:03:30.880 [268/268] Linking target lib/librte_vhost.so.24.1 00:03:30.880 INFO: autodetecting backend as ninja 00:03:30.880 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp -j 96 00:03:43.094 CC lib/log/log.o 00:03:43.094 CC lib/log/log_flags.o 00:03:43.094 CC lib/log/log_deprecated.o 00:03:43.094 CC lib/ut_mock/mock.o 00:03:43.094 CC lib/ut/ut.o 00:03:43.094 
LIB libspdk_ut_mock.a 00:03:43.094 LIB libspdk_log.a 00:03:43.094 SO libspdk_ut_mock.so.6.0 00:03:43.094 SO libspdk_log.so.7.1 00:03:43.094 LIB libspdk_ut.a 00:03:43.094 SYMLINK libspdk_ut_mock.so 00:03:43.094 SO libspdk_ut.so.2.0 00:03:43.094 SYMLINK libspdk_log.so 00:03:43.094 SYMLINK libspdk_ut.so 00:03:43.094 CC lib/dma/dma.o 00:03:43.094 CC lib/ioat/ioat.o 00:03:43.094 CC lib/util/base64.o 00:03:43.094 CC lib/util/bit_array.o 00:03:43.094 CC lib/util/crc16.o 00:03:43.094 CC lib/util/crc32.o 00:03:43.094 CC lib/util/cpuset.o 00:03:43.094 CC lib/util/crc64.o 00:03:43.094 CC lib/util/crc32c.o 00:03:43.094 CC lib/util/crc32_ieee.o 00:03:43.094 CC lib/util/fd.o 00:03:43.094 CC lib/util/dif.o 00:03:43.094 CC lib/util/fd_group.o 00:03:43.094 CXX lib/trace_parser/trace.o 00:03:43.094 CC lib/util/file.o 00:03:43.094 CC lib/util/hexlify.o 00:03:43.094 CC lib/util/net.o 00:03:43.094 CC lib/util/iov.o 00:03:43.094 CC lib/util/math.o 00:03:43.094 CC lib/util/pipe.o 00:03:43.094 CC lib/util/strerror_tls.o 00:03:43.094 CC lib/util/string.o 00:03:43.094 CC lib/util/uuid.o 00:03:43.094 CC lib/util/xor.o 00:03:43.094 CC lib/util/zipf.o 00:03:43.094 CC lib/util/md5.o 00:03:43.094 CC lib/vfio_user/host/vfio_user_pci.o 00:03:43.094 CC lib/vfio_user/host/vfio_user.o 00:03:43.094 LIB libspdk_dma.a 00:03:43.094 SO libspdk_dma.so.5.0 00:03:43.094 SYMLINK libspdk_dma.so 00:03:43.094 LIB libspdk_ioat.a 00:03:43.094 SO libspdk_ioat.so.7.0 00:03:43.094 LIB libspdk_vfio_user.a 00:03:43.094 SYMLINK libspdk_ioat.so 00:03:43.094 SO libspdk_vfio_user.so.5.0 00:03:43.094 SYMLINK libspdk_vfio_user.so 00:03:43.094 LIB libspdk_util.a 00:03:43.094 SO libspdk_util.so.10.1 00:03:43.094 SYMLINK libspdk_util.so 00:03:43.094 LIB libspdk_trace_parser.a 00:03:43.094 SO libspdk_trace_parser.so.6.0 00:03:43.094 SYMLINK libspdk_trace_parser.so 00:03:43.094 CC lib/conf/conf.o 00:03:43.094 CC lib/idxd/idxd.o 00:03:43.094 CC lib/idxd/idxd_user.o 00:03:43.094 CC lib/idxd/idxd_kernel.o 00:03:43.094 CC 
lib/env_dpdk/memory.o 00:03:43.094 CC lib/env_dpdk/env.o 00:03:43.094 CC lib/env_dpdk/init.o 00:03:43.094 CC lib/env_dpdk/pci.o 00:03:43.094 CC lib/json/json_parse.o 00:03:43.094 CC lib/env_dpdk/threads.o 00:03:43.094 CC lib/json/json_util.o 00:03:43.094 CC lib/env_dpdk/pci_ioat.o 00:03:43.094 CC lib/json/json_write.o 00:03:43.094 CC lib/env_dpdk/pci_virtio.o 00:03:43.094 CC lib/rdma_utils/rdma_utils.o 00:03:43.094 CC lib/env_dpdk/pci_vmd.o 00:03:43.094 CC lib/env_dpdk/pci_idxd.o 00:03:43.094 CC lib/vmd/vmd.o 00:03:43.094 CC lib/env_dpdk/pci_event.o 00:03:43.094 CC lib/vmd/led.o 00:03:43.094 CC lib/env_dpdk/sigbus_handler.o 00:03:43.094 CC lib/env_dpdk/pci_dpdk.o 00:03:43.094 CC lib/env_dpdk/pci_dpdk_2207.o 00:03:43.094 CC lib/env_dpdk/pci_dpdk_2211.o 00:03:43.094 LIB libspdk_conf.a 00:03:43.094 SO libspdk_conf.so.6.0 00:03:43.094 LIB libspdk_json.a 00:03:43.094 LIB libspdk_rdma_utils.a 00:03:43.094 SYMLINK libspdk_conf.so 00:03:43.094 SO libspdk_rdma_utils.so.1.0 00:03:43.094 SO libspdk_json.so.6.0 00:03:43.094 SYMLINK libspdk_rdma_utils.so 00:03:43.094 SYMLINK libspdk_json.so 00:03:43.094 LIB libspdk_idxd.a 00:03:43.094 SO libspdk_idxd.so.12.1 00:03:43.353 LIB libspdk_vmd.a 00:03:43.353 SYMLINK libspdk_idxd.so 00:03:43.353 SO libspdk_vmd.so.6.0 00:03:43.353 CC lib/rdma_provider/common.o 00:03:43.353 CC lib/rdma_provider/rdma_provider_verbs.o 00:03:43.353 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:03:43.353 CC lib/jsonrpc/jsonrpc_server.o 00:03:43.353 SYMLINK libspdk_vmd.so 00:03:43.353 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:03:43.353 CC lib/jsonrpc/jsonrpc_client.o 00:03:43.612 LIB libspdk_rdma_provider.a 00:03:43.612 LIB libspdk_jsonrpc.a 00:03:43.612 SO libspdk_rdma_provider.so.7.0 00:03:43.612 SO libspdk_jsonrpc.so.6.0 00:03:43.612 SYMLINK libspdk_rdma_provider.so 00:03:43.612 SYMLINK libspdk_jsonrpc.so 00:03:43.872 LIB libspdk_env_dpdk.a 00:03:43.872 SO libspdk_env_dpdk.so.15.1 00:03:43.872 SYMLINK libspdk_env_dpdk.so 00:03:43.872 CC lib/rpc/rpc.o 00:03:44.131 
LIB libspdk_rpc.a 00:03:44.131 SO libspdk_rpc.so.6.0 00:03:44.131 SYMLINK libspdk_rpc.so 00:03:44.699 CC lib/trace/trace_flags.o 00:03:44.699 CC lib/trace/trace.o 00:03:44.699 CC lib/notify/notify.o 00:03:44.699 CC lib/notify/notify_rpc.o 00:03:44.699 CC lib/trace/trace_rpc.o 00:03:44.699 CC lib/keyring/keyring.o 00:03:44.699 CC lib/keyring/keyring_rpc.o 00:03:44.699 LIB libspdk_notify.a 00:03:44.699 SO libspdk_notify.so.6.0 00:03:44.699 LIB libspdk_keyring.a 00:03:44.699 LIB libspdk_trace.a 00:03:44.699 SYMLINK libspdk_notify.so 00:03:44.699 SO libspdk_keyring.so.2.0 00:03:44.699 SO libspdk_trace.so.11.0 00:03:44.958 SYMLINK libspdk_keyring.so 00:03:44.958 SYMLINK libspdk_trace.so 00:03:45.218 CC lib/thread/thread.o 00:03:45.218 CC lib/thread/iobuf.o 00:03:45.218 CC lib/sock/sock.o 00:03:45.218 CC lib/sock/sock_rpc.o 00:03:45.477 LIB libspdk_sock.a 00:03:45.477 SO libspdk_sock.so.10.0 00:03:45.477 SYMLINK libspdk_sock.so 00:03:46.045 CC lib/nvme/nvme_ctrlr_cmd.o 00:03:46.045 CC lib/nvme/nvme_ctrlr.o 00:03:46.045 CC lib/nvme/nvme_fabric.o 00:03:46.045 CC lib/nvme/nvme_ns.o 00:03:46.045 CC lib/nvme/nvme_ns_cmd.o 00:03:46.045 CC lib/nvme/nvme_pcie.o 00:03:46.045 CC lib/nvme/nvme_pcie_common.o 00:03:46.045 CC lib/nvme/nvme_qpair.o 00:03:46.045 CC lib/nvme/nvme.o 00:03:46.045 CC lib/nvme/nvme_quirks.o 00:03:46.045 CC lib/nvme/nvme_transport.o 00:03:46.045 CC lib/nvme/nvme_discovery.o 00:03:46.045 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:03:46.045 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:03:46.045 CC lib/nvme/nvme_tcp.o 00:03:46.045 CC lib/nvme/nvme_opal.o 00:03:46.045 CC lib/nvme/nvme_io_msg.o 00:03:46.045 CC lib/nvme/nvme_poll_group.o 00:03:46.045 CC lib/nvme/nvme_zns.o 00:03:46.045 CC lib/nvme/nvme_stubs.o 00:03:46.045 CC lib/nvme/nvme_auth.o 00:03:46.045 CC lib/nvme/nvme_cuse.o 00:03:46.045 CC lib/nvme/nvme_vfio_user.o 00:03:46.045 CC lib/nvme/nvme_rdma.o 00:03:46.303 LIB libspdk_thread.a 00:03:46.303 SO libspdk_thread.so.11.0 00:03:46.303 SYMLINK libspdk_thread.so 
00:03:46.562 CC lib/accel/accel.o 00:03:46.562 CC lib/accel/accel_rpc.o 00:03:46.562 CC lib/blob/request.o 00:03:46.562 CC lib/accel/accel_sw.o 00:03:46.562 CC lib/blob/blobstore.o 00:03:46.562 CC lib/blob/blob_bs_dev.o 00:03:46.562 CC lib/blob/zeroes.o 00:03:46.562 CC lib/vfu_tgt/tgt_endpoint.o 00:03:46.562 CC lib/vfu_tgt/tgt_rpc.o 00:03:46.562 CC lib/fsdev/fsdev.o 00:03:46.562 CC lib/fsdev/fsdev_io.o 00:03:46.562 CC lib/fsdev/fsdev_rpc.o 00:03:46.562 CC lib/virtio/virtio_vfio_user.o 00:03:46.562 CC lib/virtio/virtio.o 00:03:46.562 CC lib/virtio/virtio_vhost_user.o 00:03:46.562 CC lib/virtio/virtio_pci.o 00:03:46.562 CC lib/init/json_config.o 00:03:46.562 CC lib/init/subsystem.o 00:03:46.562 CC lib/init/subsystem_rpc.o 00:03:46.562 CC lib/init/rpc.o 00:03:46.819 LIB libspdk_init.a 00:03:46.819 LIB libspdk_vfu_tgt.a 00:03:46.819 SO libspdk_init.so.6.0 00:03:46.819 LIB libspdk_virtio.a 00:03:46.819 SO libspdk_vfu_tgt.so.3.0 00:03:46.819 SYMLINK libspdk_init.so 00:03:47.078 SO libspdk_virtio.so.7.0 00:03:47.078 SYMLINK libspdk_vfu_tgt.so 00:03:47.078 SYMLINK libspdk_virtio.so 00:03:47.078 LIB libspdk_fsdev.a 00:03:47.078 SO libspdk_fsdev.so.2.0 00:03:47.375 SYMLINK libspdk_fsdev.so 00:03:47.375 CC lib/event/app.o 00:03:47.375 CC lib/event/reactor.o 00:03:47.375 CC lib/event/log_rpc.o 00:03:47.375 CC lib/event/app_rpc.o 00:03:47.375 CC lib/event/scheduler_static.o 00:03:47.375 LIB libspdk_accel.a 00:03:47.375 SO libspdk_accel.so.16.0 00:03:47.633 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:03:47.633 SYMLINK libspdk_accel.so 00:03:47.633 LIB libspdk_event.a 00:03:47.633 LIB libspdk_nvme.a 00:03:47.633 SO libspdk_event.so.14.0 00:03:47.633 SYMLINK libspdk_event.so 00:03:47.633 SO libspdk_nvme.so.15.0 00:03:47.892 CC lib/bdev/bdev.o 00:03:47.892 CC lib/bdev/bdev_rpc.o 00:03:47.892 CC lib/bdev/bdev_zone.o 00:03:47.892 CC lib/bdev/part.o 00:03:47.892 CC lib/bdev/scsi_nvme.o 00:03:47.892 SYMLINK libspdk_nvme.so 00:03:47.892 LIB libspdk_fuse_dispatcher.a 00:03:47.892 SO 
libspdk_fuse_dispatcher.so.1.0 00:03:48.150 SYMLINK libspdk_fuse_dispatcher.so 00:03:48.717 LIB libspdk_blob.a 00:03:48.717 SO libspdk_blob.so.12.0 00:03:48.975 SYMLINK libspdk_blob.so 00:03:49.233 CC lib/blobfs/blobfs.o 00:03:49.233 CC lib/blobfs/tree.o 00:03:49.233 CC lib/lvol/lvol.o 00:03:49.801 LIB libspdk_bdev.a 00:03:49.801 SO libspdk_bdev.so.17.0 00:03:49.801 LIB libspdk_blobfs.a 00:03:49.801 SO libspdk_blobfs.so.11.0 00:03:49.801 SYMLINK libspdk_bdev.so 00:03:49.801 LIB libspdk_lvol.a 00:03:49.801 SO libspdk_lvol.so.11.0 00:03:49.801 SYMLINK libspdk_blobfs.so 00:03:50.060 SYMLINK libspdk_lvol.so 00:03:50.060 CC lib/nvmf/ctrlr_discovery.o 00:03:50.060 CC lib/nvmf/ctrlr.o 00:03:50.060 CC lib/nvmf/ctrlr_bdev.o 00:03:50.060 CC lib/nvmf/subsystem.o 00:03:50.060 CC lib/nvmf/nvmf.o 00:03:50.060 CC lib/nvmf/tcp.o 00:03:50.060 CC lib/nvmf/transport.o 00:03:50.060 CC lib/nvmf/nvmf_rpc.o 00:03:50.060 CC lib/nvmf/stubs.o 00:03:50.060 CC lib/nvmf/mdns_server.o 00:03:50.060 CC lib/ftl/ftl_core.o 00:03:50.060 CC lib/nvmf/vfio_user.o 00:03:50.060 CC lib/ftl/ftl_init.o 00:03:50.060 CC lib/nvmf/auth.o 00:03:50.060 CC lib/ftl/ftl_layout.o 00:03:50.060 CC lib/nvmf/rdma.o 00:03:50.060 CC lib/ftl/ftl_debug.o 00:03:50.060 CC lib/ftl/ftl_io.o 00:03:50.060 CC lib/ftl/ftl_sb.o 00:03:50.060 CC lib/ftl/ftl_l2p.o 00:03:50.060 CC lib/ftl/ftl_band.o 00:03:50.060 CC lib/ftl/ftl_nv_cache.o 00:03:50.060 CC lib/ftl/ftl_l2p_flat.o 00:03:50.060 CC lib/ftl/ftl_rq.o 00:03:50.060 CC lib/ftl/ftl_band_ops.o 00:03:50.060 CC lib/ftl/ftl_writer.o 00:03:50.060 CC lib/ftl/ftl_reloc.o 00:03:50.061 CC lib/ftl/ftl_l2p_cache.o 00:03:50.061 CC lib/ftl/ftl_p2l_log.o 00:03:50.061 CC lib/ftl/ftl_p2l.o 00:03:50.061 CC lib/scsi/dev.o 00:03:50.061 CC lib/scsi/lun.o 00:03:50.061 CC lib/ublk/ublk.o 00:03:50.061 CC lib/ftl/mngt/ftl_mngt.o 00:03:50.061 CC lib/ublk/ublk_rpc.o 00:03:50.061 CC lib/scsi/port.o 00:03:50.061 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:03:50.061 CC lib/scsi/scsi.o 00:03:50.061 CC 
lib/ftl/mngt/ftl_mngt_startup.o 00:03:50.061 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:03:50.061 CC lib/scsi/scsi_bdev.o 00:03:50.061 CC lib/nbd/nbd.o 00:03:50.061 CC lib/scsi/scsi_pr.o 00:03:50.061 CC lib/ftl/mngt/ftl_mngt_md.o 00:03:50.061 CC lib/nbd/nbd_rpc.o 00:03:50.061 CC lib/scsi/scsi_rpc.o 00:03:50.061 CC lib/scsi/task.o 00:03:50.061 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:03:50.061 CC lib/ftl/mngt/ftl_mngt_misc.o 00:03:50.061 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:03:50.061 CC lib/ftl/mngt/ftl_mngt_band.o 00:03:50.061 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:03:50.061 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:03:50.061 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:03:50.061 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:03:50.061 CC lib/ftl/utils/ftl_md.o 00:03:50.319 CC lib/ftl/utils/ftl_conf.o 00:03:50.319 CC lib/ftl/utils/ftl_bitmap.o 00:03:50.319 CC lib/ftl/utils/ftl_mempool.o 00:03:50.319 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:03:50.319 CC lib/ftl/utils/ftl_property.o 00:03:50.319 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:03:50.319 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:03:50.319 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:03:50.319 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:03:50.319 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:03:50.319 CC lib/ftl/upgrade/ftl_sb_v5.o 00:03:50.319 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:03:50.319 CC lib/ftl/upgrade/ftl_sb_v3.o 00:03:50.319 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:03:50.319 CC lib/ftl/nvc/ftl_nvc_dev.o 00:03:50.319 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:03:50.319 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:03:50.319 CC lib/ftl/base/ftl_base_dev.o 00:03:50.319 CC lib/ftl/base/ftl_base_bdev.o 00:03:50.319 CC lib/ftl/ftl_trace.o 00:03:50.885 LIB libspdk_nbd.a 00:03:50.885 SO libspdk_nbd.so.7.0 00:03:50.885 LIB libspdk_scsi.a 00:03:50.885 SO libspdk_scsi.so.9.0 00:03:50.885 SYMLINK libspdk_nbd.so 00:03:50.885 SYMLINK libspdk_scsi.so 00:03:50.885 LIB libspdk_ublk.a 00:03:50.885 SO libspdk_ublk.so.3.0 00:03:51.144 SYMLINK libspdk_ublk.so 00:03:51.144 
LIB libspdk_ftl.a 00:03:51.144 CC lib/iscsi/conn.o 00:03:51.144 CC lib/iscsi/init_grp.o 00:03:51.144 CC lib/iscsi/iscsi.o 00:03:51.144 CC lib/iscsi/param.o 00:03:51.144 CC lib/vhost/vhost.o 00:03:51.144 CC lib/iscsi/portal_grp.o 00:03:51.144 CC lib/iscsi/iscsi_subsystem.o 00:03:51.144 CC lib/vhost/vhost_rpc.o 00:03:51.144 CC lib/iscsi/tgt_node.o 00:03:51.144 CC lib/vhost/vhost_scsi.o 00:03:51.144 CC lib/vhost/vhost_blk.o 00:03:51.144 CC lib/iscsi/iscsi_rpc.o 00:03:51.144 CC lib/iscsi/task.o 00:03:51.144 CC lib/vhost/rte_vhost_user.o 00:03:51.144 SO libspdk_ftl.so.9.0 00:03:51.404 SYMLINK libspdk_ftl.so 00:03:51.973 LIB libspdk_nvmf.a 00:03:51.973 SO libspdk_nvmf.so.20.0 00:03:51.973 LIB libspdk_vhost.a 00:03:51.973 SO libspdk_vhost.so.8.0 00:03:51.973 SYMLINK libspdk_nvmf.so 00:03:52.233 SYMLINK libspdk_vhost.so 00:03:52.233 LIB libspdk_iscsi.a 00:03:52.233 SO libspdk_iscsi.so.8.0 00:03:52.492 SYMLINK libspdk_iscsi.so 00:03:52.751 CC module/vfu_device/vfu_virtio_blk.o 00:03:52.751 CC module/vfu_device/vfu_virtio.o 00:03:52.751 CC module/vfu_device/vfu_virtio_scsi.o 00:03:52.751 CC module/vfu_device/vfu_virtio_rpc.o 00:03:52.751 CC module/env_dpdk/env_dpdk_rpc.o 00:03:52.751 CC module/vfu_device/vfu_virtio_fs.o 00:03:53.008 CC module/accel/iaa/accel_iaa.o 00:03:53.008 CC module/accel/iaa/accel_iaa_rpc.o 00:03:53.008 CC module/blob/bdev/blob_bdev.o 00:03:53.008 CC module/scheduler/dynamic/scheduler_dynamic.o 00:03:53.008 CC module/sock/posix/posix.o 00:03:53.008 CC module/accel/error/accel_error.o 00:03:53.008 CC module/accel/error/accel_error_rpc.o 00:03:53.008 CC module/scheduler/gscheduler/gscheduler.o 00:03:53.008 CC module/keyring/file/keyring.o 00:03:53.008 CC module/keyring/file/keyring_rpc.o 00:03:53.008 CC module/keyring/linux/keyring.o 00:03:53.008 CC module/keyring/linux/keyring_rpc.o 00:03:53.008 CC module/fsdev/aio/fsdev_aio_rpc.o 00:03:53.008 CC module/fsdev/aio/fsdev_aio.o 00:03:53.008 CC module/accel/dsa/accel_dsa.o 00:03:53.008 CC 
module/fsdev/aio/linux_aio_mgr.o 00:03:53.008 CC module/accel/dsa/accel_dsa_rpc.o 00:03:53.008 CC module/accel/ioat/accel_ioat.o 00:03:53.008 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:03:53.008 CC module/accel/ioat/accel_ioat_rpc.o 00:03:53.008 LIB libspdk_env_dpdk_rpc.a 00:03:53.008 SO libspdk_env_dpdk_rpc.so.6.0 00:03:53.008 SYMLINK libspdk_env_dpdk_rpc.so 00:03:53.008 LIB libspdk_scheduler_gscheduler.a 00:03:53.008 LIB libspdk_keyring_file.a 00:03:53.008 LIB libspdk_accel_iaa.a 00:03:53.008 LIB libspdk_keyring_linux.a 00:03:53.008 LIB libspdk_scheduler_dynamic.a 00:03:53.267 SO libspdk_keyring_file.so.2.0 00:03:53.267 LIB libspdk_accel_error.a 00:03:53.267 SO libspdk_scheduler_gscheduler.so.4.0 00:03:53.267 SO libspdk_accel_iaa.so.3.0 00:03:53.267 LIB libspdk_scheduler_dpdk_governor.a 00:03:53.267 SO libspdk_keyring_linux.so.1.0 00:03:53.267 SO libspdk_scheduler_dynamic.so.4.0 00:03:53.267 LIB libspdk_accel_ioat.a 00:03:53.267 SO libspdk_accel_error.so.2.0 00:03:53.267 SO libspdk_scheduler_dpdk_governor.so.4.0 00:03:53.267 LIB libspdk_blob_bdev.a 00:03:53.267 SO libspdk_accel_ioat.so.6.0 00:03:53.267 SYMLINK libspdk_keyring_file.so 00:03:53.267 SYMLINK libspdk_scheduler_gscheduler.so 00:03:53.267 SYMLINK libspdk_keyring_linux.so 00:03:53.267 SYMLINK libspdk_scheduler_dynamic.so 00:03:53.267 SO libspdk_blob_bdev.so.12.0 00:03:53.267 SYMLINK libspdk_accel_iaa.so 00:03:53.267 SYMLINK libspdk_accel_error.so 00:03:53.267 SYMLINK libspdk_scheduler_dpdk_governor.so 00:03:53.267 LIB libspdk_accel_dsa.a 00:03:53.267 SYMLINK libspdk_accel_ioat.so 00:03:53.267 SYMLINK libspdk_blob_bdev.so 00:03:53.267 SO libspdk_accel_dsa.so.5.0 00:03:53.267 LIB libspdk_vfu_device.a 00:03:53.267 SYMLINK libspdk_accel_dsa.so 00:03:53.267 SO libspdk_vfu_device.so.3.0 00:03:53.525 SYMLINK libspdk_vfu_device.so 00:03:53.526 LIB libspdk_fsdev_aio.a 00:03:53.526 LIB libspdk_sock_posix.a 00:03:53.526 SO libspdk_fsdev_aio.so.1.0 00:03:53.526 SO libspdk_sock_posix.so.6.0 00:03:53.526 
SYMLINK libspdk_fsdev_aio.so 00:03:53.526 SYMLINK libspdk_sock_posix.so 00:03:53.784 CC module/bdev/delay/vbdev_delay_rpc.o 00:03:53.784 CC module/bdev/delay/vbdev_delay.o 00:03:53.784 CC module/bdev/error/vbdev_error.o 00:03:53.784 CC module/bdev/error/vbdev_error_rpc.o 00:03:53.784 CC module/blobfs/bdev/blobfs_bdev.o 00:03:53.784 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:03:53.784 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:03:53.784 CC module/bdev/passthru/vbdev_passthru.o 00:03:53.784 CC module/bdev/gpt/vbdev_gpt.o 00:03:53.784 CC module/bdev/gpt/gpt.o 00:03:53.784 CC module/bdev/split/vbdev_split.o 00:03:53.784 CC module/bdev/nvme/bdev_nvme_rpc.o 00:03:53.784 CC module/bdev/nvme/bdev_nvme.o 00:03:53.784 CC module/bdev/split/vbdev_split_rpc.o 00:03:53.784 CC module/bdev/nvme/nvme_rpc.o 00:03:53.784 CC module/bdev/nvme/bdev_mdns_client.o 00:03:53.784 CC module/bdev/nvme/vbdev_opal.o 00:03:53.784 CC module/bdev/nvme/vbdev_opal_rpc.o 00:03:53.784 CC module/bdev/null/bdev_null.o 00:03:53.784 CC module/bdev/lvol/vbdev_lvol.o 00:03:53.784 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:03:53.784 CC module/bdev/null/bdev_null_rpc.o 00:03:53.784 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:03:53.784 CC module/bdev/ftl/bdev_ftl.o 00:03:53.784 CC module/bdev/zone_block/vbdev_zone_block.o 00:03:53.784 CC module/bdev/ftl/bdev_ftl_rpc.o 00:03:53.784 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:03:53.784 CC module/bdev/virtio/bdev_virtio_scsi.o 00:03:53.784 CC module/bdev/virtio/bdev_virtio_blk.o 00:03:53.784 CC module/bdev/virtio/bdev_virtio_rpc.o 00:03:53.784 CC module/bdev/raid/raid0.o 00:03:53.784 CC module/bdev/raid/bdev_raid.o 00:03:53.784 CC module/bdev/raid/bdev_raid_rpc.o 00:03:53.784 CC module/bdev/raid/bdev_raid_sb.o 00:03:53.784 CC module/bdev/raid/raid1.o 00:03:53.784 CC module/bdev/iscsi/bdev_iscsi.o 00:03:53.784 CC module/bdev/raid/concat.o 00:03:53.784 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:03:53.784 CC module/bdev/aio/bdev_aio.o 00:03:53.784 CC 
module/bdev/malloc/bdev_malloc.o 00:03:53.784 CC module/bdev/aio/bdev_aio_rpc.o 00:03:53.784 CC module/bdev/malloc/bdev_malloc_rpc.o 00:03:54.041 LIB libspdk_blobfs_bdev.a 00:03:54.041 SO libspdk_blobfs_bdev.so.6.0 00:03:54.041 LIB libspdk_bdev_split.a 00:03:54.041 LIB libspdk_bdev_error.a 00:03:54.041 SO libspdk_bdev_split.so.6.0 00:03:54.041 SYMLINK libspdk_blobfs_bdev.so 00:03:54.041 LIB libspdk_bdev_gpt.a 00:03:54.041 LIB libspdk_bdev_passthru.a 00:03:54.041 SO libspdk_bdev_error.so.6.0 00:03:54.041 LIB libspdk_bdev_null.a 00:03:54.041 SYMLINK libspdk_bdev_split.so 00:03:54.041 LIB libspdk_bdev_delay.a 00:03:54.042 SO libspdk_bdev_passthru.so.6.0 00:03:54.042 SO libspdk_bdev_gpt.so.6.0 00:03:54.042 SO libspdk_bdev_null.so.6.0 00:03:54.042 SYMLINK libspdk_bdev_error.so 00:03:54.042 LIB libspdk_bdev_ftl.a 00:03:54.042 SO libspdk_bdev_delay.so.6.0 00:03:54.042 LIB libspdk_bdev_iscsi.a 00:03:54.042 LIB libspdk_bdev_zone_block.a 00:03:54.042 SO libspdk_bdev_ftl.so.6.0 00:03:54.042 SYMLINK libspdk_bdev_passthru.so 00:03:54.042 LIB libspdk_bdev_aio.a 00:03:54.042 SYMLINK libspdk_bdev_gpt.so 00:03:54.042 SYMLINK libspdk_bdev_null.so 00:03:54.042 SO libspdk_bdev_zone_block.so.6.0 00:03:54.042 SO libspdk_bdev_iscsi.so.6.0 00:03:54.042 LIB libspdk_bdev_malloc.a 00:03:54.042 SYMLINK libspdk_bdev_delay.so 00:03:54.042 SO libspdk_bdev_aio.so.6.0 00:03:54.299 SYMLINK libspdk_bdev_ftl.so 00:03:54.299 SO libspdk_bdev_malloc.so.6.0 00:03:54.299 SYMLINK libspdk_bdev_zone_block.so 00:03:54.299 SYMLINK libspdk_bdev_iscsi.so 00:03:54.299 SYMLINK libspdk_bdev_aio.so 00:03:54.299 SYMLINK libspdk_bdev_malloc.so 00:03:54.299 LIB libspdk_bdev_lvol.a 00:03:54.299 LIB libspdk_bdev_virtio.a 00:03:54.299 SO libspdk_bdev_lvol.so.6.0 00:03:54.299 SO libspdk_bdev_virtio.so.6.0 00:03:54.299 SYMLINK libspdk_bdev_lvol.so 00:03:54.299 SYMLINK libspdk_bdev_virtio.so 00:03:54.557 LIB libspdk_bdev_raid.a 00:03:54.557 SO libspdk_bdev_raid.so.6.0 00:03:54.557 SYMLINK libspdk_bdev_raid.so 00:03:55.931 
LIB libspdk_bdev_nvme.a 00:03:55.931 SO libspdk_bdev_nvme.so.7.1 00:03:55.931 SYMLINK libspdk_bdev_nvme.so 00:03:56.497 CC module/event/subsystems/vmd/vmd.o 00:03:56.497 CC module/event/subsystems/vmd/vmd_rpc.o 00:03:56.497 CC module/event/subsystems/keyring/keyring.o 00:03:56.497 CC module/event/subsystems/sock/sock.o 00:03:56.497 CC module/event/subsystems/fsdev/fsdev.o 00:03:56.497 CC module/event/subsystems/scheduler/scheduler.o 00:03:56.497 CC module/event/subsystems/iobuf/iobuf.o 00:03:56.497 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:03:56.497 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:03:56.497 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:03:56.497 LIB libspdk_event_keyring.a 00:03:56.497 LIB libspdk_event_vmd.a 00:03:56.497 LIB libspdk_event_fsdev.a 00:03:56.497 LIB libspdk_event_sock.a 00:03:56.497 LIB libspdk_event_vhost_blk.a 00:03:56.497 LIB libspdk_event_scheduler.a 00:03:56.497 LIB libspdk_event_iobuf.a 00:03:56.497 SO libspdk_event_keyring.so.1.0 00:03:56.497 LIB libspdk_event_vfu_tgt.a 00:03:56.497 SO libspdk_event_scheduler.so.4.0 00:03:56.497 SO libspdk_event_vmd.so.6.0 00:03:56.497 SO libspdk_event_sock.so.5.0 00:03:56.497 SO libspdk_event_fsdev.so.1.0 00:03:56.497 SO libspdk_event_vhost_blk.so.3.0 00:03:56.497 SO libspdk_event_iobuf.so.3.0 00:03:56.497 SO libspdk_event_vfu_tgt.so.3.0 00:03:56.497 SYMLINK libspdk_event_keyring.so 00:03:56.756 SYMLINK libspdk_event_scheduler.so 00:03:56.756 SYMLINK libspdk_event_vhost_blk.so 00:03:56.756 SYMLINK libspdk_event_vmd.so 00:03:56.756 SYMLINK libspdk_event_fsdev.so 00:03:56.756 SYMLINK libspdk_event_sock.so 00:03:56.756 SYMLINK libspdk_event_vfu_tgt.so 00:03:56.756 SYMLINK libspdk_event_iobuf.so 00:03:57.014 CC module/event/subsystems/accel/accel.o 00:03:57.014 LIB libspdk_event_accel.a 00:03:57.014 SO libspdk_event_accel.so.6.0 00:03:57.271 SYMLINK libspdk_event_accel.so 00:03:57.529 CC module/event/subsystems/bdev/bdev.o 00:03:57.787 LIB libspdk_event_bdev.a 00:03:57.787 SO 
libspdk_event_bdev.so.6.0 00:03:57.787 SYMLINK libspdk_event_bdev.so 00:03:58.045 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:03:58.045 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:03:58.045 CC module/event/subsystems/scsi/scsi.o 00:03:58.045 CC module/event/subsystems/nbd/nbd.o 00:03:58.045 CC module/event/subsystems/ublk/ublk.o 00:03:58.303 LIB libspdk_event_scsi.a 00:03:58.303 LIB libspdk_event_nbd.a 00:03:58.303 LIB libspdk_event_ublk.a 00:03:58.303 SO libspdk_event_nbd.so.6.0 00:03:58.303 SO libspdk_event_scsi.so.6.0 00:03:58.303 LIB libspdk_event_nvmf.a 00:03:58.303 SO libspdk_event_ublk.so.3.0 00:03:58.303 SO libspdk_event_nvmf.so.6.0 00:03:58.303 SYMLINK libspdk_event_nbd.so 00:03:58.303 SYMLINK libspdk_event_scsi.so 00:03:58.303 SYMLINK libspdk_event_nvmf.so 00:03:58.303 SYMLINK libspdk_event_ublk.so 00:03:58.561 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:03:58.561 CC module/event/subsystems/iscsi/iscsi.o 00:03:58.818 LIB libspdk_event_vhost_scsi.a 00:03:58.818 LIB libspdk_event_iscsi.a 00:03:58.818 SO libspdk_event_vhost_scsi.so.3.0 00:03:58.818 SO libspdk_event_iscsi.so.6.0 00:03:58.818 SYMLINK libspdk_event_vhost_scsi.so 00:03:58.818 SYMLINK libspdk_event_iscsi.so 00:03:59.076 SO libspdk.so.6.0 00:03:59.076 SYMLINK libspdk.so 00:03:59.344 CC app/trace_record/trace_record.o 00:03:59.344 CC app/spdk_top/spdk_top.o 00:03:59.344 CXX app/trace/trace.o 00:03:59.344 CC test/rpc_client/rpc_client_test.o 00:03:59.344 CC app/spdk_lspci/spdk_lspci.o 00:03:59.344 CC app/spdk_nvme_discover/discovery_aer.o 00:03:59.344 CC app/spdk_nvme_identify/identify.o 00:03:59.344 CC app/spdk_nvme_perf/perf.o 00:03:59.344 TEST_HEADER include/spdk/accel_module.h 00:03:59.344 TEST_HEADER include/spdk/accel.h 00:03:59.344 TEST_HEADER include/spdk/barrier.h 00:03:59.344 TEST_HEADER include/spdk/base64.h 00:03:59.344 TEST_HEADER include/spdk/assert.h 00:03:59.344 TEST_HEADER include/spdk/bdev.h 00:03:59.344 TEST_HEADER include/spdk/bdev_zone.h 00:03:59.344 TEST_HEADER 
include/spdk/bdev_module.h 00:03:59.344 TEST_HEADER include/spdk/bit_array.h 00:03:59.344 TEST_HEADER include/spdk/blob_bdev.h 00:03:59.344 TEST_HEADER include/spdk/bit_pool.h 00:03:59.344 TEST_HEADER include/spdk/blobfs.h 00:03:59.344 TEST_HEADER include/spdk/blobfs_bdev.h 00:03:59.344 TEST_HEADER include/spdk/blob.h 00:03:59.344 TEST_HEADER include/spdk/config.h 00:03:59.344 TEST_HEADER include/spdk/cpuset.h 00:03:59.344 TEST_HEADER include/spdk/conf.h 00:03:59.344 TEST_HEADER include/spdk/crc32.h 00:03:59.344 TEST_HEADER include/spdk/crc16.h 00:03:59.344 TEST_HEADER include/spdk/dif.h 00:03:59.344 TEST_HEADER include/spdk/crc64.h 00:03:59.344 TEST_HEADER include/spdk/dma.h 00:03:59.344 TEST_HEADER include/spdk/endian.h 00:03:59.344 TEST_HEADER include/spdk/event.h 00:03:59.344 TEST_HEADER include/spdk/env_dpdk.h 00:03:59.344 TEST_HEADER include/spdk/env.h 00:03:59.344 TEST_HEADER include/spdk/fd.h 00:03:59.344 TEST_HEADER include/spdk/fsdev.h 00:03:59.344 TEST_HEADER include/spdk/file.h 00:03:59.344 TEST_HEADER include/spdk/fd_group.h 00:03:59.344 TEST_HEADER include/spdk/fsdev_module.h 00:03:59.344 TEST_HEADER include/spdk/fuse_dispatcher.h 00:03:59.344 CC examples/interrupt_tgt/interrupt_tgt.o 00:03:59.344 TEST_HEADER include/spdk/hexlify.h 00:03:59.344 TEST_HEADER include/spdk/gpt_spec.h 00:03:59.344 TEST_HEADER include/spdk/histogram_data.h 00:03:59.344 TEST_HEADER include/spdk/ftl.h 00:03:59.344 TEST_HEADER include/spdk/idxd.h 00:03:59.344 TEST_HEADER include/spdk/idxd_spec.h 00:03:59.344 TEST_HEADER include/spdk/ioat.h 00:03:59.344 TEST_HEADER include/spdk/ioat_spec.h 00:03:59.344 TEST_HEADER include/spdk/iscsi_spec.h 00:03:59.344 TEST_HEADER include/spdk/init.h 00:03:59.344 TEST_HEADER include/spdk/json.h 00:03:59.344 TEST_HEADER include/spdk/jsonrpc.h 00:03:59.344 TEST_HEADER include/spdk/keyring.h 00:03:59.344 TEST_HEADER include/spdk/keyring_module.h 00:03:59.344 TEST_HEADER include/spdk/log.h 00:03:59.344 TEST_HEADER include/spdk/likely.h 00:03:59.344 
TEST_HEADER include/spdk/lvol.h 00:03:59.344 TEST_HEADER include/spdk/md5.h 00:03:59.344 TEST_HEADER include/spdk/memory.h 00:03:59.344 TEST_HEADER include/spdk/mmio.h 00:03:59.344 TEST_HEADER include/spdk/net.h 00:03:59.344 TEST_HEADER include/spdk/nbd.h 00:03:59.344 TEST_HEADER include/spdk/notify.h 00:03:59.344 TEST_HEADER include/spdk/nvme_intel.h 00:03:59.344 TEST_HEADER include/spdk/nvme.h 00:03:59.344 TEST_HEADER include/spdk/nvme_ocssd.h 00:03:59.344 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:03:59.344 TEST_HEADER include/spdk/nvme_spec.h 00:03:59.344 TEST_HEADER include/spdk/nvme_zns.h 00:03:59.344 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:03:59.344 TEST_HEADER include/spdk/nvmf_cmd.h 00:03:59.344 TEST_HEADER include/spdk/nvmf.h 00:03:59.344 TEST_HEADER include/spdk/nvmf_spec.h 00:03:59.344 CC app/spdk_dd/spdk_dd.o 00:03:59.344 CC app/nvmf_tgt/nvmf_main.o 00:03:59.344 TEST_HEADER include/spdk/nvmf_transport.h 00:03:59.344 TEST_HEADER include/spdk/opal.h 00:03:59.344 TEST_HEADER include/spdk/opal_spec.h 00:03:59.344 TEST_HEADER include/spdk/pipe.h 00:03:59.344 TEST_HEADER include/spdk/pci_ids.h 00:03:59.344 TEST_HEADER include/spdk/reduce.h 00:03:59.344 TEST_HEADER include/spdk/queue.h 00:03:59.344 TEST_HEADER include/spdk/rpc.h 00:03:59.344 CC app/spdk_tgt/spdk_tgt.o 00:03:59.344 TEST_HEADER include/spdk/scheduler.h 00:03:59.344 TEST_HEADER include/spdk/scsi.h 00:03:59.344 CC app/iscsi_tgt/iscsi_tgt.o 00:03:59.344 TEST_HEADER include/spdk/scsi_spec.h 00:03:59.344 TEST_HEADER include/spdk/sock.h 00:03:59.344 TEST_HEADER include/spdk/string.h 00:03:59.344 TEST_HEADER include/spdk/stdinc.h 00:03:59.344 TEST_HEADER include/spdk/thread.h 00:03:59.344 TEST_HEADER include/spdk/trace.h 00:03:59.344 TEST_HEADER include/spdk/trace_parser.h 00:03:59.344 TEST_HEADER include/spdk/tree.h 00:03:59.344 TEST_HEADER include/spdk/ublk.h 00:03:59.344 TEST_HEADER include/spdk/util.h 00:03:59.344 TEST_HEADER include/spdk/uuid.h 00:03:59.344 TEST_HEADER 
include/spdk/vfio_user_pci.h 00:03:59.344 TEST_HEADER include/spdk/version.h 00:03:59.344 TEST_HEADER include/spdk/vfio_user_spec.h 00:03:59.344 TEST_HEADER include/spdk/vhost.h 00:03:59.344 TEST_HEADER include/spdk/vmd.h 00:03:59.344 TEST_HEADER include/spdk/xor.h 00:03:59.344 CXX test/cpp_headers/accel.o 00:03:59.344 TEST_HEADER include/spdk/zipf.h 00:03:59.344 CXX test/cpp_headers/accel_module.o 00:03:59.344 CXX test/cpp_headers/assert.o 00:03:59.344 CXX test/cpp_headers/base64.o 00:03:59.344 CXX test/cpp_headers/bdev_module.o 00:03:59.344 CXX test/cpp_headers/bdev.o 00:03:59.344 CXX test/cpp_headers/barrier.o 00:03:59.344 CXX test/cpp_headers/bdev_zone.o 00:03:59.344 CXX test/cpp_headers/bit_array.o 00:03:59.344 CXX test/cpp_headers/blob_bdev.o 00:03:59.344 CXX test/cpp_headers/blobfs_bdev.o 00:03:59.344 CXX test/cpp_headers/blobfs.o 00:03:59.344 CXX test/cpp_headers/bit_pool.o 00:03:59.345 CXX test/cpp_headers/blob.o 00:03:59.345 CXX test/cpp_headers/conf.o 00:03:59.345 CXX test/cpp_headers/config.o 00:03:59.345 CXX test/cpp_headers/cpuset.o 00:03:59.345 CXX test/cpp_headers/crc64.o 00:03:59.345 CXX test/cpp_headers/dif.o 00:03:59.345 CXX test/cpp_headers/dma.o 00:03:59.345 CXX test/cpp_headers/endian.o 00:03:59.345 CXX test/cpp_headers/env_dpdk.o 00:03:59.345 CXX test/cpp_headers/crc32.o 00:03:59.345 CXX test/cpp_headers/env.o 00:03:59.345 CXX test/cpp_headers/crc16.o 00:03:59.345 CXX test/cpp_headers/event.o 00:03:59.345 CXX test/cpp_headers/fd_group.o 00:03:59.345 CXX test/cpp_headers/file.o 00:03:59.345 CXX test/cpp_headers/fsdev.o 00:03:59.345 CXX test/cpp_headers/ftl.o 00:03:59.345 CXX test/cpp_headers/fd.o 00:03:59.345 CXX test/cpp_headers/fsdev_module.o 00:03:59.345 CXX test/cpp_headers/fuse_dispatcher.o 00:03:59.345 CXX test/cpp_headers/gpt_spec.o 00:03:59.345 CXX test/cpp_headers/histogram_data.o 00:03:59.345 CXX test/cpp_headers/idxd.o 00:03:59.345 CXX test/cpp_headers/hexlify.o 00:03:59.345 CXX test/cpp_headers/idxd_spec.o 00:03:59.345 CXX 
test/cpp_headers/init.o 00:03:59.345 CXX test/cpp_headers/json.o 00:03:59.345 CXX test/cpp_headers/ioat_spec.o 00:03:59.345 CXX test/cpp_headers/iscsi_spec.o 00:03:59.345 CXX test/cpp_headers/ioat.o 00:03:59.345 CC app/fio/nvme/fio_plugin.o 00:03:59.345 CXX test/cpp_headers/jsonrpc.o 00:03:59.345 CXX test/cpp_headers/keyring.o 00:03:59.345 CXX test/cpp_headers/keyring_module.o 00:03:59.345 CXX test/cpp_headers/log.o 00:03:59.345 CXX test/cpp_headers/lvol.o 00:03:59.345 CXX test/cpp_headers/md5.o 00:03:59.345 CXX test/cpp_headers/likely.o 00:03:59.345 CXX test/cpp_headers/memory.o 00:03:59.345 CXX test/cpp_headers/mmio.o 00:03:59.345 CXX test/cpp_headers/nbd.o 00:03:59.345 CXX test/cpp_headers/notify.o 00:03:59.345 CXX test/cpp_headers/net.o 00:03:59.345 CXX test/cpp_headers/nvme.o 00:03:59.345 CXX test/cpp_headers/nvme_intel.o 00:03:59.345 CXX test/cpp_headers/nvme_ocssd.o 00:03:59.345 CXX test/cpp_headers/nvme_zns.o 00:03:59.345 CXX test/cpp_headers/nvme_spec.o 00:03:59.345 CXX test/cpp_headers/nvme_ocssd_spec.o 00:03:59.345 CXX test/cpp_headers/nvmf_cmd.o 00:03:59.345 CXX test/cpp_headers/nvmf_fc_spec.o 00:03:59.345 CXX test/cpp_headers/nvmf_spec.o 00:03:59.345 CXX test/cpp_headers/nvmf.o 00:03:59.614 CXX test/cpp_headers/nvmf_transport.o 00:03:59.614 CXX test/cpp_headers/opal.o 00:03:59.614 CC app/fio/bdev/fio_plugin.o 00:03:59.614 CC examples/util/zipf/zipf.o 00:03:59.614 CC examples/ioat/verify/verify.o 00:03:59.614 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:03:59.614 CC test/thread/poller_perf/poller_perf.o 00:03:59.614 CC examples/ioat/perf/perf.o 00:03:59.614 CC test/dma/test_dma/test_dma.o 00:03:59.614 CC test/env/memory/memory_ut.o 00:03:59.614 CC test/app/jsoncat/jsoncat.o 00:03:59.614 CC test/env/pci/pci_ut.o 00:03:59.614 CC test/env/vtophys/vtophys.o 00:03:59.614 CC test/app/histogram_perf/histogram_perf.o 00:03:59.614 LINK spdk_lspci 00:03:59.614 CC test/app/bdev_svc/bdev_svc.o 00:03:59.614 CC test/app/stub/stub.o 00:03:59.876 LINK 
rpc_client_test 00:03:59.876 CC test/env/mem_callbacks/mem_callbacks.o 00:03:59.876 LINK nvmf_tgt 00:03:59.876 LINK interrupt_tgt 00:03:59.876 LINK spdk_trace_record 00:03:59.876 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:03:59.876 LINK spdk_tgt 00:03:59.876 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:04:00.134 LINK spdk_nvme_discover 00:04:00.134 LINK zipf 00:04:00.134 CXX test/cpp_headers/opal_spec.o 00:04:00.134 LINK env_dpdk_post_init 00:04:00.135 CXX test/cpp_headers/pci_ids.o 00:04:00.135 CXX test/cpp_headers/pipe.o 00:04:00.135 CXX test/cpp_headers/queue.o 00:04:00.135 CXX test/cpp_headers/reduce.o 00:04:00.135 CXX test/cpp_headers/rpc.o 00:04:00.135 CXX test/cpp_headers/scheduler.o 00:04:00.135 CXX test/cpp_headers/scsi_spec.o 00:04:00.135 CXX test/cpp_headers/sock.o 00:04:00.135 CXX test/cpp_headers/scsi.o 00:04:00.135 CXX test/cpp_headers/stdinc.o 00:04:00.135 CXX test/cpp_headers/string.o 00:04:00.135 CXX test/cpp_headers/thread.o 00:04:00.135 CXX test/cpp_headers/trace.o 00:04:00.135 CXX test/cpp_headers/trace_parser.o 00:04:00.135 CXX test/cpp_headers/tree.o 00:04:00.135 CXX test/cpp_headers/util.o 00:04:00.135 CXX test/cpp_headers/ublk.o 00:04:00.135 CXX test/cpp_headers/uuid.o 00:04:00.135 CXX test/cpp_headers/version.o 00:04:00.135 CXX test/cpp_headers/vfio_user_pci.o 00:04:00.135 CXX test/cpp_headers/vfio_user_spec.o 00:04:00.135 CXX test/cpp_headers/vhost.o 00:04:00.135 CXX test/cpp_headers/vmd.o 00:04:00.135 CXX test/cpp_headers/xor.o 00:04:00.135 CXX test/cpp_headers/zipf.o 00:04:00.135 LINK iscsi_tgt 00:04:00.135 LINK verify 00:04:00.135 LINK jsoncat 00:04:00.135 LINK poller_perf 00:04:00.135 LINK ioat_perf 00:04:00.135 LINK vtophys 00:04:00.135 LINK spdk_dd 00:04:00.135 LINK histogram_perf 00:04:00.135 LINK bdev_svc 00:04:00.135 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:04:00.135 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:04:00.393 LINK stub 00:04:00.393 LINK spdk_trace 00:04:00.393 LINK test_dma 00:04:00.393 LINK pci_ut 00:04:00.393 
LINK spdk_nvme 00:04:00.393 LINK spdk_bdev 00:04:00.651 LINK nvme_fuzz 00:04:00.652 CC examples/sock/hello_world/hello_sock.o 00:04:00.652 CC examples/vmd/led/led.o 00:04:00.652 CC examples/idxd/perf/perf.o 00:04:00.652 CC examples/thread/thread/thread_ex.o 00:04:00.652 CC examples/vmd/lsvmd/lsvmd.o 00:04:00.652 LINK spdk_top 00:04:00.652 CC test/event/reactor/reactor.o 00:04:00.652 CC test/event/reactor_perf/reactor_perf.o 00:04:00.652 CC test/event/event_perf/event_perf.o 00:04:00.652 LINK vhost_fuzz 00:04:00.652 LINK spdk_nvme_identify 00:04:00.652 CC test/event/app_repeat/app_repeat.o 00:04:00.652 LINK spdk_nvme_perf 00:04:00.652 LINK mem_callbacks 00:04:00.652 CC test/event/scheduler/scheduler.o 00:04:00.652 LINK led 00:04:00.652 CC app/vhost/vhost.o 00:04:00.652 LINK lsvmd 00:04:00.912 LINK hello_sock 00:04:00.912 LINK reactor 00:04:00.912 LINK reactor_perf 00:04:00.912 LINK thread 00:04:00.912 LINK event_perf 00:04:00.912 LINK app_repeat 00:04:00.912 LINK idxd_perf 00:04:00.912 CC test/nvme/startup/startup.o 00:04:00.912 CC test/nvme/doorbell_aers/doorbell_aers.o 00:04:00.912 CC test/nvme/sgl/sgl.o 00:04:00.912 CC test/nvme/boot_partition/boot_partition.o 00:04:00.912 CC test/nvme/simple_copy/simple_copy.o 00:04:00.912 CC test/nvme/fdp/fdp.o 00:04:00.912 CC test/nvme/reserve/reserve.o 00:04:00.912 CC test/nvme/compliance/nvme_compliance.o 00:04:00.912 CC test/nvme/err_injection/err_injection.o 00:04:00.912 CC test/nvme/reset/reset.o 00:04:00.912 CC test/nvme/cuse/cuse.o 00:04:00.912 CC test/nvme/connect_stress/connect_stress.o 00:04:00.912 CC test/nvme/aer/aer.o 00:04:00.912 CC test/nvme/fused_ordering/fused_ordering.o 00:04:00.912 CC test/nvme/overhead/overhead.o 00:04:00.912 CC test/nvme/e2edp/nvme_dp.o 00:04:00.912 CC test/blobfs/mkfs/mkfs.o 00:04:00.912 CC test/accel/dif/dif.o 00:04:00.912 LINK vhost 00:04:00.912 LINK scheduler 00:04:00.912 LINK memory_ut 00:04:00.912 CC test/lvol/esnap/esnap.o 00:04:01.171 LINK startup 00:04:01.171 LINK doorbell_aers 
00:04:01.171 LINK boot_partition 00:04:01.171 LINK connect_stress 00:04:01.171 LINK reserve 00:04:01.171 LINK err_injection 00:04:01.171 LINK simple_copy 00:04:01.171 LINK fused_ordering 00:04:01.171 LINK sgl 00:04:01.171 LINK reset 00:04:01.171 LINK aer 00:04:01.171 LINK nvme_dp 00:04:01.171 LINK mkfs 00:04:01.171 LINK overhead 00:04:01.171 LINK fdp 00:04:01.171 LINK nvme_compliance 00:04:01.171 CC examples/nvme/hello_world/hello_world.o 00:04:01.171 CC examples/nvme/reconnect/reconnect.o 00:04:01.171 CC examples/nvme/arbitration/arbitration.o 00:04:01.171 CC examples/nvme/nvme_manage/nvme_manage.o 00:04:01.171 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:04:01.171 CC examples/nvme/hotplug/hotplug.o 00:04:01.171 CC examples/nvme/cmb_copy/cmb_copy.o 00:04:01.171 CC examples/nvme/abort/abort.o 00:04:01.171 CC examples/accel/perf/accel_perf.o 00:04:01.430 CC examples/blob/cli/blobcli.o 00:04:01.430 CC examples/blob/hello_world/hello_blob.o 00:04:01.430 CC examples/fsdev/hello_world/hello_fsdev.o 00:04:01.430 LINK pmr_persistence 00:04:01.430 LINK hello_world 00:04:01.430 LINK cmb_copy 00:04:01.430 LINK hotplug 00:04:01.430 LINK dif 00:04:01.430 LINK arbitration 00:04:01.430 LINK iscsi_fuzz 00:04:01.430 LINK reconnect 00:04:01.430 LINK hello_blob 00:04:01.689 LINK abort 00:04:01.689 LINK hello_fsdev 00:04:01.689 LINK nvme_manage 00:04:01.689 LINK accel_perf 00:04:01.689 LINK blobcli 00:04:01.948 LINK cuse 00:04:01.948 CC test/bdev/bdevio/bdevio.o 00:04:02.206 CC examples/bdev/hello_world/hello_bdev.o 00:04:02.206 CC examples/bdev/bdevperf/bdevperf.o 00:04:02.206 LINK bdevio 00:04:02.466 LINK hello_bdev 00:04:02.726 LINK bdevperf 00:04:03.294 CC examples/nvmf/nvmf/nvmf.o 00:04:03.553 LINK nvmf 00:04:04.490 LINK esnap 00:04:05.057 00:04:05.057 real 0m54.474s 00:04:05.057 user 7m56.371s 00:04:05.057 sys 3m34.123s 00:04:05.057 08:02:47 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:04:05.057 08:02:47 make -- common/autotest_common.sh@10 -- $ set +x 
00:04:05.057 ************************************ 00:04:05.057 END TEST make 00:04:05.057 ************************************ 00:04:05.057 08:02:47 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:04:05.057 08:02:47 -- pm/common@29 -- $ signal_monitor_resources TERM 00:04:05.057 08:02:47 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:04:05.058 08:02:47 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:05.058 08:02:47 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:04:05.058 08:02:47 -- pm/common@44 -- $ pid=1088494 00:04:05.058 08:02:47 -- pm/common@50 -- $ kill -TERM 1088494 00:04:05.058 08:02:47 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:05.058 08:02:47 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:04:05.058 08:02:47 -- pm/common@44 -- $ pid=1088496 00:04:05.058 08:02:47 -- pm/common@50 -- $ kill -TERM 1088496 00:04:05.058 08:02:47 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:05.058 08:02:47 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:04:05.058 08:02:47 -- pm/common@44 -- $ pid=1088498 00:04:05.058 08:02:47 -- pm/common@50 -- $ kill -TERM 1088498 00:04:05.058 08:02:47 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:05.058 08:02:47 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:04:05.058 08:02:47 -- pm/common@44 -- $ pid=1088523 00:04:05.058 08:02:47 -- pm/common@50 -- $ sudo -E kill -TERM 1088523 00:04:05.058 08:02:47 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:04:05.058 08:02:47 -- spdk/autorun.sh@27 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:04:05.058 08:02:47 -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:05.058 08:02:47 -- common/autotest_common.sh@1693 -- # lcov --version 00:04:05.058 08:02:47 -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:05.058 08:02:47 -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:05.058 08:02:47 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:05.058 08:02:47 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:05.058 08:02:47 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:05.058 08:02:47 -- scripts/common.sh@336 -- # IFS=.-: 00:04:05.058 08:02:47 -- scripts/common.sh@336 -- # read -ra ver1 00:04:05.058 08:02:47 -- scripts/common.sh@337 -- # IFS=.-: 00:04:05.058 08:02:47 -- scripts/common.sh@337 -- # read -ra ver2 00:04:05.058 08:02:47 -- scripts/common.sh@338 -- # local 'op=<' 00:04:05.058 08:02:47 -- scripts/common.sh@340 -- # ver1_l=2 00:04:05.058 08:02:47 -- scripts/common.sh@341 -- # ver2_l=1 00:04:05.058 08:02:47 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:05.058 08:02:47 -- scripts/common.sh@344 -- # case "$op" in 00:04:05.058 08:02:47 -- scripts/common.sh@345 -- # : 1 00:04:05.058 08:02:47 -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:05.058 08:02:47 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:05.058 08:02:47 -- scripts/common.sh@365 -- # decimal 1 00:04:05.058 08:02:47 -- scripts/common.sh@353 -- # local d=1 00:04:05.058 08:02:47 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:05.058 08:02:47 -- scripts/common.sh@355 -- # echo 1 00:04:05.058 08:02:47 -- scripts/common.sh@365 -- # ver1[v]=1 00:04:05.058 08:02:47 -- scripts/common.sh@366 -- # decimal 2 00:04:05.058 08:02:47 -- scripts/common.sh@353 -- # local d=2 00:04:05.058 08:02:47 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:05.058 08:02:47 -- scripts/common.sh@355 -- # echo 2 00:04:05.058 08:02:47 -- scripts/common.sh@366 -- # ver2[v]=2 00:04:05.058 08:02:47 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:05.058 08:02:47 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:05.058 08:02:47 -- scripts/common.sh@368 -- # return 0 00:04:05.058 08:02:47 -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:05.058 08:02:47 -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:05.058 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:05.058 --rc genhtml_branch_coverage=1 00:04:05.058 --rc genhtml_function_coverage=1 00:04:05.058 --rc genhtml_legend=1 00:04:05.058 --rc geninfo_all_blocks=1 00:04:05.058 --rc geninfo_unexecuted_blocks=1 00:04:05.058 00:04:05.058 ' 00:04:05.058 08:02:47 -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:05.058 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:05.058 --rc genhtml_branch_coverage=1 00:04:05.058 --rc genhtml_function_coverage=1 00:04:05.058 --rc genhtml_legend=1 00:04:05.058 --rc geninfo_all_blocks=1 00:04:05.058 --rc geninfo_unexecuted_blocks=1 00:04:05.058 00:04:05.058 ' 00:04:05.058 08:02:47 -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:05.058 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:05.058 --rc genhtml_branch_coverage=1 00:04:05.058 --rc 
genhtml_function_coverage=1 00:04:05.058 --rc genhtml_legend=1 00:04:05.058 --rc geninfo_all_blocks=1 00:04:05.058 --rc geninfo_unexecuted_blocks=1 00:04:05.058 00:04:05.058 ' 00:04:05.058 08:02:47 -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:05.058 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:05.058 --rc genhtml_branch_coverage=1 00:04:05.058 --rc genhtml_function_coverage=1 00:04:05.058 --rc genhtml_legend=1 00:04:05.058 --rc geninfo_all_blocks=1 00:04:05.058 --rc geninfo_unexecuted_blocks=1 00:04:05.058 00:04:05.058 ' 00:04:05.058 08:02:47 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:05.058 08:02:47 -- nvmf/common.sh@7 -- # uname -s 00:04:05.058 08:02:47 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:05.058 08:02:47 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:05.058 08:02:47 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:05.058 08:02:47 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:05.058 08:02:47 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:05.058 08:02:47 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:05.058 08:02:47 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:05.058 08:02:47 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:05.058 08:02:47 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:05.058 08:02:47 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:05.058 08:02:47 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:04:05.058 08:02:47 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:04:05.058 08:02:47 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:05.058 08:02:47 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:05.058 08:02:47 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:04:05.058 08:02:47 -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:05.058 08:02:47 -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:05.058 08:02:47 -- scripts/common.sh@15 -- # shopt -s extglob 00:04:05.058 08:02:47 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:05.058 08:02:47 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:05.058 08:02:47 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:05.058 08:02:47 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:05.058 08:02:47 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:05.058 08:02:47 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:05.058 08:02:47 -- paths/export.sh@5 -- # export PATH 00:04:05.058 08:02:47 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:05.059 08:02:47 -- nvmf/common.sh@51 -- # : 0 00:04:05.059 08:02:47 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:05.059 08:02:47 -- nvmf/common.sh@53 -- # 
build_nvmf_app_args 00:04:05.059 08:02:47 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:05.059 08:02:47 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:05.059 08:02:47 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:05.059 08:02:47 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:05.059 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:05.059 08:02:47 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:05.059 08:02:47 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:05.059 08:02:47 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:05.059 08:02:47 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:04:05.059 08:02:47 -- spdk/autotest.sh@32 -- # uname -s 00:04:05.059 08:02:47 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:04:05.059 08:02:47 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:04:05.059 08:02:47 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:04:05.059 08:02:47 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:04:05.059 08:02:47 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:04:05.059 08:02:47 -- spdk/autotest.sh@44 -- # modprobe nbd 00:04:05.059 08:02:47 -- spdk/autotest.sh@46 -- # type -P udevadm 00:04:05.059 08:02:47 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:04:05.059 08:02:47 -- spdk/autotest.sh@48 -- # udevadm_pid=1150910 00:04:05.059 08:02:47 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:04:05.059 08:02:47 -- pm/common@17 -- # local monitor 00:04:05.059 08:02:47 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:05.059 08:02:47 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:04:05.059 08:02:47 -- pm/common@19 -- # for monitor in 
"${MONITOR_RESOURCES[@]}" 00:04:05.059 08:02:47 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:05.059 08:02:47 -- pm/common@21 -- # date +%s 00:04:05.059 08:02:47 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:05.059 08:02:47 -- pm/common@21 -- # date +%s 00:04:05.059 08:02:47 -- pm/common@25 -- # sleep 1 00:04:05.059 08:02:47 -- pm/common@21 -- # date +%s 00:04:05.059 08:02:47 -- pm/common@21 -- # date +%s 00:04:05.059 08:02:47 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1732777367 00:04:05.059 08:02:47 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1732777367 00:04:05.059 08:02:47 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1732777367 00:04:05.059 08:02:47 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1732777367 00:04:05.318 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1732777367_collect-cpu-temp.pm.log 00:04:05.318 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1732777367_collect-cpu-load.pm.log 00:04:05.318 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1732777367_collect-vmstat.pm.log 00:04:05.318 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1732777367_collect-bmc-pm.bmc.pm.log 00:04:06.255 
08:02:48 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:04:06.255 08:02:48 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:04:06.255 08:02:48 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:06.255 08:02:48 -- common/autotest_common.sh@10 -- # set +x 00:04:06.255 08:02:48 -- spdk/autotest.sh@59 -- # create_test_list 00:04:06.255 08:02:48 -- common/autotest_common.sh@752 -- # xtrace_disable 00:04:06.255 08:02:48 -- common/autotest_common.sh@10 -- # set +x 00:04:06.255 08:02:48 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:04:06.255 08:02:48 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:06.255 08:02:48 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:06.255 08:02:48 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:04:06.255 08:02:48 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:06.255 08:02:48 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:04:06.255 08:02:48 -- common/autotest_common.sh@1457 -- # uname 00:04:06.255 08:02:48 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:04:06.255 08:02:48 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:04:06.255 08:02:48 -- common/autotest_common.sh@1477 -- # uname 00:04:06.255 08:02:48 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:04:06.255 08:02:48 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:04:06.255 08:02:48 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:04:06.255 lcov: LCOV version 1.15 00:04:06.255 08:02:48 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:04:24.349 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:04:24.349 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:04:29.617 08:03:11 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:04:29.617 08:03:11 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:29.617 08:03:11 -- common/autotest_common.sh@10 -- # set +x 00:04:29.617 08:03:11 -- spdk/autotest.sh@78 -- # rm -f 00:04:29.617 08:03:11 -- spdk/autotest.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:32.149 0000:5e:00.0 (8086 0a54): Already using the nvme driver 00:04:32.149 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:04:32.149 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:04:32.149 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:04:32.149 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:04:32.149 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:04:32.149 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:04:32.149 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:04:32.149 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:04:32.149 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:04:32.408 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:04:32.408 0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:04:32.408 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:04:32.408 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:04:32.408 
0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:04:32.408 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:04:32.408 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:04:32.408 08:03:14 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:04:32.408 08:03:14 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:04:32.408 08:03:14 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:04:32.408 08:03:14 -- common/autotest_common.sh@1658 -- # local nvme bdf 00:04:32.408 08:03:14 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:04:32.408 08:03:14 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:04:32.408 08:03:14 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:04:32.408 08:03:14 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:32.408 08:03:14 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:32.408 08:03:14 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:04:32.408 08:03:14 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:32.408 08:03:14 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:32.408 08:03:14 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:04:32.408 08:03:14 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:04:32.408 08:03:14 -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:04:32.666 No valid GPT data, bailing 00:04:32.666 08:03:14 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:32.666 08:03:14 -- scripts/common.sh@394 -- # pt= 00:04:32.666 08:03:14 -- scripts/common.sh@395 -- # return 1 00:04:32.666 08:03:14 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:04:32.666 1+0 records in 00:04:32.666 1+0 records out 00:04:32.666 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00463791 s, 226 MB/s 00:04:32.666 08:03:14 -- spdk/autotest.sh@105 -- # sync 00:04:32.666 08:03:14 -- 
spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:04:32.666 08:03:14 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:04:32.666 08:03:14 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:04:37.939 08:03:19 -- spdk/autotest.sh@111 -- # uname -s 00:04:37.939 08:03:19 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:04:37.939 08:03:19 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:04:37.939 08:03:19 -- spdk/autotest.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:04:40.470 Hugepages 00:04:40.470 node hugesize free / total 00:04:40.470 node0 1048576kB 0 / 0 00:04:40.470 node0 2048kB 1024 / 1024 00:04:40.470 node1 1048576kB 0 / 0 00:04:40.470 node1 2048kB 1024 / 1024 00:04:40.470 00:04:40.470 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:40.470 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:04:40.470 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 00:04:40.470 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:04:40.470 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:04:40.470 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:04:40.470 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:04:40.470 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:04:40.470 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:04:40.470 NVMe 0000:5e:00.0 8086 0a54 0 nvme nvme0 nvme0n1 00:04:40.470 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:04:40.470 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:04:40.470 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:04:40.470 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:04:40.470 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:04:40.470 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:04:40.470 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:04:40.470 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:04:40.470 08:03:22 -- spdk/autotest.sh@117 -- # uname -s 00:04:40.470 08:03:22 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:04:40.470 08:03:22 -- spdk/autotest.sh@119 -- 
# nvme_namespace_revert 00:04:40.470 08:03:22 -- common/autotest_common.sh@1516 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:43.017 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:43.017 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:43.017 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:43.283 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:43.283 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:43.283 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:43.283 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:43.283 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:43.283 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:43.283 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:43.283 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:43.283 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:43.283 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:43.283 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:43.283 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:43.283 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:44.221 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:04:44.221 08:03:26 -- common/autotest_common.sh@1517 -- # sleep 1 00:04:45.158 08:03:27 -- common/autotest_common.sh@1518 -- # bdfs=() 00:04:45.158 08:03:27 -- common/autotest_common.sh@1518 -- # local bdfs 00:04:45.158 08:03:27 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:04:45.158 08:03:27 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:04:45.158 08:03:27 -- common/autotest_common.sh@1498 -- # bdfs=() 00:04:45.158 08:03:27 -- common/autotest_common.sh@1498 -- # local bdfs 00:04:45.158 08:03:27 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:45.158 08:03:27 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:04:45.158 08:03:27 -- common/autotest_common.sh@1499 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:45.158 08:03:27 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:04:45.158 08:03:27 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:5e:00.0 00:04:45.158 08:03:27 -- common/autotest_common.sh@1522 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:47.688 Waiting for block devices as requested 00:04:47.688 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:04:47.688 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:04:47.688 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:04:47.688 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:04:47.688 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:04:47.947 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:04:47.947 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:04:47.947 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:04:47.947 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:04:48.206 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:04:48.206 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:04:48.206 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:04:48.465 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:04:48.465 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:04:48.465 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:04:48.465 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:04:48.724 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:04:48.724 08:03:30 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:04:48.724 08:03:30 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:5e:00.0 00:04:48.724 08:03:30 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 00:04:48.724 08:03:30 -- common/autotest_common.sh@1487 -- # grep 0000:5e:00.0/nvme/nvme 00:04:48.724 08:03:30 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 00:04:48.724 08:03:30 -- common/autotest_common.sh@1488 -- # [[ -z 
/sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 ]] 00:04:48.724 08:03:30 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 00:04:48.724 08:03:30 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:04:48.724 08:03:30 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:04:48.724 08:03:30 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:04:48.724 08:03:30 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:04:48.725 08:03:30 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:04:48.725 08:03:30 -- common/autotest_common.sh@1531 -- # grep oacs 00:04:48.725 08:03:30 -- common/autotest_common.sh@1531 -- # oacs=' 0xe' 00:04:48.725 08:03:30 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:04:48.725 08:03:30 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:04:48.725 08:03:30 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:04:48.725 08:03:30 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:04:48.725 08:03:30 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:04:48.725 08:03:30 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:04:48.725 08:03:30 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:04:48.725 08:03:30 -- common/autotest_common.sh@1543 -- # continue 00:04:48.725 08:03:30 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:04:48.725 08:03:30 -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:48.725 08:03:30 -- common/autotest_common.sh@10 -- # set +x 00:04:48.725 08:03:30 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:04:48.725 08:03:30 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:48.725 08:03:30 -- common/autotest_common.sh@10 -- # set +x 00:04:48.725 08:03:30 -- spdk/autotest.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:51.257 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:51.257 0000:00:04.6 (8086 2021): 
ioatdma -> vfio-pci 00:04:51.257 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:51.257 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:51.517 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:51.517 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:51.517 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:51.517 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:51.517 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:51.517 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:51.517 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:51.517 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:51.517 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:51.517 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:51.517 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:51.517 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:52.456 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:04:52.456 08:03:34 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:04:52.456 08:03:34 -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:52.456 08:03:34 -- common/autotest_common.sh@10 -- # set +x 00:04:52.456 08:03:34 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:04:52.456 08:03:34 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:04:52.456 08:03:34 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:04:52.456 08:03:34 -- common/autotest_common.sh@1563 -- # bdfs=() 00:04:52.456 08:03:34 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:04:52.456 08:03:34 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:04:52.456 08:03:34 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:04:52.456 08:03:34 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:04:52.456 08:03:34 -- common/autotest_common.sh@1498 -- # bdfs=() 00:04:52.456 08:03:34 -- common/autotest_common.sh@1498 -- # local bdfs 00:04:52.456 08:03:34 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r 
'.config[].params.traddr')) 00:04:52.456 08:03:34 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:52.456 08:03:34 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:04:52.456 08:03:34 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:04:52.456 08:03:34 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:5e:00.0 00:04:52.456 08:03:34 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:04:52.456 08:03:34 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:5e:00.0/device 00:04:52.456 08:03:34 -- common/autotest_common.sh@1566 -- # device=0x0a54 00:04:52.456 08:03:34 -- common/autotest_common.sh@1567 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:04:52.456 08:03:34 -- common/autotest_common.sh@1568 -- # bdfs+=($bdf) 00:04:52.456 08:03:34 -- common/autotest_common.sh@1572 -- # (( 1 > 0 )) 00:04:52.456 08:03:34 -- common/autotest_common.sh@1573 -- # printf '%s\n' 0000:5e:00.0 00:04:52.456 08:03:34 -- common/autotest_common.sh@1579 -- # [[ -z 0000:5e:00.0 ]] 00:04:52.456 08:03:34 -- common/autotest_common.sh@1584 -- # spdk_tgt_pid=1165358 00:04:52.456 08:03:34 -- common/autotest_common.sh@1585 -- # waitforlisten 1165358 00:04:52.456 08:03:34 -- common/autotest_common.sh@1583 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:52.456 08:03:34 -- common/autotest_common.sh@835 -- # '[' -z 1165358 ']' 00:04:52.456 08:03:34 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:52.456 08:03:34 -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:52.456 08:03:34 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:52.456 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:04:52.456 08:03:34 -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:52.716 08:03:34 -- common/autotest_common.sh@10 -- # set +x 00:04:52.716 [2024-11-28 08:03:34.772133] Starting SPDK v25.01-pre git sha1 27aaaa748 / DPDK 24.03.0 initialization... 00:04:52.716 [2024-11-28 08:03:34.772179] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1165358 ] 00:04:52.716 [2024-11-28 08:03:34.836833] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:52.716 [2024-11-28 08:03:34.877402] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:52.974 08:03:35 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:52.975 08:03:35 -- common/autotest_common.sh@868 -- # return 0 00:04:52.975 08:03:35 -- common/autotest_common.sh@1587 -- # bdf_id=0 00:04:52.975 08:03:35 -- common/autotest_common.sh@1588 -- # for bdf in "${bdfs[@]}" 00:04:52.975 08:03:35 -- common/autotest_common.sh@1589 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:5e:00.0 00:04:56.277 nvme0n1 00:04:56.277 08:03:38 -- common/autotest_common.sh@1591 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:04:56.277 [2024-11-28 08:03:38.269401] vbdev_opal_rpc.c: 125:rpc_bdev_nvme_opal_revert: *ERROR*: nvme0 not support opal 00:04:56.277 request: 00:04:56.277 { 00:04:56.277 "nvme_ctrlr_name": "nvme0", 00:04:56.277 "password": "test", 00:04:56.277 "method": "bdev_nvme_opal_revert", 00:04:56.277 "req_id": 1 00:04:56.277 } 00:04:56.277 Got JSON-RPC error response 00:04:56.277 response: 00:04:56.277 { 00:04:56.277 "code": -32602, 00:04:56.277 "message": "Invalid parameters" 00:04:56.277 } 00:04:56.277 08:03:38 -- common/autotest_common.sh@1591 -- # true 
00:04:56.277 08:03:38 -- common/autotest_common.sh@1592 -- # (( ++bdf_id )) 00:04:56.277 08:03:38 -- common/autotest_common.sh@1595 -- # killprocess 1165358 00:04:56.277 08:03:38 -- common/autotest_common.sh@954 -- # '[' -z 1165358 ']' 00:04:56.277 08:03:38 -- common/autotest_common.sh@958 -- # kill -0 1165358 00:04:56.277 08:03:38 -- common/autotest_common.sh@959 -- # uname 00:04:56.277 08:03:38 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:56.277 08:03:38 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1165358 00:04:56.277 08:03:38 -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:56.277 08:03:38 -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:56.277 08:03:38 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1165358' 00:04:56.277 killing process with pid 1165358 00:04:56.277 08:03:38 -- common/autotest_common.sh@973 -- # kill 1165358 00:04:56.277 08:03:38 -- common/autotest_common.sh@978 -- # wait 1165358 00:04:58.185 08:03:39 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:04:58.186 08:03:39 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:04:58.186 08:03:39 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:58.186 08:03:39 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:58.186 08:03:39 -- spdk/autotest.sh@149 -- # timing_enter lib 00:04:58.186 08:03:39 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:58.186 08:03:39 -- common/autotest_common.sh@10 -- # set +x 00:04:58.186 08:03:39 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:04:58.186 08:03:39 -- spdk/autotest.sh@155 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:04:58.186 08:03:39 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:58.186 08:03:39 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:58.186 08:03:39 -- common/autotest_common.sh@10 -- # set +x 00:04:58.186 ************************************ 00:04:58.186 START TEST env 00:04:58.186 
************************************ 00:04:58.186 08:03:40 env -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:04:58.186 * Looking for test storage... 00:04:58.186 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:04:58.186 08:03:40 env -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:58.186 08:03:40 env -- common/autotest_common.sh@1693 -- # lcov --version 00:04:58.186 08:03:40 env -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:58.186 08:03:40 env -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:58.186 08:03:40 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:58.186 08:03:40 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:58.186 08:03:40 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:58.186 08:03:40 env -- scripts/common.sh@336 -- # IFS=.-: 00:04:58.186 08:03:40 env -- scripts/common.sh@336 -- # read -ra ver1 00:04:58.186 08:03:40 env -- scripts/common.sh@337 -- # IFS=.-: 00:04:58.186 08:03:40 env -- scripts/common.sh@337 -- # read -ra ver2 00:04:58.186 08:03:40 env -- scripts/common.sh@338 -- # local 'op=<' 00:04:58.186 08:03:40 env -- scripts/common.sh@340 -- # ver1_l=2 00:04:58.186 08:03:40 env -- scripts/common.sh@341 -- # ver2_l=1 00:04:58.186 08:03:40 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:58.186 08:03:40 env -- scripts/common.sh@344 -- # case "$op" in 00:04:58.186 08:03:40 env -- scripts/common.sh@345 -- # : 1 00:04:58.186 08:03:40 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:58.186 08:03:40 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:58.186 08:03:40 env -- scripts/common.sh@365 -- # decimal 1 00:04:58.186 08:03:40 env -- scripts/common.sh@353 -- # local d=1 00:04:58.186 08:03:40 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:58.186 08:03:40 env -- scripts/common.sh@355 -- # echo 1 00:04:58.186 08:03:40 env -- scripts/common.sh@365 -- # ver1[v]=1 00:04:58.186 08:03:40 env -- scripts/common.sh@366 -- # decimal 2 00:04:58.186 08:03:40 env -- scripts/common.sh@353 -- # local d=2 00:04:58.186 08:03:40 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:58.186 08:03:40 env -- scripts/common.sh@355 -- # echo 2 00:04:58.186 08:03:40 env -- scripts/common.sh@366 -- # ver2[v]=2 00:04:58.186 08:03:40 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:58.186 08:03:40 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:58.186 08:03:40 env -- scripts/common.sh@368 -- # return 0 00:04:58.186 08:03:40 env -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:58.186 08:03:40 env -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:58.186 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:58.186 --rc genhtml_branch_coverage=1 00:04:58.186 --rc genhtml_function_coverage=1 00:04:58.186 --rc genhtml_legend=1 00:04:58.186 --rc geninfo_all_blocks=1 00:04:58.186 --rc geninfo_unexecuted_blocks=1 00:04:58.186 00:04:58.186 ' 00:04:58.186 08:03:40 env -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:58.186 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:58.186 --rc genhtml_branch_coverage=1 00:04:58.186 --rc genhtml_function_coverage=1 00:04:58.186 --rc genhtml_legend=1 00:04:58.186 --rc geninfo_all_blocks=1 00:04:58.186 --rc geninfo_unexecuted_blocks=1 00:04:58.186 00:04:58.186 ' 00:04:58.186 08:03:40 env -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:58.186 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:04:58.186 --rc genhtml_branch_coverage=1 00:04:58.186 --rc genhtml_function_coverage=1 00:04:58.186 --rc genhtml_legend=1 00:04:58.186 --rc geninfo_all_blocks=1 00:04:58.186 --rc geninfo_unexecuted_blocks=1 00:04:58.186 00:04:58.186 ' 00:04:58.186 08:03:40 env -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:58.186 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:58.186 --rc genhtml_branch_coverage=1 00:04:58.186 --rc genhtml_function_coverage=1 00:04:58.186 --rc genhtml_legend=1 00:04:58.186 --rc geninfo_all_blocks=1 00:04:58.186 --rc geninfo_unexecuted_blocks=1 00:04:58.186 00:04:58.186 ' 00:04:58.186 08:03:40 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:04:58.186 08:03:40 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:58.186 08:03:40 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:58.186 08:03:40 env -- common/autotest_common.sh@10 -- # set +x 00:04:58.186 ************************************ 00:04:58.186 START TEST env_memory 00:04:58.186 ************************************ 00:04:58.186 08:03:40 env.env_memory -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:04:58.186 00:04:58.186 00:04:58.186 CUnit - A unit testing framework for C - Version 2.1-3 00:04:58.186 http://cunit.sourceforge.net/ 00:04:58.186 00:04:58.186 00:04:58.186 Suite: memory 00:04:58.186 Test: alloc and free memory map ...[2024-11-28 08:03:40.264225] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:58.186 passed 00:04:58.186 Test: mem map translation ...[2024-11-28 08:03:40.283764] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:58.186 [2024-11-28 
08:03:40.283793] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:58.186 [2024-11-28 08:03:40.283829] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:58.186 [2024-11-28 08:03:40.283836] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:58.186 passed 00:04:58.186 Test: mem map registration ...[2024-11-28 08:03:40.322245] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:04:58.186 [2024-11-28 08:03:40.322259] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:04:58.186 passed 00:04:58.186 Test: mem map adjacent registrations ...passed 00:04:58.186 00:04:58.186 Run Summary: Type Total Ran Passed Failed Inactive 00:04:58.186 suites 1 1 n/a 0 0 00:04:58.186 tests 4 4 4 0 0 00:04:58.186 asserts 152 152 152 0 n/a 00:04:58.186 00:04:58.186 Elapsed time = 0.142 seconds 00:04:58.186 00:04:58.186 real 0m0.156s 00:04:58.186 user 0m0.147s 00:04:58.186 sys 0m0.008s 00:04:58.186 08:03:40 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:58.186 08:03:40 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:04:58.187 ************************************ 00:04:58.187 END TEST env_memory 00:04:58.187 ************************************ 00:04:58.187 08:03:40 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:58.187 08:03:40 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 
']' 00:04:58.187 08:03:40 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:58.187 08:03:40 env -- common/autotest_common.sh@10 -- # set +x 00:04:58.187 ************************************ 00:04:58.187 START TEST env_vtophys 00:04:58.187 ************************************ 00:04:58.187 08:03:40 env.env_vtophys -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:58.447 EAL: lib.eal log level changed from notice to debug 00:04:58.447 EAL: Detected lcore 0 as core 0 on socket 0 00:04:58.447 EAL: Detected lcore 1 as core 1 on socket 0 00:04:58.447 EAL: Detected lcore 2 as core 2 on socket 0 00:04:58.447 EAL: Detected lcore 3 as core 3 on socket 0 00:04:58.447 EAL: Detected lcore 4 as core 4 on socket 0 00:04:58.447 EAL: Detected lcore 5 as core 5 on socket 0 00:04:58.447 EAL: Detected lcore 6 as core 6 on socket 0 00:04:58.447 EAL: Detected lcore 7 as core 8 on socket 0 00:04:58.447 EAL: Detected lcore 8 as core 9 on socket 0 00:04:58.447 EAL: Detected lcore 9 as core 10 on socket 0 00:04:58.447 EAL: Detected lcore 10 as core 11 on socket 0 00:04:58.447 EAL: Detected lcore 11 as core 12 on socket 0 00:04:58.447 EAL: Detected lcore 12 as core 13 on socket 0 00:04:58.447 EAL: Detected lcore 13 as core 16 on socket 0 00:04:58.447 EAL: Detected lcore 14 as core 17 on socket 0 00:04:58.447 EAL: Detected lcore 15 as core 18 on socket 0 00:04:58.447 EAL: Detected lcore 16 as core 19 on socket 0 00:04:58.447 EAL: Detected lcore 17 as core 20 on socket 0 00:04:58.447 EAL: Detected lcore 18 as core 21 on socket 0 00:04:58.447 EAL: Detected lcore 19 as core 25 on socket 0 00:04:58.447 EAL: Detected lcore 20 as core 26 on socket 0 00:04:58.447 EAL: Detected lcore 21 as core 27 on socket 0 00:04:58.447 EAL: Detected lcore 22 as core 28 on socket 0 00:04:58.447 EAL: Detected lcore 23 as core 29 on socket 0 00:04:58.447 EAL: Detected lcore 24 as core 0 on socket 1 00:04:58.447 EAL: Detected lcore 25 
as core 1 on socket 1 00:04:58.447 EAL: Detected lcore 26 as core 2 on socket 1 00:04:58.447 EAL: Detected lcore 27 as core 3 on socket 1 00:04:58.447 EAL: Detected lcore 28 as core 4 on socket 1 00:04:58.447 EAL: Detected lcore 29 as core 5 on socket 1 00:04:58.447 EAL: Detected lcore 30 as core 6 on socket 1 00:04:58.447 EAL: Detected lcore 31 as core 9 on socket 1 00:04:58.447 EAL: Detected lcore 32 as core 10 on socket 1 00:04:58.447 EAL: Detected lcore 33 as core 11 on socket 1 00:04:58.447 EAL: Detected lcore 34 as core 12 on socket 1 00:04:58.447 EAL: Detected lcore 35 as core 13 on socket 1 00:04:58.447 EAL: Detected lcore 36 as core 16 on socket 1 00:04:58.447 EAL: Detected lcore 37 as core 17 on socket 1 00:04:58.447 EAL: Detected lcore 38 as core 18 on socket 1 00:04:58.447 EAL: Detected lcore 39 as core 19 on socket 1 00:04:58.447 EAL: Detected lcore 40 as core 20 on socket 1 00:04:58.447 EAL: Detected lcore 41 as core 21 on socket 1 00:04:58.448 EAL: Detected lcore 42 as core 24 on socket 1 00:04:58.448 EAL: Detected lcore 43 as core 25 on socket 1 00:04:58.448 EAL: Detected lcore 44 as core 26 on socket 1 00:04:58.448 EAL: Detected lcore 45 as core 27 on socket 1 00:04:58.448 EAL: Detected lcore 46 as core 28 on socket 1 00:04:58.448 EAL: Detected lcore 47 as core 29 on socket 1 00:04:58.448 EAL: Detected lcore 48 as core 0 on socket 0 00:04:58.448 EAL: Detected lcore 49 as core 1 on socket 0 00:04:58.448 EAL: Detected lcore 50 as core 2 on socket 0 00:04:58.448 EAL: Detected lcore 51 as core 3 on socket 0 00:04:58.448 EAL: Detected lcore 52 as core 4 on socket 0 00:04:58.448 EAL: Detected lcore 53 as core 5 on socket 0 00:04:58.448 EAL: Detected lcore 54 as core 6 on socket 0 00:04:58.448 EAL: Detected lcore 55 as core 8 on socket 0 00:04:58.448 EAL: Detected lcore 56 as core 9 on socket 0 00:04:58.448 EAL: Detected lcore 57 as core 10 on socket 0 00:04:58.448 EAL: Detected lcore 58 as core 11 on socket 0 00:04:58.448 EAL: Detected lcore 59 as core 
12 on socket 0 00:04:58.448 EAL: Detected lcore 60 as core 13 on socket 0 00:04:58.448 EAL: Detected lcore 61 as core 16 on socket 0 00:04:58.448 EAL: Detected lcore 62 as core 17 on socket 0 00:04:58.448 EAL: Detected lcore 63 as core 18 on socket 0 00:04:58.448 EAL: Detected lcore 64 as core 19 on socket 0 00:04:58.448 EAL: Detected lcore 65 as core 20 on socket 0 00:04:58.448 EAL: Detected lcore 66 as core 21 on socket 0 00:04:58.448 EAL: Detected lcore 67 as core 25 on socket 0 00:04:58.448 EAL: Detected lcore 68 as core 26 on socket 0 00:04:58.448 EAL: Detected lcore 69 as core 27 on socket 0 00:04:58.448 EAL: Detected lcore 70 as core 28 on socket 0 00:04:58.448 EAL: Detected lcore 71 as core 29 on socket 0 00:04:58.448 EAL: Detected lcore 72 as core 0 on socket 1 00:04:58.448 EAL: Detected lcore 73 as core 1 on socket 1 00:04:58.448 EAL: Detected lcore 74 as core 2 on socket 1 00:04:58.448 EAL: Detected lcore 75 as core 3 on socket 1 00:04:58.448 EAL: Detected lcore 76 as core 4 on socket 1 00:04:58.448 EAL: Detected lcore 77 as core 5 on socket 1 00:04:58.448 EAL: Detected lcore 78 as core 6 on socket 1 00:04:58.448 EAL: Detected lcore 79 as core 9 on socket 1 00:04:58.448 EAL: Detected lcore 80 as core 10 on socket 1 00:04:58.448 EAL: Detected lcore 81 as core 11 on socket 1 00:04:58.448 EAL: Detected lcore 82 as core 12 on socket 1 00:04:58.448 EAL: Detected lcore 83 as core 13 on socket 1 00:04:58.448 EAL: Detected lcore 84 as core 16 on socket 1 00:04:58.448 EAL: Detected lcore 85 as core 17 on socket 1 00:04:58.448 EAL: Detected lcore 86 as core 18 on socket 1 00:04:58.448 EAL: Detected lcore 87 as core 19 on socket 1 00:04:58.448 EAL: Detected lcore 88 as core 20 on socket 1 00:04:58.448 EAL: Detected lcore 89 as core 21 on socket 1 00:04:58.448 EAL: Detected lcore 90 as core 24 on socket 1 00:04:58.448 EAL: Detected lcore 91 as core 25 on socket 1 00:04:58.448 EAL: Detected lcore 92 as core 26 on socket 1 00:04:58.448 EAL: Detected lcore 93 as core 
27 on socket 1 00:04:58.448 EAL: Detected lcore 94 as core 28 on socket 1 00:04:58.448 EAL: Detected lcore 95 as core 29 on socket 1 00:04:58.448 EAL: Maximum logical cores by configuration: 128 00:04:58.448 EAL: Detected CPU lcores: 96 00:04:58.448 EAL: Detected NUMA nodes: 2 00:04:58.448 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:04:58.448 EAL: Detected shared linkage of DPDK 00:04:58.448 EAL: No shared files mode enabled, IPC will be disabled 00:04:58.448 EAL: Bus pci wants IOVA as 'DC' 00:04:58.448 EAL: Buses did not request a specific IOVA mode. 00:04:58.448 EAL: IOMMU is available, selecting IOVA as VA mode. 00:04:58.448 EAL: Selected IOVA mode 'VA' 00:04:58.448 EAL: Probing VFIO support... 00:04:58.448 EAL: IOMMU type 1 (Type 1) is supported 00:04:58.448 EAL: IOMMU type 7 (sPAPR) is not supported 00:04:58.448 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:04:58.448 EAL: VFIO support initialized 00:04:58.448 EAL: Ask a virtual area of 0x2e000 bytes 00:04:58.448 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:58.448 EAL: Setting up physically contiguous memory... 
00:04:58.448 EAL: Setting maximum number of open files to 524288
00:04:58.448 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152
00:04:58.448 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152
00:04:58.448 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152
00:04:58.448 EAL: Ask a virtual area of 0x61000 bytes
00:04:58.448 EAL: Virtual area found at 0x20000002e000 (size = 0x61000)
00:04:58.448 EAL: Memseg list allocated at socket 0, page size 0x800kB
00:04:58.448 EAL: Ask a virtual area of 0x400000000 bytes
00:04:58.448 EAL: Virtual area found at 0x200000200000 (size = 0x400000000)
00:04:58.448 EAL: VA reserved for memseg list at 0x200000200000, size 400000000
00:04:58.448 EAL: Ask a virtual area of 0x61000 bytes
00:04:58.448 EAL: Virtual area found at 0x200400200000 (size = 0x61000)
00:04:58.448 EAL: Memseg list allocated at socket 0, page size 0x800kB
00:04:58.448 EAL: Ask a virtual area of 0x400000000 bytes
00:04:58.448 EAL: Virtual area found at 0x200400400000 (size = 0x400000000)
00:04:58.448 EAL: VA reserved for memseg list at 0x200400400000, size 400000000
00:04:58.448 EAL: Ask a virtual area of 0x61000 bytes
00:04:58.448 EAL: Virtual area found at 0x200800400000 (size = 0x61000)
00:04:58.448 EAL: Memseg list allocated at socket 0, page size 0x800kB
00:04:58.448 EAL: Ask a virtual area of 0x400000000 bytes
00:04:58.448 EAL: Virtual area found at 0x200800600000 (size = 0x400000000)
00:04:58.448 EAL: VA reserved for memseg list at 0x200800600000, size 400000000
00:04:58.448 EAL: Ask a virtual area of 0x61000 bytes
00:04:58.448 EAL: Virtual area found at 0x200c00600000 (size = 0x61000)
00:04:58.448 EAL: Memseg list allocated at socket 0, page size 0x800kB
00:04:58.448 EAL: Ask a virtual area of 0x400000000 bytes
00:04:58.448 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000)
00:04:58.448 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000
00:04:58.448 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152
00:04:58.448 EAL: Ask a virtual area of 0x61000 bytes
00:04:58.448 EAL: Virtual area found at 0x201000800000 (size = 0x61000)
00:04:58.448 EAL: Memseg list allocated at socket 1, page size 0x800kB
00:04:58.448 EAL: Ask a virtual area of 0x400000000 bytes
00:04:58.448 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000)
00:04:58.448 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000
00:04:58.448 EAL: Ask a virtual area of 0x61000 bytes
00:04:58.448 EAL: Virtual area found at 0x201400a00000 (size = 0x61000)
00:04:58.448 EAL: Memseg list allocated at socket 1, page size 0x800kB
00:04:58.448 EAL: Ask a virtual area of 0x400000000 bytes
00:04:58.448 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000)
00:04:58.448 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000
00:04:58.448 EAL: Ask a virtual area of 0x61000 bytes
00:04:58.448 EAL: Virtual area found at 0x201800c00000 (size = 0x61000)
00:04:58.448 EAL: Memseg list allocated at socket 1, page size 0x800kB
00:04:58.448 EAL: Ask a virtual area of 0x400000000 bytes
00:04:58.448 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000)
00:04:58.448 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000
00:04:58.448 EAL: Ask a virtual area of 0x61000 bytes
00:04:58.448 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000)
00:04:58.448 EAL: Memseg list allocated at socket 1, page size 0x800kB
00:04:58.448 EAL: Ask a virtual area of 0x400000000 bytes
00:04:58.448 EAL: Virtual area found at 0x201c01000000 (size = 0x400000000)
00:04:58.448 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000
00:04:58.448 EAL: Hugepages will be freed exactly as allocated.
00:04:58.448 EAL: No shared files mode enabled, IPC is disabled
00:04:58.448 EAL: No shared files mode enabled, IPC is disabled
00:04:58.448 EAL: TSC frequency is ~2300000 KHz
00:04:58.448 EAL: Main lcore 0 is ready (tid=7f648190aa00;cpuset=[0])
00:04:58.448 EAL: Trying to obtain current memory policy.
00:04:58.448 EAL: Setting policy MPOL_PREFERRED for socket 0
00:04:58.448 EAL: Restoring previous memory policy: 0
00:04:58.448 EAL: request: mp_malloc_sync
00:04:58.448 EAL: No shared files mode enabled, IPC is disabled
00:04:58.448 EAL: Heap on socket 0 was expanded by 2MB
00:04:58.448 EAL: No shared files mode enabled, IPC is disabled
00:04:58.448 EAL: No PCI address specified using 'addr=' in: bus=pci
00:04:58.448 EAL: Mem event callback 'spdk:(nil)' registered
00:04:58.448
00:04:58.448
00:04:58.448 CUnit - A unit testing framework for C - Version 2.1-3
00:04:58.448 http://cunit.sourceforge.net/
00:04:58.448
00:04:58.448
00:04:58.448 Suite: components_suite
00:04:58.448 Test: vtophys_malloc_test ...passed
00:04:58.448 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy.
00:04:58.448 EAL: Setting policy MPOL_PREFERRED for socket 0
00:04:58.448 EAL: Restoring previous memory policy: 4
00:04:58.448 EAL: Calling mem event callback 'spdk:(nil)'
00:04:58.448 EAL: request: mp_malloc_sync
00:04:58.448 EAL: No shared files mode enabled, IPC is disabled
00:04:58.448 EAL: Heap on socket 0 was expanded by 4MB
00:04:58.448 EAL: Calling mem event callback 'spdk:(nil)'
00:04:58.448 EAL: request: mp_malloc_sync
00:04:58.448 EAL: No shared files mode enabled, IPC is disabled
00:04:58.448 EAL: Heap on socket 0 was shrunk by 4MB
00:04:58.448 EAL: Trying to obtain current memory policy.
00:04:58.448 EAL: Setting policy MPOL_PREFERRED for socket 0
00:04:58.448 EAL: Restoring previous memory policy: 4
00:04:58.448 EAL: Calling mem event callback 'spdk:(nil)'
00:04:58.448 EAL: request: mp_malloc_sync
00:04:58.448 EAL: No shared files mode enabled, IPC is disabled
00:04:58.448 EAL: Heap on socket 0 was expanded by 6MB
00:04:58.448 EAL: Calling mem event callback 'spdk:(nil)'
00:04:58.448 EAL: request: mp_malloc_sync
00:04:58.448 EAL: No shared files mode enabled, IPC is disabled
00:04:58.448 EAL: Heap on socket 0 was shrunk by 6MB
00:04:58.448 EAL: Trying to obtain current memory policy.
00:04:58.448 EAL: Setting policy MPOL_PREFERRED for socket 0
00:04:58.448 EAL: Restoring previous memory policy: 4
00:04:58.448 EAL: Calling mem event callback 'spdk:(nil)'
00:04:58.448 EAL: request: mp_malloc_sync
00:04:58.448 EAL: No shared files mode enabled, IPC is disabled
00:04:58.448 EAL: Heap on socket 0 was expanded by 10MB
00:04:58.448 EAL: Calling mem event callback 'spdk:(nil)'
00:04:58.448 EAL: request: mp_malloc_sync
00:04:58.448 EAL: No shared files mode enabled, IPC is disabled
00:04:58.448 EAL: Heap on socket 0 was shrunk by 10MB
00:04:58.448 EAL: Trying to obtain current memory policy.
00:04:58.448 EAL: Setting policy MPOL_PREFERRED for socket 0
00:04:58.448 EAL: Restoring previous memory policy: 4
00:04:58.448 EAL: Calling mem event callback 'spdk:(nil)'
00:04:58.449 EAL: request: mp_malloc_sync
00:04:58.449 EAL: No shared files mode enabled, IPC is disabled
00:04:58.449 EAL: Heap on socket 0 was expanded by 18MB
00:04:58.449 EAL: Calling mem event callback 'spdk:(nil)'
00:04:58.449 EAL: request: mp_malloc_sync
00:04:58.449 EAL: No shared files mode enabled, IPC is disabled
00:04:58.449 EAL: Heap on socket 0 was shrunk by 18MB
00:04:58.449 EAL: Trying to obtain current memory policy.
00:04:58.449 EAL: Setting policy MPOL_PREFERRED for socket 0
00:04:58.449 EAL: Restoring previous memory policy: 4
00:04:58.449 EAL: Calling mem event callback 'spdk:(nil)'
00:04:58.449 EAL: request: mp_malloc_sync
00:04:58.449 EAL: No shared files mode enabled, IPC is disabled
00:04:58.449 EAL: Heap on socket 0 was expanded by 34MB
00:04:58.449 EAL: Calling mem event callback 'spdk:(nil)'
00:04:58.449 EAL: request: mp_malloc_sync
00:04:58.449 EAL: No shared files mode enabled, IPC is disabled
00:04:58.449 EAL: Heap on socket 0 was shrunk by 34MB
00:04:58.449 EAL: Trying to obtain current memory policy.
00:04:58.449 EAL: Setting policy MPOL_PREFERRED for socket 0
00:04:58.449 EAL: Restoring previous memory policy: 4
00:04:58.449 EAL: Calling mem event callback 'spdk:(nil)'
00:04:58.449 EAL: request: mp_malloc_sync
00:04:58.449 EAL: No shared files mode enabled, IPC is disabled
00:04:58.449 EAL: Heap on socket 0 was expanded by 66MB
00:04:58.449 EAL: Calling mem event callback 'spdk:(nil)'
00:04:58.449 EAL: request: mp_malloc_sync
00:04:58.449 EAL: No shared files mode enabled, IPC is disabled
00:04:58.449 EAL: Heap on socket 0 was shrunk by 66MB
00:04:58.449 EAL: Trying to obtain current memory policy.
00:04:58.449 EAL: Setting policy MPOL_PREFERRED for socket 0
00:04:58.449 EAL: Restoring previous memory policy: 4
00:04:58.449 EAL: Calling mem event callback 'spdk:(nil)'
00:04:58.449 EAL: request: mp_malloc_sync
00:04:58.449 EAL: No shared files mode enabled, IPC is disabled
00:04:58.449 EAL: Heap on socket 0 was expanded by 130MB
00:04:58.449 EAL: Calling mem event callback 'spdk:(nil)'
00:04:58.449 EAL: request: mp_malloc_sync
00:04:58.449 EAL: No shared files mode enabled, IPC is disabled
00:04:58.449 EAL: Heap on socket 0 was shrunk by 130MB
00:04:58.449 EAL: Trying to obtain current memory policy.
00:04:58.449 EAL: Setting policy MPOL_PREFERRED for socket 0
00:04:58.449 EAL: Restoring previous memory policy: 4
00:04:58.449 EAL: Calling mem event callback 'spdk:(nil)'
00:04:58.449 EAL: request: mp_malloc_sync
00:04:58.449 EAL: No shared files mode enabled, IPC is disabled
00:04:58.449 EAL: Heap on socket 0 was expanded by 258MB
00:04:58.709 EAL: Calling mem event callback 'spdk:(nil)'
00:04:58.709 EAL: request: mp_malloc_sync
00:04:58.709 EAL: No shared files mode enabled, IPC is disabled
00:04:58.709 EAL: Heap on socket 0 was shrunk by 258MB
00:04:58.709 EAL: Trying to obtain current memory policy.
00:04:58.709 EAL: Setting policy MPOL_PREFERRED for socket 0
00:04:58.709 EAL: Restoring previous memory policy: 4
00:04:58.709 EAL: Calling mem event callback 'spdk:(nil)'
00:04:58.709 EAL: request: mp_malloc_sync
00:04:58.709 EAL: No shared files mode enabled, IPC is disabled
00:04:58.709 EAL: Heap on socket 0 was expanded by 514MB
00:04:58.709 EAL: Calling mem event callback 'spdk:(nil)'
00:04:58.969 EAL: request: mp_malloc_sync
00:04:58.969 EAL: No shared files mode enabled, IPC is disabled
00:04:58.969 EAL: Heap on socket 0 was shrunk by 514MB
00:04:58.969 EAL: Trying to obtain current memory policy.
00:04:58.969 EAL: Setting policy MPOL_PREFERRED for socket 0
00:04:58.969 EAL: Restoring previous memory policy: 4
00:04:58.969 EAL: Calling mem event callback 'spdk:(nil)'
00:04:58.969 EAL: request: mp_malloc_sync
00:04:58.969 EAL: No shared files mode enabled, IPC is disabled
00:04:58.969 EAL: Heap on socket 0 was expanded by 1026MB
00:04:59.228 EAL: Calling mem event callback 'spdk:(nil)'
00:04:59.489 EAL: request: mp_malloc_sync
00:04:59.489 EAL: No shared files mode enabled, IPC is disabled
00:04:59.489 EAL: Heap on socket 0 was shrunk by 1026MB
00:04:59.489 passed
00:04:59.489
00:04:59.489 Run Summary: Type Total Ran Passed Failed Inactive
00:04:59.489 suites 1 1 n/a 0 0
00:04:59.489 tests 2 2 2 0 0
00:04:59.489 asserts 497 497 497 0 n/a
00:04:59.489
00:04:59.489 Elapsed time = 0.970 seconds
00:04:59.489 EAL: Calling mem event callback 'spdk:(nil)'
00:04:59.489 EAL: request: mp_malloc_sync
00:04:59.489 EAL: No shared files mode enabled, IPC is disabled
00:04:59.489 EAL: Heap on socket 0 was shrunk by 2MB
00:04:59.489 EAL: No shared files mode enabled, IPC is disabled
00:04:59.489 EAL: No shared files mode enabled, IPC is disabled
00:04:59.489 EAL: No shared files mode enabled, IPC is disabled
00:04:59.489
00:04:59.489 real 0m1.091s
00:04:59.489 user 0m0.646s
00:04:59.489 sys 0m0.415s
00:04:59.489 08:03:41 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable
00:04:59.489 08:03:41 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x
00:04:59.489 ************************************
00:04:59.489 END TEST env_vtophys
00:04:59.489 ************************************
00:04:59.489 08:03:41 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut
00:04:59.489 08:03:41 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:04:59.489 08:03:41 env -- common/autotest_common.sh@1111 -- # xtrace_disable
00:04:59.489 08:03:41 env -- common/autotest_common.sh@10 -- # set +x
00:04:59.489 ************************************
00:04:59.489 START TEST env_pci
00:04:59.489 ************************************
00:04:59.489 08:03:41 env.env_pci -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut
00:04:59.489
00:04:59.489
00:04:59.489 CUnit - A unit testing framework for C - Version 2.1-3
00:04:59.489 http://cunit.sourceforge.net/
00:04:59.489
00:04:59.489
00:04:59.489 Suite: pci
00:04:59.489 Test: pci_hook ...[2024-11-28 08:03:41.619474] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 1166684 has claimed it
00:04:59.489 EAL: Cannot find device (10000:00:01.0)
00:04:59.489 EAL: Failed to attach device on primary process
00:04:59.489 passed
00:04:59.489
00:04:59.489 Run Summary: Type Total Ran Passed Failed Inactive
00:04:59.489 suites 1 1 n/a 0 0
00:04:59.489 tests 1 1 1 0 0
00:04:59.489 asserts 25 25 25 0 n/a
00:04:59.489
00:04:59.489 Elapsed time = 0.026 seconds
00:04:59.489
00:04:59.489 real 0m0.046s
00:04:59.489 user 0m0.010s
00:04:59.489 sys 0m0.035s
00:04:59.489 08:03:41 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable
00:04:59.489 08:03:41 env.env_pci -- common/autotest_common.sh@10 -- # set +x
00:04:59.489 ************************************
00:04:59.489 END TEST env_pci
00:04:59.489 ************************************
00:04:59.489 08:03:41 env -- env/env.sh@14 -- # argv='-c 0x1 '
00:04:59.489 08:03:41 env -- env/env.sh@15 -- # uname
00:04:59.489 08:03:41 env -- env/env.sh@15 -- # '[' Linux = Linux ']'
00:04:59.489 08:03:41 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000
00:04:59.489 08:03:41 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000
00:04:59.489 08:03:41 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']'
00:04:59.489 08:03:41 env -- common/autotest_common.sh@1111 -- # xtrace_disable
00:04:59.489 08:03:41 env -- common/autotest_common.sh@10 -- # set +x
00:04:59.489 ************************************
00:04:59.489 START TEST env_dpdk_post_init
00:04:59.489 ************************************
00:04:59.489 08:03:41 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 EAL: Detected CPU lcores: 96
00:04:59.749 EAL: Detected NUMA nodes: 2
00:04:59.749 EAL: Detected shared linkage of DPDK
00:04:59.749 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
00:04:59.749 EAL: Selected IOVA mode 'VA'
00:04:59.749 EAL: VFIO support initialized
00:04:59.749 TELEMETRY: No legacy callbacks, legacy socket not created
00:04:59.749 EAL: Using IOMMU type 1 (Type 1)
00:04:59.749 EAL: Ignore mapping IO port bar(1)
00:04:59.749 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.0 (socket 0)
00:04:59.749 EAL: Ignore mapping IO port bar(1)
00:04:59.749 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.1 (socket 0)
00:04:59.749 EAL: Ignore mapping IO port bar(1)
00:04:59.749 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.2 (socket 0)
00:04:59.749 EAL: Ignore mapping IO port bar(1)
00:04:59.749 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.3 (socket 0)
00:04:59.749 EAL: Ignore mapping IO port bar(1)
00:04:59.749 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.4 (socket 0)
00:04:59.749 EAL: Ignore mapping IO port bar(1)
00:04:59.749 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.5 (socket 0)
00:04:59.749 EAL: Ignore mapping IO port bar(1)
00:04:59.749 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.6 (socket 0)
00:04:59.749 EAL: Ignore mapping IO port bar(1)
00:04:59.749 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.7 (socket 0)
00:05:00.688 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:5e:00.0 (socket 0)
00:05:00.688 EAL: Ignore mapping IO port bar(1)
00:05:00.688 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.0 (socket 1)
00:05:00.688 EAL: Ignore mapping IO port bar(1)
00:05:00.688 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.1 (socket 1)
00:05:00.688 EAL: Ignore mapping IO port bar(1)
00:05:00.688 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.2 (socket 1)
00:05:00.688 EAL: Ignore mapping IO port bar(1)
00:05:00.688 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.3 (socket 1)
00:05:00.688 EAL: Ignore mapping IO port bar(1)
00:05:00.688 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.4 (socket 1)
00:05:00.688 EAL: Ignore mapping IO port bar(1)
00:05:00.688 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.5 (socket 1)
00:05:00.688 EAL: Ignore mapping IO port bar(1)
00:05:00.688 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.6 (socket 1)
00:05:00.688 EAL: Ignore mapping IO port bar(1)
00:05:00.688 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.7 (socket 1)
00:05:03.979 EAL: Releasing PCI mapped resource for 0000:5e:00.0
00:05:03.979 EAL: Calling pci_unmap_resource for 0000:5e:00.0 at 0x202001020000
00:05:03.979 Starting DPDK initialization...
00:05:03.979 Starting SPDK post initialization...
00:05:03.979 SPDK NVMe probe
00:05:03.979 Attaching to 0000:5e:00.0
00:05:03.979 Attached to 0000:5e:00.0
00:05:03.979 Cleaning up...
00:05:03.979
00:05:03.979 real 0m4.313s
00:05:03.979 user 0m2.965s
00:05:03.979 sys 0m0.426s
00:05:03.979 08:03:46 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:03.979 08:03:46 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x
00:05:03.979 ************************************
00:05:03.979 END TEST env_dpdk_post_init
00:05:03.979 ************************************
00:05:03.979 08:03:46 env -- env/env.sh@26 -- # uname
00:05:03.979 08:03:46 env -- env/env.sh@26 -- # '[' Linux = Linux ']'
00:05:03.979 08:03:46 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks
00:05:03.979 08:03:46 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:05:03.979 08:03:46 env -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:03.979 08:03:46 env -- common/autotest_common.sh@10 -- # set +x
00:05:03.979 ************************************
00:05:03.979 START TEST env_mem_callbacks
00:05:03.979 ************************************
00:05:03.979 08:03:46 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks
00:05:03.979 EAL: Detected CPU lcores: 96
00:05:03.979 EAL: Detected NUMA nodes: 2
00:05:03.979 EAL: Detected shared linkage of DPDK
00:05:03.979 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
00:05:03.979 EAL: Selected IOVA mode 'VA'
00:05:03.979 EAL: VFIO support initialized
00:05:03.979 TELEMETRY: No legacy callbacks, legacy socket not created
00:05:03.979
00:05:03.979
00:05:03.980 CUnit - A unit testing framework for C - Version 2.1-3
00:05:03.980 http://cunit.sourceforge.net/
00:05:03.980
00:05:03.980
00:05:03.980 Suite: memory
00:05:03.980 Test: test ...
00:05:03.980 register 0x200000200000 2097152
00:05:03.980 malloc 3145728
00:05:03.980 register 0x200000400000 4194304
00:05:03.980 buf 0x200000500000 len 3145728 PASSED
00:05:03.980 malloc 64
00:05:03.980 buf 0x2000004fff40 len 64 PASSED
00:05:03.980 malloc 4194304
00:05:03.980 register 0x200000800000 6291456
00:05:03.980 buf 0x200000a00000 len 4194304 PASSED
00:05:03.980 free 0x200000500000 3145728
00:05:03.980 free 0x2000004fff40 64
00:05:03.980 unregister 0x200000400000 4194304 PASSED
00:05:03.980 free 0x200000a00000 4194304
00:05:03.980 unregister 0x200000800000 6291456 PASSED
00:05:03.980 malloc 8388608
00:05:03.980 register 0x200000400000 10485760
00:05:03.980 buf 0x200000600000 len 8388608 PASSED
00:05:03.980 free 0x200000600000 8388608
00:05:03.980 unregister 0x200000400000 10485760 PASSED
00:05:03.980 passed
00:05:03.980
00:05:03.980 Run Summary: Type Total Ran Passed Failed Inactive
00:05:03.980 suites 1 1 n/a 0 0
00:05:03.980 tests 1 1 1 0 0
00:05:03.980 asserts 15 15 15 0 n/a
00:05:03.980
00:05:03.980 Elapsed time = 0.005 seconds
00:05:03.980
00:05:03.980 real 0m0.052s
00:05:03.980 user 0m0.018s
00:05:03.980 sys 0m0.034s
00:05:03.980 08:03:46 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:03.980 08:03:46 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x
00:05:03.980 ************************************
00:05:03.980 END TEST env_mem_callbacks
00:05:03.980 ************************************
00:05:03.980
00:05:03.980 real 0m6.164s
00:05:03.980 user 0m4.031s
00:05:03.980 sys 0m1.206s
00:05:03.980 08:03:46 env -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:03.980 08:03:46 env -- common/autotest_common.sh@10 -- # set +x
00:05:03.980 ************************************
00:05:03.980 END TEST env
00:05:03.980 ************************************
00:05:03.980 08:03:46 -- spdk/autotest.sh@156 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh
00:05:03.980 08:03:46 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:05:03.980 08:03:46 -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:03.980 08:03:46 -- common/autotest_common.sh@10 -- # set +x
00:05:04.240 ************************************
00:05:04.240 START TEST rpc
00:05:04.240 ************************************
00:05:04.240 08:03:46 rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh
00:05:04.240 * Looking for test storage...
00:05:04.240 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc
00:05:04.240 08:03:46 rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:05:04.240 08:03:46 rpc -- common/autotest_common.sh@1693 -- # lcov --version
00:05:04.240 08:03:46 rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:05:04.240 08:03:46 rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:05:04.240 08:03:46 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:05:04.240 08:03:46 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l
00:05:04.240 08:03:46 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l
00:05:04.240 08:03:46 rpc -- scripts/common.sh@336 -- # IFS=.-:
00:05:04.240 08:03:46 rpc -- scripts/common.sh@336 -- # read -ra ver1
00:05:04.240 08:03:46 rpc -- scripts/common.sh@337 -- # IFS=.-:
00:05:04.240 08:03:46 rpc -- scripts/common.sh@337 -- # read -ra ver2
00:05:04.240 08:03:46 rpc -- scripts/common.sh@338 -- # local 'op=<'
00:05:04.240 08:03:46 rpc -- scripts/common.sh@340 -- # ver1_l=2
00:05:04.240 08:03:46 rpc -- scripts/common.sh@341 -- # ver2_l=1
00:05:04.240 08:03:46 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:05:04.240 08:03:46 rpc -- scripts/common.sh@344 -- # case "$op" in
00:05:04.240 08:03:46 rpc -- scripts/common.sh@345 -- # : 1
00:05:04.240 08:03:46 rpc -- scripts/common.sh@364 -- # (( v = 0 ))
00:05:04.240 08:03:46 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:05:04.240 08:03:46 rpc -- scripts/common.sh@365 -- # decimal 1
00:05:04.240 08:03:46 rpc -- scripts/common.sh@353 -- # local d=1
00:05:04.240 08:03:46 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:05:04.240 08:03:46 rpc -- scripts/common.sh@355 -- # echo 1
00:05:04.240 08:03:46 rpc -- scripts/common.sh@365 -- # ver1[v]=1
00:05:04.240 08:03:46 rpc -- scripts/common.sh@366 -- # decimal 2
00:05:04.240 08:03:46 rpc -- scripts/common.sh@353 -- # local d=2
00:05:04.240 08:03:46 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:05:04.240 08:03:46 rpc -- scripts/common.sh@355 -- # echo 2
00:05:04.240 08:03:46 rpc -- scripts/common.sh@366 -- # ver2[v]=2
00:05:04.240 08:03:46 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:05:04.240 08:03:46 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:05:04.240 08:03:46 rpc -- scripts/common.sh@368 -- # return 0
00:05:04.240 08:03:46 rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:05:04.240 08:03:46 rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:05:04.240 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:04.240 --rc genhtml_branch_coverage=1
00:05:04.240 --rc genhtml_function_coverage=1
00:05:04.240 --rc genhtml_legend=1
00:05:04.240 --rc geninfo_all_blocks=1
00:05:04.240 --rc geninfo_unexecuted_blocks=1
00:05:04.240
00:05:04.240 '
00:05:04.240 08:03:46 rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:05:04.240 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:04.240 --rc genhtml_branch_coverage=1
00:05:04.240 --rc genhtml_function_coverage=1
00:05:04.240 --rc genhtml_legend=1
00:05:04.240 --rc geninfo_all_blocks=1
00:05:04.240 --rc geninfo_unexecuted_blocks=1
00:05:04.240
00:05:04.240 '
00:05:04.240 08:03:46 rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov
00:05:04.240 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:04.240 --rc genhtml_branch_coverage=1
00:05:04.240 --rc genhtml_function_coverage=1
00:05:04.240 --rc genhtml_legend=1
00:05:04.240 --rc geninfo_all_blocks=1
00:05:04.240 --rc geninfo_unexecuted_blocks=1
00:05:04.240
00:05:04.240 '
00:05:04.240 08:03:46 rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov
00:05:04.240 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:04.240 --rc genhtml_branch_coverage=1
00:05:04.240 --rc genhtml_function_coverage=1
00:05:04.240 --rc genhtml_legend=1
00:05:04.240 --rc geninfo_all_blocks=1
00:05:04.240 --rc geninfo_unexecuted_blocks=1
00:05:04.240
00:05:04.241 '
00:05:04.241 08:03:46 rpc -- rpc/rpc.sh@65 -- # spdk_pid=1167515
00:05:04.241 08:03:46 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT
00:05:04.241 08:03:46 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev
00:05:04.241 08:03:46 rpc -- rpc/rpc.sh@67 -- # waitforlisten 1167515
00:05:04.241 08:03:46 rpc -- common/autotest_common.sh@835 -- # '[' -z 1167515 ']'
00:05:04.241 08:03:46 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:04.241 08:03:46 rpc -- common/autotest_common.sh@840 -- # local max_retries=100
00:05:04.241 08:03:46 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:05:04.241 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:05:04.241 08:03:46 rpc -- common/autotest_common.sh@844 -- # xtrace_disable
00:05:04.241 08:03:46 rpc -- common/autotest_common.sh@10 -- # set +x
00:05:04.241 [2024-11-28 08:03:46.476606] Starting SPDK v25.01-pre git sha1 27aaaa748 / DPDK 24.03.0 initialization...
00:05:04.241 [2024-11-28 08:03:46.476652] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1167515 ]
00:05:04.500 [2024-11-28 08:03:46.538617] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:04.500 [2024-11-28 08:03:46.577944] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified.
00:05:04.500 [2024-11-28 08:03:46.577986] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 1167515' to capture a snapshot of events at runtime.
00:05:04.500 [2024-11-28 08:03:46.577996] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:05:04.500 [2024-11-28 08:03:46.578003] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:05:04.500 [2024-11-28 08:03:46.578008] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid1167515 for offline analysis/debug.
00:05:04.500 [2024-11-28 08:03:46.578571] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:05:04.760 08:03:46 rpc -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:05:04.760 08:03:46 rpc -- common/autotest_common.sh@868 -- # return 0
00:05:04.760 08:03:46 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc
00:05:04.760 08:03:46 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc
00:05:04.760 08:03:46 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd
00:05:04.760 08:03:46 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity
00:05:04.760 08:03:46 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:05:04.760 08:03:46 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:04.760 08:03:46 rpc -- common/autotest_common.sh@10 -- # set +x
00:05:04.761 ************************************
00:05:04.761 START TEST rpc_integrity
00:05:04.761 ************************************
00:05:04.761 08:03:46 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity
00:05:04.761 08:03:46 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs
00:05:04.761 08:03:46 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:04.761 08:03:46 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:05:04.761 08:03:46 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:04.761 08:03:46 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]'
00:05:04.761 08:03:46 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length
00:05:04.761 08:03:46 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']'
00:05:04.761 08:03:46 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512
00:05:04.761 08:03:46 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:04.761 08:03:46 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:05:04.761 08:03:46 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:04.761 08:03:46 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0
00:05:04.761 08:03:46 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs
00:05:04.761 08:03:46 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:04.761 08:03:46 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:05:04.761 08:03:46 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:04.761 08:03:46 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[
00:05:04.761 {
00:05:04.761 "name": "Malloc0",
00:05:04.761 "aliases": [
00:05:04.761 "18786092-940d-423a-8027-696a85f136c7"
00:05:04.761 ],
00:05:04.761 "product_name": "Malloc disk",
00:05:04.761 "block_size": 512,
00:05:04.761 "num_blocks": 16384,
00:05:04.761 "uuid": "18786092-940d-423a-8027-696a85f136c7",
00:05:04.761 "assigned_rate_limits": {
00:05:04.761 "rw_ios_per_sec": 0,
00:05:04.761 "rw_mbytes_per_sec": 0,
00:05:04.761 "r_mbytes_per_sec": 0,
00:05:04.761 "w_mbytes_per_sec": 0
00:05:04.761 },
00:05:04.761 "claimed": false,
00:05:04.761 "zoned": false,
00:05:04.761 "supported_io_types": {
00:05:04.761 "read": true,
00:05:04.761 "write": true,
00:05:04.761 "unmap": true,
00:05:04.761 "flush": true,
00:05:04.761 "reset": true,
00:05:04.761 "nvme_admin": false,
00:05:04.761 "nvme_io": false,
00:05:04.761 "nvme_io_md": false,
00:05:04.761 "write_zeroes": true,
00:05:04.761 "zcopy": true,
00:05:04.761 "get_zone_info": false,
00:05:04.761 "zone_management": false,
00:05:04.761 "zone_append": false,
00:05:04.761 "compare": false,
00:05:04.761 "compare_and_write": false,
00:05:04.761 "abort": true,
00:05:04.761 "seek_hole": false,
00:05:04.761 "seek_data": false,
00:05:04.761 "copy": true,
00:05:04.761 "nvme_iov_md": false
00:05:04.761 },
00:05:04.761 "memory_domains": [
00:05:04.761 {
00:05:04.761 "dma_device_id": "system",
00:05:04.761 "dma_device_type": 1
00:05:04.761 },
00:05:04.761 {
00:05:04.761 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:05:04.761 "dma_device_type": 2
00:05:04.761 }
00:05:04.761 ],
00:05:04.761 "driver_specific": {}
00:05:04.761 }
00:05:04.761 ]'
00:05:04.761 08:03:46 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length
00:05:04.761 08:03:46 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']'
00:05:04.761 08:03:46 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0
00:05:04.761 08:03:46 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:04.761 08:03:46 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:05:04.761 [2024-11-28 08:03:46.946724] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc0
00:05:04.761 [2024-11-28 08:03:46.946755] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:05:04.761 [2024-11-28 08:03:46.946767] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xb38280
00:05:04.761 [2024-11-28 08:03:46.946773] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed
00:05:04.761 [2024-11-28 08:03:46.947883] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:05:04.761 [2024-11-28 08:03:46.947907] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0
00:05:04.761 Passthru0
00:05:04.761 08:03:46 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:04.761 08:03:46 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs
00:05:04.761 08:03:46 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:04.761 08:03:46 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:05:04.761 08:03:46 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:04.761 08:03:46 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[
00:05:04.761 {
00:05:04.761 "name": "Malloc0",
00:05:04.761 "aliases": [
00:05:04.761 "18786092-940d-423a-8027-696a85f136c7"
00:05:04.761 ],
00:05:04.761 "product_name": "Malloc disk",
00:05:04.761 "block_size": 512,
00:05:04.761 "num_blocks": 16384,
00:05:04.761 "uuid": "18786092-940d-423a-8027-696a85f136c7",
00:05:04.761 "assigned_rate_limits": {
00:05:04.761 "rw_ios_per_sec": 0,
00:05:04.761 "rw_mbytes_per_sec": 0,
00:05:04.761 "r_mbytes_per_sec": 0,
00:05:04.761 "w_mbytes_per_sec": 0
00:05:04.761 },
00:05:04.761 "claimed": true,
00:05:04.761 "claim_type": "exclusive_write",
00:05:04.761 "zoned": false,
00:05:04.761 "supported_io_types": {
00:05:04.761 "read": true,
00:05:04.761 "write": true,
00:05:04.761 "unmap": true,
00:05:04.761 "flush": true,
00:05:04.761 "reset": true,
00:05:04.761 "nvme_admin": false,
00:05:04.761 "nvme_io": false,
00:05:04.761 "nvme_io_md": false,
00:05:04.761 "write_zeroes": true,
00:05:04.761 "zcopy": true,
00:05:04.761 "get_zone_info": false,
00:05:04.761 "zone_management": false,
00:05:04.761 "zone_append": false,
00:05:04.761 "compare": false,
00:05:04.761 "compare_and_write": false,
00:05:04.761 "abort": true,
00:05:04.761 "seek_hole": false,
00:05:04.761 "seek_data": false,
00:05:04.761 "copy": true,
00:05:04.761 "nvme_iov_md": false
00:05:04.761 },
00:05:04.761 "memory_domains": [
00:05:04.761 {
00:05:04.761 "dma_device_id": "system",
00:05:04.761 "dma_device_type": 1
00:05:04.761 },
00:05:04.761 {
00:05:04.761 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:05:04.761 "dma_device_type": 2
00:05:04.761 }
00:05:04.761 ],
00:05:04.761 "driver_specific": {}
00:05:04.761 },
00:05:04.761 {
00:05:04.761 "name": "Passthru0", 00:05:04.761 "aliases": [ 00:05:04.761 "6acff6dd-77f7-57f0-9d60-1679c2d03288" 00:05:04.761 ], 00:05:04.761 "product_name": "passthru", 00:05:04.761 "block_size": 512, 00:05:04.761 "num_blocks": 16384, 00:05:04.762 "uuid": "6acff6dd-77f7-57f0-9d60-1679c2d03288", 00:05:04.762 "assigned_rate_limits": { 00:05:04.762 "rw_ios_per_sec": 0, 00:05:04.762 "rw_mbytes_per_sec": 0, 00:05:04.762 "r_mbytes_per_sec": 0, 00:05:04.762 "w_mbytes_per_sec": 0 00:05:04.762 }, 00:05:04.762 "claimed": false, 00:05:04.762 "zoned": false, 00:05:04.762 "supported_io_types": { 00:05:04.762 "read": true, 00:05:04.762 "write": true, 00:05:04.762 "unmap": true, 00:05:04.762 "flush": true, 00:05:04.762 "reset": true, 00:05:04.762 "nvme_admin": false, 00:05:04.762 "nvme_io": false, 00:05:04.762 "nvme_io_md": false, 00:05:04.762 "write_zeroes": true, 00:05:04.762 "zcopy": true, 00:05:04.762 "get_zone_info": false, 00:05:04.762 "zone_management": false, 00:05:04.762 "zone_append": false, 00:05:04.762 "compare": false, 00:05:04.762 "compare_and_write": false, 00:05:04.762 "abort": true, 00:05:04.762 "seek_hole": false, 00:05:04.762 "seek_data": false, 00:05:04.762 "copy": true, 00:05:04.762 "nvme_iov_md": false 00:05:04.762 }, 00:05:04.762 "memory_domains": [ 00:05:04.762 { 00:05:04.762 "dma_device_id": "system", 00:05:04.762 "dma_device_type": 1 00:05:04.762 }, 00:05:04.762 { 00:05:04.762 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:04.762 "dma_device_type": 2 00:05:04.762 } 00:05:04.762 ], 00:05:04.762 "driver_specific": { 00:05:04.762 "passthru": { 00:05:04.762 "name": "Passthru0", 00:05:04.762 "base_bdev_name": "Malloc0" 00:05:04.762 } 00:05:04.762 } 00:05:04.762 } 00:05:04.762 ]' 00:05:04.762 08:03:46 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:04.762 08:03:47 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:04.762 08:03:47 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:04.762 08:03:47 
rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:04.762 08:03:47 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:04.762 08:03:47 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:04.762 08:03:47 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:05:04.762 08:03:47 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:04.762 08:03:47 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:04.762 08:03:47 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:04.762 08:03:47 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:04.762 08:03:47 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:04.762 08:03:47 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:05.021 08:03:47 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:05.021 08:03:47 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:05.021 08:03:47 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:05.021 08:03:47 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:05.021 00:05:05.021 real 0m0.250s 00:05:05.021 user 0m0.153s 00:05:05.021 sys 0m0.038s 00:05:05.021 08:03:47 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:05.021 08:03:47 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:05.021 ************************************ 00:05:05.021 END TEST rpc_integrity 00:05:05.021 ************************************ 00:05:05.021 08:03:47 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:05:05.021 08:03:47 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:05.021 08:03:47 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:05.021 08:03:47 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:05.021 ************************************ 00:05:05.021 START TEST rpc_plugins 
00:05:05.021 ************************************ 00:05:05.021 08:03:47 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:05:05.021 08:03:47 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:05:05.021 08:03:47 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:05.021 08:03:47 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:05.021 08:03:47 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:05.021 08:03:47 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:05:05.021 08:03:47 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:05:05.021 08:03:47 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:05.021 08:03:47 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:05.021 08:03:47 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:05.021 08:03:47 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:05:05.021 { 00:05:05.021 "name": "Malloc1", 00:05:05.021 "aliases": [ 00:05:05.021 "fab0af31-c54f-47d8-a9ed-1a798b30bdfd" 00:05:05.021 ], 00:05:05.021 "product_name": "Malloc disk", 00:05:05.021 "block_size": 4096, 00:05:05.021 "num_blocks": 256, 00:05:05.021 "uuid": "fab0af31-c54f-47d8-a9ed-1a798b30bdfd", 00:05:05.021 "assigned_rate_limits": { 00:05:05.021 "rw_ios_per_sec": 0, 00:05:05.021 "rw_mbytes_per_sec": 0, 00:05:05.021 "r_mbytes_per_sec": 0, 00:05:05.021 "w_mbytes_per_sec": 0 00:05:05.021 }, 00:05:05.021 "claimed": false, 00:05:05.021 "zoned": false, 00:05:05.021 "supported_io_types": { 00:05:05.021 "read": true, 00:05:05.021 "write": true, 00:05:05.021 "unmap": true, 00:05:05.021 "flush": true, 00:05:05.021 "reset": true, 00:05:05.021 "nvme_admin": false, 00:05:05.021 "nvme_io": false, 00:05:05.021 "nvme_io_md": false, 00:05:05.021 "write_zeroes": true, 00:05:05.021 "zcopy": true, 00:05:05.021 "get_zone_info": false, 00:05:05.021 "zone_management": false, 00:05:05.021 
"zone_append": false, 00:05:05.021 "compare": false, 00:05:05.021 "compare_and_write": false, 00:05:05.021 "abort": true, 00:05:05.021 "seek_hole": false, 00:05:05.021 "seek_data": false, 00:05:05.021 "copy": true, 00:05:05.021 "nvme_iov_md": false 00:05:05.021 }, 00:05:05.021 "memory_domains": [ 00:05:05.021 { 00:05:05.021 "dma_device_id": "system", 00:05:05.021 "dma_device_type": 1 00:05:05.021 }, 00:05:05.021 { 00:05:05.021 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:05.021 "dma_device_type": 2 00:05:05.021 } 00:05:05.021 ], 00:05:05.021 "driver_specific": {} 00:05:05.021 } 00:05:05.021 ]' 00:05:05.021 08:03:47 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:05:05.021 08:03:47 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:05:05.021 08:03:47 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:05:05.021 08:03:47 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:05.021 08:03:47 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:05.021 08:03:47 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:05.021 08:03:47 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:05:05.021 08:03:47 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:05.021 08:03:47 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:05.021 08:03:47 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:05.021 08:03:47 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:05:05.021 08:03:47 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:05:05.021 08:03:47 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:05:05.021 00:05:05.021 real 0m0.118s 00:05:05.021 user 0m0.066s 00:05:05.021 sys 0m0.014s 00:05:05.021 08:03:47 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:05.021 08:03:47 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:05.021 ************************************ 
00:05:05.021 END TEST rpc_plugins 00:05:05.021 ************************************ 00:05:05.280 08:03:47 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:05:05.280 08:03:47 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:05.280 08:03:47 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:05.280 08:03:47 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:05.280 ************************************ 00:05:05.280 START TEST rpc_trace_cmd_test 00:05:05.280 ************************************ 00:05:05.280 08:03:47 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test 00:05:05.280 08:03:47 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:05:05.280 08:03:47 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:05:05.280 08:03:47 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:05.280 08:03:47 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:05.280 08:03:47 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:05.280 08:03:47 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:05:05.280 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid1167515", 00:05:05.280 "tpoint_group_mask": "0x8", 00:05:05.280 "iscsi_conn": { 00:05:05.280 "mask": "0x2", 00:05:05.280 "tpoint_mask": "0x0" 00:05:05.280 }, 00:05:05.280 "scsi": { 00:05:05.280 "mask": "0x4", 00:05:05.280 "tpoint_mask": "0x0" 00:05:05.280 }, 00:05:05.280 "bdev": { 00:05:05.280 "mask": "0x8", 00:05:05.280 "tpoint_mask": "0xffffffffffffffff" 00:05:05.280 }, 00:05:05.280 "nvmf_rdma": { 00:05:05.280 "mask": "0x10", 00:05:05.280 "tpoint_mask": "0x0" 00:05:05.280 }, 00:05:05.280 "nvmf_tcp": { 00:05:05.280 "mask": "0x20", 00:05:05.280 "tpoint_mask": "0x0" 00:05:05.280 }, 00:05:05.280 "ftl": { 00:05:05.280 "mask": "0x40", 00:05:05.280 "tpoint_mask": "0x0" 00:05:05.280 }, 00:05:05.280 "blobfs": { 00:05:05.280 "mask": "0x80", 00:05:05.280 
"tpoint_mask": "0x0" 00:05:05.280 }, 00:05:05.280 "dsa": { 00:05:05.280 "mask": "0x200", 00:05:05.280 "tpoint_mask": "0x0" 00:05:05.280 }, 00:05:05.280 "thread": { 00:05:05.280 "mask": "0x400", 00:05:05.280 "tpoint_mask": "0x0" 00:05:05.280 }, 00:05:05.280 "nvme_pcie": { 00:05:05.280 "mask": "0x800", 00:05:05.280 "tpoint_mask": "0x0" 00:05:05.280 }, 00:05:05.280 "iaa": { 00:05:05.280 "mask": "0x1000", 00:05:05.280 "tpoint_mask": "0x0" 00:05:05.280 }, 00:05:05.280 "nvme_tcp": { 00:05:05.280 "mask": "0x2000", 00:05:05.280 "tpoint_mask": "0x0" 00:05:05.280 }, 00:05:05.280 "bdev_nvme": { 00:05:05.280 "mask": "0x4000", 00:05:05.280 "tpoint_mask": "0x0" 00:05:05.280 }, 00:05:05.280 "sock": { 00:05:05.280 "mask": "0x8000", 00:05:05.280 "tpoint_mask": "0x0" 00:05:05.280 }, 00:05:05.280 "blob": { 00:05:05.280 "mask": "0x10000", 00:05:05.280 "tpoint_mask": "0x0" 00:05:05.280 }, 00:05:05.280 "bdev_raid": { 00:05:05.280 "mask": "0x20000", 00:05:05.280 "tpoint_mask": "0x0" 00:05:05.280 }, 00:05:05.280 "scheduler": { 00:05:05.280 "mask": "0x40000", 00:05:05.280 "tpoint_mask": "0x0" 00:05:05.280 } 00:05:05.280 }' 00:05:05.280 08:03:47 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:05:05.280 08:03:47 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:05:05.280 08:03:47 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:05:05.280 08:03:47 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:05:05.280 08:03:47 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:05:05.280 08:03:47 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:05:05.280 08:03:47 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:05:05.280 08:03:47 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:05:05.280 08:03:47 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:05:05.280 08:03:47 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 
0x0 ']' 00:05:05.280 00:05:05.280 real 0m0.197s 00:05:05.280 user 0m0.160s 00:05:05.280 sys 0m0.027s 00:05:05.280 08:03:47 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:05.280 08:03:47 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:05.280 ************************************ 00:05:05.280 END TEST rpc_trace_cmd_test 00:05:05.280 ************************************ 00:05:05.539 08:03:47 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:05:05.539 08:03:47 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:05:05.539 08:03:47 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:05:05.539 08:03:47 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:05.539 08:03:47 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:05.539 08:03:47 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:05.539 ************************************ 00:05:05.539 START TEST rpc_daemon_integrity 00:05:05.539 ************************************ 00:05:05.539 08:03:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:05:05.539 08:03:47 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:05.539 08:03:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:05.539 08:03:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:05.539 08:03:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:05.539 08:03:47 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:05.539 08:03:47 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:05.539 08:03:47 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:05.539 08:03:47 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:05.539 08:03:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:05.539 08:03:47 rpc.rpc_daemon_integrity -- 
common/autotest_common.sh@10 -- # set +x 00:05:05.539 08:03:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:05.539 08:03:47 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:05:05.539 08:03:47 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:05.539 08:03:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:05.539 08:03:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:05.539 08:03:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:05.539 08:03:47 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:05.539 { 00:05:05.539 "name": "Malloc2", 00:05:05.539 "aliases": [ 00:05:05.539 "86737e53-df64-49df-a05b-7a9ffe161a36" 00:05:05.539 ], 00:05:05.539 "product_name": "Malloc disk", 00:05:05.539 "block_size": 512, 00:05:05.539 "num_blocks": 16384, 00:05:05.539 "uuid": "86737e53-df64-49df-a05b-7a9ffe161a36", 00:05:05.539 "assigned_rate_limits": { 00:05:05.539 "rw_ios_per_sec": 0, 00:05:05.539 "rw_mbytes_per_sec": 0, 00:05:05.539 "r_mbytes_per_sec": 0, 00:05:05.539 "w_mbytes_per_sec": 0 00:05:05.539 }, 00:05:05.539 "claimed": false, 00:05:05.539 "zoned": false, 00:05:05.539 "supported_io_types": { 00:05:05.539 "read": true, 00:05:05.539 "write": true, 00:05:05.539 "unmap": true, 00:05:05.539 "flush": true, 00:05:05.539 "reset": true, 00:05:05.539 "nvme_admin": false, 00:05:05.539 "nvme_io": false, 00:05:05.539 "nvme_io_md": false, 00:05:05.539 "write_zeroes": true, 00:05:05.539 "zcopy": true, 00:05:05.539 "get_zone_info": false, 00:05:05.539 "zone_management": false, 00:05:05.539 "zone_append": false, 00:05:05.539 "compare": false, 00:05:05.539 "compare_and_write": false, 00:05:05.539 "abort": true, 00:05:05.539 "seek_hole": false, 00:05:05.539 "seek_data": false, 00:05:05.539 "copy": true, 00:05:05.539 "nvme_iov_md": false 00:05:05.539 }, 00:05:05.539 "memory_domains": [ 00:05:05.539 { 
00:05:05.539 "dma_device_id": "system", 00:05:05.539 "dma_device_type": 1 00:05:05.539 }, 00:05:05.539 { 00:05:05.539 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:05.539 "dma_device_type": 2 00:05:05.539 } 00:05:05.539 ], 00:05:05.539 "driver_specific": {} 00:05:05.539 } 00:05:05.539 ]' 00:05:05.539 08:03:47 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:05.539 08:03:47 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:05.539 08:03:47 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:05:05.539 08:03:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:05.539 08:03:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:05.539 [2024-11-28 08:03:47.720847] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:05:05.539 [2024-11-28 08:03:47.720874] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:05.539 [2024-11-28 08:03:47.720886] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xb3a150 00:05:05.539 [2024-11-28 08:03:47.720892] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:05.539 [2024-11-28 08:03:47.721894] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:05.539 [2024-11-28 08:03:47.721916] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:05.539 Passthru0 00:05:05.539 08:03:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:05.539 08:03:47 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:05.539 08:03:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:05.539 08:03:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:05.539 08:03:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:05:05.539 08:03:47 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:05.539 { 00:05:05.539 "name": "Malloc2", 00:05:05.539 "aliases": [ 00:05:05.539 "86737e53-df64-49df-a05b-7a9ffe161a36" 00:05:05.539 ], 00:05:05.539 "product_name": "Malloc disk", 00:05:05.539 "block_size": 512, 00:05:05.539 "num_blocks": 16384, 00:05:05.539 "uuid": "86737e53-df64-49df-a05b-7a9ffe161a36", 00:05:05.539 "assigned_rate_limits": { 00:05:05.539 "rw_ios_per_sec": 0, 00:05:05.539 "rw_mbytes_per_sec": 0, 00:05:05.539 "r_mbytes_per_sec": 0, 00:05:05.539 "w_mbytes_per_sec": 0 00:05:05.539 }, 00:05:05.539 "claimed": true, 00:05:05.539 "claim_type": "exclusive_write", 00:05:05.539 "zoned": false, 00:05:05.539 "supported_io_types": { 00:05:05.539 "read": true, 00:05:05.539 "write": true, 00:05:05.539 "unmap": true, 00:05:05.539 "flush": true, 00:05:05.539 "reset": true, 00:05:05.539 "nvme_admin": false, 00:05:05.539 "nvme_io": false, 00:05:05.539 "nvme_io_md": false, 00:05:05.539 "write_zeroes": true, 00:05:05.539 "zcopy": true, 00:05:05.539 "get_zone_info": false, 00:05:05.539 "zone_management": false, 00:05:05.539 "zone_append": false, 00:05:05.539 "compare": false, 00:05:05.539 "compare_and_write": false, 00:05:05.539 "abort": true, 00:05:05.539 "seek_hole": false, 00:05:05.539 "seek_data": false, 00:05:05.539 "copy": true, 00:05:05.539 "nvme_iov_md": false 00:05:05.539 }, 00:05:05.539 "memory_domains": [ 00:05:05.539 { 00:05:05.539 "dma_device_id": "system", 00:05:05.539 "dma_device_type": 1 00:05:05.539 }, 00:05:05.539 { 00:05:05.539 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:05.540 "dma_device_type": 2 00:05:05.540 } 00:05:05.540 ], 00:05:05.540 "driver_specific": {} 00:05:05.540 }, 00:05:05.540 { 00:05:05.540 "name": "Passthru0", 00:05:05.540 "aliases": [ 00:05:05.540 "6928be5c-dcd4-53c2-a29f-5b06a2effda6" 00:05:05.540 ], 00:05:05.540 "product_name": "passthru", 00:05:05.540 "block_size": 512, 00:05:05.540 "num_blocks": 16384, 00:05:05.540 "uuid": 
"6928be5c-dcd4-53c2-a29f-5b06a2effda6", 00:05:05.540 "assigned_rate_limits": { 00:05:05.540 "rw_ios_per_sec": 0, 00:05:05.540 "rw_mbytes_per_sec": 0, 00:05:05.540 "r_mbytes_per_sec": 0, 00:05:05.540 "w_mbytes_per_sec": 0 00:05:05.540 }, 00:05:05.540 "claimed": false, 00:05:05.540 "zoned": false, 00:05:05.540 "supported_io_types": { 00:05:05.540 "read": true, 00:05:05.540 "write": true, 00:05:05.540 "unmap": true, 00:05:05.540 "flush": true, 00:05:05.540 "reset": true, 00:05:05.540 "nvme_admin": false, 00:05:05.540 "nvme_io": false, 00:05:05.540 "nvme_io_md": false, 00:05:05.540 "write_zeroes": true, 00:05:05.540 "zcopy": true, 00:05:05.540 "get_zone_info": false, 00:05:05.540 "zone_management": false, 00:05:05.540 "zone_append": false, 00:05:05.540 "compare": false, 00:05:05.540 "compare_and_write": false, 00:05:05.540 "abort": true, 00:05:05.540 "seek_hole": false, 00:05:05.540 "seek_data": false, 00:05:05.540 "copy": true, 00:05:05.540 "nvme_iov_md": false 00:05:05.540 }, 00:05:05.540 "memory_domains": [ 00:05:05.540 { 00:05:05.540 "dma_device_id": "system", 00:05:05.540 "dma_device_type": 1 00:05:05.540 }, 00:05:05.540 { 00:05:05.540 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:05.540 "dma_device_type": 2 00:05:05.540 } 00:05:05.540 ], 00:05:05.540 "driver_specific": { 00:05:05.540 "passthru": { 00:05:05.540 "name": "Passthru0", 00:05:05.540 "base_bdev_name": "Malloc2" 00:05:05.540 } 00:05:05.540 } 00:05:05.540 } 00:05:05.540 ]' 00:05:05.540 08:03:47 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:05.540 08:03:47 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:05.540 08:03:47 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:05.540 08:03:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:05.540 08:03:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:05.540 08:03:47 rpc.rpc_daemon_integrity -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:05.540 08:03:47 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:05:05.540 08:03:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:05.540 08:03:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:05.540 08:03:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:05.540 08:03:47 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:05.540 08:03:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:05.540 08:03:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:05.799 08:03:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:05.799 08:03:47 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:05.799 08:03:47 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:05.799 08:03:47 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:05.799 00:05:05.799 real 0m0.238s 00:05:05.799 user 0m0.140s 00:05:05.799 sys 0m0.030s 00:05:05.799 08:03:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:05.799 08:03:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:05.799 ************************************ 00:05:05.799 END TEST rpc_daemon_integrity 00:05:05.799 ************************************ 00:05:05.799 08:03:47 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:05:05.799 08:03:47 rpc -- rpc/rpc.sh@84 -- # killprocess 1167515 00:05:05.799 08:03:47 rpc -- common/autotest_common.sh@954 -- # '[' -z 1167515 ']' 00:05:05.799 08:03:47 rpc -- common/autotest_common.sh@958 -- # kill -0 1167515 00:05:05.799 08:03:47 rpc -- common/autotest_common.sh@959 -- # uname 00:05:05.799 08:03:47 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:05.799 08:03:47 rpc -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1167515 00:05:05.799 08:03:47 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:05.799 08:03:47 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:05.799 08:03:47 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1167515' 00:05:05.799 killing process with pid 1167515 00:05:05.799 08:03:47 rpc -- common/autotest_common.sh@973 -- # kill 1167515 00:05:05.799 08:03:47 rpc -- common/autotest_common.sh@978 -- # wait 1167515 00:05:06.058 00:05:06.058 real 0m1.982s 00:05:06.058 user 0m2.459s 00:05:06.058 sys 0m0.681s 00:05:06.058 08:03:48 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:06.058 08:03:48 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:06.058 ************************************ 00:05:06.058 END TEST rpc 00:05:06.059 ************************************ 00:05:06.059 08:03:48 -- spdk/autotest.sh@157 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:05:06.059 08:03:48 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:06.059 08:03:48 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:06.059 08:03:48 -- common/autotest_common.sh@10 -- # set +x 00:05:06.059 ************************************ 00:05:06.059 START TEST skip_rpc 00:05:06.059 ************************************ 00:05:06.059 08:03:48 skip_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:05:06.317 * Looking for test storage... 
00:05:06.317 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:06.317 08:03:48 skip_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:06.317 08:03:48 skip_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:05:06.317 08:03:48 skip_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:06.318 08:03:48 skip_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:06.318 08:03:48 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:06.318 08:03:48 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:06.318 08:03:48 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:06.318 08:03:48 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:06.318 08:03:48 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:06.318 08:03:48 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:06.318 08:03:48 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:06.318 08:03:48 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:06.318 08:03:48 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:06.318 08:03:48 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:06.318 08:03:48 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:06.318 08:03:48 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:06.318 08:03:48 skip_rpc -- scripts/common.sh@345 -- # : 1 00:05:06.318 08:03:48 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:06.318 08:03:48 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:06.318 08:03:48 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:05:06.318 08:03:48 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:05:06.318 08:03:48 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:06.318 08:03:48 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:05:06.318 08:03:48 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:06.318 08:03:48 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:05:06.318 08:03:48 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:05:06.318 08:03:48 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:06.318 08:03:48 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:05:06.318 08:03:48 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:06.318 08:03:48 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:06.318 08:03:48 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:06.318 08:03:48 skip_rpc -- scripts/common.sh@368 -- # return 0 00:05:06.318 08:03:48 skip_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:06.318 08:03:48 skip_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:06.318 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:06.318 --rc genhtml_branch_coverage=1 00:05:06.318 --rc genhtml_function_coverage=1 00:05:06.318 --rc genhtml_legend=1 00:05:06.318 --rc geninfo_all_blocks=1 00:05:06.318 --rc geninfo_unexecuted_blocks=1 00:05:06.318 00:05:06.318 ' 00:05:06.318 08:03:48 skip_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:06.318 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:06.318 --rc genhtml_branch_coverage=1 00:05:06.318 --rc genhtml_function_coverage=1 00:05:06.318 --rc genhtml_legend=1 00:05:06.318 --rc geninfo_all_blocks=1 00:05:06.318 --rc geninfo_unexecuted_blocks=1 00:05:06.318 00:05:06.318 ' 00:05:06.318 08:03:48 skip_rpc -- common/autotest_common.sh@1707 -- # export 
'LCOV=lcov 00:05:06.318 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:06.318 --rc genhtml_branch_coverage=1 00:05:06.318 --rc genhtml_function_coverage=1 00:05:06.318 --rc genhtml_legend=1 00:05:06.318 --rc geninfo_all_blocks=1 00:05:06.318 --rc geninfo_unexecuted_blocks=1 00:05:06.318 00:05:06.318 ' 00:05:06.318 08:03:48 skip_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:06.318 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:06.318 --rc genhtml_branch_coverage=1 00:05:06.318 --rc genhtml_function_coverage=1 00:05:06.318 --rc genhtml_legend=1 00:05:06.318 --rc geninfo_all_blocks=1 00:05:06.318 --rc geninfo_unexecuted_blocks=1 00:05:06.318 00:05:06.318 ' 00:05:06.318 08:03:48 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:06.318 08:03:48 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:06.318 08:03:48 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:05:06.318 08:03:48 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:06.318 08:03:48 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:06.318 08:03:48 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:06.318 ************************************ 00:05:06.318 START TEST skip_rpc 00:05:06.318 ************************************ 00:05:06.318 08:03:48 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:05:06.318 08:03:48 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=1168150 00:05:06.318 08:03:48 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:05:06.318 08:03:48 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:06.318 08:03:48 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 
00:05:06.318 [2024-11-28 08:03:48.559532] Starting SPDK v25.01-pre git sha1 27aaaa748 / DPDK 24.03.0 initialization...
00:05:06.318 [2024-11-28 08:03:48.559569] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1168150 ]
00:05:06.576 [2024-11-28 08:03:48.619340] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:06.576 [2024-11-28 08:03:48.659619] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:05:11.842 08:03:53 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version
00:05:11.842 08:03:53 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0
00:05:11.842 08:03:53 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version
00:05:11.842 08:03:53 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd
00:05:11.842 08:03:53 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:05:11.842 08:03:53 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd
00:05:11.842 08:03:53 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:05:11.842 08:03:53 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version
00:05:11.842 08:03:53 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:11.842 08:03:53 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x
00:05:11.842 08:03:53 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]]
00:05:11.842 08:03:53 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1
00:05:11.842 08:03:53 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:05:11.842 08:03:53 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:05:11.842 08:03:53 skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:05:11.842 08:03:53 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT
00:05:11.842 08:03:53 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 1168150
00:05:11.842 08:03:53 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 1168150 ']'
00:05:11.842 08:03:53 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 1168150
00:05:11.842 08:03:53 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname
00:05:11.842 08:03:53 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:05:11.842 08:03:53 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1168150
00:05:11.842 08:03:53 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:05:11.842 08:03:53 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:05:11.842 08:03:53 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1168150'
00:05:11.842 killing process with pid 1168150
00:05:11.842 08:03:53 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 1168150
00:05:11.842 08:03:53 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 1168150
00:05:11.842
00:05:11.842 real 0m5.370s
00:05:11.842 user 0m5.149s
00:05:11.842 sys 0m0.262s
00:05:11.842 08:03:53 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:11.842 08:03:53 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x
00:05:11.842 ************************************
00:05:11.842 END TEST skip_rpc
00:05:11.842 ************************************
00:05:11.842 08:03:53 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json
00:05:11.842 08:03:53 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:05:11.842 08:03:53 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:11.842 08:03:53
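The skip_rpc run just traced boils down to: start the target with --no-rpc-server, then require that an RPC call fails. A minimal sketch of the NOT inversion helper driving that assertion (simplified from autotest_common.sh, which also normalises exit codes; `false` stands in for `rpc_cmd spdk_get_version` here):

```shell
#!/usr/bin/env bash
# Simplified sketch of the NOT() pattern in the trace: run a command and
# succeed only if it failed. The real helper additionally inspects and
# normalises $? before deciding.
NOT() {
  if "$@"; then
    return 1    # command unexpectedly succeeded -> test failure
  fi
  return 0      # command failed, which is the expected outcome
}

# skip_rpc's core assertion, with 'false' standing in for an RPC call against
# a target started with --no-rpc-server:
NOT false && echo "RPC correctly unavailable"
```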
skip_rpc -- common/autotest_common.sh@10 -- # set +x
00:05:11.842 ************************************
00:05:11.842 START TEST skip_rpc_with_json
00:05:11.842 ************************************
00:05:11.842 08:03:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json
00:05:11.842 08:03:53 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config
00:05:11.842 08:03:53 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=1169094
00:05:11.842 08:03:53 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1
00:05:11.842 08:03:53 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT
00:05:11.842 08:03:53 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 1169094
00:05:11.842 08:03:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 1169094 ']'
00:05:11.842 08:03:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:11.842 08:03:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100
00:05:11.842 08:03:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:05:11.842 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:05:11.842 08:03:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable
00:05:11.842 08:03:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x
00:05:11.842 [2024-11-28 08:03:53.997309] Starting SPDK v25.01-pre git sha1 27aaaa748 / DPDK 24.03.0 initialization...
00:05:11.842 [2024-11-28 08:03:53.997351] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1169094 ]
00:05:11.842 [2024-11-28 08:03:54.058560] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:11.842 [2024-11-28 08:03:54.101256] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:05:12.102 08:03:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:05:12.102 08:03:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0
00:05:12.102 08:03:54 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp
00:05:12.102 08:03:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:12.102 08:03:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x
00:05:12.102 [2024-11-28 08:03:54.313899] nvmf_rpc.c:2706:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist
00:05:12.102 request:
00:05:12.102 {
00:05:12.102 "trtype": "tcp",
00:05:12.102 "method": "nvmf_get_transports",
00:05:12.102 "req_id": 1
00:05:12.102 }
00:05:12.102 Got JSON-RPC error response
00:05:12.102 response:
00:05:12.102 {
00:05:12.102 "code": -19,
00:05:12.102 "message": "No such device"
00:05:12.102 }
00:05:12.102 08:03:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]]
00:05:12.102 08:03:54 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp
00:05:12.102 08:03:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:12.102 08:03:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x
00:05:12.102 [2024-11-28 08:03:54.326011] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:05:12.102 08:03:54
skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:12.102 08:03:54 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config
00:05:12.102 08:03:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:12.102 08:03:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x
00:05:12.362 08:03:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:12.362 08:03:54 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json
00:05:12.362 {
00:05:12.362 "subsystems": [
00:05:12.362 {
00:05:12.362 "subsystem": "fsdev",
00:05:12.362 "config": [
00:05:12.362 {
00:05:12.362 "method": "fsdev_set_opts",
00:05:12.362 "params": {
00:05:12.362 "fsdev_io_pool_size": 65535,
00:05:12.362 "fsdev_io_cache_size": 256
00:05:12.362 }
00:05:12.362 }
00:05:12.362 ]
00:05:12.362 },
00:05:12.362 {
00:05:12.362 "subsystem": "vfio_user_target",
00:05:12.362 "config": null
00:05:12.362 },
00:05:12.362 {
00:05:12.362 "subsystem": "keyring",
00:05:12.362 "config": []
00:05:12.362 },
00:05:12.362 {
00:05:12.362 "subsystem": "iobuf",
00:05:12.362 "config": [
00:05:12.362 {
00:05:12.362 "method": "iobuf_set_options",
00:05:12.362 "params": {
00:05:12.362 "small_pool_count": 8192,
00:05:12.362 "large_pool_count": 1024,
00:05:12.362 "small_bufsize": 8192,
00:05:12.362 "large_bufsize": 135168,
00:05:12.362 "enable_numa": false
00:05:12.362 }
00:05:12.362 }
00:05:12.362 ]
00:05:12.362 },
00:05:12.362 {
00:05:12.362 "subsystem": "sock",
00:05:12.362 "config": [
00:05:12.362 {
00:05:12.362 "method": "sock_set_default_impl",
00:05:12.362 "params": {
00:05:12.362 "impl_name": "posix"
00:05:12.362 }
00:05:12.362 },
00:05:12.362 {
00:05:12.362 "method": "sock_impl_set_options",
00:05:12.362 "params": {
00:05:12.362 "impl_name": "ssl",
00:05:12.362 "recv_buf_size": 4096,
00:05:12.362 "send_buf_size": 4096,
00:05:12.362 "enable_recv_pipe": true,
00:05:12.362 "enable_quickack": false,
00:05:12.362 "enable_placement_id": 0,
00:05:12.362 "enable_zerocopy_send_server": true,
00:05:12.362 "enable_zerocopy_send_client": false,
00:05:12.362 "zerocopy_threshold": 0,
00:05:12.362 "tls_version": 0,
00:05:12.362 "enable_ktls": false
00:05:12.362 }
00:05:12.362 },
00:05:12.362 {
00:05:12.362 "method": "sock_impl_set_options",
00:05:12.362 "params": {
00:05:12.362 "impl_name": "posix",
00:05:12.362 "recv_buf_size": 2097152,
00:05:12.362 "send_buf_size": 2097152,
00:05:12.362 "enable_recv_pipe": true,
00:05:12.362 "enable_quickack": false,
00:05:12.362 "enable_placement_id": 0,
00:05:12.362 "enable_zerocopy_send_server": true,
00:05:12.362 "enable_zerocopy_send_client": false,
00:05:12.362 "zerocopy_threshold": 0,
00:05:12.362 "tls_version": 0,
00:05:12.362 "enable_ktls": false
00:05:12.362 }
00:05:12.362 }
00:05:12.362 ]
00:05:12.362 },
00:05:12.362 {
00:05:12.362 "subsystem": "vmd",
00:05:12.362 "config": []
00:05:12.362 },
00:05:12.362 {
00:05:12.362 "subsystem": "accel",
00:05:12.362 "config": [
00:05:12.362 {
00:05:12.362 "method": "accel_set_options",
00:05:12.362 "params": {
00:05:12.362 "small_cache_size": 128,
00:05:12.362 "large_cache_size": 16,
00:05:12.362 "task_count": 2048,
00:05:12.362 "sequence_count": 2048,
00:05:12.362 "buf_count": 2048
00:05:12.362 }
00:05:12.362 }
00:05:12.362 ]
00:05:12.362 },
00:05:12.362 {
00:05:12.362 "subsystem": "bdev",
00:05:12.362 "config": [
00:05:12.362 {
00:05:12.362 "method": "bdev_set_options",
00:05:12.362 "params": {
00:05:12.362 "bdev_io_pool_size": 65535,
00:05:12.362 "bdev_io_cache_size": 256,
00:05:12.362 "bdev_auto_examine": true,
00:05:12.362 "iobuf_small_cache_size": 128,
00:05:12.362 "iobuf_large_cache_size": 16
00:05:12.362 }
00:05:12.362 },
00:05:12.362 {
00:05:12.362 "method": "bdev_raid_set_options",
00:05:12.362 "params": {
00:05:12.362 "process_window_size_kb": 1024,
00:05:12.362 "process_max_bandwidth_mb_sec": 0
00:05:12.362 }
00:05:12.362 },
00:05:12.362 {
00:05:12.362 "method": "bdev_iscsi_set_options",
00:05:12.362 "params": {
00:05:12.362 "timeout_sec": 30
00:05:12.362 }
00:05:12.362 },
00:05:12.362 {
00:05:12.362 "method": "bdev_nvme_set_options",
00:05:12.362 "params": {
00:05:12.362 "action_on_timeout": "none",
00:05:12.362 "timeout_us": 0,
00:05:12.362 "timeout_admin_us": 0,
00:05:12.362 "keep_alive_timeout_ms": 10000,
00:05:12.362 "arbitration_burst": 0,
00:05:12.362 "low_priority_weight": 0,
00:05:12.362 "medium_priority_weight": 0,
00:05:12.362 "high_priority_weight": 0,
00:05:12.362 "nvme_adminq_poll_period_us": 10000,
00:05:12.362 "nvme_ioq_poll_period_us": 0,
00:05:12.362 "io_queue_requests": 0,
00:05:12.362 "delay_cmd_submit": true,
00:05:12.362 "transport_retry_count": 4,
00:05:12.362 "bdev_retry_count": 3,
00:05:12.362 "transport_ack_timeout": 0,
00:05:12.362 "ctrlr_loss_timeout_sec": 0,
00:05:12.362 "reconnect_delay_sec": 0,
00:05:12.362 "fast_io_fail_timeout_sec": 0,
00:05:12.362 "disable_auto_failback": false,
00:05:12.362 "generate_uuids": false,
00:05:12.362 "transport_tos": 0,
00:05:12.362 "nvme_error_stat": false,
00:05:12.362 "rdma_srq_size": 0,
00:05:12.362 "io_path_stat": false,
00:05:12.362 "allow_accel_sequence": false,
00:05:12.362 "rdma_max_cq_size": 0,
00:05:12.362 "rdma_cm_event_timeout_ms": 0,
00:05:12.362 "dhchap_digests": [
00:05:12.362 "sha256",
00:05:12.362 "sha384",
00:05:12.362 "sha512"
00:05:12.362 ],
00:05:12.362 "dhchap_dhgroups": [
00:05:12.362 "null",
00:05:12.362 "ffdhe2048",
00:05:12.362 "ffdhe3072",
00:05:12.362 "ffdhe4096",
00:05:12.362 "ffdhe6144",
00:05:12.362 "ffdhe8192"
00:05:12.362 ]
00:05:12.362 }
00:05:12.362 },
00:05:12.362 {
00:05:12.362 "method": "bdev_nvme_set_hotplug",
00:05:12.362 "params": {
00:05:12.362 "period_us": 100000,
00:05:12.362 "enable": false
00:05:12.362 }
00:05:12.362 },
00:05:12.363 {
00:05:12.363 "method": "bdev_wait_for_examine"
00:05:12.363 }
00:05:12.363 ]
00:05:12.363 },
00:05:12.363 {
00:05:12.363 "subsystem": "scsi",
00:05:12.363 "config": null
00:05:12.363 },
00:05:12.363 {
00:05:12.363 "subsystem": "scheduler",
00:05:12.363 "config": [
00:05:12.363 {
00:05:12.363 "method": "framework_set_scheduler",
00:05:12.363 "params": {
00:05:12.363 "name": "static"
00:05:12.363 }
00:05:12.363 }
00:05:12.363 ]
00:05:12.363 },
00:05:12.363 {
00:05:12.363 "subsystem": "vhost_scsi",
00:05:12.363 "config": []
00:05:12.363 },
00:05:12.363 {
00:05:12.363 "subsystem": "vhost_blk",
00:05:12.363 "config": []
00:05:12.363 },
00:05:12.363 {
00:05:12.363 "subsystem": "ublk",
00:05:12.363 "config": []
00:05:12.363 },
00:05:12.363 {
00:05:12.363 "subsystem": "nbd",
00:05:12.363 "config": []
00:05:12.363 },
00:05:12.363 {
00:05:12.363 "subsystem": "nvmf",
00:05:12.363 "config": [
00:05:12.363 {
00:05:12.363 "method": "nvmf_set_config",
00:05:12.363 "params": {
00:05:12.363 "discovery_filter": "match_any",
00:05:12.363 "admin_cmd_passthru": {
00:05:12.363 "identify_ctrlr": false
00:05:12.363 },
00:05:12.363 "dhchap_digests": [
00:05:12.363 "sha256",
00:05:12.363 "sha384",
00:05:12.363 "sha512"
00:05:12.363 ],
00:05:12.363 "dhchap_dhgroups": [
00:05:12.363 "null",
00:05:12.363 "ffdhe2048",
00:05:12.363 "ffdhe3072",
00:05:12.363 "ffdhe4096",
00:05:12.363 "ffdhe6144",
00:05:12.363 "ffdhe8192"
00:05:12.363 ]
00:05:12.363 }
00:05:12.363 },
00:05:12.363 {
00:05:12.363 "method": "nvmf_set_max_subsystems",
00:05:12.363 "params": {
00:05:12.363 "max_subsystems": 1024
00:05:12.363 }
00:05:12.363 },
00:05:12.363 {
00:05:12.363 "method": "nvmf_set_crdt",
00:05:12.363 "params": {
00:05:12.363 "crdt1": 0,
00:05:12.363 "crdt2": 0,
00:05:12.363 "crdt3": 0
00:05:12.363 }
00:05:12.363 },
00:05:12.363 {
00:05:12.363 "method": "nvmf_create_transport",
00:05:12.363 "params": {
00:05:12.363 "trtype": "TCP",
00:05:12.363 "max_queue_depth": 128,
00:05:12.363 "max_io_qpairs_per_ctrlr": 127,
00:05:12.363 "in_capsule_data_size": 4096,
00:05:12.363 "max_io_size": 131072,
00:05:12.363 "io_unit_size": 131072,
00:05:12.363 "max_aq_depth": 128,
00:05:12.363 "num_shared_buffers": 511,
00:05:12.363 "buf_cache_size": 4294967295,
00:05:12.363 "dif_insert_or_strip": false,
00:05:12.363 "zcopy": false,
00:05:12.363 "c2h_success": true,
00:05:12.363 "sock_priority": 0,
00:05:12.363 "abort_timeout_sec": 1,
00:05:12.363 "ack_timeout": 0,
00:05:12.363 "data_wr_pool_size": 0
00:05:12.363 }
00:05:12.363 }
00:05:12.363 ]
00:05:12.363 },
00:05:12.363 {
00:05:12.363 "subsystem": "iscsi",
00:05:12.363 "config": [
00:05:12.363 {
00:05:12.363 "method": "iscsi_set_options",
00:05:12.363 "params": {
00:05:12.363 "node_base": "iqn.2016-06.io.spdk",
00:05:12.363 "max_sessions": 128,
00:05:12.363 "max_connections_per_session": 2,
00:05:12.363 "max_queue_depth": 64,
00:05:12.363 "default_time2wait": 2,
00:05:12.363 "default_time2retain": 20,
00:05:12.363 "first_burst_length": 8192,
00:05:12.363 "immediate_data": true,
00:05:12.363 "allow_duplicated_isid": false,
00:05:12.363 "error_recovery_level": 0,
00:05:12.363 "nop_timeout": 60,
00:05:12.363 "nop_in_interval": 30,
00:05:12.363 "disable_chap": false,
00:05:12.363 "require_chap": false,
00:05:12.363 "mutual_chap": false,
00:05:12.363 "chap_group": 0,
00:05:12.363 "max_large_datain_per_connection": 64,
00:05:12.363 "max_r2t_per_connection": 4,
00:05:12.363 "pdu_pool_size": 36864,
00:05:12.363 "immediate_data_pool_size": 16384,
00:05:12.363 "data_out_pool_size": 2048
00:05:12.363 }
00:05:12.363 }
00:05:12.363 ]
00:05:12.363 }
00:05:12.363 ]
00:05:12.363 }
00:05:12.363 08:03:54 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT
00:05:12.363 08:03:54 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 1169094
00:05:12.363 08:03:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 1169094 ']'
00:05:12.363 08:03:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 1169094
00:05:12.363 08:03:54 skip_rpc.skip_rpc_with_json --
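The config.json dumped above is the round-trip artifact of this test: `rpc_cmd save_config` emits one `{"subsystems": [...]}` document, where each entry pairs a subsystem name with the RPC method calls that recreate its state, and the target is later restarted with `--json` pointing at the file. A sketch of inspecting such a dump, using a trimmed inline stand-in rather than the real (much larger) file:

```shell
#!/usr/bin/env bash
# Trimmed stand-in for the saved config above; list the subsystem/method
# pairs the target would replay on --json startup.
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
{"subsystems": [
  {"subsystem": "scheduler",
   "config": [{"method": "framework_set_scheduler", "params": {"name": "static"}}]},
  {"subsystem": "keyring", "config": []},
  {"subsystem": "nvmf",
   "config": [{"method": "nvmf_create_transport", "params": {"trtype": "TCP"}}]}
]}
EOF
methods=$(python3 - "$cfg" <<'PY'
import json, sys
cfg = json.load(open(sys.argv[1]))
for sub in cfg["subsystems"]:
    # "config" may be null or empty, as in the real dump
    for call in sub.get("config") or []:
        print(sub["subsystem"], call["method"])
PY
)
rm -f "$cfg"
echo "$methods"
```

This is why the reloaded target prints "TCP Transport Init" again: the saved nvmf_create_transport call is replayed verbatim.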
common/autotest_common.sh@959 -- # uname
00:05:12.363 08:03:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:05:12.363 08:03:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1169094
00:05:12.363 08:03:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:05:12.363 08:03:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:05:12.363 08:03:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1169094'
00:05:12.363 killing process with pid 1169094
00:05:12.363 08:03:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 1169094
00:05:12.363 08:03:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 1169094
00:05:12.622 08:03:54 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=1169119
00:05:12.622 08:03:54 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json
00:05:12.622 08:03:54 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5
00:05:17.887 08:03:59 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 1169119
00:05:17.887 08:03:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 1169119 ']'
00:05:17.887 08:03:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 1169119
00:05:17.887 08:03:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname
00:05:17.887 08:03:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:05:17.887 08:03:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1169119
00:05:17.887 08:03:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:05:17.887 08:03:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:05:17.887 08:03:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1169119'
00:05:17.887 killing process with pid 1169119
00:05:17.887 08:03:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 1169119
00:05:17.887 08:03:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 1169119
00:05:18.146 08:04:00 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt
00:05:18.146 08:04:00 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt
00:05:18.146
00:05:18.146 real 0m6.270s
00:05:18.146 user 0m5.989s
00:05:18.146 sys 0m0.574s
00:05:18.146 08:04:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:18.146 08:04:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x
00:05:18.146 ************************************
00:05:18.146 END TEST skip_rpc_with_json
00:05:18.146 ************************************
00:05:18.146 08:04:00 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay
00:05:18.146 08:04:00 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:05:18.146 08:04:00 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:18.146 08:04:00 skip_rpc -- common/autotest_common.sh@10 -- # set +x
00:05:18.146 ************************************
00:05:18.146 START TEST skip_rpc_with_delay
00:05:18.146 ************************************
00:05:18.147 08:04:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay
00:05:18.147 08:04:00 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc
00:05:18.147 08:04:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0
00:05:18.147 08:04:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc
00:05:18.147 08:04:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt
00:05:18.147 08:04:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:05:18.147 08:04:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt
00:05:18.147 08:04:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:05:18.147 08:04:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt
00:05:18.147 08:04:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:05:18.147 08:04:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt
00:05:18.147 08:04:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]]
00:05:18.147 08:04:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc
00:05:18.147 [2024-11-28 08:04:00.320329] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started.
00:05:18.147 08:04:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1
00:05:18.147 08:04:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:05:18.147 08:04:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:05:18.147 08:04:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:05:18.147
00:05:18.147 real 0m0.055s
00:05:18.147 user 0m0.032s
00:05:18.147 sys 0m0.023s
00:05:18.147 08:04:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:18.147 08:04:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x
00:05:18.147 ************************************
00:05:18.147 END TEST skip_rpc_with_delay
00:05:18.147 ************************************
00:05:18.147 08:04:00 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname
00:05:18.147 08:04:00 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']'
00:05:18.147 08:04:00 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init
00:05:18.147 08:04:00 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:05:18.147 08:04:00 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:18.147 08:04:00 skip_rpc -- common/autotest_common.sh@10 -- # set +x
00:05:18.147 ************************************
00:05:18.147 START TEST exit_on_failed_rpc_init
00:05:18.147 ************************************
00:05:18.147 08:04:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init
00:05:18.147 08:04:00 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=1170096
00:05:18.147 08:04:00 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 1170096
00:05:18.147 08:04:00 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1
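The NOT traces above all end with the same exit-status bookkeeping: es is captured, reduced by 128 when it looks like a signal or loader code, then collapsed to 1 before `(( !es == 0 ))` asserts the failure. A rough, illustrative stand-in for that normalisation (the real case statement in autotest_common.sh distinguishes more codes than this):

```shell
#!/usr/bin/env bash
# Illustrative sketch of the es handling in the traces: statuses above 128
# are folded back by 128, then any remaining failure collapses to 1 so the
# caller only has to check pass/fail.
normalize_es() {
  local es=$1
  if (( es > 128 )); then es=$(( es - 128 )); fi
  if (( es != 0 )); then es=1; fi
  echo "$es"
}

normalize_es 234   # folded to 106, then collapsed to 1
normalize_es 0     # success stays 0
```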
00:05:18.147 08:04:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 1170096 ']'
00:05:18.147 08:04:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:18.147 08:04:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100
00:05:18.147 08:04:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:05:18.147 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:05:18.147 08:04:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable
00:05:18.147 08:04:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x
00:05:18.407 [2024-11-28 08:04:00.457821] Starting SPDK v25.01-pre git sha1 27aaaa748 / DPDK 24.03.0 initialization...
00:05:18.407 [2024-11-28 08:04:00.457866] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1170096 ]
00:05:18.407 [2024-11-28 08:04:00.521257] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:18.407 [2024-11-28 08:04:00.564098] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:05:18.666 08:04:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:05:18.666 08:04:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0
00:05:18.666 08:04:00 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT
00:05:18.666 08:04:00 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2
00:05:18.666 08:04:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0
00:05:18.666 08:04:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2
00:05:18.666 08:04:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt
00:05:18.666 08:04:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:05:18.666 08:04:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt
00:05:18.666 08:04:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:05:18.666 08:04:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt
00:05:18.666 08:04:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:05:18.666 08:04:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt
00:05:18.666 08:04:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]]
00:05:18.666 08:04:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2
00:05:18.666 [2024-11-28 08:04:00.840896] Starting SPDK v25.01-pre git sha1 27aaaa748 / DPDK 24.03.0 initialization...
00:05:18.666 [2024-11-28 08:04:00.840944] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1170309 ]
00:05:18.666 [2024-11-28 08:04:00.900686] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:18.924 [2024-11-28 08:04:00.942415] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:05:18.924 [2024-11-28 08:04:00.942465] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another.
00:05:18.924 [2024-11-28 08:04:00.942475] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock
00:05:18.924 [2024-11-28 08:04:00.942483] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:05:18.924 08:04:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234
00:05:18.924 08:04:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:05:18.924 08:04:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106
00:05:18.924 08:04:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in
00:05:18.924 08:04:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1
00:05:18.924 08:04:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:05:18.924 08:04:00 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT
00:05:18.924 08:04:00 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 1170096
00:05:18.924 08:04:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 1170096 ']'
00:05:18.924 08:04:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 1170096
00:05:18.924 08:04:00
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname
00:05:18.924 08:04:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:05:18.924 08:04:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1170096
00:05:18.924 08:04:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:05:18.924 08:04:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:05:18.924 08:04:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1170096'
00:05:18.924 killing process with pid 1170096
00:05:18.924 08:04:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 1170096
00:05:18.924 08:04:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 1170096
00:05:19.183
00:05:19.183 real 0m0.935s
00:05:19.183 user 0m1.000s
00:05:19.183 sys 0m0.369s
00:05:19.183 08:04:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:19.183 08:04:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x
00:05:19.183 ************************************
00:05:19.183 END TEST exit_on_failed_rpc_init
00:05:19.183 ************************************
00:05:19.183 08:04:01 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json
00:05:19.183
00:05:19.183 real 0m13.079s
00:05:19.183 user 0m12.390s
00:05:19.183 sys 0m1.486s
00:05:19.183 08:04:01 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:19.183 08:04:01 skip_rpc -- common/autotest_common.sh@10 -- # set +x
00:05:19.183 ************************************
00:05:19.183 END TEST skip_rpc
00:05:19.183 ************************************
00:05:19.183 08:04:01 -- spdk/autotest.sh@158 -- # run_test rpc_client
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:19.183 08:04:01 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:19.183 08:04:01 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:19.183 08:04:01 -- common/autotest_common.sh@10 -- # set +x 00:05:19.183 ************************************ 00:05:19.183 START TEST rpc_client 00:05:19.183 ************************************ 00:05:19.183 08:04:01 rpc_client -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:19.443 * Looking for test storage... 00:05:19.443 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:05:19.443 08:04:01 rpc_client -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:19.443 08:04:01 rpc_client -- common/autotest_common.sh@1693 -- # lcov --version 00:05:19.443 08:04:01 rpc_client -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:19.443 08:04:01 rpc_client -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:19.443 08:04:01 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:19.443 08:04:01 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:19.443 08:04:01 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:19.443 08:04:01 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:05:19.443 08:04:01 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:05:19.443 08:04:01 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:05:19.443 08:04:01 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:05:19.443 08:04:01 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:05:19.443 08:04:01 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:05:19.443 08:04:01 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:05:19.443 08:04:01 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:19.443 08:04:01 rpc_client -- scripts/common.sh@344 -- # case 
"$op" in 00:05:19.443 08:04:01 rpc_client -- scripts/common.sh@345 -- # : 1 00:05:19.443 08:04:01 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:19.443 08:04:01 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:19.443 08:04:01 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:05:19.443 08:04:01 rpc_client -- scripts/common.sh@353 -- # local d=1 00:05:19.443 08:04:01 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:19.443 08:04:01 rpc_client -- scripts/common.sh@355 -- # echo 1 00:05:19.443 08:04:01 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:05:19.443 08:04:01 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:05:19.443 08:04:01 rpc_client -- scripts/common.sh@353 -- # local d=2 00:05:19.443 08:04:01 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:19.443 08:04:01 rpc_client -- scripts/common.sh@355 -- # echo 2 00:05:19.443 08:04:01 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:05:19.443 08:04:01 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:19.443 08:04:01 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:19.443 08:04:01 rpc_client -- scripts/common.sh@368 -- # return 0 00:05:19.443 08:04:01 rpc_client -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:19.443 08:04:01 rpc_client -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:19.443 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:19.443 --rc genhtml_branch_coverage=1 00:05:19.443 --rc genhtml_function_coverage=1 00:05:19.443 --rc genhtml_legend=1 00:05:19.443 --rc geninfo_all_blocks=1 00:05:19.443 --rc geninfo_unexecuted_blocks=1 00:05:19.443 00:05:19.443 ' 00:05:19.443 08:04:01 rpc_client -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:19.443 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:19.443 --rc genhtml_branch_coverage=1 
00:05:19.443 --rc genhtml_function_coverage=1 00:05:19.443 --rc genhtml_legend=1 00:05:19.443 --rc geninfo_all_blocks=1 00:05:19.443 --rc geninfo_unexecuted_blocks=1 00:05:19.443 00:05:19.443 ' 00:05:19.443 08:04:01 rpc_client -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:19.443 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:19.443 --rc genhtml_branch_coverage=1 00:05:19.443 --rc genhtml_function_coverage=1 00:05:19.443 --rc genhtml_legend=1 00:05:19.443 --rc geninfo_all_blocks=1 00:05:19.443 --rc geninfo_unexecuted_blocks=1 00:05:19.443 00:05:19.443 ' 00:05:19.443 08:04:01 rpc_client -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:19.443 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:19.443 --rc genhtml_branch_coverage=1 00:05:19.443 --rc genhtml_function_coverage=1 00:05:19.443 --rc genhtml_legend=1 00:05:19.443 --rc geninfo_all_blocks=1 00:05:19.443 --rc geninfo_unexecuted_blocks=1 00:05:19.443 00:05:19.443 ' 00:05:19.443 08:04:01 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:05:19.443 OK 00:05:19.443 08:04:01 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:19.443 00:05:19.443 real 0m0.175s 00:05:19.443 user 0m0.104s 00:05:19.443 sys 0m0.076s 00:05:19.443 08:04:01 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:19.443 08:04:01 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:05:19.443 ************************************ 00:05:19.443 END TEST rpc_client 00:05:19.444 ************************************ 00:05:19.444 08:04:01 -- spdk/autotest.sh@159 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:05:19.444 08:04:01 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:19.444 08:04:01 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:19.444 08:04:01 -- common/autotest_common.sh@10 
-- # set +x 00:05:19.444 ************************************ 00:05:19.444 START TEST json_config 00:05:19.444 ************************************ 00:05:19.444 08:04:01 json_config -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:05:19.702 08:04:01 json_config -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:19.702 08:04:01 json_config -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:19.702 08:04:01 json_config -- common/autotest_common.sh@1693 -- # lcov --version 00:05:19.702 08:04:01 json_config -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:19.702 08:04:01 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:19.702 08:04:01 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:19.702 08:04:01 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:19.702 08:04:01 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:05:19.702 08:04:01 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:05:19.702 08:04:01 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:05:19.702 08:04:01 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:05:19.702 08:04:01 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:05:19.702 08:04:01 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:05:19.702 08:04:01 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:05:19.702 08:04:01 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:19.702 08:04:01 json_config -- scripts/common.sh@344 -- # case "$op" in 00:05:19.702 08:04:01 json_config -- scripts/common.sh@345 -- # : 1 00:05:19.702 08:04:01 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:19.702 08:04:01 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:19.702 08:04:01 json_config -- scripts/common.sh@365 -- # decimal 1 00:05:19.702 08:04:01 json_config -- scripts/common.sh@353 -- # local d=1 00:05:19.702 08:04:01 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:19.702 08:04:01 json_config -- scripts/common.sh@355 -- # echo 1 00:05:19.702 08:04:01 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:05:19.702 08:04:01 json_config -- scripts/common.sh@366 -- # decimal 2 00:05:19.702 08:04:01 json_config -- scripts/common.sh@353 -- # local d=2 00:05:19.702 08:04:01 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:19.702 08:04:01 json_config -- scripts/common.sh@355 -- # echo 2 00:05:19.702 08:04:01 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:05:19.702 08:04:01 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:19.702 08:04:01 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:19.702 08:04:01 json_config -- scripts/common.sh@368 -- # return 0 00:05:19.702 08:04:01 json_config -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:19.702 08:04:01 json_config -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:19.702 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:19.702 --rc genhtml_branch_coverage=1 00:05:19.702 --rc genhtml_function_coverage=1 00:05:19.702 --rc genhtml_legend=1 00:05:19.702 --rc geninfo_all_blocks=1 00:05:19.702 --rc geninfo_unexecuted_blocks=1 00:05:19.702 00:05:19.702 ' 00:05:19.702 08:04:01 json_config -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:19.702 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:19.702 --rc genhtml_branch_coverage=1 00:05:19.702 --rc genhtml_function_coverage=1 00:05:19.702 --rc genhtml_legend=1 00:05:19.702 --rc geninfo_all_blocks=1 00:05:19.702 --rc geninfo_unexecuted_blocks=1 00:05:19.702 00:05:19.702 ' 00:05:19.702 08:04:01 json_config -- 
common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:19.702 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:19.702 --rc genhtml_branch_coverage=1 00:05:19.702 --rc genhtml_function_coverage=1 00:05:19.702 --rc genhtml_legend=1 00:05:19.702 --rc geninfo_all_blocks=1 00:05:19.702 --rc geninfo_unexecuted_blocks=1 00:05:19.702 00:05:19.702 ' 00:05:19.702 08:04:01 json_config -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:19.702 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:19.702 --rc genhtml_branch_coverage=1 00:05:19.702 --rc genhtml_function_coverage=1 00:05:19.702 --rc genhtml_legend=1 00:05:19.702 --rc geninfo_all_blocks=1 00:05:19.702 --rc geninfo_unexecuted_blocks=1 00:05:19.702 00:05:19.702 ' 00:05:19.702 08:04:01 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:19.702 08:04:01 json_config -- nvmf/common.sh@7 -- # uname -s 00:05:19.702 08:04:01 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:19.702 08:04:01 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:19.702 08:04:01 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:19.702 08:04:01 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:19.702 08:04:01 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:19.702 08:04:01 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:19.702 08:04:01 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:19.702 08:04:01 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:19.702 08:04:01 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:19.702 08:04:01 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:19.702 08:04:01 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:05:19.702 08:04:01 json_config -- nvmf/common.sh@18 -- 
# NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:05:19.702 08:04:01 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:19.702 08:04:01 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:19.702 08:04:01 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:19.702 08:04:01 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:19.702 08:04:01 json_config -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:19.702 08:04:01 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:05:19.702 08:04:01 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:19.702 08:04:01 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:19.702 08:04:01 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:19.702 08:04:01 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:19.702 08:04:01 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:19.702 08:04:01 json_config -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:19.702 08:04:01 json_config -- paths/export.sh@5 -- # export PATH 00:05:19.702 08:04:01 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:19.702 08:04:01 json_config -- nvmf/common.sh@51 -- # : 0 00:05:19.702 08:04:01 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:19.702 08:04:01 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:19.702 08:04:01 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:19.702 08:04:01 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:19.702 08:04:01 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:19.702 08:04:01 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:19.702 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:19.702 08:04:01 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:19.702 08:04:01 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:19.702 08:04:01 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:19.703 08:04:01 json_config -- json_config/json_config.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:05:19.703 08:04:01 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:05:19.703 08:04:01 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:05:19.703 08:04:01 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:05:19.703 08:04:01 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:19.703 08:04:01 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:05:19.703 08:04:01 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:05:19.703 08:04:01 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:05:19.703 08:04:01 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:05:19.703 08:04:01 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:05:19.703 08:04:01 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:05:19.703 08:04:01 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:05:19.703 08:04:01 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:05:19.703 08:04:01 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:05:19.703 08:04:01 json_config -- json_config/json_config.sh@362 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:19.703 08:04:01 json_config -- json_config/json_config.sh@363 -- # echo 'INFO: JSON configuration test init' 00:05:19.703 INFO: JSON configuration test init 00:05:19.703 08:04:01 
json_config -- json_config/json_config.sh@364 -- # json_config_test_init 00:05:19.703 08:04:01 json_config -- json_config/json_config.sh@269 -- # timing_enter json_config_test_init 00:05:19.703 08:04:01 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:19.703 08:04:01 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:19.703 08:04:01 json_config -- json_config/json_config.sh@270 -- # timing_enter json_config_setup_target 00:05:19.703 08:04:01 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:19.703 08:04:01 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:19.703 08:04:01 json_config -- json_config/json_config.sh@272 -- # json_config_test_start_app target --wait-for-rpc 00:05:19.703 08:04:01 json_config -- json_config/common.sh@9 -- # local app=target 00:05:19.703 08:04:01 json_config -- json_config/common.sh@10 -- # shift 00:05:19.703 08:04:01 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:19.703 08:04:01 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:19.703 08:04:01 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:19.703 08:04:01 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:19.703 08:04:01 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:19.703 08:04:01 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=1170489 00:05:19.703 08:04:01 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:19.703 Waiting for target to run... 
00:05:19.703 08:04:01 json_config -- json_config/common.sh@25 -- # waitforlisten 1170489 /var/tmp/spdk_tgt.sock 00:05:19.703 08:04:01 json_config -- common/autotest_common.sh@835 -- # '[' -z 1170489 ']' 00:05:19.703 08:04:01 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:19.703 08:04:01 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:19.703 08:04:01 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:19.703 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:19.703 08:04:01 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:05:19.703 08:04:01 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:19.703 08:04:01 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:19.703 [2024-11-28 08:04:01.920251] Starting SPDK v25.01-pre git sha1 27aaaa748 / DPDK 24.03.0 initialization... 
00:05:19.703 [2024-11-28 08:04:01.920300] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1170489 ] 00:05:20.269 [2024-11-28 08:04:02.363463] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:20.269 [2024-11-28 08:04:02.420068] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:20.528 08:04:02 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:20.528 08:04:02 json_config -- common/autotest_common.sh@868 -- # return 0 00:05:20.528 08:04:02 json_config -- json_config/common.sh@26 -- # echo '' 00:05:20.528 00:05:20.528 08:04:02 json_config -- json_config/json_config.sh@276 -- # create_accel_config 00:05:20.528 08:04:02 json_config -- json_config/json_config.sh@100 -- # timing_enter create_accel_config 00:05:20.528 08:04:02 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:20.528 08:04:02 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:20.528 08:04:02 json_config -- json_config/json_config.sh@102 -- # [[ 0 -eq 1 ]] 00:05:20.528 08:04:02 json_config -- json_config/json_config.sh@108 -- # timing_exit create_accel_config 00:05:20.528 08:04:02 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:20.528 08:04:02 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:20.528 08:04:02 json_config -- json_config/json_config.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:05:20.528 08:04:02 json_config -- json_config/json_config.sh@281 -- # tgt_rpc load_config 00:05:20.528 08:04:02 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:05:23.814 08:04:05 json_config -- json_config/json_config.sh@283 -- # 
tgt_check_notification_types 00:05:23.814 08:04:05 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:05:23.815 08:04:05 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:23.815 08:04:05 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:23.815 08:04:05 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:05:23.815 08:04:05 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:05:23.815 08:04:05 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:05:23.815 08:04:05 json_config -- json_config/json_config.sh@47 -- # [[ y == y ]] 00:05:23.815 08:04:05 json_config -- json_config/json_config.sh@48 -- # enabled_types+=("fsdev_register" "fsdev_unregister") 00:05:23.815 08:04:05 json_config -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:05:23.815 08:04:05 json_config -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:05:23.815 08:04:05 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:05:24.073 08:04:06 json_config -- json_config/json_config.sh@51 -- # get_types=('fsdev_register' 'fsdev_unregister' 'bdev_register' 'bdev_unregister') 00:05:24.073 08:04:06 json_config -- json_config/json_config.sh@51 -- # local get_types 00:05:24.073 08:04:06 json_config -- json_config/json_config.sh@53 -- # local type_diff 00:05:24.073 08:04:06 json_config -- json_config/json_config.sh@54 -- # echo bdev_register bdev_unregister fsdev_register fsdev_unregister fsdev_register fsdev_unregister bdev_register bdev_unregister 00:05:24.073 08:04:06 json_config -- json_config/json_config.sh@54 -- # tr ' ' '\n' 00:05:24.073 08:04:06 json_config -- json_config/json_config.sh@54 -- # sort 00:05:24.073 08:04:06 json_config -- json_config/json_config.sh@54 -- # uniq -u 00:05:24.073 08:04:06 json_config -- 
json_config/json_config.sh@54 -- # type_diff= 00:05:24.073 08:04:06 json_config -- json_config/json_config.sh@56 -- # [[ -n '' ]] 00:05:24.073 08:04:06 json_config -- json_config/json_config.sh@61 -- # timing_exit tgt_check_notification_types 00:05:24.073 08:04:06 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:24.073 08:04:06 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:24.073 08:04:06 json_config -- json_config/json_config.sh@62 -- # return 0 00:05:24.073 08:04:06 json_config -- json_config/json_config.sh@285 -- # [[ 0 -eq 1 ]] 00:05:24.073 08:04:06 json_config -- json_config/json_config.sh@289 -- # [[ 0 -eq 1 ]] 00:05:24.073 08:04:06 json_config -- json_config/json_config.sh@293 -- # [[ 0 -eq 1 ]] 00:05:24.073 08:04:06 json_config -- json_config/json_config.sh@297 -- # [[ 1 -eq 1 ]] 00:05:24.073 08:04:06 json_config -- json_config/json_config.sh@298 -- # create_nvmf_subsystem_config 00:05:24.073 08:04:06 json_config -- json_config/json_config.sh@237 -- # timing_enter create_nvmf_subsystem_config 00:05:24.073 08:04:06 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:24.073 08:04:06 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:24.073 08:04:06 json_config -- json_config/json_config.sh@239 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:05:24.073 08:04:06 json_config -- json_config/json_config.sh@240 -- # [[ tcp == \r\d\m\a ]] 00:05:24.073 08:04:06 json_config -- json_config/json_config.sh@244 -- # [[ -z 127.0.0.1 ]] 00:05:24.073 08:04:06 json_config -- json_config/json_config.sh@249 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:24.073 08:04:06 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:24.073 MallocForNvmf0 00:05:24.073 08:04:06 json_config -- json_config/json_config.sh@250 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 
00:05:24.073 08:04:06 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:24.331 MallocForNvmf1 00:05:24.332 08:04:06 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:05:24.332 08:04:06 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:05:24.590 [2024-11-28 08:04:06.700482] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:24.590 08:04:06 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:24.590 08:04:06 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:24.848 08:04:06 json_config -- json_config/json_config.sh@254 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:24.848 08:04:06 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:24.848 08:04:07 json_config -- json_config/json_config.sh@255 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:24.848 08:04:07 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:25.106 08:04:07 json_config -- json_config/json_config.sh@256 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:25.106 08:04:07 json_config -- 
json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:25.364 [2024-11-28 08:04:07.406729] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:25.364 08:04:07 json_config -- json_config/json_config.sh@258 -- # timing_exit create_nvmf_subsystem_config 00:05:25.364 08:04:07 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:25.364 08:04:07 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:25.365 08:04:07 json_config -- json_config/json_config.sh@300 -- # timing_exit json_config_setup_target 00:05:25.365 08:04:07 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:25.365 08:04:07 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:25.365 08:04:07 json_config -- json_config/json_config.sh@302 -- # [[ 0 -eq 1 ]] 00:05:25.365 08:04:07 json_config -- json_config/json_config.sh@307 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:25.365 08:04:07 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:25.622 MallocBdevForConfigChangeCheck 00:05:25.622 08:04:07 json_config -- json_config/json_config.sh@309 -- # timing_exit json_config_test_init 00:05:25.622 08:04:07 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:25.622 08:04:07 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:25.622 08:04:07 json_config -- json_config/json_config.sh@366 -- # tgt_rpc save_config 00:05:25.622 08:04:07 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:25.880 08:04:08 json_config -- json_config/json_config.sh@368 -- # 
echo 'INFO: shutting down applications...'
00:05:25.880 INFO: shutting down applications...
00:05:25.880 08:04:08 json_config -- json_config/json_config.sh@369 -- # [[ 0 -eq 1 ]]
00:05:25.880 08:04:08 json_config -- json_config/json_config.sh@375 -- # json_config_clear target
00:05:25.880 08:04:08 json_config -- json_config/json_config.sh@339 -- # [[ -n 22 ]]
00:05:25.880 08:04:08 json_config -- json_config/json_config.sh@340 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config
00:05:27.781 Calling clear_iscsi_subsystem
00:05:27.781 Calling clear_nvmf_subsystem
00:05:27.781 Calling clear_nbd_subsystem
00:05:27.781 Calling clear_ublk_subsystem
00:05:27.781 Calling clear_vhost_blk_subsystem
00:05:27.781 Calling clear_vhost_scsi_subsystem
00:05:27.781 Calling clear_bdev_subsystem
00:05:27.781 08:04:09 json_config -- json_config/json_config.sh@344 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py
00:05:27.781 08:04:09 json_config -- json_config/json_config.sh@350 -- # count=100
00:05:27.781 08:04:09 json_config -- json_config/json_config.sh@351 -- # '[' 100 -gt 0 ']'
00:05:27.781 08:04:09 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config
00:05:27.781 08:04:09 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters
00:05:27.781 08:04:09 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty
00:05:27.781 08:04:09 json_config -- json_config/json_config.sh@352 -- # break
00:05:27.781 08:04:09 json_config -- json_config/json_config.sh@357 -- # '[' 100 -eq 0 ']'
00:05:27.781 08:04:09 json_config -- json_config/json_config.sh@376 -- # json_config_test_shutdown_app target
00:05:27.781 08:04:09 json_config -- json_config/common.sh@31 -- # local app=target
00:05:27.781 08:04:09 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]]
00:05:27.781 08:04:09 json_config -- json_config/common.sh@35 -- # [[ -n 1170489 ]]
00:05:27.781 08:04:09 json_config -- json_config/common.sh@38 -- # kill -SIGINT 1170489
00:05:27.781 08:04:09 json_config -- json_config/common.sh@40 -- # (( i = 0 ))
00:05:27.781 08:04:09 json_config -- json_config/common.sh@40 -- # (( i < 30 ))
00:05:27.781 08:04:09 json_config -- json_config/common.sh@41 -- # kill -0 1170489
00:05:27.781 08:04:09 json_config -- json_config/common.sh@45 -- # sleep 0.5
00:05:28.348 08:04:10 json_config -- json_config/common.sh@40 -- # (( i++ ))
00:05:28.348 08:04:10 json_config -- json_config/common.sh@40 -- # (( i < 30 ))
00:05:28.348 08:04:10 json_config -- json_config/common.sh@41 -- # kill -0 1170489
00:05:28.348 08:04:10 json_config -- json_config/common.sh@42 -- # app_pid["$app"]=
00:05:28.348 08:04:10 json_config -- json_config/common.sh@43 -- # break
00:05:28.348 08:04:10 json_config -- json_config/common.sh@48 -- # [[ -n '' ]]
00:05:28.348 08:04:10 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done'
00:05:28.348 SPDK target shutdown done
08:04:10 json_config -- json_config/json_config.sh@378 -- # echo 'INFO: relaunching applications...'
00:05:28.348 INFO: relaunching applications...
00:05:28.348 08:04:10 json_config -- json_config/json_config.sh@379 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json
00:05:28.348 08:04:10 json_config -- json_config/common.sh@9 -- # local app=target
00:05:28.348 08:04:10 json_config -- json_config/common.sh@10 -- # shift
00:05:28.348 08:04:10 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]]
00:05:28.348 08:04:10 json_config -- json_config/common.sh@13 -- # [[ -z '' ]]
00:05:28.348 08:04:10 json_config -- json_config/common.sh@15 -- # local app_extra_params=
00:05:28.348 08:04:10 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]]
00:05:28.348 08:04:10 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]]
00:05:28.348 08:04:10 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=1172182
00:05:28.348 08:04:10 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...'
00:05:28.348 Waiting for target to run...
08:04:10 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json
00:05:28.348 08:04:10 json_config -- json_config/common.sh@25 -- # waitforlisten 1172182 /var/tmp/spdk_tgt.sock
00:05:28.348 08:04:10 json_config -- common/autotest_common.sh@835 -- # '[' -z 1172182 ']'
00:05:28.348 08:04:10 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock
00:05:28.348 08:04:10 json_config -- common/autotest_common.sh@840 -- # local max_retries=100
00:05:28.348 08:04:10 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...'
00:05:28.348 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...
00:05:28.349 08:04:10 json_config -- common/autotest_common.sh@844 -- # xtrace_disable
00:05:28.349 08:04:10 json_config -- common/autotest_common.sh@10 -- # set +x
00:05:28.349 [2024-11-28 08:04:10.554540] Starting SPDK v25.01-pre git sha1 27aaaa748 / DPDK 24.03.0 initialization...
00:05:28.349 [2024-11-28 08:04:10.554600] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1172182 ]
00:05:28.606 [2024-11-28 08:04:10.849487] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:28.863 [2024-11-28 08:04:10.884430] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:05:32.146 [2024-11-28 08:04:13.916417] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:05:32.146 [2024-11-28 08:04:13.948759] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 ***
00:05:32.146 08:04:13 json_config -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:05:32.146 08:04:13 json_config -- common/autotest_common.sh@868 -- # return 0
00:05:32.146 08:04:13 json_config -- json_config/common.sh@26 -- # echo ''
00:05:32.146
00:05:32.146 08:04:13 json_config -- json_config/json_config.sh@380 -- # [[ 0 -eq 1 ]]
00:05:32.146 08:04:13 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: Checking if target configuration is the same...'
00:05:32.146 INFO: Checking if target configuration is the same...
00:05:32.147 08:04:13 json_config -- json_config/json_config.sh@385 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json
00:05:32.147 08:04:13 json_config -- json_config/json_config.sh@385 -- # tgt_rpc save_config
00:05:32.147 08:04:13 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config
00:05:32.147 + '[' 2 -ne 2 ']'
00:05:32.147 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh
00:05:32.147 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../..
00:05:32.147 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:05:32.147 +++ basename /dev/fd/62
00:05:32.147 ++ mktemp /tmp/62.XXX
00:05:32.147 + tmp_file_1=/tmp/62.tE3
00:05:32.147 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json
00:05:32.147 ++ mktemp /tmp/spdk_tgt_config.json.XXX
00:05:32.147 + tmp_file_2=/tmp/spdk_tgt_config.json.Sa1
00:05:32.147 + ret=0
00:05:32.147 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort
00:05:32.147 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort
00:05:32.147 + diff -u /tmp/62.tE3 /tmp/spdk_tgt_config.json.Sa1
00:05:32.147 + echo 'INFO: JSON config files are the same'
00:05:32.147 INFO: JSON config files are the same
00:05:32.147 + rm /tmp/62.tE3 /tmp/spdk_tgt_config.json.Sa1
00:05:32.147 + exit 0
00:05:32.147 08:04:14 json_config -- json_config/json_config.sh@386 -- # [[ 0 -eq 1 ]]
00:05:32.147 08:04:14 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: changing configuration and checking if this can be detected...'
00:05:32.147 INFO: changing configuration and checking if this can be detected...
00:05:32.147 08:04:14 json_config -- json_config/json_config.sh@393 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck
00:05:32.147 08:04:14 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck
00:05:32.406 08:04:14 json_config -- json_config/json_config.sh@394 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json
00:05:32.406 08:04:14 json_config -- json_config/json_config.sh@394 -- # tgt_rpc save_config
00:05:32.406 08:04:14 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config
00:05:32.406 + '[' 2 -ne 2 ']'
00:05:32.406 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh
00:05:32.406 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../..
00:05:32.406 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:05:32.406 +++ basename /dev/fd/62
00:05:32.406 ++ mktemp /tmp/62.XXX
00:05:32.406 + tmp_file_1=/tmp/62.jJa
00:05:32.406 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json
00:05:32.406 ++ mktemp /tmp/spdk_tgt_config.json.XXX
00:05:32.406 + tmp_file_2=/tmp/spdk_tgt_config.json.0V8
00:05:32.406 + ret=0
00:05:32.406 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort
00:05:32.664 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort
00:05:32.923 + diff -u /tmp/62.jJa /tmp/spdk_tgt_config.json.0V8
00:05:32.923 + ret=1
00:05:32.923 + echo '=== Start of file: /tmp/62.jJa ==='
00:05:32.923 + cat /tmp/62.jJa
00:05:32.923 + echo '=== End of file: /tmp/62.jJa ==='
00:05:32.923 + echo ''
00:05:32.923 + echo '=== Start of file: /tmp/spdk_tgt_config.json.0V8 ==='
00:05:32.923 + cat /tmp/spdk_tgt_config.json.0V8
00:05:32.923 + echo '=== End of file: /tmp/spdk_tgt_config.json.0V8 ==='
00:05:32.923 + echo ''
00:05:32.923 + rm /tmp/62.jJa /tmp/spdk_tgt_config.json.0V8
00:05:32.923 + exit 1
00:05:32.923 08:04:14 json_config -- json_config/json_config.sh@398 -- # echo 'INFO: configuration change detected.'
00:05:32.923 INFO: configuration change detected.
00:05:32.923 08:04:14 json_config -- json_config/json_config.sh@401 -- # json_config_test_fini
00:05:32.923 08:04:14 json_config -- json_config/json_config.sh@313 -- # timing_enter json_config_test_fini
00:05:32.923 08:04:14 json_config -- common/autotest_common.sh@726 -- # xtrace_disable
00:05:32.923 08:04:14 json_config -- common/autotest_common.sh@10 -- # set +x
00:05:32.923 08:04:14 json_config -- json_config/json_config.sh@314 -- # local ret=0
00:05:32.923 08:04:14 json_config -- json_config/json_config.sh@316 -- # [[ -n '' ]]
00:05:32.923 08:04:14 json_config -- json_config/json_config.sh@324 -- # [[ -n 1172182 ]]
00:05:32.923 08:04:14 json_config -- json_config/json_config.sh@327 -- # cleanup_bdev_subsystem_config
00:05:32.923 08:04:14 json_config -- json_config/json_config.sh@191 -- # timing_enter cleanup_bdev_subsystem_config
00:05:32.923 08:04:14 json_config -- common/autotest_common.sh@726 -- # xtrace_disable
00:05:32.923 08:04:14 json_config -- common/autotest_common.sh@10 -- # set +x
00:05:32.923 08:04:14 json_config -- json_config/json_config.sh@193 -- # [[ 0 -eq 1 ]]
00:05:32.923 08:04:14 json_config -- json_config/json_config.sh@200 -- # uname -s
00:05:32.923 08:04:14 json_config -- json_config/json_config.sh@200 -- # [[ Linux = Linux ]]
00:05:32.923 08:04:14 json_config -- json_config/json_config.sh@201 -- # rm -f /sample_aio
00:05:32.923 08:04:14 json_config -- json_config/json_config.sh@204 -- # [[ 0 -eq 1 ]]
00:05:32.923 08:04:14 json_config -- json_config/json_config.sh@208 -- # timing_exit cleanup_bdev_subsystem_config
00:05:32.923 08:04:14 json_config -- common/autotest_common.sh@732 -- # xtrace_disable
00:05:32.923 08:04:14 json_config -- common/autotest_common.sh@10 -- # set +x
00:05:32.923 08:04:15 json_config -- json_config/json_config.sh@330 -- # killprocess 1172182
00:05:32.923 08:04:15 json_config -- common/autotest_common.sh@954 -- # '[' -z 1172182 ']'
00:05:32.923 08:04:15 json_config -- common/autotest_common.sh@958 -- # kill -0 1172182
00:05:32.923 08:04:15 json_config -- common/autotest_common.sh@959 -- # uname
00:05:32.923 08:04:15 json_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:05:32.923 08:04:15 json_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1172182
00:05:32.923 08:04:15 json_config -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:05:32.923 08:04:15 json_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:05:32.923 08:04:15 json_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1172182'
00:05:32.923 killing process with pid 1172182
08:04:15 json_config -- common/autotest_common.sh@973 -- # kill 1172182
00:05:32.923 08:04:15 json_config -- common/autotest_common.sh@978 -- # wait 1172182
00:05:34.825 08:04:16 json_config -- json_config/json_config.sh@333 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json
00:05:34.825 08:04:16 json_config -- json_config/json_config.sh@334 -- # timing_exit json_config_test_fini
00:05:34.825 08:04:16 json_config -- common/autotest_common.sh@732 -- # xtrace_disable
00:05:34.825 08:04:16 json_config -- common/autotest_common.sh@10 -- # set +x
00:05:34.825 08:04:16 json_config -- json_config/json_config.sh@335 -- # return 0
00:05:34.825 08:04:16 json_config -- json_config/json_config.sh@403 -- # echo 'INFO: Success'
00:05:34.825 INFO: Success
00:05:34.825
00:05:34.825 real 0m14.931s
00:05:34.825 user 0m15.181s
00:05:34.825 sys 0m2.530s
00:05:34.825 08:04:16 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:34.825 08:04:16 json_config -- common/autotest_common.sh@10 -- # set +x
00:05:34.825 ************************************
00:05:34.825 END TEST json_config
00:05:34.825 ************************************
00:05:34.825 08:04:16 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh
00:05:34.825 08:04:16 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:05:34.825 08:04:16 -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:34.825 08:04:16 -- common/autotest_common.sh@10 -- # set +x
00:05:34.825 ************************************
00:05:34.825 START TEST json_config_extra_key
00:05:34.825 ************************************
00:05:34.825 08:04:16 json_config_extra_key -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh
00:05:34.825 08:04:16 json_config_extra_key -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:05:34.825 08:04:16 json_config_extra_key -- common/autotest_common.sh@1693 -- # lcov --version
00:05:34.825 08:04:16 json_config_extra_key -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:05:34.825 08:04:16 json_config_extra_key -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:05:34.825 08:04:16 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:05:34.825 08:04:16 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l
00:05:34.825 08:04:16 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l
00:05:34.825 08:04:16 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-:
00:05:34.825 08:04:16 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1
00:05:34.825 08:04:16 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-:
00:05:34.825 08:04:16 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2
00:05:34.825 08:04:16 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<'
00:05:34.825 08:04:16 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2
00:05:34.825 08:04:16 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1
00:05:34.825 08:04:16 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:05:34.825 08:04:16 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in
00:05:34.825 08:04:16 json_config_extra_key -- scripts/common.sh@345 -- # : 1
00:05:34.825 08:04:16 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 ))
00:05:34.825 08:04:16 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:05:34.825 08:04:16 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1
00:05:34.825 08:04:16 json_config_extra_key -- scripts/common.sh@353 -- # local d=1
00:05:34.825 08:04:16 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:05:34.825 08:04:16 json_config_extra_key -- scripts/common.sh@355 -- # echo 1
00:05:34.825 08:04:16 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1
00:05:34.825 08:04:16 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2
00:05:34.825 08:04:16 json_config_extra_key -- scripts/common.sh@353 -- # local d=2
00:05:34.825 08:04:16 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:05:34.825 08:04:16 json_config_extra_key -- scripts/common.sh@355 -- # echo 2
00:05:34.825 08:04:16 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2
00:05:34.825 08:04:16 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:05:34.825 08:04:16 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:05:34.825 08:04:16 json_config_extra_key -- scripts/common.sh@368 -- # return 0
00:05:34.825 08:04:16 json_config_extra_key -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:05:34.825 08:04:16 json_config_extra_key -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:05:34.825 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:34.825 --rc genhtml_branch_coverage=1
00:05:34.825 --rc genhtml_function_coverage=1
00:05:34.825 --rc genhtml_legend=1
00:05:34.825 --rc geninfo_all_blocks=1
00:05:34.825 --rc geninfo_unexecuted_blocks=1
00:05:34.825
00:05:34.825 '
00:05:34.825 08:04:16 json_config_extra_key -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:05:34.825 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:34.825 --rc genhtml_branch_coverage=1
00:05:34.825 --rc genhtml_function_coverage=1
00:05:34.825 --rc genhtml_legend=1
00:05:34.825 --rc geninfo_all_blocks=1
00:05:34.825 --rc geninfo_unexecuted_blocks=1
00:05:34.825
00:05:34.825 '
00:05:34.825 08:04:16 json_config_extra_key -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov
00:05:34.825 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:34.825 --rc genhtml_branch_coverage=1
00:05:34.825 --rc genhtml_function_coverage=1
00:05:34.825 --rc genhtml_legend=1
00:05:34.825 --rc geninfo_all_blocks=1
00:05:34.825 --rc geninfo_unexecuted_blocks=1
00:05:34.825
00:05:34.825 '
00:05:34.825 08:04:16 json_config_extra_key -- common/autotest_common.sh@1707 -- # LCOV='lcov
00:05:34.825 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:34.825 --rc genhtml_branch_coverage=1
00:05:34.825 --rc genhtml_function_coverage=1
00:05:34.825 --rc genhtml_legend=1
00:05:34.826 --rc geninfo_all_blocks=1
00:05:34.826 --rc geninfo_unexecuted_blocks=1
00:05:34.826
00:05:34.826 '
00:05:34.826 08:04:16 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:05:34.826 08:04:16 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s
00:05:34.826 08:04:16 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:05:34.826 08:04:16 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:05:34.826 08:04:16 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:05:34.826 08:04:16 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:05:34.826 08:04:16 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:05:34.826 08:04:16 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:05:34.826 08:04:16 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:05:34.826 08:04:16 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:05:34.826 08:04:16 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:05:34.826 08:04:16 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:05:34.826 08:04:16 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
00:05:34.826 08:04:16 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562
00:05:34.826 08:04:16 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:05:34.826 08:04:16 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:05:34.826 08:04:16 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback
00:05:34.826 08:04:16 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:05:34.826 08:04:16 json_config_extra_key -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:05:34.826 08:04:16 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob
00:05:34.826 08:04:16 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:05:34.826 08:04:16 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:05:34.826 08:04:16 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:05:34.826 08:04:16 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:05:34.826 08:04:16 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:05:34.826 08:04:16 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:05:34.826 08:04:16 json_config_extra_key -- paths/export.sh@5 -- # export PATH
00:05:34.826 08:04:16 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:05:34.826 08:04:16 json_config_extra_key -- nvmf/common.sh@51 -- # : 0
00:05:34.826 08:04:16 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:05:34.826 08:04:16 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:05:34.826 08:04:16 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:05:34.826 08:04:16 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:05:34.826 08:04:16 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:05:34.826 08:04:16 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:05:34.826 08:04:16 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:05:34.826 08:04:16 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:05:34.826 08:04:16 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0
00:05:34.826 08:04:16 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh
00:05:34.826 08:04:16 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='')
00:05:34.826 08:04:16 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid
00:05:34.826 08:04:16 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock')
00:05:34.826 08:04:16 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket
00:05:34.826 08:04:16 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024')
00:05:34.826 08:04:16 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params
00:05:34.826 08:04:16 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json')
00:05:34.826 08:04:16 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path
00:05:34.826 08:04:16 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR
00:05:34.826 08:04:16 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...'
00:05:34.826 INFO: launching applications...
00:05:34.826 08:04:16 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json
00:05:34.826 08:04:16 json_config_extra_key -- json_config/common.sh@9 -- # local app=target
00:05:34.826 08:04:16 json_config_extra_key -- json_config/common.sh@10 -- # shift
00:05:34.826 08:04:16 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]]
00:05:34.826 08:04:16 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]]
00:05:34.826 08:04:16 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params=
00:05:34.826 08:04:16 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]]
00:05:34.826 08:04:16 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]]
00:05:34.826 08:04:16 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=1173253
00:05:34.826 08:04:16 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...'
00:05:34.826 Waiting for target to run...
00:05:34.826 08:04:16 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 1173253 /var/tmp/spdk_tgt.sock
00:05:34.826 08:04:16 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 1173253 ']'
00:05:34.826 08:04:16 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock
00:05:34.826 08:04:16 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100
00:05:34.826 08:04:16 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...'
00:05:34.826 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...
00:05:34.826 08:04:16 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable
00:05:34.826 08:04:16 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x
00:05:34.826 08:04:16 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json
00:05:34.826 [2024-11-28 08:04:16.884058] Starting SPDK v25.01-pre git sha1 27aaaa748 / DPDK 24.03.0 initialization...
00:05:34.826 [2024-11-28 08:04:16.884112] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1173253 ]
00:05:35.085 [2024-11-28 08:04:17.157072] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:35.085 [2024-11-28 08:04:17.191454] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:05:35.653 08:04:17 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:05:35.653 08:04:17 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0
00:05:35.653 08:04:17 json_config_extra_key -- json_config/common.sh@26 -- # echo ''
00:05:35.653
00:05:35.653 08:04:17 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...'
00:05:35.653 INFO: shutting down applications...
00:05:35.653 08:04:17 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target
00:05:35.653 08:04:17 json_config_extra_key -- json_config/common.sh@31 -- # local app=target
00:05:35.653 08:04:17 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]]
00:05:35.653 08:04:17 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 1173253 ]]
00:05:35.653 08:04:17 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 1173253
00:05:35.653 08:04:17 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 ))
00:05:35.653 08:04:17 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 ))
00:05:35.653 08:04:17 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 1173253
00:05:35.653 08:04:17 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5
00:05:36.222 08:04:18 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ ))
00:05:36.222 08:04:18 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 ))
00:05:36.222 08:04:18 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 1173253
00:05:36.222 08:04:18 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]=
00:05:36.222 08:04:18 json_config_extra_key -- json_config/common.sh@43 -- # break
00:05:36.222 08:04:18 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]]
00:05:36.222 08:04:18 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done'
00:05:36.222 SPDK target shutdown done
08:04:18 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success
00:05:36.223 Success
00:05:36.223
00:05:36.223 real 0m1.541s
00:05:36.223 user 0m1.354s
00:05:36.223 sys 0m0.365s
00:05:36.223 08:04:18 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:36.223 08:04:18 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x
00:05:36.223 ************************************
00:05:36.223 END TEST json_config_extra_key
00:05:36.223 ************************************
00:05:36.223 08:04:18 -- spdk/autotest.sh@161 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh
00:05:36.223 08:04:18 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:05:36.223 08:04:18 -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:36.223 08:04:18 -- common/autotest_common.sh@10 -- # set +x
00:05:36.223 ************************************
00:05:36.223 START TEST alias_rpc
00:05:36.223 ************************************
00:05:36.223 08:04:18 alias_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh
00:05:36.223 * Looking for test storage...
00:05:36.223 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc
00:05:36.223 08:04:18 alias_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:05:36.223 08:04:18 alias_rpc -- common/autotest_common.sh@1693 -- # lcov --version
00:05:36.223 08:04:18 alias_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:05:36.223 08:04:18 alias_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:05:36.223 08:04:18 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:05:36.223 08:04:18 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l
00:05:36.223 08:04:18 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l
00:05:36.223 08:04:18 alias_rpc -- scripts/common.sh@336 -- # IFS=.-:
00:05:36.223 08:04:18 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1
00:05:36.223 08:04:18 alias_rpc -- scripts/common.sh@337 -- # IFS=.-:
00:05:36.223 08:04:18 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2
00:05:36.223 08:04:18 alias_rpc -- scripts/common.sh@338 -- # local 'op=<'
00:05:36.223 08:04:18 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2
00:05:36.223 08:04:18 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1
00:05:36.223 08:04:18 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:05:36.223 08:04:18 alias_rpc -- scripts/common.sh@344 -- # case "$op" in
00:05:36.223 08:04:18 alias_rpc -- scripts/common.sh@345 -- # : 1
00:05:36.223 08:04:18 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 ))
00:05:36.223 08:04:18 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:05:36.223 08:04:18 alias_rpc -- scripts/common.sh@365 -- # decimal 1
00:05:36.223 08:04:18 alias_rpc -- scripts/common.sh@353 -- # local d=1
00:05:36.223 08:04:18 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:05:36.223 08:04:18 alias_rpc -- scripts/common.sh@355 -- # echo 1
00:05:36.223 08:04:18 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1
00:05:36.223 08:04:18 alias_rpc -- scripts/common.sh@366 -- # decimal 2
00:05:36.223 08:04:18 alias_rpc -- scripts/common.sh@353 -- # local d=2
00:05:36.223 08:04:18 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:05:36.223 08:04:18 alias_rpc -- scripts/common.sh@355 -- # echo 2
00:05:36.223 08:04:18 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2
00:05:36.223 08:04:18 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:05:36.223 08:04:18 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:05:36.223 08:04:18 alias_rpc -- scripts/common.sh@368 -- # return 0
00:05:36.223 08:04:18 alias_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:05:36.223 08:04:18 alias_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:05:36.223 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:36.223 --rc genhtml_branch_coverage=1
00:05:36.223 --rc genhtml_function_coverage=1
00:05:36.223 --rc genhtml_legend=1
00:05:36.223 --rc geninfo_all_blocks=1
00:05:36.223 --rc geninfo_unexecuted_blocks=1
00:05:36.223
00:05:36.223 '
00:05:36.223 08:04:18 alias_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:05:36.223 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:36.223 --rc genhtml_branch_coverage=1
00:05:36.223 --rc genhtml_function_coverage=1
00:05:36.223 --rc genhtml_legend=1
00:05:36.223 --rc geninfo_all_blocks=1
00:05:36.223 --rc geninfo_unexecuted_blocks=1
00:05:36.223
00:05:36.223 '
00:05:36.223 08:04:18 alias_rpc -- common/autotest_common.sh@1707 --
# export 'LCOV=lcov 00:05:36.223 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:36.223 --rc genhtml_branch_coverage=1 00:05:36.223 --rc genhtml_function_coverage=1 00:05:36.223 --rc genhtml_legend=1 00:05:36.223 --rc geninfo_all_blocks=1 00:05:36.223 --rc geninfo_unexecuted_blocks=1 00:05:36.223 00:05:36.223 ' 00:05:36.223 08:04:18 alias_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:36.223 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:36.223 --rc genhtml_branch_coverage=1 00:05:36.223 --rc genhtml_function_coverage=1 00:05:36.223 --rc genhtml_legend=1 00:05:36.223 --rc geninfo_all_blocks=1 00:05:36.223 --rc geninfo_unexecuted_blocks=1 00:05:36.223 00:05:36.223 ' 00:05:36.223 08:04:18 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:36.223 08:04:18 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=1173694 00:05:36.223 08:04:18 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 1173694 00:05:36.223 08:04:18 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:36.223 08:04:18 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 1173694 ']' 00:05:36.223 08:04:18 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:36.223 08:04:18 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:36.223 08:04:18 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:36.223 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:36.223 08:04:18 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:36.223 08:04:18 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:36.483 [2024-11-28 08:04:18.493194] Starting SPDK v25.01-pre git sha1 27aaaa748 / DPDK 24.03.0 initialization... 
00:05:36.483 [2024-11-28 08:04:18.493247] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1173694 ] 00:05:36.483 [2024-11-28 08:04:18.555648] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:36.483 [2024-11-28 08:04:18.598289] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:36.743 08:04:18 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:36.743 08:04:18 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:36.743 08:04:18 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:05:37.003 08:04:19 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 1173694 00:05:37.003 08:04:19 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 1173694 ']' 00:05:37.003 08:04:19 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 1173694 00:05:37.003 08:04:19 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:05:37.003 08:04:19 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:37.003 08:04:19 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1173694 00:05:37.003 08:04:19 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:37.003 08:04:19 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:37.003 08:04:19 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1173694' 00:05:37.003 killing process with pid 1173694 00:05:37.003 08:04:19 alias_rpc -- common/autotest_common.sh@973 -- # kill 1173694 00:05:37.003 08:04:19 alias_rpc -- common/autotest_common.sh@978 -- # wait 1173694 00:05:37.263 00:05:37.263 real 0m1.093s 00:05:37.263 user 0m1.135s 00:05:37.263 sys 0m0.372s 00:05:37.263 08:04:19 alias_rpc -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:05:37.263 08:04:19 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:37.263 ************************************ 00:05:37.263 END TEST alias_rpc 00:05:37.263 ************************************ 00:05:37.263 08:04:19 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:05:37.263 08:04:19 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:37.263 08:04:19 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:37.263 08:04:19 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:37.263 08:04:19 -- common/autotest_common.sh@10 -- # set +x 00:05:37.263 ************************************ 00:05:37.263 START TEST spdkcli_tcp 00:05:37.263 ************************************ 00:05:37.263 08:04:19 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:37.263 * Looking for test storage... 
00:05:37.263 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:05:37.263 08:04:19 spdkcli_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:37.263 08:04:19 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:05:37.263 08:04:19 spdkcli_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:37.523 08:04:19 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:37.523 08:04:19 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:37.523 08:04:19 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:37.523 08:04:19 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:37.523 08:04:19 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:05:37.523 08:04:19 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:05:37.523 08:04:19 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:05:37.523 08:04:19 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:05:37.523 08:04:19 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:05:37.523 08:04:19 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:05:37.523 08:04:19 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:05:37.523 08:04:19 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:37.523 08:04:19 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:05:37.523 08:04:19 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:05:37.523 08:04:19 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:37.523 08:04:19 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:37.523 08:04:19 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:05:37.523 08:04:19 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:05:37.523 08:04:19 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:37.523 08:04:19 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:05:37.523 08:04:19 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:05:37.523 08:04:19 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:05:37.523 08:04:19 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:05:37.523 08:04:19 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:37.523 08:04:19 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:05:37.523 08:04:19 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:05:37.523 08:04:19 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:37.523 08:04:19 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:37.523 08:04:19 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:05:37.523 08:04:19 spdkcli_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:37.523 08:04:19 spdkcli_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:37.523 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:37.523 --rc genhtml_branch_coverage=1 00:05:37.523 --rc genhtml_function_coverage=1 00:05:37.523 --rc genhtml_legend=1 00:05:37.523 --rc geninfo_all_blocks=1 00:05:37.523 --rc geninfo_unexecuted_blocks=1 00:05:37.523 00:05:37.523 ' 00:05:37.523 08:04:19 spdkcli_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:37.523 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:37.523 --rc genhtml_branch_coverage=1 00:05:37.523 --rc genhtml_function_coverage=1 00:05:37.523 --rc genhtml_legend=1 00:05:37.523 --rc geninfo_all_blocks=1 00:05:37.523 --rc geninfo_unexecuted_blocks=1 00:05:37.523 00:05:37.523 ' 00:05:37.523 08:04:19 spdkcli_tcp -- 
common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:37.523 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:37.523 --rc genhtml_branch_coverage=1 00:05:37.523 --rc genhtml_function_coverage=1 00:05:37.523 --rc genhtml_legend=1 00:05:37.523 --rc geninfo_all_blocks=1 00:05:37.523 --rc geninfo_unexecuted_blocks=1 00:05:37.523 00:05:37.523 ' 00:05:37.523 08:04:19 spdkcli_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:37.523 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:37.523 --rc genhtml_branch_coverage=1 00:05:37.523 --rc genhtml_function_coverage=1 00:05:37.523 --rc genhtml_legend=1 00:05:37.523 --rc geninfo_all_blocks=1 00:05:37.523 --rc geninfo_unexecuted_blocks=1 00:05:37.523 00:05:37.523 ' 00:05:37.523 08:04:19 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:05:37.523 08:04:19 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:05:37.523 08:04:19 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:05:37.523 08:04:19 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:37.523 08:04:19 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:37.523 08:04:19 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:37.523 08:04:19 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:37.523 08:04:19 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:37.523 08:04:19 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:37.523 08:04:19 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=1173856 00:05:37.523 08:04:19 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:37.523 08:04:19 spdkcli_tcp -- 
spdkcli/tcp.sh@27 -- # waitforlisten 1173856 00:05:37.523 08:04:19 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 1173856 ']' 00:05:37.523 08:04:19 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:37.523 08:04:19 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:37.523 08:04:19 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:37.523 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:37.523 08:04:19 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:37.523 08:04:19 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:37.524 [2024-11-28 08:04:19.663416] Starting SPDK v25.01-pre git sha1 27aaaa748 / DPDK 24.03.0 initialization... 00:05:37.524 [2024-11-28 08:04:19.663466] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1173856 ] 00:05:37.524 [2024-11-28 08:04:19.727020] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:37.524 [2024-11-28 08:04:19.768954] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:37.524 [2024-11-28 08:04:19.768956] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:37.783 08:04:19 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:37.783 08:04:19 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:05:37.783 08:04:19 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=1174041 00:05:37.783 08:04:19 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:05:37.783 08:04:19 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat 
TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:38.042 [ 00:05:38.042 "bdev_malloc_delete", 00:05:38.042 "bdev_malloc_create", 00:05:38.042 "bdev_null_resize", 00:05:38.042 "bdev_null_delete", 00:05:38.042 "bdev_null_create", 00:05:38.042 "bdev_nvme_cuse_unregister", 00:05:38.042 "bdev_nvme_cuse_register", 00:05:38.042 "bdev_opal_new_user", 00:05:38.042 "bdev_opal_set_lock_state", 00:05:38.042 "bdev_opal_delete", 00:05:38.042 "bdev_opal_get_info", 00:05:38.042 "bdev_opal_create", 00:05:38.042 "bdev_nvme_opal_revert", 00:05:38.042 "bdev_nvme_opal_init", 00:05:38.042 "bdev_nvme_send_cmd", 00:05:38.042 "bdev_nvme_set_keys", 00:05:38.042 "bdev_nvme_get_path_iostat", 00:05:38.042 "bdev_nvme_get_mdns_discovery_info", 00:05:38.042 "bdev_nvme_stop_mdns_discovery", 00:05:38.042 "bdev_nvme_start_mdns_discovery", 00:05:38.042 "bdev_nvme_set_multipath_policy", 00:05:38.042 "bdev_nvme_set_preferred_path", 00:05:38.042 "bdev_nvme_get_io_paths", 00:05:38.042 "bdev_nvme_remove_error_injection", 00:05:38.042 "bdev_nvme_add_error_injection", 00:05:38.042 "bdev_nvme_get_discovery_info", 00:05:38.042 "bdev_nvme_stop_discovery", 00:05:38.042 "bdev_nvme_start_discovery", 00:05:38.042 "bdev_nvme_get_controller_health_info", 00:05:38.042 "bdev_nvme_disable_controller", 00:05:38.042 "bdev_nvme_enable_controller", 00:05:38.042 "bdev_nvme_reset_controller", 00:05:38.042 "bdev_nvme_get_transport_statistics", 00:05:38.042 "bdev_nvme_apply_firmware", 00:05:38.042 "bdev_nvme_detach_controller", 00:05:38.042 "bdev_nvme_get_controllers", 00:05:38.042 "bdev_nvme_attach_controller", 00:05:38.042 "bdev_nvme_set_hotplug", 00:05:38.042 "bdev_nvme_set_options", 00:05:38.042 "bdev_passthru_delete", 00:05:38.042 "bdev_passthru_create", 00:05:38.042 "bdev_lvol_set_parent_bdev", 00:05:38.042 "bdev_lvol_set_parent", 00:05:38.042 "bdev_lvol_check_shallow_copy", 00:05:38.042 "bdev_lvol_start_shallow_copy", 00:05:38.042 "bdev_lvol_grow_lvstore", 00:05:38.042 "bdev_lvol_get_lvols", 00:05:38.042 
"bdev_lvol_get_lvstores", 00:05:38.042 "bdev_lvol_delete", 00:05:38.042 "bdev_lvol_set_read_only", 00:05:38.042 "bdev_lvol_resize", 00:05:38.042 "bdev_lvol_decouple_parent", 00:05:38.042 "bdev_lvol_inflate", 00:05:38.042 "bdev_lvol_rename", 00:05:38.042 "bdev_lvol_clone_bdev", 00:05:38.042 "bdev_lvol_clone", 00:05:38.042 "bdev_lvol_snapshot", 00:05:38.042 "bdev_lvol_create", 00:05:38.042 "bdev_lvol_delete_lvstore", 00:05:38.042 "bdev_lvol_rename_lvstore", 00:05:38.042 "bdev_lvol_create_lvstore", 00:05:38.042 "bdev_raid_set_options", 00:05:38.042 "bdev_raid_remove_base_bdev", 00:05:38.042 "bdev_raid_add_base_bdev", 00:05:38.042 "bdev_raid_delete", 00:05:38.042 "bdev_raid_create", 00:05:38.042 "bdev_raid_get_bdevs", 00:05:38.042 "bdev_error_inject_error", 00:05:38.042 "bdev_error_delete", 00:05:38.042 "bdev_error_create", 00:05:38.042 "bdev_split_delete", 00:05:38.042 "bdev_split_create", 00:05:38.042 "bdev_delay_delete", 00:05:38.042 "bdev_delay_create", 00:05:38.042 "bdev_delay_update_latency", 00:05:38.042 "bdev_zone_block_delete", 00:05:38.042 "bdev_zone_block_create", 00:05:38.042 "blobfs_create", 00:05:38.042 "blobfs_detect", 00:05:38.042 "blobfs_set_cache_size", 00:05:38.042 "bdev_aio_delete", 00:05:38.042 "bdev_aio_rescan", 00:05:38.042 "bdev_aio_create", 00:05:38.042 "bdev_ftl_set_property", 00:05:38.042 "bdev_ftl_get_properties", 00:05:38.042 "bdev_ftl_get_stats", 00:05:38.043 "bdev_ftl_unmap", 00:05:38.043 "bdev_ftl_unload", 00:05:38.043 "bdev_ftl_delete", 00:05:38.043 "bdev_ftl_load", 00:05:38.043 "bdev_ftl_create", 00:05:38.043 "bdev_virtio_attach_controller", 00:05:38.043 "bdev_virtio_scsi_get_devices", 00:05:38.043 "bdev_virtio_detach_controller", 00:05:38.043 "bdev_virtio_blk_set_hotplug", 00:05:38.043 "bdev_iscsi_delete", 00:05:38.043 "bdev_iscsi_create", 00:05:38.043 "bdev_iscsi_set_options", 00:05:38.043 "accel_error_inject_error", 00:05:38.043 "ioat_scan_accel_module", 00:05:38.043 "dsa_scan_accel_module", 00:05:38.043 "iaa_scan_accel_module", 
00:05:38.043 "vfu_virtio_create_fs_endpoint", 00:05:38.043 "vfu_virtio_create_scsi_endpoint", 00:05:38.043 "vfu_virtio_scsi_remove_target", 00:05:38.043 "vfu_virtio_scsi_add_target", 00:05:38.043 "vfu_virtio_create_blk_endpoint", 00:05:38.043 "vfu_virtio_delete_endpoint", 00:05:38.043 "keyring_file_remove_key", 00:05:38.043 "keyring_file_add_key", 00:05:38.043 "keyring_linux_set_options", 00:05:38.043 "fsdev_aio_delete", 00:05:38.043 "fsdev_aio_create", 00:05:38.043 "iscsi_get_histogram", 00:05:38.043 "iscsi_enable_histogram", 00:05:38.043 "iscsi_set_options", 00:05:38.043 "iscsi_get_auth_groups", 00:05:38.043 "iscsi_auth_group_remove_secret", 00:05:38.043 "iscsi_auth_group_add_secret", 00:05:38.043 "iscsi_delete_auth_group", 00:05:38.043 "iscsi_create_auth_group", 00:05:38.043 "iscsi_set_discovery_auth", 00:05:38.043 "iscsi_get_options", 00:05:38.043 "iscsi_target_node_request_logout", 00:05:38.043 "iscsi_target_node_set_redirect", 00:05:38.043 "iscsi_target_node_set_auth", 00:05:38.043 "iscsi_target_node_add_lun", 00:05:38.043 "iscsi_get_stats", 00:05:38.043 "iscsi_get_connections", 00:05:38.043 "iscsi_portal_group_set_auth", 00:05:38.043 "iscsi_start_portal_group", 00:05:38.043 "iscsi_delete_portal_group", 00:05:38.043 "iscsi_create_portal_group", 00:05:38.043 "iscsi_get_portal_groups", 00:05:38.043 "iscsi_delete_target_node", 00:05:38.043 "iscsi_target_node_remove_pg_ig_maps", 00:05:38.043 "iscsi_target_node_add_pg_ig_maps", 00:05:38.043 "iscsi_create_target_node", 00:05:38.043 "iscsi_get_target_nodes", 00:05:38.043 "iscsi_delete_initiator_group", 00:05:38.043 "iscsi_initiator_group_remove_initiators", 00:05:38.043 "iscsi_initiator_group_add_initiators", 00:05:38.043 "iscsi_create_initiator_group", 00:05:38.043 "iscsi_get_initiator_groups", 00:05:38.043 "nvmf_set_crdt", 00:05:38.043 "nvmf_set_config", 00:05:38.043 "nvmf_set_max_subsystems", 00:05:38.043 "nvmf_stop_mdns_prr", 00:05:38.043 "nvmf_publish_mdns_prr", 00:05:38.043 "nvmf_subsystem_get_listeners", 
00:05:38.043 "nvmf_subsystem_get_qpairs", 00:05:38.043 "nvmf_subsystem_get_controllers", 00:05:38.043 "nvmf_get_stats", 00:05:38.043 "nvmf_get_transports", 00:05:38.043 "nvmf_create_transport", 00:05:38.043 "nvmf_get_targets", 00:05:38.043 "nvmf_delete_target", 00:05:38.043 "nvmf_create_target", 00:05:38.043 "nvmf_subsystem_allow_any_host", 00:05:38.043 "nvmf_subsystem_set_keys", 00:05:38.043 "nvmf_subsystem_remove_host", 00:05:38.043 "nvmf_subsystem_add_host", 00:05:38.043 "nvmf_ns_remove_host", 00:05:38.043 "nvmf_ns_add_host", 00:05:38.043 "nvmf_subsystem_remove_ns", 00:05:38.043 "nvmf_subsystem_set_ns_ana_group", 00:05:38.043 "nvmf_subsystem_add_ns", 00:05:38.043 "nvmf_subsystem_listener_set_ana_state", 00:05:38.043 "nvmf_discovery_get_referrals", 00:05:38.043 "nvmf_discovery_remove_referral", 00:05:38.043 "nvmf_discovery_add_referral", 00:05:38.043 "nvmf_subsystem_remove_listener", 00:05:38.043 "nvmf_subsystem_add_listener", 00:05:38.043 "nvmf_delete_subsystem", 00:05:38.043 "nvmf_create_subsystem", 00:05:38.043 "nvmf_get_subsystems", 00:05:38.043 "env_dpdk_get_mem_stats", 00:05:38.043 "nbd_get_disks", 00:05:38.043 "nbd_stop_disk", 00:05:38.043 "nbd_start_disk", 00:05:38.043 "ublk_recover_disk", 00:05:38.043 "ublk_get_disks", 00:05:38.043 "ublk_stop_disk", 00:05:38.043 "ublk_start_disk", 00:05:38.043 "ublk_destroy_target", 00:05:38.043 "ublk_create_target", 00:05:38.043 "virtio_blk_create_transport", 00:05:38.043 "virtio_blk_get_transports", 00:05:38.043 "vhost_controller_set_coalescing", 00:05:38.043 "vhost_get_controllers", 00:05:38.043 "vhost_delete_controller", 00:05:38.043 "vhost_create_blk_controller", 00:05:38.043 "vhost_scsi_controller_remove_target", 00:05:38.043 "vhost_scsi_controller_add_target", 00:05:38.043 "vhost_start_scsi_controller", 00:05:38.043 "vhost_create_scsi_controller", 00:05:38.043 "thread_set_cpumask", 00:05:38.043 "scheduler_set_options", 00:05:38.043 "framework_get_governor", 00:05:38.043 "framework_get_scheduler", 00:05:38.043 
"framework_set_scheduler", 00:05:38.043 "framework_get_reactors", 00:05:38.043 "thread_get_io_channels", 00:05:38.043 "thread_get_pollers", 00:05:38.043 "thread_get_stats", 00:05:38.043 "framework_monitor_context_switch", 00:05:38.043 "spdk_kill_instance", 00:05:38.043 "log_enable_timestamps", 00:05:38.043 "log_get_flags", 00:05:38.043 "log_clear_flag", 00:05:38.043 "log_set_flag", 00:05:38.043 "log_get_level", 00:05:38.043 "log_set_level", 00:05:38.043 "log_get_print_level", 00:05:38.043 "log_set_print_level", 00:05:38.043 "framework_enable_cpumask_locks", 00:05:38.043 "framework_disable_cpumask_locks", 00:05:38.043 "framework_wait_init", 00:05:38.043 "framework_start_init", 00:05:38.043 "scsi_get_devices", 00:05:38.043 "bdev_get_histogram", 00:05:38.043 "bdev_enable_histogram", 00:05:38.043 "bdev_set_qos_limit", 00:05:38.043 "bdev_set_qd_sampling_period", 00:05:38.043 "bdev_get_bdevs", 00:05:38.043 "bdev_reset_iostat", 00:05:38.043 "bdev_get_iostat", 00:05:38.043 "bdev_examine", 00:05:38.043 "bdev_wait_for_examine", 00:05:38.043 "bdev_set_options", 00:05:38.043 "accel_get_stats", 00:05:38.043 "accel_set_options", 00:05:38.043 "accel_set_driver", 00:05:38.043 "accel_crypto_key_destroy", 00:05:38.043 "accel_crypto_keys_get", 00:05:38.043 "accel_crypto_key_create", 00:05:38.043 "accel_assign_opc", 00:05:38.043 "accel_get_module_info", 00:05:38.043 "accel_get_opc_assignments", 00:05:38.043 "vmd_rescan", 00:05:38.043 "vmd_remove_device", 00:05:38.043 "vmd_enable", 00:05:38.043 "sock_get_default_impl", 00:05:38.043 "sock_set_default_impl", 00:05:38.043 "sock_impl_set_options", 00:05:38.043 "sock_impl_get_options", 00:05:38.043 "iobuf_get_stats", 00:05:38.043 "iobuf_set_options", 00:05:38.043 "keyring_get_keys", 00:05:38.043 "vfu_tgt_set_base_path", 00:05:38.043 "framework_get_pci_devices", 00:05:38.043 "framework_get_config", 00:05:38.043 "framework_get_subsystems", 00:05:38.043 "fsdev_set_opts", 00:05:38.043 "fsdev_get_opts", 00:05:38.043 "trace_get_info", 
00:05:38.043 "trace_get_tpoint_group_mask", 00:05:38.043 "trace_disable_tpoint_group", 00:05:38.043 "trace_enable_tpoint_group", 00:05:38.043 "trace_clear_tpoint_mask", 00:05:38.043 "trace_set_tpoint_mask", 00:05:38.043 "notify_get_notifications", 00:05:38.043 "notify_get_types", 00:05:38.043 "spdk_get_version", 00:05:38.043 "rpc_get_methods" 00:05:38.043 ] 00:05:38.043 08:04:20 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:38.043 08:04:20 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:38.043 08:04:20 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:38.043 08:04:20 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:38.043 08:04:20 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 1173856 00:05:38.043 08:04:20 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 1173856 ']' 00:05:38.043 08:04:20 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 1173856 00:05:38.043 08:04:20 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:05:38.043 08:04:20 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:38.043 08:04:20 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1173856 00:05:38.043 08:04:20 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:38.043 08:04:20 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:38.043 08:04:20 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1173856' 00:05:38.043 killing process with pid 1173856 00:05:38.043 08:04:20 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 1173856 00:05:38.043 08:04:20 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 1173856 00:05:38.613 00:05:38.613 real 0m1.140s 00:05:38.613 user 0m1.924s 00:05:38.613 sys 0m0.438s 00:05:38.613 08:04:20 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:38.613 08:04:20 spdkcli_tcp -- 
common/autotest_common.sh@10 -- # set +x 00:05:38.613 ************************************ 00:05:38.613 END TEST spdkcli_tcp 00:05:38.613 ************************************ 00:05:38.613 08:04:20 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:38.613 08:04:20 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:38.613 08:04:20 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:38.613 08:04:20 -- common/autotest_common.sh@10 -- # set +x 00:05:38.613 ************************************ 00:05:38.613 START TEST dpdk_mem_utility 00:05:38.613 ************************************ 00:05:38.613 08:04:20 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:38.613 * Looking for test storage... 00:05:38.613 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:05:38.613 08:04:20 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:38.613 08:04:20 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lcov --version 00:05:38.613 08:04:20 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:38.613 08:04:20 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:38.613 08:04:20 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:38.613 08:04:20 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:38.613 08:04:20 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:38.613 08:04:20 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:05:38.613 08:04:20 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:05:38.613 08:04:20 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:05:38.613 08:04:20 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 
00:05:38.613 08:04:20 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:05:38.613 08:04:20 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:05:38.613 08:04:20 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:05:38.613 08:04:20 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:38.613 08:04:20 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:05:38.613 08:04:20 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:05:38.613 08:04:20 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:38.613 08:04:20 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:38.613 08:04:20 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:05:38.613 08:04:20 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:05:38.613 08:04:20 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:38.613 08:04:20 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:05:38.613 08:04:20 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:05:38.613 08:04:20 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:05:38.613 08:04:20 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:05:38.613 08:04:20 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:38.613 08:04:20 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:05:38.613 08:04:20 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:05:38.613 08:04:20 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:38.613 08:04:20 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:38.613 08:04:20 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:05:38.613 08:04:20 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:38.613 08:04:20 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 
00:05:38.613 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:38.613 --rc genhtml_branch_coverage=1 00:05:38.613 --rc genhtml_function_coverage=1 00:05:38.613 --rc genhtml_legend=1 00:05:38.613 --rc geninfo_all_blocks=1 00:05:38.613 --rc geninfo_unexecuted_blocks=1 00:05:38.613 00:05:38.613 ' 00:05:38.613 08:04:20 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:38.613 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:38.613 --rc genhtml_branch_coverage=1 00:05:38.613 --rc genhtml_function_coverage=1 00:05:38.613 --rc genhtml_legend=1 00:05:38.613 --rc geninfo_all_blocks=1 00:05:38.613 --rc geninfo_unexecuted_blocks=1 00:05:38.613 00:05:38.613 ' 00:05:38.613 08:04:20 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:38.613 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:38.613 --rc genhtml_branch_coverage=1 00:05:38.613 --rc genhtml_function_coverage=1 00:05:38.613 --rc genhtml_legend=1 00:05:38.613 --rc geninfo_all_blocks=1 00:05:38.613 --rc geninfo_unexecuted_blocks=1 00:05:38.613 00:05:38.613 ' 00:05:38.613 08:04:20 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:38.613 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:38.613 --rc genhtml_branch_coverage=1 00:05:38.613 --rc genhtml_function_coverage=1 00:05:38.613 --rc genhtml_legend=1 00:05:38.613 --rc geninfo_all_blocks=1 00:05:38.613 --rc geninfo_unexecuted_blocks=1 00:05:38.613 00:05:38.613 ' 00:05:38.613 08:04:20 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:38.613 08:04:20 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=1174124 00:05:38.613 08:04:20 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 1174124 00:05:38.613 08:04:20 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:38.613 08:04:20 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 1174124 ']' 00:05:38.613 08:04:20 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:38.613 08:04:20 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:38.613 08:04:20 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:38.613 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:38.613 08:04:20 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:38.613 08:04:20 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:38.613 [2024-11-28 08:04:20.864243] Starting SPDK v25.01-pre git sha1 27aaaa748 / DPDK 24.03.0 initialization... 00:05:38.613 [2024-11-28 08:04:20.864292] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1174124 ] 00:05:38.873 [2024-11-28 08:04:20.925801] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:38.873 [2024-11-28 08:04:20.968632] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:39.133 08:04:21 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:39.133 08:04:21 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:05:39.133 08:04:21 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:39.133 08:04:21 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:39.133 08:04:21 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:39.133 
08:04:21 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:39.133 { 00:05:39.133 "filename": "/tmp/spdk_mem_dump.txt" 00:05:39.133 } 00:05:39.133 08:04:21 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:39.133 08:04:21 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:39.133 DPDK memory size 818.000000 MiB in 1 heap(s) 00:05:39.133 1 heaps totaling size 818.000000 MiB 00:05:39.133 size: 818.000000 MiB heap id: 0 00:05:39.133 end heaps---------- 00:05:39.133 9 mempools totaling size 603.782043 MiB 00:05:39.133 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:39.133 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:39.133 size: 100.555481 MiB name: bdev_io_1174124 00:05:39.133 size: 50.003479 MiB name: msgpool_1174124 00:05:39.133 size: 36.509338 MiB name: fsdev_io_1174124 00:05:39.133 size: 21.763794 MiB name: PDU_Pool 00:05:39.133 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:39.133 size: 4.133484 MiB name: evtpool_1174124 00:05:39.133 size: 0.026123 MiB name: Session_Pool 00:05:39.133 end mempools------- 00:05:39.133 6 memzones totaling size 4.142822 MiB 00:05:39.133 size: 1.000366 MiB name: RG_ring_0_1174124 00:05:39.133 size: 1.000366 MiB name: RG_ring_1_1174124 00:05:39.133 size: 1.000366 MiB name: RG_ring_4_1174124 00:05:39.133 size: 1.000366 MiB name: RG_ring_5_1174124 00:05:39.133 size: 0.125366 MiB name: RG_ring_2_1174124 00:05:39.133 size: 0.015991 MiB name: RG_ring_3_1174124 00:05:39.133 end memzones------- 00:05:39.133 08:04:21 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:05:39.133 heap id: 0 total size: 818.000000 MiB number of busy elements: 44 number of free elements: 15 00:05:39.133 list of free elements. 
size: 10.852478 MiB 00:05:39.133 element at address: 0x200019200000 with size: 0.999878 MiB 00:05:39.133 element at address: 0x200019400000 with size: 0.999878 MiB 00:05:39.133 element at address: 0x200000400000 with size: 0.998535 MiB 00:05:39.133 element at address: 0x200032000000 with size: 0.994446 MiB 00:05:39.133 element at address: 0x200006400000 with size: 0.959839 MiB 00:05:39.133 element at address: 0x200012c00000 with size: 0.944275 MiB 00:05:39.133 element at address: 0x200019600000 with size: 0.936584 MiB 00:05:39.133 element at address: 0x200000200000 with size: 0.717346 MiB 00:05:39.133 element at address: 0x20001ae00000 with size: 0.582886 MiB 00:05:39.133 element at address: 0x200000c00000 with size: 0.495422 MiB 00:05:39.133 element at address: 0x20000a600000 with size: 0.490723 MiB 00:05:39.133 element at address: 0x200019800000 with size: 0.485657 MiB 00:05:39.133 element at address: 0x200003e00000 with size: 0.481934 MiB 00:05:39.133 element at address: 0x200028200000 with size: 0.410034 MiB 00:05:39.133 element at address: 0x200000800000 with size: 0.355042 MiB 00:05:39.133 list of standard malloc elements. 
size: 199.218628 MiB 00:05:39.133 element at address: 0x20000a7fff80 with size: 132.000122 MiB 00:05:39.133 element at address: 0x2000065fff80 with size: 64.000122 MiB 00:05:39.133 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:05:39.133 element at address: 0x2000194fff80 with size: 1.000122 MiB 00:05:39.133 element at address: 0x2000196fff80 with size: 1.000122 MiB 00:05:39.133 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:05:39.133 element at address: 0x2000196eff00 with size: 0.062622 MiB 00:05:39.133 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:05:39.133 element at address: 0x2000196efdc0 with size: 0.000305 MiB 00:05:39.133 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:05:39.133 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:05:39.133 element at address: 0x2000004ffa00 with size: 0.000183 MiB 00:05:39.133 element at address: 0x2000004ffac0 with size: 0.000183 MiB 00:05:39.133 element at address: 0x2000004ffb80 with size: 0.000183 MiB 00:05:39.133 element at address: 0x2000004ffd80 with size: 0.000183 MiB 00:05:39.133 element at address: 0x2000004ffe40 with size: 0.000183 MiB 00:05:39.133 element at address: 0x20000085ae40 with size: 0.000183 MiB 00:05:39.133 element at address: 0x20000085b040 with size: 0.000183 MiB 00:05:39.133 element at address: 0x20000085f300 with size: 0.000183 MiB 00:05:39.133 element at address: 0x20000087f5c0 with size: 0.000183 MiB 00:05:39.133 element at address: 0x20000087f680 with size: 0.000183 MiB 00:05:39.133 element at address: 0x2000008ff940 with size: 0.000183 MiB 00:05:39.133 element at address: 0x2000008ffb40 with size: 0.000183 MiB 00:05:39.133 element at address: 0x200000c7ed40 with size: 0.000183 MiB 00:05:39.133 element at address: 0x200000cff000 with size: 0.000183 MiB 00:05:39.133 element at address: 0x200000cff0c0 with size: 0.000183 MiB 00:05:39.133 element at address: 0x200003e7b600 with size: 0.000183 MiB 00:05:39.133 element at 
address: 0x200003e7b6c0 with size: 0.000183 MiB 00:05:39.133 element at address: 0x200003efb980 with size: 0.000183 MiB 00:05:39.133 element at address: 0x2000064fdd80 with size: 0.000183 MiB 00:05:39.133 element at address: 0x20000a67da00 with size: 0.000183 MiB 00:05:39.133 element at address: 0x20000a67dac0 with size: 0.000183 MiB 00:05:39.133 element at address: 0x20000a6fdd80 with size: 0.000183 MiB 00:05:39.133 element at address: 0x200012cf1bc0 with size: 0.000183 MiB 00:05:39.133 element at address: 0x2000196efc40 with size: 0.000183 MiB 00:05:39.133 element at address: 0x2000196efd00 with size: 0.000183 MiB 00:05:39.133 element at address: 0x2000198bc740 with size: 0.000183 MiB 00:05:39.133 element at address: 0x20001ae95380 with size: 0.000183 MiB 00:05:39.133 element at address: 0x20001ae95440 with size: 0.000183 MiB 00:05:39.133 element at address: 0x200028268f80 with size: 0.000183 MiB 00:05:39.133 element at address: 0x200028269040 with size: 0.000183 MiB 00:05:39.133 element at address: 0x20002826fc40 with size: 0.000183 MiB 00:05:39.133 element at address: 0x20002826fe40 with size: 0.000183 MiB 00:05:39.133 element at address: 0x20002826ff00 with size: 0.000183 MiB 00:05:39.133 list of memzone associated elements. 
size: 607.928894 MiB 00:05:39.133 element at address: 0x20001ae95500 with size: 211.416748 MiB 00:05:39.133 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:39.133 element at address: 0x20002826ffc0 with size: 157.562561 MiB 00:05:39.134 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:39.134 element at address: 0x200012df1e80 with size: 100.055054 MiB 00:05:39.134 associated memzone info: size: 100.054932 MiB name: MP_bdev_io_1174124_0 00:05:39.134 element at address: 0x200000dff380 with size: 48.003052 MiB 00:05:39.134 associated memzone info: size: 48.002930 MiB name: MP_msgpool_1174124_0 00:05:39.134 element at address: 0x200003ffdb80 with size: 36.008911 MiB 00:05:39.134 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_1174124_0 00:05:39.134 element at address: 0x2000199be940 with size: 20.255554 MiB 00:05:39.134 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:39.134 element at address: 0x2000321feb40 with size: 18.005066 MiB 00:05:39.134 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:39.134 element at address: 0x2000004fff00 with size: 3.000244 MiB 00:05:39.134 associated memzone info: size: 3.000122 MiB name: MP_evtpool_1174124_0 00:05:39.134 element at address: 0x2000009ffe00 with size: 2.000488 MiB 00:05:39.134 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_1174124 00:05:39.134 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:05:39.134 associated memzone info: size: 1.007996 MiB name: MP_evtpool_1174124 00:05:39.134 element at address: 0x20000a6fde40 with size: 1.008118 MiB 00:05:39.134 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:39.134 element at address: 0x2000198bc800 with size: 1.008118 MiB 00:05:39.134 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:39.134 element at address: 0x2000064fde40 with size: 1.008118 MiB 00:05:39.134 
associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:39.134 element at address: 0x200003efba40 with size: 1.008118 MiB 00:05:39.134 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:39.134 element at address: 0x200000cff180 with size: 1.000488 MiB 00:05:39.134 associated memzone info: size: 1.000366 MiB name: RG_ring_0_1174124 00:05:39.134 element at address: 0x2000008ffc00 with size: 1.000488 MiB 00:05:39.134 associated memzone info: size: 1.000366 MiB name: RG_ring_1_1174124 00:05:39.134 element at address: 0x200012cf1c80 with size: 1.000488 MiB 00:05:39.134 associated memzone info: size: 1.000366 MiB name: RG_ring_4_1174124 00:05:39.134 element at address: 0x2000320fe940 with size: 1.000488 MiB 00:05:39.134 associated memzone info: size: 1.000366 MiB name: RG_ring_5_1174124 00:05:39.134 element at address: 0x20000087f740 with size: 0.500488 MiB 00:05:39.134 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_1174124 00:05:39.134 element at address: 0x200000c7ee00 with size: 0.500488 MiB 00:05:39.134 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_1174124 00:05:39.134 element at address: 0x20000a67db80 with size: 0.500488 MiB 00:05:39.134 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:39.134 element at address: 0x200003e7b780 with size: 0.500488 MiB 00:05:39.134 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:05:39.134 element at address: 0x20001987c540 with size: 0.250488 MiB 00:05:39.134 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:05:39.134 element at address: 0x2000002b7a40 with size: 0.125488 MiB 00:05:39.134 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_1174124 00:05:39.134 element at address: 0x20000085f3c0 with size: 0.125488 MiB 00:05:39.134 associated memzone info: size: 0.125366 MiB name: RG_ring_2_1174124 00:05:39.134 element at address: 0x2000064f5b80 with size: 0.031738 
MiB 00:05:39.134 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:39.134 element at address: 0x200028269100 with size: 0.023743 MiB 00:05:39.134 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:39.134 element at address: 0x20000085b100 with size: 0.016113 MiB 00:05:39.134 associated memzone info: size: 0.015991 MiB name: RG_ring_3_1174124 00:05:39.134 element at address: 0x20002826f240 with size: 0.002441 MiB 00:05:39.134 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:39.134 element at address: 0x2000004ffc40 with size: 0.000305 MiB 00:05:39.134 associated memzone info: size: 0.000183 MiB name: MP_msgpool_1174124 00:05:39.134 element at address: 0x2000008ffa00 with size: 0.000305 MiB 00:05:39.134 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_1174124 00:05:39.134 element at address: 0x20000085af00 with size: 0.000305 MiB 00:05:39.134 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_1174124 00:05:39.134 element at address: 0x20002826fd00 with size: 0.000305 MiB 00:05:39.134 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:39.134 08:04:21 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:39.134 08:04:21 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 1174124 00:05:39.134 08:04:21 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 1174124 ']' 00:05:39.134 08:04:21 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 1174124 00:05:39.134 08:04:21 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname 00:05:39.134 08:04:21 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:39.134 08:04:21 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1174124 00:05:39.134 08:04:21 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:39.134 08:04:21 
dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:39.134 08:04:21 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1174124' 00:05:39.134 killing process with pid 1174124 00:05:39.134 08:04:21 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 1174124 00:05:39.134 08:04:21 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 1174124 00:05:39.393 00:05:39.393 real 0m0.996s 00:05:39.393 user 0m0.949s 00:05:39.393 sys 0m0.384s 00:05:39.393 08:04:21 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:39.393 08:04:21 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:39.393 ************************************ 00:05:39.393 END TEST dpdk_mem_utility 00:05:39.393 ************************************ 00:05:39.652 08:04:21 -- spdk/autotest.sh@168 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:05:39.652 08:04:21 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:39.652 08:04:21 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:39.652 08:04:21 -- common/autotest_common.sh@10 -- # set +x 00:05:39.652 ************************************ 00:05:39.653 START TEST event 00:05:39.653 ************************************ 00:05:39.653 08:04:21 event -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:05:39.653 * Looking for test storage... 
00:05:39.653 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:05:39.653 08:04:21 event -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:39.653 08:04:21 event -- common/autotest_common.sh@1693 -- # lcov --version 00:05:39.653 08:04:21 event -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:39.653 08:04:21 event -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:39.653 08:04:21 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:39.653 08:04:21 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:39.653 08:04:21 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:39.653 08:04:21 event -- scripts/common.sh@336 -- # IFS=.-: 00:05:39.653 08:04:21 event -- scripts/common.sh@336 -- # read -ra ver1 00:05:39.653 08:04:21 event -- scripts/common.sh@337 -- # IFS=.-: 00:05:39.653 08:04:21 event -- scripts/common.sh@337 -- # read -ra ver2 00:05:39.653 08:04:21 event -- scripts/common.sh@338 -- # local 'op=<' 00:05:39.653 08:04:21 event -- scripts/common.sh@340 -- # ver1_l=2 00:05:39.653 08:04:21 event -- scripts/common.sh@341 -- # ver2_l=1 00:05:39.653 08:04:21 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:39.653 08:04:21 event -- scripts/common.sh@344 -- # case "$op" in 00:05:39.653 08:04:21 event -- scripts/common.sh@345 -- # : 1 00:05:39.653 08:04:21 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:39.653 08:04:21 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:39.653 08:04:21 event -- scripts/common.sh@365 -- # decimal 1 00:05:39.653 08:04:21 event -- scripts/common.sh@353 -- # local d=1 00:05:39.653 08:04:21 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:39.653 08:04:21 event -- scripts/common.sh@355 -- # echo 1 00:05:39.653 08:04:21 event -- scripts/common.sh@365 -- # ver1[v]=1 00:05:39.653 08:04:21 event -- scripts/common.sh@366 -- # decimal 2 00:05:39.653 08:04:21 event -- scripts/common.sh@353 -- # local d=2 00:05:39.653 08:04:21 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:39.653 08:04:21 event -- scripts/common.sh@355 -- # echo 2 00:05:39.653 08:04:21 event -- scripts/common.sh@366 -- # ver2[v]=2 00:05:39.653 08:04:21 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:39.653 08:04:21 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:39.653 08:04:21 event -- scripts/common.sh@368 -- # return 0 00:05:39.653 08:04:21 event -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:39.653 08:04:21 event -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:39.653 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:39.653 --rc genhtml_branch_coverage=1 00:05:39.653 --rc genhtml_function_coverage=1 00:05:39.653 --rc genhtml_legend=1 00:05:39.653 --rc geninfo_all_blocks=1 00:05:39.653 --rc geninfo_unexecuted_blocks=1 00:05:39.653 00:05:39.653 ' 00:05:39.653 08:04:21 event -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:39.653 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:39.653 --rc genhtml_branch_coverage=1 00:05:39.653 --rc genhtml_function_coverage=1 00:05:39.653 --rc genhtml_legend=1 00:05:39.653 --rc geninfo_all_blocks=1 00:05:39.653 --rc geninfo_unexecuted_blocks=1 00:05:39.653 00:05:39.653 ' 00:05:39.653 08:04:21 event -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:39.653 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:05:39.653 --rc genhtml_branch_coverage=1 00:05:39.653 --rc genhtml_function_coverage=1 00:05:39.653 --rc genhtml_legend=1 00:05:39.653 --rc geninfo_all_blocks=1 00:05:39.653 --rc geninfo_unexecuted_blocks=1 00:05:39.653 00:05:39.653 ' 00:05:39.653 08:04:21 event -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:39.653 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:39.653 --rc genhtml_branch_coverage=1 00:05:39.653 --rc genhtml_function_coverage=1 00:05:39.653 --rc genhtml_legend=1 00:05:39.653 --rc geninfo_all_blocks=1 00:05:39.653 --rc geninfo_unexecuted_blocks=1 00:05:39.653 00:05:39.653 ' 00:05:39.653 08:04:21 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:05:39.653 08:04:21 event -- bdev/nbd_common.sh@6 -- # set -e 00:05:39.653 08:04:21 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:39.653 08:04:21 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:05:39.653 08:04:21 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:39.653 08:04:21 event -- common/autotest_common.sh@10 -- # set +x 00:05:39.653 ************************************ 00:05:39.653 START TEST event_perf 00:05:39.653 ************************************ 00:05:39.653 08:04:21 event.event_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:39.653 Running I/O for 1 seconds...[2024-11-28 08:04:21.918882] Starting SPDK v25.01-pre git sha1 27aaaa748 / DPDK 24.03.0 initialization... 
00:05:39.653 [2024-11-28 08:04:21.918955] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1174416 ] 00:05:39.912 [2024-11-28 08:04:21.985891] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:39.912 [2024-11-28 08:04:22.029583] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:39.912 [2024-11-28 08:04:22.029681] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:39.912 [2024-11-28 08:04:22.029744] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:39.912 [2024-11-28 08:04:22.029745] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:40.851 Running I/O for 1 seconds... 00:05:40.851 lcore 0: 202299 00:05:40.851 lcore 1: 202301 00:05:40.851 lcore 2: 202300 00:05:40.851 lcore 3: 202301 00:05:40.851 done. 
00:05:40.851 00:05:40.851 real 0m1.173s 00:05:40.851 user 0m4.101s 00:05:40.851 sys 0m0.069s 00:05:40.851 08:04:23 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:40.851 08:04:23 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:05:40.851 ************************************ 00:05:40.851 END TEST event_perf 00:05:40.851 ************************************ 00:05:40.851 08:04:23 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:40.851 08:04:23 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:05:40.851 08:04:23 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:40.851 08:04:23 event -- common/autotest_common.sh@10 -- # set +x 00:05:41.111 ************************************ 00:05:41.111 START TEST event_reactor 00:05:41.111 ************************************ 00:05:41.111 08:04:23 event.event_reactor -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:41.111 [2024-11-28 08:04:23.147534] Starting SPDK v25.01-pre git sha1 27aaaa748 / DPDK 24.03.0 initialization... 
00:05:41.111 [2024-11-28 08:04:23.147605] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1174670 ] 00:05:41.111 [2024-11-28 08:04:23.211583] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:41.111 [2024-11-28 08:04:23.249192] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:42.049 test_start 00:05:42.049 oneshot 00:05:42.049 tick 100 00:05:42.049 tick 100 00:05:42.049 tick 250 00:05:42.049 tick 100 00:05:42.049 tick 100 00:05:42.049 tick 250 00:05:42.049 tick 100 00:05:42.049 tick 500 00:05:42.049 tick 100 00:05:42.049 tick 100 00:05:42.049 tick 250 00:05:42.049 tick 100 00:05:42.049 tick 100 00:05:42.049 test_end 00:05:42.049 00:05:42.049 real 0m1.162s 00:05:42.049 user 0m1.093s 00:05:42.049 sys 0m0.066s 00:05:42.049 08:04:24 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:42.049 08:04:24 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:05:42.049 ************************************ 00:05:42.049 END TEST event_reactor 00:05:42.049 ************************************ 00:05:42.309 08:04:24 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:42.309 08:04:24 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:05:42.309 08:04:24 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:42.309 08:04:24 event -- common/autotest_common.sh@10 -- # set +x 00:05:42.309 ************************************ 00:05:42.309 START TEST event_reactor_perf 00:05:42.309 ************************************ 00:05:42.309 08:04:24 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf 
-t 1 00:05:42.309 [2024-11-28 08:04:24.370818] Starting SPDK v25.01-pre git sha1 27aaaa748 / DPDK 24.03.0 initialization... 00:05:42.309 [2024-11-28 08:04:24.370893] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1174918 ] 00:05:42.309 [2024-11-28 08:04:24.435064] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:42.309 [2024-11-28 08:04:24.472109] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:43.248 test_start 00:05:43.248 test_end 00:05:43.248 Performance: 494419 events per second 00:05:43.248 00:05:43.248 real 0m1.163s 00:05:43.248 user 0m1.101s 00:05:43.248 sys 0m0.059s 00:05:43.248 08:04:25 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:43.248 08:04:25 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:05:43.248 ************************************ 00:05:43.248 END TEST event_reactor_perf 00:05:43.248 ************************************ 00:05:43.507 08:04:25 event -- event/event.sh@49 -- # uname -s 00:05:43.507 08:04:25 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:43.507 08:04:25 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:43.507 08:04:25 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:43.507 08:04:25 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:43.507 08:04:25 event -- common/autotest_common.sh@10 -- # set +x 00:05:43.507 ************************************ 00:05:43.507 START TEST event_scheduler 00:05:43.507 ************************************ 00:05:43.507 08:04:25 event.event_scheduler -- common/autotest_common.sh@1129 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:43.507 * Looking for test storage... 00:05:43.507 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:05:43.507 08:04:25 event.event_scheduler -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:43.507 08:04:25 event.event_scheduler -- common/autotest_common.sh@1693 -- # lcov --version 00:05:43.507 08:04:25 event.event_scheduler -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:43.507 08:04:25 event.event_scheduler -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:43.507 08:04:25 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:43.507 08:04:25 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:43.507 08:04:25 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:43.507 08:04:25 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:05:43.507 08:04:25 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:05:43.507 08:04:25 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:05:43.507 08:04:25 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:05:43.507 08:04:25 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:05:43.507 08:04:25 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:05:43.507 08:04:25 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:05:43.507 08:04:25 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:43.507 08:04:25 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:05:43.507 08:04:25 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:05:43.507 08:04:25 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:43.507 08:04:25 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:43.507 08:04:25 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:05:43.507 08:04:25 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:05:43.507 08:04:25 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:43.507 08:04:25 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:05:43.507 08:04:25 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:05:43.507 08:04:25 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:05:43.507 08:04:25 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:05:43.507 08:04:25 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:43.507 08:04:25 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:05:43.507 08:04:25 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:05:43.507 08:04:25 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:43.507 08:04:25 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:43.507 08:04:25 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:05:43.507 08:04:25 event.event_scheduler -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:43.507 08:04:25 event.event_scheduler -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:43.507 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:43.507 --rc genhtml_branch_coverage=1 00:05:43.507 --rc genhtml_function_coverage=1 00:05:43.507 --rc genhtml_legend=1 00:05:43.507 --rc geninfo_all_blocks=1 00:05:43.507 --rc geninfo_unexecuted_blocks=1 00:05:43.507 00:05:43.507 ' 00:05:43.508 08:04:25 event.event_scheduler -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:43.508 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:43.508 --rc genhtml_branch_coverage=1 00:05:43.508 --rc genhtml_function_coverage=1 00:05:43.508 --rc 
genhtml_legend=1 00:05:43.508 --rc geninfo_all_blocks=1 00:05:43.508 --rc geninfo_unexecuted_blocks=1 00:05:43.508 00:05:43.508 ' 00:05:43.508 08:04:25 event.event_scheduler -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:43.508 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:43.508 --rc genhtml_branch_coverage=1 00:05:43.508 --rc genhtml_function_coverage=1 00:05:43.508 --rc genhtml_legend=1 00:05:43.508 --rc geninfo_all_blocks=1 00:05:43.508 --rc geninfo_unexecuted_blocks=1 00:05:43.508 00:05:43.508 ' 00:05:43.508 08:04:25 event.event_scheduler -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:43.508 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:43.508 --rc genhtml_branch_coverage=1 00:05:43.508 --rc genhtml_function_coverage=1 00:05:43.508 --rc genhtml_legend=1 00:05:43.508 --rc geninfo_all_blocks=1 00:05:43.508 --rc geninfo_unexecuted_blocks=1 00:05:43.508 00:05:43.508 ' 00:05:43.508 08:04:25 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:43.508 08:04:25 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=1175198 00:05:43.508 08:04:25 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:43.508 08:04:25 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:43.508 08:04:25 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 1175198 00:05:43.508 08:04:25 event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 1175198 ']' 00:05:43.508 08:04:25 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:43.508 08:04:25 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:43.508 08:04:25 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to 
start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:43.508 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:43.508 08:04:25 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:43.508 08:04:25 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:43.508 [2024-11-28 08:04:25.766014] Starting SPDK v25.01-pre git sha1 27aaaa748 / DPDK 24.03.0 initialization... 00:05:43.508 [2024-11-28 08:04:25.766063] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1175198 ] 00:05:43.767 [2024-11-28 08:04:25.824208] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:43.767 [2024-11-28 08:04:25.867591] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:43.767 [2024-11-28 08:04:25.867674] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:43.767 [2024-11-28 08:04:25.867762] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:43.767 [2024-11-28 08:04:25.867764] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:43.767 08:04:25 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:43.767 08:04:25 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:05:43.767 08:04:25 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:43.767 08:04:25 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:43.767 08:04:25 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:43.767 [2024-11-28 08:04:25.932371] dpdk_governor.c: 178:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:05:43.767 [2024-11-28 08:04:25.932391] 
scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:05:43.767 [2024-11-28 08:04:25.932401] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:05:43.767 [2024-11-28 08:04:25.932406] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:05:43.767 [2024-11-28 08:04:25.932412] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:05:43.767 08:04:25 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:43.767 08:04:25 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:43.767 08:04:25 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:43.767 08:04:25 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:43.767 [2024-11-28 08:04:26.007645] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:05:43.767 08:04:26 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:43.767 08:04:26 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:43.767 08:04:26 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:43.767 08:04:26 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:43.767 08:04:26 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:44.026 ************************************ 00:05:44.027 START TEST scheduler_create_thread 00:05:44.027 ************************************ 00:05:44.027 08:04:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:05:44.027 08:04:26 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:44.027 08:04:26 event.event_scheduler.scheduler_create_thread 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:05:44.027 08:04:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:44.027 2 00:05:44.027 08:04:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:44.027 08:04:26 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:44.027 08:04:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:44.027 08:04:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:44.027 3 00:05:44.027 08:04:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:44.027 08:04:26 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:44.027 08:04:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:44.027 08:04:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:44.027 4 00:05:44.027 08:04:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:44.027 08:04:26 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:44.027 08:04:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:44.027 08:04:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:44.027 5 00:05:44.027 08:04:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:44.027 08:04:26 
event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:44.027 08:04:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:44.027 08:04:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:44.027 6 00:05:44.027 08:04:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:44.027 08:04:26 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:44.027 08:04:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:44.027 08:04:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:44.027 7 00:05:44.027 08:04:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:44.027 08:04:26 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:44.027 08:04:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:44.027 08:04:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:44.027 8 00:05:44.027 08:04:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:44.027 08:04:26 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:44.027 08:04:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:44.027 08:04:26 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:44.027 9 00:05:44.027 08:04:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:44.027 08:04:26 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:44.027 08:04:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:44.027 08:04:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:44.027 10 00:05:44.027 08:04:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:44.027 08:04:26 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:05:44.027 08:04:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:44.027 08:04:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:44.027 08:04:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:44.027 08:04:26 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:44.027 08:04:26 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:44.027 08:04:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:44.027 08:04:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:44.595 08:04:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:44.595 08:04:26 
event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:44.595 08:04:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:44.595 08:04:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:45.974 08:04:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:45.974 08:04:28 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:45.974 08:04:28 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:45.974 08:04:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:45.974 08:04:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:46.912 08:04:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:46.912 00:05:46.912 real 0m3.099s 00:05:46.912 user 0m0.025s 00:05:46.912 sys 0m0.005s 00:05:46.912 08:04:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:46.912 08:04:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:46.912 ************************************ 00:05:46.912 END TEST scheduler_create_thread 00:05:46.912 ************************************ 00:05:46.912 08:04:29 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:46.912 08:04:29 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 1175198 00:05:46.912 08:04:29 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 1175198 ']' 00:05:46.912 08:04:29 event.event_scheduler -- common/autotest_common.sh@958 -- # 
kill -0 1175198 00:05:46.912 08:04:29 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:05:47.171 08:04:29 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:47.171 08:04:29 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1175198 00:05:47.171 08:04:29 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:05:47.171 08:04:29 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:05:47.171 08:04:29 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1175198' 00:05:47.171 killing process with pid 1175198 00:05:47.171 08:04:29 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 1175198 00:05:47.171 08:04:29 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 1175198 00:05:47.430 [2024-11-28 08:04:29.522818] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
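The teardown above runs `killprocess` from common/autotest_common.sh: probe the PID with `kill -0`, check the process name via `ps`, refuse to kill `sudo`, then terminate and wait. A minimal standalone sketch of that pattern (the function body is reconstructed for illustration, not the exact helper) might look like:

```shell
#!/usr/bin/env bash
# Illustrative reconstruction of the killprocess pattern in the log above.
killprocess() {
    local pid=$1
    # kill -0 sends no signal; it only tests whether the PID still exists
    kill -0 "$pid" 2>/dev/null || return 0
    local name
    name=$(ps --no-headers -o comm= "$pid")
    # mirror the log's guard: never kill a process whose comm is plain "sudo"
    [ "$name" != "sudo" ] || return 1
    echo "killing process with pid $pid"
    kill "$pid"
    # reap it if it is our child; ignore errors otherwise
    wait "$pid" 2>/dev/null || true
}
```

The `kill -0` probe is why the log shows `kill -0 1175198` before the real `kill`: it distinguishes "already gone" from "needs terminating" without sending any signal.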
00:05:47.689 00:05:47.689 real 0m4.142s 00:05:47.689 user 0m6.700s 00:05:47.689 sys 0m0.335s 00:05:47.689 08:04:29 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:47.690 08:04:29 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:47.690 ************************************ 00:05:47.690 END TEST event_scheduler 00:05:47.690 ************************************ 00:05:47.690 08:04:29 event -- event/event.sh@51 -- # modprobe -n nbd 00:05:47.690 08:04:29 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:47.690 08:04:29 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:47.690 08:04:29 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:47.690 08:04:29 event -- common/autotest_common.sh@10 -- # set +x 00:05:47.690 ************************************ 00:05:47.690 START TEST app_repeat 00:05:47.690 ************************************ 00:05:47.690 08:04:29 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:05:47.690 08:04:29 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:47.690 08:04:29 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:47.690 08:04:29 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:05:47.690 08:04:29 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:47.690 08:04:29 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:05:47.690 08:04:29 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:05:47.690 08:04:29 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:05:47.690 08:04:29 event.app_repeat -- event/event.sh@19 -- # repeat_pid=1175943 00:05:47.690 08:04:29 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:47.690 08:04:29 event.app_repeat -- event/event.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:47.690 08:04:29 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 1175943' 00:05:47.690 Process app_repeat pid: 1175943 00:05:47.690 08:04:29 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:47.690 08:04:29 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:47.690 spdk_app_start Round 0 00:05:47.690 08:04:29 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1175943 /var/tmp/spdk-nbd.sock 00:05:47.690 08:04:29 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 1175943 ']' 00:05:47.690 08:04:29 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:47.690 08:04:29 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:47.690 08:04:29 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:47.690 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:47.690 08:04:29 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:47.690 08:04:29 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:47.690 [2024-11-28 08:04:29.815811] Starting SPDK v25.01-pre git sha1 27aaaa748 / DPDK 24.03.0 initialization... 
00:05:47.690 [2024-11-28 08:04:29.815863] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1175943 ] 00:05:47.690 [2024-11-28 08:04:29.877704] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:47.690 [2024-11-28 08:04:29.918914] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:47.690 [2024-11-28 08:04:29.918918] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:47.949 08:04:30 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:47.949 08:04:30 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:47.949 08:04:30 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:47.949 Malloc0 00:05:47.949 08:04:30 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:48.209 Malloc1 00:05:48.209 08:04:30 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:48.209 08:04:30 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:48.209 08:04:30 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:48.209 08:04:30 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:48.209 08:04:30 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:48.209 08:04:30 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:48.209 08:04:30 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:48.209 
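The nbd bring-up that follows drives `waitfornbd`: poll `/proc/partitions` until the device node registers, then issue one direct-I/O read to confirm it answers. A rough sketch of that loop (simplified; the real helper in the log also round-trips a 4 KiB block through a temp file and checks its size):

```shell
#!/usr/bin/env bash
# Simplified sketch of the waitfornbd polling loop seen in the log.
waitfornbd() {
    local nbd_name=$1 i
    # retry up to 20 times, matching the (( i <= 20 )) bound in the trace
    for ((i = 1; i <= 20; i++)); do
        grep -q -w "$nbd_name" /proc/partitions && break
        sleep 0.1
    done
    ((i <= 20)) || return 1
    # one 4 KiB direct read proves the device is actually serving I/O
    dd if="/dev/$nbd_name" of=/dev/null bs=4096 count=1 iflag=direct
}
```

Polling `/proc/partitions` rather than just `test -b /dev/nbd0` matters here: the device node can exist before the nbd server has attached, and the partition table entry only appears once the kernel has sized the device.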
08:04:30 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:48.209 08:04:30 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:48.209 08:04:30 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:48.209 08:04:30 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:48.209 08:04:30 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:48.209 08:04:30 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:48.209 08:04:30 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:48.209 08:04:30 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:48.209 08:04:30 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:48.469 /dev/nbd0 00:05:48.469 08:04:30 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:48.469 08:04:30 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:48.469 08:04:30 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:48.469 08:04:30 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:48.469 08:04:30 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:48.469 08:04:30 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:48.469 08:04:30 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:48.469 08:04:30 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:48.469 08:04:30 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:48.469 08:04:30 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:48.469 08:04:30 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:05:48.469 1+0 records in 00:05:48.469 1+0 records out 00:05:48.469 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000161089 s, 25.4 MB/s 00:05:48.469 08:04:30 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:48.469 08:04:30 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:48.469 08:04:30 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:48.469 08:04:30 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:48.469 08:04:30 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:48.469 08:04:30 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:48.469 08:04:30 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:48.469 08:04:30 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:48.728 /dev/nbd1 00:05:48.728 08:04:30 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:48.728 08:04:30 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:48.728 08:04:30 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:48.728 08:04:30 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:48.728 08:04:30 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:48.728 08:04:30 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:48.728 08:04:30 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:48.728 08:04:30 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:48.728 08:04:30 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:48.728 08:04:30 event.app_repeat -- 
common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:48.728 08:04:30 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:48.728 1+0 records in 00:05:48.728 1+0 records out 00:05:48.728 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000189552 s, 21.6 MB/s 00:05:48.729 08:04:30 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:48.729 08:04:30 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:48.729 08:04:30 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:48.729 08:04:30 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:48.729 08:04:30 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:48.729 08:04:30 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:48.729 08:04:30 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:48.729 08:04:30 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:48.729 08:04:30 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:48.729 08:04:30 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:48.988 08:04:31 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:48.988 { 00:05:48.988 "nbd_device": "/dev/nbd0", 00:05:48.988 "bdev_name": "Malloc0" 00:05:48.988 }, 00:05:48.988 { 00:05:48.988 "nbd_device": "/dev/nbd1", 00:05:48.988 "bdev_name": "Malloc1" 00:05:48.988 } 00:05:48.988 ]' 00:05:48.988 08:04:31 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:48.988 { 00:05:48.988 "nbd_device": "/dev/nbd0", 00:05:48.988 "bdev_name": "Malloc0" 00:05:48.988 
}, 00:05:48.988 { 00:05:48.988 "nbd_device": "/dev/nbd1", 00:05:48.988 "bdev_name": "Malloc1" 00:05:48.988 } 00:05:48.988 ]' 00:05:48.988 08:04:31 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:48.988 08:04:31 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:48.988 /dev/nbd1' 00:05:48.988 08:04:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:48.988 /dev/nbd1' 00:05:48.988 08:04:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:48.988 08:04:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:48.988 08:04:31 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:48.988 08:04:31 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:48.988 08:04:31 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:48.988 08:04:31 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:48.988 08:04:31 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:48.988 08:04:31 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:48.988 08:04:31 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:48.988 08:04:31 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:48.988 08:04:31 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:48.988 08:04:31 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:48.988 256+0 records in 00:05:48.988 256+0 records out 00:05:48.988 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00998562 s, 105 MB/s 00:05:48.988 08:04:31 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:48.988 08:04:31 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:48.988 256+0 records in 00:05:48.988 256+0 records out 00:05:48.988 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0137255 s, 76.4 MB/s 00:05:48.988 08:04:31 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:48.988 08:04:31 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:48.988 256+0 records in 00:05:48.988 256+0 records out 00:05:48.988 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0154737 s, 67.8 MB/s 00:05:48.988 08:04:31 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:48.988 08:04:31 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:48.988 08:04:31 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:48.988 08:04:31 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:48.988 08:04:31 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:48.988 08:04:31 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:48.988 08:04:31 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:48.988 08:04:31 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:48.988 08:04:31 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:48.988 08:04:31 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:48.988 08:04:31 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:48.988 08:04:31 event.app_repeat -- 
bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:48.988 08:04:31 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:48.988 08:04:31 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:48.988 08:04:31 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:48.988 08:04:31 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:48.988 08:04:31 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:48.988 08:04:31 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:48.988 08:04:31 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:49.248 08:04:31 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:49.248 08:04:31 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:49.248 08:04:31 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:49.248 08:04:31 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:49.248 08:04:31 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:49.248 08:04:31 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:49.248 08:04:31 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:49.248 08:04:31 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:49.248 08:04:31 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:49.248 08:04:31 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:49.507 08:04:31 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:49.507 08:04:31 event.app_repeat -- 
bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:49.507 08:04:31 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:49.507 08:04:31 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:49.507 08:04:31 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:49.507 08:04:31 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:49.507 08:04:31 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:49.507 08:04:31 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:49.507 08:04:31 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:49.507 08:04:31 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:49.507 08:04:31 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:49.766 08:04:31 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:49.766 08:04:31 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:49.766 08:04:31 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:49.766 08:04:31 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:49.766 08:04:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:49.766 08:04:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:49.766 08:04:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:49.766 08:04:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:49.766 08:04:31 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:49.766 08:04:31 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:49.766 08:04:31 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:49.766 08:04:31 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:49.766 08:04:31 event.app_repeat -- event/event.sh@34 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:50.025 08:04:32 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:50.025 [2024-11-28 08:04:32.229187] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:50.025 [2024-11-28 08:04:32.266170] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:50.025 [2024-11-28 08:04:32.266173] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:50.285 [2024-11-28 08:04:32.307377] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:50.285 [2024-11-28 08:04:32.307419] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:52.819 08:04:35 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:52.819 08:04:35 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:52.819 spdk_app_start Round 1 00:05:52.819 08:04:35 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1175943 /var/tmp/spdk-nbd.sock 00:05:52.819 08:04:35 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 1175943 ']' 00:05:52.819 08:04:35 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:52.819 08:04:35 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:52.819 08:04:35 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:52.819 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:05:52.819 08:04:35 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:52.819 08:04:35 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:53.076 08:04:35 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:53.076 08:04:35 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:53.076 08:04:35 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:53.335 Malloc0 00:05:53.335 08:04:35 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:53.594 Malloc1 00:05:53.594 08:04:35 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:53.594 08:04:35 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:53.594 08:04:35 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:53.594 08:04:35 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:53.594 08:04:35 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:53.594 08:04:35 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:53.594 08:04:35 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:53.594 08:04:35 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:53.594 08:04:35 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:53.594 08:04:35 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:53.594 08:04:35 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:53.594 08:04:35 event.app_repeat -- bdev/nbd_common.sh@11 
-- # local nbd_list 00:05:53.594 08:04:35 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:53.594 08:04:35 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:53.594 08:04:35 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:53.594 08:04:35 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:53.853 /dev/nbd0 00:05:53.853 08:04:35 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:53.853 08:04:35 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:53.853 08:04:35 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:53.853 08:04:35 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:53.853 08:04:35 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:53.853 08:04:35 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:53.853 08:04:35 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:53.853 08:04:35 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:53.853 08:04:35 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:53.853 08:04:35 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:53.853 08:04:35 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:53.853 1+0 records in 00:05:53.853 1+0 records out 00:05:53.854 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0002132 s, 19.2 MB/s 00:05:53.854 08:04:35 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:53.854 08:04:35 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:53.854 08:04:35 event.app_repeat -- 
common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:53.854 08:04:35 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:53.854 08:04:35 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:53.854 08:04:35 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:53.854 08:04:35 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:53.854 08:04:35 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:54.113 /dev/nbd1 00:05:54.113 08:04:36 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:54.113 08:04:36 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:54.113 08:04:36 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:54.113 08:04:36 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:54.113 08:04:36 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:54.113 08:04:36 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:54.113 08:04:36 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:54.113 08:04:36 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:54.113 08:04:36 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:54.113 08:04:36 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:54.113 08:04:36 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:54.113 1+0 records in 00:05:54.113 1+0 records out 00:05:54.113 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000125171 s, 32.7 MB/s 00:05:54.113 08:04:36 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:54.113 08:04:36 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:54.113 08:04:36 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:54.113 08:04:36 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:54.113 08:04:36 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:54.113 08:04:36 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:54.113 08:04:36 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:54.113 08:04:36 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:54.113 08:04:36 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:54.113 08:04:36 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:54.113 08:04:36 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:54.113 { 00:05:54.113 "nbd_device": "/dev/nbd0", 00:05:54.113 "bdev_name": "Malloc0" 00:05:54.113 }, 00:05:54.113 { 00:05:54.113 "nbd_device": "/dev/nbd1", 00:05:54.113 "bdev_name": "Malloc1" 00:05:54.113 } 00:05:54.113 ]' 00:05:54.113 08:04:36 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:54.113 { 00:05:54.113 "nbd_device": "/dev/nbd0", 00:05:54.113 "bdev_name": "Malloc0" 00:05:54.113 }, 00:05:54.113 { 00:05:54.113 "nbd_device": "/dev/nbd1", 00:05:54.113 "bdev_name": "Malloc1" 00:05:54.113 } 00:05:54.113 ]' 00:05:54.113 08:04:36 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:54.373 08:04:36 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:54.373 /dev/nbd1' 00:05:54.373 08:04:36 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:54.373 /dev/nbd1' 00:05:54.373 
08:04:36 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:54.373 08:04:36 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:54.373 08:04:36 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:54.373 08:04:36 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:54.373 08:04:36 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:54.373 08:04:36 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:54.373 08:04:36 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:54.373 08:04:36 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:54.373 08:04:36 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:54.373 08:04:36 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:54.373 08:04:36 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:54.373 08:04:36 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:54.373 256+0 records in 00:05:54.373 256+0 records out 00:05:54.373 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0101008 s, 104 MB/s 00:05:54.373 08:04:36 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:54.373 08:04:36 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:54.373 256+0 records in 00:05:54.373 256+0 records out 00:05:54.373 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0139967 s, 74.9 MB/s 00:05:54.373 08:04:36 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:54.373 08:04:36 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:54.373 256+0 records in 00:05:54.373 256+0 records out 00:05:54.373 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.015105 s, 69.4 MB/s 00:05:54.373 08:04:36 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:54.373 08:04:36 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:54.373 08:04:36 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:54.373 08:04:36 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:54.373 08:04:36 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:54.373 08:04:36 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:54.373 08:04:36 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:54.373 08:04:36 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:54.373 08:04:36 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:54.373 08:04:36 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:54.373 08:04:36 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:54.373 08:04:36 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:54.373 08:04:36 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:54.373 08:04:36 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:54.373 08:04:36 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1') 00:05:54.373 08:04:36 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:54.373 08:04:36 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:54.373 08:04:36 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:54.373 08:04:36 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:54.632 08:04:36 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:54.632 08:04:36 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:54.632 08:04:36 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:54.632 08:04:36 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:54.632 08:04:36 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:54.632 08:04:36 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:54.632 08:04:36 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:54.632 08:04:36 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:54.632 08:04:36 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:54.632 08:04:36 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:54.632 08:04:36 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:54.632 08:04:36 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:54.632 08:04:36 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:54.632 08:04:36 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:54.632 08:04:36 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:54.632 08:04:36 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:54.891 08:04:36 event.app_repeat -- 
bdev/nbd_common.sh@41 -- # break 00:05:54.891 08:04:36 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:54.891 08:04:36 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:54.891 08:04:36 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:54.891 08:04:36 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:54.891 08:04:37 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:54.891 08:04:37 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:54.891 08:04:37 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:54.891 08:04:37 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:54.891 08:04:37 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:54.891 08:04:37 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:54.891 08:04:37 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:54.891 08:04:37 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:54.891 08:04:37 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:54.891 08:04:37 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:54.891 08:04:37 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:54.891 08:04:37 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:54.891 08:04:37 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:55.150 08:04:37 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:55.410 [2024-11-28 08:04:37.496769] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:55.410 [2024-11-28 08:04:37.534426] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:55.410 [2024-11-28 08:04:37.534428] 
reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:55.410 [2024-11-28 08:04:37.576347] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:55.410 [2024-11-28 08:04:37.576389] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:58.699 08:04:40 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:58.699 08:04:40 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:58.699 spdk_app_start Round 2 00:05:58.699 08:04:40 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1175943 /var/tmp/spdk-nbd.sock 00:05:58.699 08:04:40 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 1175943 ']' 00:05:58.699 08:04:40 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:58.699 08:04:40 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:58.699 08:04:40 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:58.699 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:05:58.699 08:04:40 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:58.699 08:04:40 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:58.699 08:04:40 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:58.699 08:04:40 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:58.699 08:04:40 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:58.699 Malloc0 00:05:58.699 08:04:40 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:58.699 Malloc1 00:05:58.699 08:04:40 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:58.699 08:04:40 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:58.699 08:04:40 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:58.699 08:04:40 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:58.699 08:04:40 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:58.699 08:04:40 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:58.699 08:04:40 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:58.699 08:04:40 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:58.699 08:04:40 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:58.699 08:04:40 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:58.699 08:04:40 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:58.699 08:04:40 event.app_repeat -- bdev/nbd_common.sh@11 
-- # local nbd_list 00:05:58.699 08:04:40 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:58.699 08:04:40 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:58.699 08:04:40 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:58.699 08:04:40 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:58.958 /dev/nbd0 00:05:58.958 08:04:41 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:58.958 08:04:41 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:58.958 08:04:41 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:58.958 08:04:41 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:58.958 08:04:41 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:58.958 08:04:41 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:58.958 08:04:41 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:58.958 08:04:41 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:58.958 08:04:41 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:58.958 08:04:41 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:58.958 08:04:41 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:58.958 1+0 records in 00:05:58.958 1+0 records out 00:05:58.958 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000211377 s, 19.4 MB/s 00:05:58.958 08:04:41 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:58.958 08:04:41 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:58.958 08:04:41 event.app_repeat -- 
common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:58.958 08:04:41 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:58.958 08:04:41 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:58.958 08:04:41 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:58.958 08:04:41 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:58.958 08:04:41 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:59.217 /dev/nbd1 00:05:59.217 08:04:41 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:59.217 08:04:41 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:59.217 08:04:41 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:59.217 08:04:41 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:59.217 08:04:41 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:59.217 08:04:41 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:59.217 08:04:41 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:59.217 08:04:41 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:59.217 08:04:41 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:59.217 08:04:41 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:59.217 08:04:41 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:59.217 1+0 records in 00:05:59.217 1+0 records out 00:05:59.217 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000221742 s, 18.5 MB/s 00:05:59.217 08:04:41 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:59.217 08:04:41 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:59.217 08:04:41 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:59.217 08:04:41 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:59.217 08:04:41 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:59.217 08:04:41 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:59.217 08:04:41 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:59.217 08:04:41 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:59.217 08:04:41 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:59.217 08:04:41 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:59.476 08:04:41 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:59.476 { 00:05:59.476 "nbd_device": "/dev/nbd0", 00:05:59.476 "bdev_name": "Malloc0" 00:05:59.476 }, 00:05:59.476 { 00:05:59.476 "nbd_device": "/dev/nbd1", 00:05:59.476 "bdev_name": "Malloc1" 00:05:59.476 } 00:05:59.476 ]' 00:05:59.476 08:04:41 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:59.476 { 00:05:59.476 "nbd_device": "/dev/nbd0", 00:05:59.476 "bdev_name": "Malloc0" 00:05:59.476 }, 00:05:59.476 { 00:05:59.476 "nbd_device": "/dev/nbd1", 00:05:59.476 "bdev_name": "Malloc1" 00:05:59.476 } 00:05:59.476 ]' 00:05:59.476 08:04:41 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:59.476 08:04:41 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:59.476 /dev/nbd1' 00:05:59.476 08:04:41 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:59.476 /dev/nbd1' 00:05:59.476 
08:04:41 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:59.476 08:04:41 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:59.476 08:04:41 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:59.476 08:04:41 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:59.476 08:04:41 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:59.476 08:04:41 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:59.477 08:04:41 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:59.477 08:04:41 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:59.477 08:04:41 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:59.477 08:04:41 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:59.477 08:04:41 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:59.477 08:04:41 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:59.477 256+0 records in 00:05:59.477 256+0 records out 00:05:59.477 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00444858 s, 236 MB/s 00:05:59.477 08:04:41 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:59.477 08:04:41 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:59.477 256+0 records in 00:05:59.477 256+0 records out 00:05:59.477 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0141986 s, 73.9 MB/s 00:05:59.477 08:04:41 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:59.477 08:04:41 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:59.477 256+0 records in 00:05:59.477 256+0 records out 00:05:59.477 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0150991 s, 69.4 MB/s 00:05:59.477 08:04:41 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:59.477 08:04:41 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:59.477 08:04:41 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:59.477 08:04:41 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:59.477 08:04:41 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:59.477 08:04:41 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:59.477 08:04:41 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:59.477 08:04:41 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:59.477 08:04:41 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:59.477 08:04:41 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:59.477 08:04:41 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:59.477 08:04:41 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:59.477 08:04:41 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:59.477 08:04:41 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:59.477 08:04:41 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1') 00:05:59.477 08:04:41 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:59.477 08:04:41 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:59.477 08:04:41 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:59.477 08:04:41 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:59.736 08:04:41 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:59.736 08:04:41 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:59.736 08:04:41 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:59.736 08:04:41 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:59.736 08:04:41 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:59.736 08:04:41 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:59.736 08:04:41 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:59.736 08:04:41 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:59.736 08:04:41 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:59.736 08:04:41 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:59.995 08:04:42 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:59.995 08:04:42 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:59.995 08:04:42 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:59.995 08:04:42 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:59.995 08:04:42 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:59.995 08:04:42 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:59.995 08:04:42 event.app_repeat -- 
bdev/nbd_common.sh@41 -- # break 00:05:59.995 08:04:42 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:59.995 08:04:42 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:59.995 08:04:42 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:59.995 08:04:42 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:00.254 08:04:42 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:00.254 08:04:42 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:00.254 08:04:42 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:00.254 08:04:42 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:00.254 08:04:42 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:00.254 08:04:42 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:00.254 08:04:42 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:00.254 08:04:42 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:00.254 08:04:42 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:00.254 08:04:42 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:00.254 08:04:42 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:00.254 08:04:42 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:00.254 08:04:42 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:00.514 08:04:42 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:00.514 [2024-11-28 08:04:42.736105] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:00.514 [2024-11-28 08:04:42.773448] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:00.514 [2024-11-28 08:04:42.773450] 
reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:00.773 [2024-11-28 08:04:42.814998] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:00.773 [2024-11-28 08:04:42.815041] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:04.097 08:04:45 event.app_repeat -- event/event.sh@38 -- # waitforlisten 1175943 /var/tmp/spdk-nbd.sock 00:06:04.097 08:04:45 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 1175943 ']' 00:06:04.097 08:04:45 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:04.097 08:04:45 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:04.097 08:04:45 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:04.097 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:06:04.097 08:04:45 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:04.097 08:04:45 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:04.097 08:04:45 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:04.097 08:04:45 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:06:04.097 08:04:45 event.app_repeat -- event/event.sh@39 -- # killprocess 1175943 00:06:04.097 08:04:45 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 1175943 ']' 00:06:04.097 08:04:45 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 1175943 00:06:04.097 08:04:45 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:06:04.097 08:04:45 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:04.097 08:04:45 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1175943 00:06:04.097 08:04:45 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:04.097 08:04:45 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:04.097 08:04:45 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1175943' 00:06:04.097 killing process with pid 1175943 00:06:04.097 08:04:45 event.app_repeat -- common/autotest_common.sh@973 -- # kill 1175943 00:06:04.097 08:04:45 event.app_repeat -- common/autotest_common.sh@978 -- # wait 1175943 00:06:04.097 spdk_app_start is called in Round 0. 00:06:04.097 Shutdown signal received, stop current app iteration 00:06:04.097 Starting SPDK v25.01-pre git sha1 27aaaa748 / DPDK 24.03.0 reinitialization... 00:06:04.097 spdk_app_start is called in Round 1. 00:06:04.097 Shutdown signal received, stop current app iteration 00:06:04.097 Starting SPDK v25.01-pre git sha1 27aaaa748 / DPDK 24.03.0 reinitialization... 00:06:04.097 spdk_app_start is called in Round 2. 
00:06:04.097 Shutdown signal received, stop current app iteration 00:06:04.097 Starting SPDK v25.01-pre git sha1 27aaaa748 / DPDK 24.03.0 reinitialization... 00:06:04.097 spdk_app_start is called in Round 3. 00:06:04.097 Shutdown signal received, stop current app iteration 00:06:04.097 08:04:45 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:06:04.097 08:04:45 event.app_repeat -- event/event.sh@42 -- # return 0 00:06:04.097 00:06:04.097 real 0m16.190s 00:06:04.097 user 0m35.517s 00:06:04.097 sys 0m2.504s 00:06:04.097 08:04:45 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:04.097 08:04:45 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:04.097 ************************************ 00:06:04.097 END TEST app_repeat 00:06:04.097 ************************************ 00:06:04.097 08:04:46 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:06:04.097 08:04:46 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:06:04.097 08:04:46 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:04.097 08:04:46 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:04.097 08:04:46 event -- common/autotest_common.sh@10 -- # set +x 00:06:04.097 ************************************ 00:06:04.097 START TEST cpu_locks 00:06:04.097 ************************************ 00:06:04.097 08:04:46 event.cpu_locks -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:06:04.097 * Looking for test storage... 
00:06:04.097 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:06:04.097 08:04:46 event.cpu_locks -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:04.097 08:04:46 event.cpu_locks -- common/autotest_common.sh@1693 -- # lcov --version 00:06:04.097 08:04:46 event.cpu_locks -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:04.097 08:04:46 event.cpu_locks -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:04.097 08:04:46 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:04.097 08:04:46 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:04.097 08:04:46 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:04.097 08:04:46 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:06:04.097 08:04:46 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:06:04.097 08:04:46 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:06:04.097 08:04:46 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:06:04.097 08:04:46 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:06:04.097 08:04:46 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:06:04.097 08:04:46 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:06:04.097 08:04:46 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:04.097 08:04:46 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:06:04.097 08:04:46 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:06:04.097 08:04:46 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:04.097 08:04:46 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:04.097 08:04:46 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:06:04.097 08:04:46 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:06:04.097 08:04:46 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:04.097 08:04:46 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:06:04.097 08:04:46 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:06:04.097 08:04:46 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:06:04.097 08:04:46 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:06:04.097 08:04:46 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:04.097 08:04:46 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:06:04.097 08:04:46 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:06:04.097 08:04:46 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:04.097 08:04:46 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:04.097 08:04:46 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:06:04.097 08:04:46 event.cpu_locks -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:04.097 08:04:46 event.cpu_locks -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:04.097 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:04.097 --rc genhtml_branch_coverage=1 00:06:04.097 --rc genhtml_function_coverage=1 00:06:04.097 --rc genhtml_legend=1 00:06:04.097 --rc geninfo_all_blocks=1 00:06:04.097 --rc geninfo_unexecuted_blocks=1 00:06:04.097 00:06:04.097 ' 00:06:04.097 08:04:46 event.cpu_locks -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:04.097 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:04.097 --rc genhtml_branch_coverage=1 00:06:04.097 --rc genhtml_function_coverage=1 00:06:04.097 --rc genhtml_legend=1 00:06:04.097 --rc geninfo_all_blocks=1 00:06:04.097 --rc geninfo_unexecuted_blocks=1 
00:06:04.097 00:06:04.097 ' 00:06:04.097 08:04:46 event.cpu_locks -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:04.097 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:04.097 --rc genhtml_branch_coverage=1 00:06:04.097 --rc genhtml_function_coverage=1 00:06:04.097 --rc genhtml_legend=1 00:06:04.097 --rc geninfo_all_blocks=1 00:06:04.097 --rc geninfo_unexecuted_blocks=1 00:06:04.097 00:06:04.097 ' 00:06:04.097 08:04:46 event.cpu_locks -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:04.097 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:04.097 --rc genhtml_branch_coverage=1 00:06:04.097 --rc genhtml_function_coverage=1 00:06:04.097 --rc genhtml_legend=1 00:06:04.097 --rc geninfo_all_blocks=1 00:06:04.097 --rc geninfo_unexecuted_blocks=1 00:06:04.097 00:06:04.097 ' 00:06:04.097 08:04:46 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:06:04.097 08:04:46 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:06:04.097 08:04:46 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:06:04.097 08:04:46 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:06:04.097 08:04:46 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:04.097 08:04:46 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:04.097 08:04:46 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:04.097 ************************************ 00:06:04.097 START TEST default_locks 00:06:04.097 ************************************ 00:06:04.097 08:04:46 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:06:04.097 08:04:46 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=1178938 00:06:04.097 08:04:46 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 1178938 00:06:04.097 08:04:46 
event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:04.097 08:04:46 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 1178938 ']' 00:06:04.097 08:04:46 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:04.097 08:04:46 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:04.097 08:04:46 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:04.097 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:04.097 08:04:46 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:04.097 08:04:46 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:04.097 [2024-11-28 08:04:46.302504] Starting SPDK v25.01-pre git sha1 27aaaa748 / DPDK 24.03.0 initialization... 
00:06:04.097 [2024-11-28 08:04:46.302549] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1178938 ] 00:06:04.097 [2024-11-28 08:04:46.363236] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:04.367 [2024-11-28 08:04:46.407096] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:04.368 08:04:46 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:04.368 08:04:46 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:06:04.368 08:04:46 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 1178938 00:06:04.368 08:04:46 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 1178938 00:06:04.368 08:04:46 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:04.665 lslocks: write error 00:06:04.665 08:04:46 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 1178938 00:06:04.665 08:04:46 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 1178938 ']' 00:06:04.665 08:04:46 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 1178938 00:06:04.665 08:04:46 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:06:04.665 08:04:46 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:04.665 08:04:46 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1178938 00:06:04.990 08:04:46 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:04.990 08:04:46 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:04.990 08:04:46 event.cpu_locks.default_locks -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 1178938' 00:06:04.990 killing process with pid 1178938 00:06:04.990 08:04:46 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 1178938 00:06:04.990 08:04:46 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 1178938 00:06:05.278 08:04:47 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 1178938 00:06:05.278 08:04:47 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:06:05.278 08:04:47 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 1178938 00:06:05.278 08:04:47 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:06:05.278 08:04:47 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:05.278 08:04:47 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:06:05.278 08:04:47 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:05.278 08:04:47 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 1178938 00:06:05.278 08:04:47 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 1178938 ']' 00:06:05.279 08:04:47 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:05.279 08:04:47 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:05.279 08:04:47 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:05.279 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:05.279 08:04:47 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:05.279 08:04:47 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:05.279 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (1178938) - No such process 00:06:05.279 ERROR: process (pid: 1178938) is no longer running 00:06:05.279 08:04:47 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:05.279 08:04:47 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:06:05.279 08:04:47 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:06:05.279 08:04:47 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:05.279 08:04:47 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:05.279 08:04:47 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:05.279 08:04:47 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:06:05.279 08:04:47 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:05.279 08:04:47 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:06:05.279 08:04:47 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:05.279 00:06:05.279 real 0m1.027s 00:06:05.279 user 0m0.991s 00:06:05.279 sys 0m0.457s 00:06:05.279 08:04:47 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:05.279 08:04:47 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:05.279 ************************************ 00:06:05.279 END TEST default_locks 00:06:05.279 ************************************ 00:06:05.279 08:04:47 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:06:05.279 08:04:47 event.cpu_locks -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:05.279 08:04:47 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:05.279 08:04:47 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:05.279 ************************************ 00:06:05.279 START TEST default_locks_via_rpc 00:06:05.279 ************************************ 00:06:05.279 08:04:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:06:05.279 08:04:47 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=1179201 00:06:05.279 08:04:47 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 1179201 00:06:05.279 08:04:47 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:05.279 08:04:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 1179201 ']' 00:06:05.279 08:04:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:05.279 08:04:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:05.279 08:04:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:05.279 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:05.279 08:04:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:05.279 08:04:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:05.279 [2024-11-28 08:04:47.397961] Starting SPDK v25.01-pre git sha1 27aaaa748 / DPDK 24.03.0 initialization... 
00:06:05.279 [2024-11-28 08:04:47.398006] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1179201 ] 00:06:05.279 [2024-11-28 08:04:47.459629] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:05.279 [2024-11-28 08:04:47.498406] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:05.564 08:04:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:05.564 08:04:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:05.564 08:04:47 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:06:05.564 08:04:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:05.564 08:04:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:05.564 08:04:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:05.564 08:04:47 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:06:05.564 08:04:47 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:05.564 08:04:47 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:06:05.564 08:04:47 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:05.564 08:04:47 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:06:05.564 08:04:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:05.564 08:04:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:05.564 08:04:47 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:05.564 08:04:47 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 1179201 00:06:05.564 08:04:47 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 1179201 00:06:05.564 08:04:47 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:05.861 08:04:48 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 1179201 00:06:05.861 08:04:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 1179201 ']' 00:06:05.861 08:04:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 1179201 00:06:05.861 08:04:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:06:05.861 08:04:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:05.861 08:04:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1179201 00:06:06.192 08:04:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:06.192 08:04:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:06.192 08:04:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1179201' 00:06:06.192 killing process with pid 1179201 00:06:06.192 08:04:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 1179201 00:06:06.192 08:04:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 1179201 00:06:06.474 00:06:06.474 real 0m1.115s 00:06:06.474 user 0m1.079s 00:06:06.474 sys 0m0.495s 00:06:06.474 08:04:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:06.474 08:04:48 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:06.474 ************************************ 00:06:06.474 END TEST default_locks_via_rpc 00:06:06.474 ************************************ 00:06:06.474 08:04:48 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:06:06.474 08:04:48 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:06.474 08:04:48 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:06.474 08:04:48 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:06.474 ************************************ 00:06:06.474 START TEST non_locking_app_on_locked_coremask 00:06:06.474 ************************************ 00:06:06.474 08:04:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:06:06.474 08:04:48 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=1179418 00:06:06.474 08:04:48 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 1179418 /var/tmp/spdk.sock 00:06:06.474 08:04:48 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:06.474 08:04:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 1179418 ']' 00:06:06.474 08:04:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:06.474 08:04:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:06.474 08:04:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:06:06.474 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:06.474 08:04:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:06.474 08:04:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:06.474 [2024-11-28 08:04:48.586500] Starting SPDK v25.01-pre git sha1 27aaaa748 / DPDK 24.03.0 initialization... 00:06:06.475 [2024-11-28 08:04:48.586547] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1179418 ] 00:06:06.475 [2024-11-28 08:04:48.648961] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:06.475 [2024-11-28 08:04:48.689419] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:06.744 08:04:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:06.744 08:04:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:06.744 08:04:48 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=1179478 00:06:06.744 08:04:48 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 1179478 /var/tmp/spdk2.sock 00:06:06.744 08:04:48 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:06:06.744 08:04:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 1179478 ']' 00:06:06.744 08:04:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/spdk2.sock
00:06:06.744 08:04:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:06:06.744 08:04:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
00:06:06.744 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:06:06.744 08:04:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:06:06.744 08:04:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:06:06.744 [2024-11-28 08:04:48.951984] Starting SPDK v25.01-pre git sha1 27aaaa748 / DPDK 24.03.0 initialization...
00:06:06.744 [2024-11-28 08:04:48.952030] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1179478 ]
00:06:07.015 [2024-11-28 08:04:49.051037] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated.
00:06:07.015 [2024-11-28 08:04:49.051067] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:07.015 [2024-11-28 08:04:49.131921] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:06:07.635 08:04:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:06:07.635 08:04:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0
00:06:07.635 08:04:49 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 1179418
00:06:07.635 08:04:49 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1179418
00:06:07.635 08:04:49 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:06:08.231 lslocks: write error
00:06:08.231 08:04:50 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 1179418
00:06:08.231 08:04:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 1179418 ']'
00:06:08.231 08:04:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 1179418
00:06:08.231 08:04:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname
00:06:08.231 08:04:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:06:08.231 08:04:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1179418
00:06:08.231 08:04:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:06:08.231 08:04:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:06:08.231 08:04:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1179418'
00:06:08.231 killing process with pid 1179418
00:06:08.231 08:04:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 1179418
00:06:08.231 08:04:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 1179418
00:06:08.813 08:04:50 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 1179478
00:06:08.813 08:04:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 1179478 ']'
00:06:08.813 08:04:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 1179478
00:06:08.813 08:04:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname
00:06:08.813 08:04:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:06:08.813 08:04:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1179478
00:06:08.813 08:04:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:06:08.813 08:04:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:06:08.813 08:04:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1179478'
00:06:08.813 killing process with pid 1179478
00:06:08.813 08:04:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 1179478
00:06:08.813 08:04:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 1179478
00:06:09.071 
00:06:09.071 real 0m2.680s
00:06:09.071 user 0m2.839s
00:06:09.071 sys 0m0.866s
00:06:09.071 08:04:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:09.071 08:04:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:06:09.071 ************************************
00:06:09.071 END TEST non_locking_app_on_locked_coremask
00:06:09.071 ************************************
00:06:09.071 08:04:51 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask
00:06:09.071 08:04:51 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:06:09.071 08:04:51 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:09.071 08:04:51 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:06:09.071 ************************************
00:06:09.071 START TEST locking_app_on_unlocked_coremask
00:06:09.071 ************************************
00:06:09.071 08:04:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask
00:06:09.071 08:04:51 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=1179930
00:06:09.071 08:04:51 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 1179930 /var/tmp/spdk.sock
00:06:09.071 08:04:51 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks
00:06:09.071 08:04:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 1179930 ']'
00:06:09.071 08:04:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:09.071 08:04:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:06:09.071 08:04:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:06:09.072 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:09.072 08:04:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:06:09.072 08:04:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x
00:06:09.072 [2024-11-28 08:04:51.333559] Starting SPDK v25.01-pre git sha1 27aaaa748 / DPDK 24.03.0 initialization...
00:06:09.072 [2024-11-28 08:04:51.333602] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1179930 ]
00:06:09.330 [2024-11-28 08:04:51.395637] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated.
00:06:09.330 [2024-11-28 08:04:51.395663] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:09.330 [2024-11-28 08:04:51.438668] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:06:09.589 08:04:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:06:09.589 08:04:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0
00:06:09.589 08:04:51 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=1179990
00:06:09.589 08:04:51 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 1179990 /var/tmp/spdk2.sock
00:06:09.589 08:04:51 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock
00:06:09.589 08:04:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 1179990 ']'
00:06:09.589 08:04:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock
00:06:09.589 08:04:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:06:09.589 08:04:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
00:06:09.589 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:06:09.589 08:04:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:06:09.589 08:04:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x
00:06:09.589 [2024-11-28 08:04:51.694727] Starting SPDK v25.01-pre git sha1 27aaaa748 / DPDK 24.03.0 initialization...
00:06:09.589 [2024-11-28 08:04:51.694774] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1179990 ]
00:06:09.589 [2024-11-28 08:04:51.786387] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:09.848 [2024-11-28 08:04:51.875777] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:06:10.414 08:04:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:06:10.414 08:04:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0
00:06:10.414 08:04:52 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 1179990
00:06:10.414 08:04:52 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1179990
00:06:10.414 08:04:52 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:06:10.980 lslocks: write error
00:06:10.980 08:04:53 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 1179930
00:06:10.980 08:04:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 1179930 ']'
00:06:10.980 08:04:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 1179930
00:06:10.980 08:04:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname
00:06:10.980 08:04:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:06:10.980 08:04:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1179930
00:06:10.980 08:04:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:06:10.980 08:04:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:06:10.980 08:04:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1179930'
00:06:10.980 killing process with pid 1179930
00:06:10.980 08:04:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 1179930
00:06:10.980 08:04:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 1179930
00:06:11.915 08:04:53 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 1179990
00:06:11.915 08:04:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 1179990 ']'
00:06:11.915 08:04:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 1179990
00:06:11.915 08:04:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname
00:06:11.915 08:04:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:06:11.915 08:04:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1179990
00:06:11.915 08:04:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:06:11.915 08:04:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:06:11.915 08:04:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1179990'
00:06:11.915 killing process with pid 1179990
00:06:11.915 08:04:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 1179990
00:06:11.915 08:04:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 1179990
00:06:12.175 
00:06:12.175 real 0m2.920s
00:06:12.175 user 0m3.070s
00:06:12.175 sys 0m0.978s
00:06:12.175 08:04:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:12.175 08:04:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x
00:06:12.175 ************************************
00:06:12.175 END TEST locking_app_on_unlocked_coremask
00:06:12.175 ************************************
00:06:12.175 08:04:54 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask
00:06:12.175 08:04:54 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:06:12.175 08:04:54 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:12.175 08:04:54 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:06:12.175 ************************************
00:06:12.175 START TEST locking_app_on_locked_coremask
00:06:12.175 ************************************
00:06:12.175 08:04:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask
00:06:12.175 08:04:54 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1
00:06:12.175 08:04:54 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=1180481
00:06:12.175 08:04:54 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 1180481 /var/tmp/spdk.sock
00:06:12.175 08:04:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 1180481 ']'
00:06:12.175 08:04:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:12.175 08:04:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:06:12.175 08:04:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:06:12.175 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:12.175 08:04:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:06:12.175 08:04:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:06:12.175 [2024-11-28 08:04:54.315563] Starting SPDK v25.01-pre git sha1 27aaaa748 / DPDK 24.03.0 initialization...
00:06:12.175 [2024-11-28 08:04:54.315601] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1180481 ]
00:06:12.175 [2024-11-28 08:04:54.377469] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:12.175 [2024-11-28 08:04:54.420333] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:06:12.433 08:04:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:06:12.433 08:04:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0
00:06:12.433 08:04:54 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=1180484
00:06:12.433 08:04:54 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 1180484 /var/tmp/spdk2.sock
00:06:12.433 08:04:54 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock
00:06:12.433 08:04:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0
00:06:12.433 08:04:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 1180484 /var/tmp/spdk2.sock
00:06:12.433 08:04:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten
00:06:12.433 08:04:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:06:12.433 08:04:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten
00:06:12.433 08:04:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:06:12.433 08:04:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 1180484 /var/tmp/spdk2.sock
00:06:12.433 08:04:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 1180484 ']'
00:06:12.433 08:04:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock
00:06:12.433 08:04:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:06:12.433 08:04:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
00:06:12.433 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:06:12.433 08:04:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:06:12.433 08:04:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:06:12.433 [2024-11-28 08:04:54.678064] Starting SPDK v25.01-pre git sha1 27aaaa748 / DPDK 24.03.0 initialization...
00:06:12.433 [2024-11-28 08:04:54.678114] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1180484 ]
00:06:12.691 [2024-11-28 08:04:54.767817] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 1180481 has claimed it.
00:06:12.691 [2024-11-28 08:04:54.767846] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting.
00:06:13.256 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (1180484) - No such process
00:06:13.256 ERROR: process (pid: 1180484) is no longer running
00:06:13.256 08:04:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:06:13.256 08:04:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1
00:06:13.256 08:04:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1
00:06:13.256 08:04:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:06:13.256 08:04:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:06:13.256 08:04:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:06:13.256 08:04:55 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 1180481
00:06:13.256 08:04:55 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1180481
00:06:13.256 08:04:55 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:06:13.823 lslocks: write error
00:06:13.823 08:04:55 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 1180481
00:06:13.823 08:04:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 1180481 ']'
00:06:13.823 08:04:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 1180481
00:06:13.823 08:04:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname
00:06:13.823 08:04:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:06:13.823 08:04:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1180481
00:06:13.823 08:04:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:06:13.823 08:04:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:06:13.823 08:04:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1180481'
00:06:13.823 killing process with pid 1180481
00:06:13.823 08:04:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 1180481
00:06:13.823 08:04:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 1180481
00:06:14.082 
00:06:14.082 real 0m1.883s
00:06:14.082 user 0m2.023s
00:06:14.082 sys 0m0.645s
00:06:14.083 08:04:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:14.083 08:04:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:06:14.083 ************************************
00:06:14.083 END TEST locking_app_on_locked_coremask
00:06:14.083 ************************************
00:06:14.083 08:04:56 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask
00:06:14.083 08:04:56 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:06:14.083 08:04:56 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:14.083 08:04:56 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:06:14.083 ************************************
00:06:14.083 START TEST locking_overlapped_coremask
00:06:14.083 ************************************
00:06:14.083 08:04:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask
00:06:14.083 08:04:56 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=1180754
00:06:14.083 08:04:56 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 1180754 /var/tmp/spdk.sock
00:06:14.083 08:04:56 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7
00:06:14.083 08:04:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 1180754 ']'
00:06:14.083 08:04:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:14.083 08:04:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:06:14.083 08:04:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:06:14.083 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:14.083 08:04:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:06:14.083 08:04:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x
00:06:14.083 [2024-11-28 08:04:56.271320] Starting SPDK v25.01-pre git sha1 27aaaa748 / DPDK 24.03.0 initialization...
00:06:14.083 [2024-11-28 08:04:56.271361] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1180754 ]
00:06:14.083 [2024-11-28 08:04:56.332835] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:06:14.342 [2024-11-28 08:04:56.378786] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:06:14.342 [2024-11-28 08:04:56.378885] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:06:14.342 [2024-11-28 08:04:56.378885] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:06:14.342 08:04:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:06:14.342 08:04:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0
00:06:14.342 08:04:56 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=1180809
00:06:14.342 08:04:56 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 1180809 /var/tmp/spdk2.sock
00:06:14.342 08:04:56 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock
00:06:14.342 08:04:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0
00:06:14.342 08:04:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 1180809 /var/tmp/spdk2.sock
00:06:14.342 08:04:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten
00:06:14.342 08:04:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:06:14.342 08:04:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten
00:06:14.342 08:04:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:06:14.342 08:04:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 1180809 /var/tmp/spdk2.sock
00:06:14.342 08:04:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 1180809 ']'
00:06:14.342 08:04:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock
00:06:14.342 08:04:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:06:14.342 08:04:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
00:06:14.342 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:06:14.342 08:04:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:06:14.342 08:04:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x
00:06:14.601 [2024-11-28 08:04:56.648453] Starting SPDK v25.01-pre git sha1 27aaaa748 / DPDK 24.03.0 initialization...
00:06:14.601 [2024-11-28 08:04:56.648502] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1180809 ]
00:06:14.601 [2024-11-28 08:04:56.740352] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 1180754 has claimed it.
00:06:14.601 [2024-11-28 08:04:56.740389] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting.
00:06:15.168 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (1180809) - No such process
00:06:15.168 ERROR: process (pid: 1180809) is no longer running
00:06:15.168 08:04:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:06:15.168 08:04:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1
00:06:15.168 08:04:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1
00:06:15.168 08:04:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:06:15.168 08:04:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:06:15.168 08:04:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:06:15.168 08:04:57 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks
00:06:15.168 08:04:57 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*)
00:06:15.168 08:04:57 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})
00:06:15.168 08:04:57 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]]
00:06:15.168 08:04:57 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 1180754
00:06:15.168 08:04:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 1180754 ']'
00:06:15.168 08:04:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 1180754
00:06:15.168 08:04:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname
00:06:15.168 08:04:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:06:15.168 08:04:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1180754
00:06:15.168 08:04:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:06:15.168 08:04:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:06:15.168 08:04:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1180754'
00:06:15.168 killing process with pid 1180754
00:06:15.168 08:04:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 1180754
00:06:15.168 08:04:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 1180754
00:06:15.428 
00:06:15.428 real 0m1.436s
00:06:15.428 user 0m3.998s
00:06:15.428 sys 0m0.384s
00:06:15.428 08:04:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:15.428 08:04:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x
00:06:15.428 ************************************
00:06:15.428 END TEST locking_overlapped_coremask
00:06:15.428 ************************************
00:06:15.428 08:04:57 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc
00:06:15.428 08:04:57 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:06:15.428 08:04:57 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:15.428 08:04:57 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:06:15.687 ************************************
00:06:15.687 START TEST locking_overlapped_coremask_via_rpc
00:06:15.687 ************************************
00:06:15.687 08:04:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc
00:06:15.687 08:04:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=1181017
00:06:15.687 08:04:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 1181017 /var/tmp/spdk.sock
00:06:15.687 08:04:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks
00:06:15.687 08:04:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 1181017 ']'
00:06:15.687 08:04:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:15.687 08:04:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100
00:06:15.687 08:04:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:06:15.687 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:15.687 08:04:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable
00:06:15.687 08:04:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:06:15.687 [2024-11-28 08:04:57.780758] Starting SPDK v25.01-pre git sha1 27aaaa748 / DPDK 24.03.0 initialization...
00:06:15.687 [2024-11-28 08:04:57.780805] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1181017 ]
00:06:15.687 [2024-11-28 08:04:57.843205] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated.
00:06:15.687 [2024-11-28 08:04:57.843227] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:06:15.687 [2024-11-28 08:04:57.885232] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:06:15.687 [2024-11-28 08:04:57.885329] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:06:15.687 [2024-11-28 08:04:57.885332] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:06:15.946 08:04:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:06:15.946 08:04:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0
00:06:15.946 08:04:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=1181151
00:06:15.946 08:04:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 1181151 /var/tmp/spdk2.sock
00:06:15.946 08:04:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks
00:06:15.946 08:04:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 1181151 ']'
00:06:15.946 08:04:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock
00:06:15.946 08:04:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100
00:06:15.946 08:04:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
00:06:15.946 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:06:15.946 08:04:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable
00:06:15.946 08:04:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:06:16.206 [2024-11-28 08:04:58.156025] Starting SPDK v25.01-pre git sha1 27aaaa748 / DPDK 24.03.0 initialization...
00:06:16.206 [2024-11-28 08:04:58.156077] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1181151 ]
00:06:16.206 [2024-11-28 08:04:58.250259] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated.
00:06:16.206 [2024-11-28 08:04:58.250289] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:16.206 [2024-11-28 08:04:58.338190] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:16.206 [2024-11-28 08:04:58.338303] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:16.206 [2024-11-28 08:04:58.338304] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:06:16.776 08:04:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:16.776 08:04:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:16.776 08:04:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:16.776 08:04:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:16.776 08:04:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:16.776 08:04:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:16.776 08:04:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:16.776 08:04:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:06:16.776 08:04:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:16.776 08:04:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:06:16.776 08:04:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:16.776 08:04:59 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:06:16.776 08:04:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:16.776 08:04:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:16.776 08:04:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:16.776 08:04:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:16.776 [2024-11-28 08:04:59.027025] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 1181017 has claimed it. 00:06:16.776 request: 00:06:16.776 { 00:06:16.776 "method": "framework_enable_cpumask_locks", 00:06:16.776 "req_id": 1 00:06:16.776 } 00:06:16.776 Got JSON-RPC error response 00:06:16.776 response: 00:06:16.776 { 00:06:16.776 "code": -32603, 00:06:16.776 "message": "Failed to claim CPU core: 2" 00:06:16.776 } 00:06:16.776 08:04:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:06:16.776 08:04:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:06:16.776 08:04:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:16.776 08:04:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:16.776 08:04:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:16.776 08:04:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 1181017 /var/tmp/spdk.sock 00:06:16.776 08:04:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 
-- # '[' -z 1181017 ']' 00:06:16.776 08:04:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:16.776 08:04:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:16.776 08:04:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:16.776 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:16.776 08:04:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:16.776 08:04:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:17.036 08:04:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:17.036 08:04:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:17.036 08:04:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 1181151 /var/tmp/spdk2.sock 00:06:17.036 08:04:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 1181151 ']' 00:06:17.036 08:04:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:17.036 08:04:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:17.036 08:04:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:17.036 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:06:17.036 08:04:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:17.036 08:04:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:17.296 08:04:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:17.296 08:04:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:17.296 08:04:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:17.296 08:04:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:17.296 08:04:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:17.296 08:04:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:17.296 00:06:17.296 real 0m1.717s 00:06:17.296 user 0m0.852s 00:06:17.296 sys 0m0.122s 00:06:17.296 08:04:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:17.296 08:04:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:17.296 ************************************ 00:06:17.296 END TEST locking_overlapped_coremask_via_rpc 00:06:17.296 ************************************ 00:06:17.296 08:04:59 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:06:17.296 08:04:59 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 1181017 ]] 00:06:17.296 08:04:59 event.cpu_locks -- event/cpu_locks.sh@15 -- # 
killprocess 1181017 00:06:17.296 08:04:59 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 1181017 ']' 00:06:17.296 08:04:59 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 1181017 00:06:17.296 08:04:59 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:06:17.296 08:04:59 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:17.296 08:04:59 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1181017 00:06:17.296 08:04:59 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:17.296 08:04:59 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:17.296 08:04:59 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1181017' 00:06:17.296 killing process with pid 1181017 00:06:17.296 08:04:59 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 1181017 00:06:17.296 08:04:59 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 1181017 00:06:17.867 08:04:59 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 1181151 ]] 00:06:17.867 08:04:59 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 1181151 00:06:17.867 08:04:59 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 1181151 ']' 00:06:17.867 08:04:59 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 1181151 00:06:17.867 08:04:59 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:06:17.867 08:04:59 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:17.867 08:04:59 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1181151 00:06:17.867 08:04:59 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:06:17.867 08:04:59 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:06:17.867 08:04:59 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 
1181151' 00:06:17.867 killing process with pid 1181151 00:06:17.867 08:04:59 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 1181151 00:06:17.867 08:04:59 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 1181151 00:06:18.127 08:05:00 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:18.127 08:05:00 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:06:18.127 08:05:00 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 1181017 ]] 00:06:18.127 08:05:00 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 1181017 00:06:18.127 08:05:00 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 1181017 ']' 00:06:18.127 08:05:00 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 1181017 00:06:18.127 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (1181017) - No such process 00:06:18.127 08:05:00 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 1181017 is not found' 00:06:18.127 Process with pid 1181017 is not found 00:06:18.127 08:05:00 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 1181151 ]] 00:06:18.127 08:05:00 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 1181151 00:06:18.127 08:05:00 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 1181151 ']' 00:06:18.127 08:05:00 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 1181151 00:06:18.127 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (1181151) - No such process 00:06:18.127 08:05:00 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 1181151 is not found' 00:06:18.127 Process with pid 1181151 is not found 00:06:18.127 08:05:00 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:18.127 00:06:18.127 real 0m14.174s 00:06:18.127 user 0m24.700s 00:06:18.127 sys 0m4.900s 00:06:18.127 08:05:00 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:18.127 
08:05:00 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:18.127 ************************************ 00:06:18.127 END TEST cpu_locks 00:06:18.127 ************************************ 00:06:18.127 00:06:18.127 real 0m38.544s 00:06:18.127 user 1m13.430s 00:06:18.127 sys 0m8.291s 00:06:18.127 08:05:00 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:18.127 08:05:00 event -- common/autotest_common.sh@10 -- # set +x 00:06:18.127 ************************************ 00:06:18.127 END TEST event 00:06:18.127 ************************************ 00:06:18.127 08:05:00 -- spdk/autotest.sh@169 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:06:18.127 08:05:00 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:18.127 08:05:00 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:18.127 08:05:00 -- common/autotest_common.sh@10 -- # set +x 00:06:18.127 ************************************ 00:06:18.127 START TEST thread 00:06:18.127 ************************************ 00:06:18.127 08:05:00 thread -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:06:18.386 * Looking for test storage... 
00:06:18.386 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:06:18.386 08:05:00 thread -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:18.386 08:05:00 thread -- common/autotest_common.sh@1693 -- # lcov --version 00:06:18.386 08:05:00 thread -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:18.386 08:05:00 thread -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:18.386 08:05:00 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:18.386 08:05:00 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:18.386 08:05:00 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:18.386 08:05:00 thread -- scripts/common.sh@336 -- # IFS=.-: 00:06:18.386 08:05:00 thread -- scripts/common.sh@336 -- # read -ra ver1 00:06:18.386 08:05:00 thread -- scripts/common.sh@337 -- # IFS=.-: 00:06:18.386 08:05:00 thread -- scripts/common.sh@337 -- # read -ra ver2 00:06:18.386 08:05:00 thread -- scripts/common.sh@338 -- # local 'op=<' 00:06:18.386 08:05:00 thread -- scripts/common.sh@340 -- # ver1_l=2 00:06:18.386 08:05:00 thread -- scripts/common.sh@341 -- # ver2_l=1 00:06:18.386 08:05:00 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:18.386 08:05:00 thread -- scripts/common.sh@344 -- # case "$op" in 00:06:18.386 08:05:00 thread -- scripts/common.sh@345 -- # : 1 00:06:18.386 08:05:00 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:18.386 08:05:00 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:18.386 08:05:00 thread -- scripts/common.sh@365 -- # decimal 1 00:06:18.386 08:05:00 thread -- scripts/common.sh@353 -- # local d=1 00:06:18.386 08:05:00 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:18.386 08:05:00 thread -- scripts/common.sh@355 -- # echo 1 00:06:18.386 08:05:00 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:06:18.386 08:05:00 thread -- scripts/common.sh@366 -- # decimal 2 00:06:18.386 08:05:00 thread -- scripts/common.sh@353 -- # local d=2 00:06:18.386 08:05:00 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:18.386 08:05:00 thread -- scripts/common.sh@355 -- # echo 2 00:06:18.386 08:05:00 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:06:18.386 08:05:00 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:18.386 08:05:00 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:18.386 08:05:00 thread -- scripts/common.sh@368 -- # return 0 00:06:18.386 08:05:00 thread -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:18.386 08:05:00 thread -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:18.386 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:18.386 --rc genhtml_branch_coverage=1 00:06:18.386 --rc genhtml_function_coverage=1 00:06:18.386 --rc genhtml_legend=1 00:06:18.386 --rc geninfo_all_blocks=1 00:06:18.386 --rc geninfo_unexecuted_blocks=1 00:06:18.386 00:06:18.386 ' 00:06:18.386 08:05:00 thread -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:18.386 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:18.386 --rc genhtml_branch_coverage=1 00:06:18.386 --rc genhtml_function_coverage=1 00:06:18.386 --rc genhtml_legend=1 00:06:18.386 --rc geninfo_all_blocks=1 00:06:18.386 --rc geninfo_unexecuted_blocks=1 00:06:18.386 00:06:18.386 ' 00:06:18.386 08:05:00 thread -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:18.386 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:18.386 --rc genhtml_branch_coverage=1 00:06:18.386 --rc genhtml_function_coverage=1 00:06:18.386 --rc genhtml_legend=1 00:06:18.386 --rc geninfo_all_blocks=1 00:06:18.386 --rc geninfo_unexecuted_blocks=1 00:06:18.386 00:06:18.386 ' 00:06:18.386 08:05:00 thread -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:18.386 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:18.386 --rc genhtml_branch_coverage=1 00:06:18.386 --rc genhtml_function_coverage=1 00:06:18.386 --rc genhtml_legend=1 00:06:18.386 --rc geninfo_all_blocks=1 00:06:18.386 --rc geninfo_unexecuted_blocks=1 00:06:18.386 00:06:18.386 ' 00:06:18.386 08:05:00 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:18.386 08:05:00 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:06:18.386 08:05:00 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:18.386 08:05:00 thread -- common/autotest_common.sh@10 -- # set +x 00:06:18.386 ************************************ 00:06:18.386 START TEST thread_poller_perf 00:06:18.386 ************************************ 00:06:18.386 08:05:00 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:18.387 [2024-11-28 08:05:00.545703] Starting SPDK v25.01-pre git sha1 27aaaa748 / DPDK 24.03.0 initialization... 
00:06:18.387 [2024-11-28 08:05:00.545769] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1181593 ] 00:06:18.387 [2024-11-28 08:05:00.612497] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:18.387 [2024-11-28 08:05:00.653660] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:18.387 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:06:19.787 [2024-11-28T07:05:02.056Z] ====================================== 00:06:19.787 [2024-11-28T07:05:02.056Z] busy:2305932014 (cyc) 00:06:19.787 [2024-11-28T07:05:02.056Z] total_run_count: 402000 00:06:19.787 [2024-11-28T07:05:02.056Z] tsc_hz: 2300000000 (cyc) 00:06:19.787 [2024-11-28T07:05:02.056Z] ====================================== 00:06:19.787 [2024-11-28T07:05:02.056Z] poller_cost: 5736 (cyc), 2493 (nsec) 00:06:19.787 00:06:19.787 real 0m1.176s 00:06:19.787 user 0m1.101s 00:06:19.787 sys 0m0.072s 00:06:19.787 08:05:01 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:19.787 08:05:01 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:19.787 ************************************ 00:06:19.787 END TEST thread_poller_perf 00:06:19.787 ************************************ 00:06:19.787 08:05:01 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:19.787 08:05:01 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:06:19.787 08:05:01 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:19.787 08:05:01 thread -- common/autotest_common.sh@10 -- # set +x 00:06:19.787 ************************************ 00:06:19.787 START TEST thread_poller_perf 00:06:19.787 
************************************ 00:06:19.787 08:05:01 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:19.787 [2024-11-28 08:05:01.789666] Starting SPDK v25.01-pre git sha1 27aaaa748 / DPDK 24.03.0 initialization... 00:06:19.787 [2024-11-28 08:05:01.789724] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1181839 ] 00:06:19.787 [2024-11-28 08:05:01.853135] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:19.787 [2024-11-28 08:05:01.894753] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:19.787 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:06:20.724 [2024-11-28T07:05:02.993Z] ====================================== 00:06:20.724 [2024-11-28T07:05:02.993Z] busy:2301696524 (cyc) 00:06:20.724 [2024-11-28T07:05:02.993Z] total_run_count: 5395000 00:06:20.724 [2024-11-28T07:05:02.993Z] tsc_hz: 2300000000 (cyc) 00:06:20.724 [2024-11-28T07:05:02.993Z] ====================================== 00:06:20.724 [2024-11-28T07:05:02.993Z] poller_cost: 426 (cyc), 185 (nsec) 00:06:20.724 00:06:20.724 real 0m1.164s 00:06:20.724 user 0m1.091s 00:06:20.724 sys 0m0.069s 00:06:20.724 08:05:02 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:20.724 08:05:02 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:20.724 ************************************ 00:06:20.724 END TEST thread_poller_perf 00:06:20.724 ************************************ 00:06:20.724 08:05:02 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:20.724 00:06:20.724 real 0m2.648s 00:06:20.724 user 0m2.351s 00:06:20.724 sys 0m0.308s 00:06:20.724 08:05:02 thread -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:06:20.724 08:05:02 thread -- common/autotest_common.sh@10 -- # set +x 00:06:20.724 ************************************ 00:06:20.724 END TEST thread 00:06:20.724 ************************************ 00:06:20.985 08:05:02 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:06:20.985 08:05:02 -- spdk/autotest.sh@176 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:06:20.985 08:05:02 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:20.985 08:05:02 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:20.985 08:05:02 -- common/autotest_common.sh@10 -- # set +x 00:06:20.985 ************************************ 00:06:20.985 START TEST app_cmdline 00:06:20.985 ************************************ 00:06:20.985 08:05:03 app_cmdline -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:06:20.985 * Looking for test storage... 00:06:20.985 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:06:20.985 08:05:03 app_cmdline -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:20.985 08:05:03 app_cmdline -- common/autotest_common.sh@1693 -- # lcov --version 00:06:20.985 08:05:03 app_cmdline -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:20.985 08:05:03 app_cmdline -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:20.985 08:05:03 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:20.985 08:05:03 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:20.985 08:05:03 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:20.985 08:05:03 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:06:20.985 08:05:03 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:06:20.985 08:05:03 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:06:20.985 08:05:03 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 
00:06:20.985 08:05:03 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:06:20.985 08:05:03 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:06:20.985 08:05:03 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:06:20.985 08:05:03 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:20.985 08:05:03 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:06:20.985 08:05:03 app_cmdline -- scripts/common.sh@345 -- # : 1 00:06:20.985 08:05:03 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:20.985 08:05:03 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:20.985 08:05:03 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:06:20.985 08:05:03 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:06:20.985 08:05:03 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:20.985 08:05:03 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:06:20.985 08:05:03 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:06:20.985 08:05:03 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:06:20.985 08:05:03 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:06:20.985 08:05:03 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:20.985 08:05:03 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:06:20.985 08:05:03 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:06:20.985 08:05:03 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:20.985 08:05:03 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:20.985 08:05:03 app_cmdline -- scripts/common.sh@368 -- # return 0 00:06:20.985 08:05:03 app_cmdline -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:20.985 08:05:03 app_cmdline -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:20.985 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:20.985 --rc genhtml_branch_coverage=1 
00:06:20.985 --rc genhtml_function_coverage=1 00:06:20.985 --rc genhtml_legend=1 00:06:20.985 --rc geninfo_all_blocks=1 00:06:20.985 --rc geninfo_unexecuted_blocks=1 00:06:20.985 00:06:20.985 ' 00:06:20.985 08:05:03 app_cmdline -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:20.985 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:20.985 --rc genhtml_branch_coverage=1 00:06:20.985 --rc genhtml_function_coverage=1 00:06:20.985 --rc genhtml_legend=1 00:06:20.985 --rc geninfo_all_blocks=1 00:06:20.985 --rc geninfo_unexecuted_blocks=1 00:06:20.985 00:06:20.985 ' 00:06:20.985 08:05:03 app_cmdline -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:20.985 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:20.985 --rc genhtml_branch_coverage=1 00:06:20.985 --rc genhtml_function_coverage=1 00:06:20.985 --rc genhtml_legend=1 00:06:20.985 --rc geninfo_all_blocks=1 00:06:20.985 --rc geninfo_unexecuted_blocks=1 00:06:20.985 00:06:20.985 ' 00:06:20.985 08:05:03 app_cmdline -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:20.985 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:20.985 --rc genhtml_branch_coverage=1 00:06:20.985 --rc genhtml_function_coverage=1 00:06:20.985 --rc genhtml_legend=1 00:06:20.985 --rc geninfo_all_blocks=1 00:06:20.985 --rc geninfo_unexecuted_blocks=1 00:06:20.985 00:06:20.985 ' 00:06:20.985 08:05:03 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:06:20.985 08:05:03 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=1182144 00:06:20.985 08:05:03 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:06:20.985 08:05:03 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 1182144 00:06:20.985 08:05:03 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 1182144 ']' 00:06:20.985 08:05:03 app_cmdline -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:06:20.985 08:05:03 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:20.985 08:05:03 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:20.985 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:20.985 08:05:03 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:20.985 08:05:03 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:20.985 [2024-11-28 08:05:03.244117] Starting SPDK v25.01-pre git sha1 27aaaa748 / DPDK 24.03.0 initialization... 00:06:20.985 [2024-11-28 08:05:03.244167] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1182144 ] 00:06:21.245 [2024-11-28 08:05:03.305874] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:21.245 [2024-11-28 08:05:03.348564] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:21.506 08:05:03 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:21.506 08:05:03 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:06:21.506 08:05:03 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:06:21.506 { 00:06:21.506 "version": "SPDK v25.01-pre git sha1 27aaaa748", 00:06:21.506 "fields": { 00:06:21.506 "major": 25, 00:06:21.506 "minor": 1, 00:06:21.506 "patch": 0, 00:06:21.506 "suffix": "-pre", 00:06:21.506 "commit": "27aaaa748" 00:06:21.506 } 00:06:21.506 } 00:06:21.506 08:05:03 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:06:21.506 08:05:03 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:06:21.506 08:05:03 app_cmdline -- app/cmdline.sh@24 -- 
# expected_methods+=("spdk_get_version") 00:06:21.506 08:05:03 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:06:21.506 08:05:03 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:06:21.506 08:05:03 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:06:21.506 08:05:03 app_cmdline -- app/cmdline.sh@26 -- # sort 00:06:21.506 08:05:03 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:21.506 08:05:03 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:21.506 08:05:03 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:21.765 08:05:03 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:06:21.765 08:05:03 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:06:21.765 08:05:03 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:21.765 08:05:03 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:06:21.765 08:05:03 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:21.765 08:05:03 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:21.765 08:05:03 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:21.765 08:05:03 app_cmdline -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:21.765 08:05:03 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:21.765 08:05:03 app_cmdline -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:21.765 08:05:03 app_cmdline -- common/autotest_common.sh@644 -- # case 
"$(type -t "$arg")" in 00:06:21.765 08:05:03 app_cmdline -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:21.765 08:05:03 app_cmdline -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:06:21.765 08:05:03 app_cmdline -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:21.765 request: 00:06:21.765 { 00:06:21.765 "method": "env_dpdk_get_mem_stats", 00:06:21.765 "req_id": 1 00:06:21.765 } 00:06:21.765 Got JSON-RPC error response 00:06:21.765 response: 00:06:21.765 { 00:06:21.765 "code": -32601, 00:06:21.766 "message": "Method not found" 00:06:21.766 } 00:06:21.766 08:05:03 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:06:21.766 08:05:03 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:21.766 08:05:03 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:21.766 08:05:03 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:21.766 08:05:03 app_cmdline -- app/cmdline.sh@1 -- # killprocess 1182144 00:06:21.766 08:05:03 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 1182144 ']' 00:06:21.766 08:05:03 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 1182144 00:06:21.766 08:05:03 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:06:21.766 08:05:03 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:21.766 08:05:03 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1182144 00:06:21.766 08:05:04 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:21.766 08:05:04 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:21.766 08:05:04 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1182144' 00:06:21.766 killing process with pid 1182144 00:06:21.766 
08:05:04 app_cmdline -- common/autotest_common.sh@973 -- # kill 1182144 00:06:21.766 08:05:04 app_cmdline -- common/autotest_common.sh@978 -- # wait 1182144 00:06:22.334 00:06:22.334 real 0m1.299s 00:06:22.334 user 0m1.533s 00:06:22.334 sys 0m0.427s 00:06:22.334 08:05:04 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:22.334 08:05:04 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:22.334 ************************************ 00:06:22.334 END TEST app_cmdline 00:06:22.334 ************************************ 00:06:22.334 08:05:04 -- spdk/autotest.sh@177 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:06:22.334 08:05:04 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:22.334 08:05:04 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:22.334 08:05:04 -- common/autotest_common.sh@10 -- # set +x 00:06:22.334 ************************************ 00:06:22.334 START TEST version 00:06:22.334 ************************************ 00:06:22.334 08:05:04 version -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:06:22.334 * Looking for test storage... 
00:06:22.334 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:06:22.334 08:05:04 version -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:22.334 08:05:04 version -- common/autotest_common.sh@1693 -- # lcov --version 00:06:22.334 08:05:04 version -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:22.334 08:05:04 version -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:22.334 08:05:04 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:22.334 08:05:04 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:22.334 08:05:04 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:22.334 08:05:04 version -- scripts/common.sh@336 -- # IFS=.-: 00:06:22.334 08:05:04 version -- scripts/common.sh@336 -- # read -ra ver1 00:06:22.334 08:05:04 version -- scripts/common.sh@337 -- # IFS=.-: 00:06:22.334 08:05:04 version -- scripts/common.sh@337 -- # read -ra ver2 00:06:22.334 08:05:04 version -- scripts/common.sh@338 -- # local 'op=<' 00:06:22.334 08:05:04 version -- scripts/common.sh@340 -- # ver1_l=2 00:06:22.334 08:05:04 version -- scripts/common.sh@341 -- # ver2_l=1 00:06:22.334 08:05:04 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:22.334 08:05:04 version -- scripts/common.sh@344 -- # case "$op" in 00:06:22.334 08:05:04 version -- scripts/common.sh@345 -- # : 1 00:06:22.334 08:05:04 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:22.334 08:05:04 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:22.334 08:05:04 version -- scripts/common.sh@365 -- # decimal 1 00:06:22.334 08:05:04 version -- scripts/common.sh@353 -- # local d=1 00:06:22.334 08:05:04 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:22.334 08:05:04 version -- scripts/common.sh@355 -- # echo 1 00:06:22.334 08:05:04 version -- scripts/common.sh@365 -- # ver1[v]=1 00:06:22.334 08:05:04 version -- scripts/common.sh@366 -- # decimal 2 00:06:22.334 08:05:04 version -- scripts/common.sh@353 -- # local d=2 00:06:22.334 08:05:04 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:22.334 08:05:04 version -- scripts/common.sh@355 -- # echo 2 00:06:22.334 08:05:04 version -- scripts/common.sh@366 -- # ver2[v]=2 00:06:22.334 08:05:04 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:22.334 08:05:04 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:22.334 08:05:04 version -- scripts/common.sh@368 -- # return 0 00:06:22.334 08:05:04 version -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:22.334 08:05:04 version -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:22.334 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:22.334 --rc genhtml_branch_coverage=1 00:06:22.334 --rc genhtml_function_coverage=1 00:06:22.334 --rc genhtml_legend=1 00:06:22.334 --rc geninfo_all_blocks=1 00:06:22.334 --rc geninfo_unexecuted_blocks=1 00:06:22.334 00:06:22.334 ' 00:06:22.334 08:05:04 version -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:22.334 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:22.334 --rc genhtml_branch_coverage=1 00:06:22.334 --rc genhtml_function_coverage=1 00:06:22.334 --rc genhtml_legend=1 00:06:22.334 --rc geninfo_all_blocks=1 00:06:22.334 --rc geninfo_unexecuted_blocks=1 00:06:22.334 00:06:22.334 ' 00:06:22.334 08:05:04 version -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:22.334 
--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:22.334 --rc genhtml_branch_coverage=1 00:06:22.334 --rc genhtml_function_coverage=1 00:06:22.334 --rc genhtml_legend=1 00:06:22.334 --rc geninfo_all_blocks=1 00:06:22.334 --rc geninfo_unexecuted_blocks=1 00:06:22.334 00:06:22.334 ' 00:06:22.334 08:05:04 version -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:22.334 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:22.334 --rc genhtml_branch_coverage=1 00:06:22.334 --rc genhtml_function_coverage=1 00:06:22.334 --rc genhtml_legend=1 00:06:22.334 --rc geninfo_all_blocks=1 00:06:22.334 --rc geninfo_unexecuted_blocks=1 00:06:22.334 00:06:22.334 ' 00:06:22.334 08:05:04 version -- app/version.sh@17 -- # get_header_version major 00:06:22.334 08:05:04 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:22.334 08:05:04 version -- app/version.sh@14 -- # cut -f2 00:06:22.334 08:05:04 version -- app/version.sh@14 -- # tr -d '"' 00:06:22.334 08:05:04 version -- app/version.sh@17 -- # major=25 00:06:22.334 08:05:04 version -- app/version.sh@18 -- # get_header_version minor 00:06:22.334 08:05:04 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:22.334 08:05:04 version -- app/version.sh@14 -- # cut -f2 00:06:22.334 08:05:04 version -- app/version.sh@14 -- # tr -d '"' 00:06:22.334 08:05:04 version -- app/version.sh@18 -- # minor=1 00:06:22.334 08:05:04 version -- app/version.sh@19 -- # get_header_version patch 00:06:22.334 08:05:04 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:22.334 08:05:04 version -- app/version.sh@14 -- # cut -f2 00:06:22.334 08:05:04 version -- app/version.sh@14 -- # tr -d '"' 00:06:22.334 
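The `version.sh` trace above reads `SPDK_VERSION_MAJOR`/`MINOR`/`PATCH`/`SUFFIX` out of `include/spdk/version.h` with a `grep | cut -f2 | tr -d '"'` pipeline and assembles `25.1rc0`. A self-contained sketch of that logic, using a stand-in header written for this example and `awk` in place of the tab-delimited `cut` pipeline:

```shell
# Stand-in for include/spdk/version.h, written just for this example.
header=$(mktemp)
cat > "$header" <<'EOF'
#define SPDK_VERSION_MAJOR 25
#define SPDK_VERSION_MINOR 1
#define SPDK_VERSION_PATCH 0
#define SPDK_VERSION_SUFFIX "-pre"
EOF

get_header_version() {
    # awk stands in for the grep | cut -f2 | tr -d '"' pipeline in the log.
    awk -v key="SPDK_VERSION_$1" '$2 == key { gsub(/"/, "", $3); print $3 }' "$header"
}

major=$(get_header_version MAJOR)
minor=$(get_header_version MINOR)
patch=$(get_header_version PATCH)
suffix=$(get_header_version SUFFIX)

version="$major.$minor"
if [ "$patch" -ne 0 ]; then
    version="$version.$patch"          # patch is only appended when nonzero
fi
if [ "$suffix" = "-pre" ]; then
    version="${version}rc0"            # Python packaging spells "-pre" as rc0
fi
rm -f "$header"
echo "$version"
```

This reproduces the `25.1 == 25.1rc0` comparison the test performs against `python3 -c 'import spdk; print(spdk.__version__)'`.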
08:05:04 version -- app/version.sh@19 -- # patch=0 00:06:22.334 08:05:04 version -- app/version.sh@20 -- # get_header_version suffix 00:06:22.334 08:05:04 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:22.334 08:05:04 version -- app/version.sh@14 -- # cut -f2 00:06:22.334 08:05:04 version -- app/version.sh@14 -- # tr -d '"' 00:06:22.334 08:05:04 version -- app/version.sh@20 -- # suffix=-pre 00:06:22.334 08:05:04 version -- app/version.sh@22 -- # version=25.1 00:06:22.334 08:05:04 version -- app/version.sh@25 -- # (( patch != 0 )) 00:06:22.334 08:05:04 version -- app/version.sh@28 -- # version=25.1rc0 00:06:22.334 08:05:04 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:06:22.334 08:05:04 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:06:22.595 08:05:04 version -- app/version.sh@30 -- # py_version=25.1rc0 00:06:22.595 08:05:04 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:06:22.595 00:06:22.595 real 0m0.228s 00:06:22.595 user 0m0.149s 00:06:22.595 sys 0m0.120s 00:06:22.595 08:05:04 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:22.595 08:05:04 version -- common/autotest_common.sh@10 -- # set +x 00:06:22.595 ************************************ 00:06:22.595 END TEST version 00:06:22.595 ************************************ 00:06:22.595 08:05:04 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:06:22.595 08:05:04 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:06:22.595 08:05:04 -- spdk/autotest.sh@194 -- # uname -s 00:06:22.595 08:05:04 -- spdk/autotest.sh@194 -- # [[ Linux 
== Linux ]] 00:06:22.595 08:05:04 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:06:22.595 08:05:04 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:06:22.595 08:05:04 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:06:22.595 08:05:04 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:06:22.595 08:05:04 -- spdk/autotest.sh@260 -- # timing_exit lib 00:06:22.595 08:05:04 -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:22.595 08:05:04 -- common/autotest_common.sh@10 -- # set +x 00:06:22.595 08:05:04 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:06:22.595 08:05:04 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:06:22.595 08:05:04 -- spdk/autotest.sh@276 -- # '[' 1 -eq 1 ']' 00:06:22.595 08:05:04 -- spdk/autotest.sh@277 -- # export NET_TYPE 00:06:22.595 08:05:04 -- spdk/autotest.sh@280 -- # '[' tcp = rdma ']' 00:06:22.595 08:05:04 -- spdk/autotest.sh@283 -- # '[' tcp = tcp ']' 00:06:22.595 08:05:04 -- spdk/autotest.sh@284 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:06:22.595 08:05:04 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:22.595 08:05:04 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:22.595 08:05:04 -- common/autotest_common.sh@10 -- # set +x 00:06:22.595 ************************************ 00:06:22.595 START TEST nvmf_tcp 00:06:22.595 ************************************ 00:06:22.595 08:05:04 nvmf_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:06:22.595 * Looking for test storage... 
00:06:22.595 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:06:22.595 08:05:04 nvmf_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:22.595 08:05:04 nvmf_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:06:22.595 08:05:04 nvmf_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:22.856 08:05:04 nvmf_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:22.856 08:05:04 nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:22.856 08:05:04 nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:22.856 08:05:04 nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:22.856 08:05:04 nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:06:22.856 08:05:04 nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:06:22.856 08:05:04 nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:06:22.856 08:05:04 nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:06:22.856 08:05:04 nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:06:22.856 08:05:04 nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:06:22.856 08:05:04 nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:06:22.856 08:05:04 nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:22.856 08:05:04 nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:06:22.856 08:05:04 nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:06:22.856 08:05:04 nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:22.856 08:05:04 nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:22.856 08:05:04 nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:06:22.856 08:05:04 nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:06:22.856 08:05:04 nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:22.856 08:05:04 nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:06:22.856 08:05:04 nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:06:22.856 08:05:04 nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:06:22.856 08:05:04 nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:06:22.856 08:05:04 nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:22.856 08:05:04 nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:06:22.856 08:05:04 nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:06:22.856 08:05:04 nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:22.856 08:05:04 nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:22.856 08:05:04 nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:06:22.856 08:05:04 nvmf_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:22.856 08:05:04 nvmf_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:22.856 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:22.856 --rc genhtml_branch_coverage=1 00:06:22.856 --rc genhtml_function_coverage=1 00:06:22.856 --rc genhtml_legend=1 00:06:22.856 --rc geninfo_all_blocks=1 00:06:22.856 --rc geninfo_unexecuted_blocks=1 00:06:22.856 00:06:22.856 ' 00:06:22.856 08:05:04 nvmf_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:22.856 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:22.856 --rc genhtml_branch_coverage=1 00:06:22.857 --rc genhtml_function_coverage=1 00:06:22.857 --rc genhtml_legend=1 00:06:22.857 --rc geninfo_all_blocks=1 00:06:22.857 --rc geninfo_unexecuted_blocks=1 00:06:22.857 00:06:22.857 ' 00:06:22.857 08:05:04 nvmf_tcp -- common/autotest_common.sh@1707 -- # export 
'LCOV=lcov 00:06:22.857 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:22.857 --rc genhtml_branch_coverage=1 00:06:22.857 --rc genhtml_function_coverage=1 00:06:22.857 --rc genhtml_legend=1 00:06:22.857 --rc geninfo_all_blocks=1 00:06:22.857 --rc geninfo_unexecuted_blocks=1 00:06:22.857 00:06:22.857 ' 00:06:22.857 08:05:04 nvmf_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:22.857 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:22.857 --rc genhtml_branch_coverage=1 00:06:22.857 --rc genhtml_function_coverage=1 00:06:22.857 --rc genhtml_legend=1 00:06:22.857 --rc geninfo_all_blocks=1 00:06:22.857 --rc geninfo_unexecuted_blocks=1 00:06:22.857 00:06:22.857 ' 00:06:22.857 08:05:04 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:06:22.857 08:05:04 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:06:22.857 08:05:04 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:06:22.857 08:05:04 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:22.857 08:05:04 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:22.857 08:05:04 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:22.857 ************************************ 00:06:22.857 START TEST nvmf_target_core 00:06:22.857 ************************************ 00:06:22.857 08:05:04 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:06:22.857 * Looking for test storage... 
00:06:22.857 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:06:22.857 08:05:05 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:22.857 08:05:05 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1693 -- # lcov --version 00:06:22.857 08:05:05 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:22.857 08:05:05 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:22.857 08:05:05 nvmf_tcp.nvmf_target_core -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:22.857 08:05:05 nvmf_tcp.nvmf_target_core -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:22.857 08:05:05 nvmf_tcp.nvmf_target_core -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:22.857 08:05:05 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # IFS=.-: 00:06:22.857 08:05:05 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # read -ra ver1 00:06:22.857 08:05:05 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # IFS=.-: 00:06:22.857 08:05:05 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # read -ra ver2 00:06:22.857 08:05:05 nvmf_tcp.nvmf_target_core -- scripts/common.sh@338 -- # local 'op=<' 00:06:22.857 08:05:05 nvmf_tcp.nvmf_target_core -- scripts/common.sh@340 -- # ver1_l=2 00:06:22.857 08:05:05 nvmf_tcp.nvmf_target_core -- scripts/common.sh@341 -- # ver2_l=1 00:06:22.857 08:05:05 nvmf_tcp.nvmf_target_core -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:22.857 08:05:05 nvmf_tcp.nvmf_target_core -- scripts/common.sh@344 -- # case "$op" in 00:06:22.857 08:05:05 nvmf_tcp.nvmf_target_core -- scripts/common.sh@345 -- # : 1 00:06:22.857 08:05:05 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:22.857 08:05:05 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:22.857 08:05:05 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # decimal 1 00:06:22.857 08:05:05 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=1 00:06:22.857 08:05:05 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:22.857 08:05:05 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 1 00:06:22.857 08:05:05 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # ver1[v]=1 00:06:22.857 08:05:05 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # decimal 2 00:06:22.857 08:05:05 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=2 00:06:22.857 08:05:05 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:22.857 08:05:05 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 2 00:06:22.857 08:05:05 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # ver2[v]=2 00:06:22.857 08:05:05 nvmf_tcp.nvmf_target_core -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:22.857 08:05:05 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:22.857 08:05:05 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # return 0 00:06:22.857 08:05:05 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:22.857 08:05:05 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:22.857 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:22.857 --rc genhtml_branch_coverage=1 00:06:22.857 --rc genhtml_function_coverage=1 00:06:22.857 --rc genhtml_legend=1 00:06:22.857 --rc geninfo_all_blocks=1 00:06:22.857 --rc geninfo_unexecuted_blocks=1 00:06:22.857 00:06:22.857 ' 00:06:22.857 08:05:05 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:22.857 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:22.857 --rc genhtml_branch_coverage=1 
00:06:22.857 --rc genhtml_function_coverage=1 00:06:22.857 --rc genhtml_legend=1 00:06:22.857 --rc geninfo_all_blocks=1 00:06:22.857 --rc geninfo_unexecuted_blocks=1 00:06:22.857 00:06:22.857 ' 00:06:22.857 08:05:05 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:22.857 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:22.857 --rc genhtml_branch_coverage=1 00:06:22.857 --rc genhtml_function_coverage=1 00:06:22.857 --rc genhtml_legend=1 00:06:22.857 --rc geninfo_all_blocks=1 00:06:22.857 --rc geninfo_unexecuted_blocks=1 00:06:22.857 00:06:22.857 ' 00:06:22.857 08:05:05 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:22.857 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:22.857 --rc genhtml_branch_coverage=1 00:06:22.857 --rc genhtml_function_coverage=1 00:06:22.857 --rc genhtml_legend=1 00:06:22.857 --rc geninfo_all_blocks=1 00:06:22.857 --rc geninfo_unexecuted_blocks=1 00:06:22.857 00:06:22.857 ' 00:06:23.118 08:05:05 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:06:23.118 08:05:05 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:06:23.118 08:05:05 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:23.118 08:05:05 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:06:23.118 08:05:05 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:23.118 08:05:05 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:23.118 08:05:05 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:23.118 08:05:05 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:23.118 08:05:05 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:23.118 08:05:05 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:23.118 08:05:05 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:23.118 08:05:05 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:23.118 08:05:05 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:23.118 08:05:05 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:23.118 08:05:05 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:06:23.118 08:05:05 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:06:23.118 08:05:05 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:23.118 08:05:05 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:23.118 08:05:05 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:23.118 08:05:05 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:23.118 08:05:05 nvmf_tcp.nvmf_target_core -- 
nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:23.119 08:05:05 nvmf_tcp.nvmf_target_core -- scripts/common.sh@15 -- # shopt -s extglob 00:06:23.119 08:05:05 nvmf_tcp.nvmf_target_core -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:23.119 08:05:05 nvmf_tcp.nvmf_target_core -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:23.119 08:05:05 nvmf_tcp.nvmf_target_core -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:23.119 08:05:05 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:23.119 08:05:05 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:23.119 08:05:05 nvmf_tcp.nvmf_target_core -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:23.119 08:05:05 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:06:23.119 08:05:05 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:23.119 08:05:05 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # : 0 00:06:23.119 08:05:05 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:23.119 08:05:05 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:23.119 08:05:05 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:23.119 08:05:05 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:23.119 08:05:05 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:23.119 08:05:05 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:23.119 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:23.119 08:05:05 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@37 -- # '[' -n '' ']' 
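The `[: : integer expression expected` message from `common.sh: line 33` above is a genuine shell error, not a test failure: `[ "$x" -eq 1 ]` requires both operands to be integers, so an empty or unset variable makes `[` exit with status 2 and the condition falls through to the else branch. A minimal reproduction and the usual `${var:-0}` guard (the `flag` variable here is illustrative):

```shell
flag=""   # stands in for an unset feature flag such as a SPDK_TEST_* variable

# -eq on an empty string errors out ("integer expression expected", status 2),
# so the if takes the else branch rather than the then branch.
if [ "$flag" -eq 1 ] 2>/dev/null; then
    bad=yes
else
    bad=no
fi

# Supplying a default makes the comparison always see an integer.
if [ "${flag:-0}" -eq 1 ]; then
    guarded=yes
else
    guarded=no
fi
echo "bad=$bad guarded=$guarded"
```

Because the erroring test evaluates as false here, the log continues past the message and the run is unaffected, which is why the suite tolerates it.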
00:06:23.119 08:05:05 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:23.119 08:05:05 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:23.119 08:05:05 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:06:23.119 08:05:05 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:06:23.119 08:05:05 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:06:23.119 08:05:05 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:06:23.119 08:05:05 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:23.119 08:05:05 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:23.119 08:05:05 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:23.119 ************************************ 00:06:23.119 START TEST nvmf_abort 00:06:23.119 ************************************ 00:06:23.119 08:05:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:06:23.119 * Looking for test storage... 
00:06:23.119 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:23.119 08:05:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:23.119 08:05:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1693 -- # lcov --version 00:06:23.119 08:05:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:23.119 08:05:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:23.119 08:05:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:23.119 08:05:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:23.119 08:05:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:23.119 08:05:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:06:23.119 08:05:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:06:23.119 08:05:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:06:23.119 08:05:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:06:23.119 08:05:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:06:23.119 08:05:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:06:23.119 08:05:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:06:23.119 08:05:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:23.119 08:05:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:06:23.119 08:05:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:06:23.119 08:05:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:23.119 
08:05:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:23.119 08:05:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:06:23.119 08:05:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:06:23.119 08:05:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:23.119 08:05:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:06:23.119 08:05:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:06:23.119 08:05:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:06:23.119 08:05:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:06:23.119 08:05:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:23.119 08:05:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:06:23.119 08:05:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:06:23.119 08:05:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:23.119 08:05:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:23.119 08:05:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:06:23.119 08:05:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:23.119 08:05:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:23.119 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:23.119 --rc genhtml_branch_coverage=1 00:06:23.119 --rc genhtml_function_coverage=1 00:06:23.119 --rc genhtml_legend=1 00:06:23.119 --rc geninfo_all_blocks=1 00:06:23.119 --rc 
geninfo_unexecuted_blocks=1 00:06:23.119 00:06:23.119 ' 00:06:23.119 08:05:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:23.119 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:23.119 --rc genhtml_branch_coverage=1 00:06:23.119 --rc genhtml_function_coverage=1 00:06:23.119 --rc genhtml_legend=1 00:06:23.119 --rc geninfo_all_blocks=1 00:06:23.119 --rc geninfo_unexecuted_blocks=1 00:06:23.119 00:06:23.119 ' 00:06:23.119 08:05:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:23.119 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:23.119 --rc genhtml_branch_coverage=1 00:06:23.119 --rc genhtml_function_coverage=1 00:06:23.119 --rc genhtml_legend=1 00:06:23.119 --rc geninfo_all_blocks=1 00:06:23.119 --rc geninfo_unexecuted_blocks=1 00:06:23.119 00:06:23.119 ' 00:06:23.119 08:05:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:23.119 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:23.119 --rc genhtml_branch_coverage=1 00:06:23.119 --rc genhtml_function_coverage=1 00:06:23.119 --rc genhtml_legend=1 00:06:23.119 --rc geninfo_all_blocks=1 00:06:23.119 --rc geninfo_unexecuted_blocks=1 00:06:23.119 00:06:23.119 ' 00:06:23.119 08:05:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:23.119 08:05:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:06:23.119 08:05:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:23.119 08:05:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:23.119 08:05:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:23.119 08:05:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 
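The lcov version check traced above (`lt 1.15 2` via `cmp_versions`) splits each version string on `.`, `-`, and `:` and compares element by element. A minimal sketch of that pattern, as a simplified re-implementation rather than SPDK's actual `scripts/common.sh` helper:

```shell
# Simplified sketch of the element-wise version comparison traced above.
# Assumption: this reimplements only the "less than" case of cmp_versions.
lt() {
  local -a ver1 ver2
  IFS=.-: read -ra ver1 <<< "$1"
  IFS=.-: read -ra ver2 <<< "$2"
  local v len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
  for (( v = 0; v < len; v++ )); do
    (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1  # first differing field decides
    (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
  done
  return 1  # equal versions are not "less than"
}

lt 1.15 2 && echo "lcov 1.15 is older than 2"
```

In the log this comparison succeeding is what selects the `--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1` option spelling for the older lcov.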
00:06:23.119 08:05:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:23.119 08:05:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:23.119 08:05:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:23.119 08:05:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:23.119 08:05:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:23.119 08:05:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:23.119 08:05:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:06:23.119 08:05:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:06:23.119 08:05:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:23.119 08:05:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:23.119 08:05:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:23.119 08:05:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:23.119 08:05:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:23.120 08:05:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:06:23.120 08:05:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:23.120 08:05:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:23.120 08:05:05 
nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:23.120 08:05:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:23.120 08:05:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:23.120 08:05:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:23.120 08:05:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:06:23.120 08:05:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:23.120 08:05:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:06:23.120 08:05:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:23.120 08:05:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:23.120 08:05:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:23.120 08:05:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:23.120 08:05:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:23.120 08:05:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:23.120 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:23.120 08:05:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:23.120 08:05:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:23.120 08:05:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:23.120 08:05:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:06:23.120 08:05:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:06:23.120 08:05:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:06:23.120 08:05:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:06:23.120 08:05:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:23.120 08:05:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:06:23.120 08:05:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:06:23.120 08:05:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:06:23.120 08:05:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:23.120 08:05:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:23.120 08:05:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:23.380 08:05:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:06:23.380 08:05:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:06:23.380 08:05:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:06:23.380 08:05:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:28.677 08:05:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:28.677 08:05:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:06:28.677 08:05:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:06:28.677 08:05:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:06:28.677 08:05:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:06:28.677 08:05:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:06:28.677 08:05:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:06:28.677 08:05:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:06:28.678 08:05:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:06:28.678 08:05:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:06:28.678 08:05:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:06:28.678 08:05:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:06:28.678 08:05:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:06:28.678 08:05:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:06:28.678 08:05:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:06:28.678 08:05:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:28.678 08:05:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:28.678 08:05:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:28.678 08:05:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:28.678 08:05:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:28.678 08:05:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:28.678 08:05:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:28.678 08:05:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:06:28.678 08:05:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:28.678 08:05:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:28.678 08:05:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:28.678 08:05:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:28.678 08:05:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:06:28.678 08:05:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:06:28.678 08:05:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:06:28.678 08:05:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:06:28.678 08:05:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:06:28.678 08:05:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:06:28.678 08:05:10 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:28.678 08:05:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:06:28.678 Found 0000:86:00.0 (0x8086 - 0x159b) 00:06:28.678 08:05:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:28.678 08:05:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:28.678 08:05:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:28.678 08:05:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:28.678 08:05:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:28.678 08:05:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:28.678 08:05:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:06:28.678 Found 0000:86:00.1 (0x8086 - 0x159b) 00:06:28.678 08:05:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:28.678 08:05:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:28.678 08:05:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:28.678 08:05:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:28.678 08:05:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:28.678 08:05:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:06:28.678 08:05:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:06:28.678 08:05:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:06:28.678 08:05:10 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:28.678 08:05:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:28.678 08:05:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:28.678 08:05:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:28.678 08:05:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:28.678 08:05:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:28.678 08:05:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:28.678 08:05:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:06:28.678 Found net devices under 0000:86:00.0: cvl_0_0 00:06:28.678 08:05:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:28.678 08:05:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:28.678 08:05:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:28.678 08:05:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:28.678 08:05:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:28.678 08:05:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:28.678 08:05:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:28.678 08:05:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:28.678 08:05:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net 
devices under 0000:86:00.1: cvl_0_1' 00:06:28.678 Found net devices under 0000:86:00.1: cvl_0_1 00:06:28.678 08:05:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:28.678 08:05:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:06:28.678 08:05:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # is_hw=yes 00:06:28.678 08:05:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:06:28.678 08:05:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:06:28.678 08:05:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:06:28.678 08:05:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:28.678 08:05:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:28.678 08:05:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:28.678 08:05:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:28.678 08:05:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:06:28.678 08:05:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:28.678 08:05:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:28.678 08:05:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:06:28.678 08:05:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:06:28.678 08:05:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:28.678 08:05:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:06:28.678 08:05:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:06:28.678 08:05:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:06:28.678 08:05:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:06:28.678 08:05:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:28.938 08:05:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:28.938 08:05:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:28.938 08:05:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:06:28.938 08:05:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:28.938 08:05:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:28.938 08:05:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:28.938 08:05:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:06:28.938 08:05:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:06:28.938 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:06:28.938 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.401 ms 00:06:28.938 00:06:28.938 --- 10.0.0.2 ping statistics --- 00:06:28.938 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:28.938 rtt min/avg/max/mdev = 0.401/0.401/0.401/0.000 ms 00:06:28.938 08:05:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:28.938 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:28.938 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.165 ms 00:06:28.938 00:06:28.938 --- 10.0.0.1 ping statistics --- 00:06:28.938 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:28.938 rtt min/avg/max/mdev = 0.165/0.165/0.165/0.000 ms 00:06:28.938 08:05:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:28.938 08:05:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@450 -- # return 0 00:06:28.938 08:05:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:06:28.938 08:05:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:28.938 08:05:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:06:28.938 08:05:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:06:28.938 08:05:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:28.938 08:05:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:06:28.938 08:05:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:06:28.938 08:05:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:06:28.938 08:05:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:06:28.938 08:05:11 nvmf_tcp.nvmf_target_core.nvmf_abort 
-- common/autotest_common.sh@726 -- # xtrace_disable 00:06:28.938 08:05:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:28.938 08:05:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@509 -- # nvmfpid=1185829 00:06:28.938 08:05:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:06:28.938 08:05:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 1185829 00:06:28.938 08:05:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@835 -- # '[' -z 1185829 ']' 00:06:28.938 08:05:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:28.938 08:05:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:28.938 08:05:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:28.938 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:28.938 08:05:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:28.938 08:05:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:29.198 [2024-11-28 08:05:11.211877] Starting SPDK v25.01-pre git sha1 27aaaa748 / DPDK 24.03.0 initialization... 
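The `waitforlisten 1185829` call above blocks until the freshly started `nvmf_tgt` is reachable over its RPC socket. A minimal sketch of that polling pattern; this is an assumption-laden simplification, not SPDK's real `autotest_common.sh` helper (which also retries an actual RPC against the socket):

```shell
# Poll until a target process has created its UNIX-domain RPC socket.
# Assumption: simplified sketch; socket path defaults to /var/tmp/spdk.sock
# as in the "Waiting for process to start up..." message above.
waitforlisten() {
  local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} i
  for (( i = 0; i < 100; i++ )); do
    kill -0 "$pid" 2>/dev/null || return 1  # target process died
    [ -S "$rpc_addr" ] && return 0          # socket exists: target is up
    sleep 0.1
  done
  return 1                                  # timed out
}
```

Checking `kill -0` on each iteration is what lets the helper fail fast instead of timing out when the target crashes during startup.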
00:06:29.198 [2024-11-28 08:05:11.211925] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:29.198 [2024-11-28 08:05:11.278963] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:29.198 [2024-11-28 08:05:11.321258] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:29.198 [2024-11-28 08:05:11.321293] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:29.198 [2024-11-28 08:05:11.321300] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:29.198 [2024-11-28 08:05:11.321306] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:29.198 [2024-11-28 08:05:11.321311] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
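The `-m 0xE` core mask passed to `nvmf_tgt` above is what produces "Total cores available: 3" and the three reactor lines that follow (cores 1, 2, 3). A quick illustrative expansion of a hex mask into its core list:

```shell
# Expand a CPU core mask (as passed via -m/-c) into the cores it enables.
# Illustrative helper, not part of the SPDK scripts.
mask_to_cores() {
  local mask=$(( $1 )) core cores=()
  for (( core = 0; mask > 0; core++, mask >>= 1 )); do
    (( mask & 1 )) && cores+=("$core")  # bit N set => core N enabled
  done
  echo "${cores[@]}"
}

mask_to_cores 0xE
```

`0xE` is binary `1110`, so bit 0 (core 0) stays free for other work while cores 1-3 run reactors, matching the log.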
00:06:29.198 [2024-11-28 08:05:11.322688] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:29.198 [2024-11-28 08:05:11.322760] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:29.198 [2024-11-28 08:05:11.322761] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:29.198 08:05:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:29.198 08:05:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@868 -- # return 0 00:06:29.198 08:05:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:06:29.198 08:05:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:29.198 08:05:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:29.198 08:05:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:29.198 08:05:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:06:29.198 08:05:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:29.198 08:05:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:29.198 [2024-11-28 08:05:11.459391] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:29.198 08:05:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:29.198 08:05:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:06:29.198 08:05:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:29.198 08:05:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:29.458 Malloc0 00:06:29.458 08:05:11 
nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:29.458 08:05:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:06:29.458 08:05:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:29.458 08:05:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:29.458 Delay0 00:06:29.458 08:05:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:29.458 08:05:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:06:29.458 08:05:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:29.458 08:05:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:29.458 08:05:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:29.458 08:05:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:06:29.458 08:05:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:29.458 08:05:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:29.458 08:05:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:29.458 08:05:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:06:29.458 08:05:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:29.458 08:05:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:29.458 [2024-11-28 08:05:11.523946] 
tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:29.458 08:05:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:29.458 08:05:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:06:29.458 08:05:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:29.458 08:05:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:29.458 08:05:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:29.458 08:05:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:06:29.458 [2024-11-28 08:05:11.649714] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:06:31.998 Initializing NVMe Controllers 00:06:31.999 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:06:31.999 controller IO queue size 128 less than required 00:06:31.999 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:06:31.999 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:06:31.999 Initialization complete. Launching workers. 
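The abort run's final counters (123 I/Os completed, 36579 failed; 36640 aborts submitted, 62 failed to submit; 36583 success, 57 unsuccessful, 0 failed) balance out arithmetically. A quick cross-check using the numbers this run prints:

```shell
# Cross-check of this run's abort statistics (numbers taken from the log).
completed=123   io_failed=36579   # NS line: I/O completed / failed
submitted=36640 submit_failed=62  # CTRLR line: aborts submitted / not submitted
success=36583   unsuccessful=57 failed=0

# Abort outcomes account for every submitted abort.
(( success + unsuccessful + failed == submitted ))
# Completed plus failed I/Os match total abort attempts (36702 each way).
(( completed + io_failed == submitted + submit_failed ))
echo "counters consistent"
```

This kind of bookkeeping check is how the harness confirms no I/O or abort went unaccounted for in the run.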
00:06:31.999 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 36579 00:06:31.999 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 36640, failed to submit 62 00:06:31.999 success 36583, unsuccessful 57, failed 0 00:06:31.999 08:05:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:06:31.999 08:05:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:31.999 08:05:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:31.999 08:05:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:31.999 08:05:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:06:31.999 08:05:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:06:31.999 08:05:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:06:31.999 08:05:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:06:31.999 08:05:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:06:31.999 08:05:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:06:31.999 08:05:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:06:31.999 08:05:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:06:31.999 rmmod nvme_tcp 00:06:31.999 rmmod nvme_fabrics 00:06:31.999 rmmod nvme_keyring 00:06:31.999 08:05:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:06:31.999 08:05:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:06:31.999 08:05:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:06:31.999 08:05:13 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 1185829 ']' 00:06:31.999 08:05:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 1185829 00:06:31.999 08:05:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@954 -- # '[' -z 1185829 ']' 00:06:31.999 08:05:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@958 -- # kill -0 1185829 00:06:31.999 08:05:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@959 -- # uname 00:06:31.999 08:05:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:31.999 08:05:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1185829 00:06:31.999 08:05:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:06:31.999 08:05:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:06:31.999 08:05:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1185829' 00:06:31.999 killing process with pid 1185829 00:06:31.999 08:05:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@973 -- # kill 1185829 00:06:31.999 08:05:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@978 -- # wait 1185829 00:06:31.999 08:05:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:06:31.999 08:05:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:06:31.999 08:05:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:06:31.999 08:05:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:06:31.999 08:05:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # iptables-save 00:06:31.999 08:05:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- 
# grep -v SPDK_NVMF 00:06:31.999 08:05:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore 00:06:31.999 08:05:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:06:31.999 08:05:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:06:31.999 08:05:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:31.999 08:05:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:31.999 08:05:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:33.925 08:05:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:06:33.925 00:06:33.925 real 0m10.875s 00:06:33.925 user 0m11.380s 00:06:33.925 sys 0m5.196s 00:06:33.925 08:05:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:33.925 08:05:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:33.925 ************************************ 00:06:33.925 END TEST nvmf_abort 00:06:33.925 ************************************ 00:06:33.925 08:05:16 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:06:33.925 08:05:16 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:33.925 08:05:16 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:33.925 08:05:16 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:33.925 ************************************ 00:06:33.925 START TEST nvmf_ns_hotplug_stress 00:06:33.925 ************************************ 00:06:33.925 08:05:16 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:06:34.185 * Looking for test storage... 00:06:34.185 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:34.185 08:05:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:34.185 08:05:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # lcov --version 00:06:34.185 08:05:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:34.185 08:05:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:34.185 08:05:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:34.185 08:05:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:34.185 08:05:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:34.185 08:05:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:06:34.185 08:05:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:06:34.185 08:05:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:06:34.185 08:05:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:06:34.185 08:05:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:06:34.185 08:05:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:06:34.185 08:05:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:06:34.185 
08:05:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:34.185 08:05:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:06:34.185 08:05:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:06:34.185 08:05:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:34.185 08:05:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:34.185 08:05:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:06:34.185 08:05:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:06:34.185 08:05:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:34.185 08:05:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:06:34.185 08:05:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:06:34.185 08:05:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:06:34.185 08:05:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:06:34.185 08:05:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:34.185 08:05:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:06:34.185 08:05:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:06:34.185 08:05:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:34.185 08:05:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:34.185 08:05:16 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:06:34.185 08:05:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:34.185 08:05:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:34.185 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:34.185 --rc genhtml_branch_coverage=1 00:06:34.185 --rc genhtml_function_coverage=1 00:06:34.185 --rc genhtml_legend=1 00:06:34.185 --rc geninfo_all_blocks=1 00:06:34.185 --rc geninfo_unexecuted_blocks=1 00:06:34.185 00:06:34.185 ' 00:06:34.185 08:05:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:34.185 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:34.185 --rc genhtml_branch_coverage=1 00:06:34.185 --rc genhtml_function_coverage=1 00:06:34.185 --rc genhtml_legend=1 00:06:34.185 --rc geninfo_all_blocks=1 00:06:34.185 --rc geninfo_unexecuted_blocks=1 00:06:34.185 00:06:34.185 ' 00:06:34.185 08:05:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:34.185 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:34.185 --rc genhtml_branch_coverage=1 00:06:34.185 --rc genhtml_function_coverage=1 00:06:34.185 --rc genhtml_legend=1 00:06:34.185 --rc geninfo_all_blocks=1 00:06:34.185 --rc geninfo_unexecuted_blocks=1 00:06:34.185 00:06:34.185 ' 00:06:34.185 08:05:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:34.185 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:34.185 --rc genhtml_branch_coverage=1 00:06:34.185 --rc genhtml_function_coverage=1 00:06:34.185 --rc genhtml_legend=1 00:06:34.185 --rc geninfo_all_blocks=1 00:06:34.185 --rc geninfo_unexecuted_blocks=1 00:06:34.185 
00:06:34.185 ' 00:06:34.185 08:05:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:34.185 08:05:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:06:34.185 08:05:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:34.185 08:05:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:34.185 08:05:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:34.185 08:05:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:34.185 08:05:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:34.185 08:05:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:34.185 08:05:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:34.185 08:05:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:34.185 08:05:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:34.185 08:05:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:34.185 08:05:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:06:34.185 08:05:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:06:34.185 08:05:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 
00:06:34.185 08:05:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:34.185 08:05:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:34.185 08:05:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:34.185 08:05:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:34.185 08:05:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:06:34.185 08:05:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:34.185 08:05:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:34.185 08:05:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:34.185 08:05:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:34.185 08:05:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:34.186 08:05:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:34.186 08:05:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:06:34.186 08:05:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:34.186 08:05:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:06:34.186 08:05:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:34.186 08:05:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:34.186 08:05:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:34.186 08:05:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:34.186 08:05:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:34.186 08:05:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:34.186 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:34.186 08:05:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:34.186 08:05:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:34.186 08:05:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:34.186 08:05:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:34.186 08:05:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:06:34.186 08:05:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:06:34.186 08:05:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:34.186 08:05:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:06:34.186 08:05:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:06:34.186 08:05:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:06:34.186 08:05:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:34.186 08:05:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:34.186 08:05:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:34.186 08:05:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:06:34.186 08:05:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:06:34.186 08:05:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:06:34.186 08:05:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:39.468 08:05:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:39.468 08:05:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:06:39.468 08:05:21 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:06:39.468 08:05:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:06:39.468 08:05:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:06:39.468 08:05:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:06:39.468 08:05:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:06:39.468 08:05:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:06:39.468 08:05:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:06:39.468 08:05:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:06:39.468 08:05:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # local -ga e810 00:06:39.468 08:05:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:06:39.468 08:05:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:06:39.468 08:05:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:06:39.468 08:05:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:06:39.468 08:05:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:39.468 08:05:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:39.468 08:05:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:39.468 08:05:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:39.468 08:05:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:39.468 08:05:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:39.468 08:05:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:39.468 08:05:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:06:39.468 08:05:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:39.468 08:05:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:39.468 08:05:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:39.468 08:05:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:39.468 08:05:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:06:39.468 08:05:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:06:39.468 08:05:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:06:39.468 08:05:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:06:39.468 08:05:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:06:39.468 08:05:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:06:39.468 08:05:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in 
"${pci_devs[@]}" 00:06:39.468 08:05:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:06:39.468 Found 0000:86:00.0 (0x8086 - 0x159b) 00:06:39.468 08:05:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:39.468 08:05:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:39.468 08:05:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:39.468 08:05:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:39.468 08:05:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:39.468 08:05:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:39.468 08:05:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:06:39.468 Found 0000:86:00.1 (0x8086 - 0x159b) 00:06:39.468 08:05:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:39.468 08:05:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:39.468 08:05:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:39.468 08:05:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:39.468 08:05:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:39.468 08:05:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:06:39.468 08:05:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:06:39.468 08:05:21 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:06:39.468 08:05:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:39.468 08:05:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:39.468 08:05:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:39.468 08:05:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:39.468 08:05:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:39.468 08:05:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:39.468 08:05:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:39.468 08:05:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:06:39.468 Found net devices under 0000:86:00.0: cvl_0_0 00:06:39.468 08:05:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:39.468 08:05:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:39.468 08:05:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:39.468 08:05:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:39.468 08:05:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:39.468 08:05:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:39.468 08:05:21 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:39.468 08:05:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:39.468 08:05:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:06:39.468 Found net devices under 0000:86:00.1: cvl_0_1 00:06:39.468 08:05:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:39.468 08:05:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:06:39.468 08:05:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:06:39.468 08:05:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:06:39.468 08:05:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:06:39.468 08:05:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:06:39.468 08:05:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:39.468 08:05:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:39.468 08:05:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:39.468 08:05:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:39.468 08:05:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:06:39.468 08:05:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:39.468 08:05:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:39.468 08:05:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:06:39.468 08:05:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:06:39.468 08:05:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:39.468 08:05:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:39.468 08:05:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:06:39.468 08:05:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:06:39.468 08:05:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:06:39.468 08:05:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:39.469 08:05:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:39.469 08:05:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:39.469 08:05:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:06:39.469 08:05:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:39.469 08:05:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:39.469 08:05:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:39.469 08:05:21 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:06:39.469 08:05:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:06:39.469 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:39.469 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.419 ms 00:06:39.469 00:06:39.469 --- 10.0.0.2 ping statistics --- 00:06:39.469 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:39.469 rtt min/avg/max/mdev = 0.419/0.419/0.419/0.000 ms 00:06:39.469 08:05:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:39.469 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:39.469 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.139 ms 00:06:39.469 00:06:39.469 --- 10.0.0.1 ping statistics --- 00:06:39.469 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:39.469 rtt min/avg/max/mdev = 0.139/0.139/0.139/0.000 ms 00:06:39.469 08:05:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:39.469 08:05:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # return 0 00:06:39.469 08:05:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:06:39.469 08:05:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:39.469 08:05:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:06:39.469 08:05:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:06:39.469 08:05:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t 
tcp -o' 00:06:39.469 08:05:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:06:39.469 08:05:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:06:39.469 08:05:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:06:39.469 08:05:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:06:39.469 08:05:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:39.469 08:05:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:39.469 08:05:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=1189778 00:06:39.469 08:05:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 1189778 00:06:39.469 08:05:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # '[' -z 1189778 ']' 00:06:39.469 08:05:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:39.469 08:05:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:39.469 08:05:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:39.469 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:39.469 08:05:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:39.469 08:05:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:39.469 08:05:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:06:39.469 [2024-11-28 08:05:21.576277] Starting SPDK v25.01-pre git sha1 27aaaa748 / DPDK 24.03.0 initialization... 00:06:39.469 [2024-11-28 08:05:21.576320] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:39.469 [2024-11-28 08:05:21.641515] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:39.469 [2024-11-28 08:05:21.684444] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:39.469 [2024-11-28 08:05:21.684477] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:39.469 [2024-11-28 08:05:21.684485] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:39.469 [2024-11-28 08:05:21.684491] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:39.469 [2024-11-28 08:05:21.684496] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
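The three reactor_run notices that follow come from the `-m 0xE` core mask passed to nvmf_tgt: bit n of the mask selects core n, so 0xE (binary 1110) pins reactors to cores 1, 2 and 3, matching the "Total cores available: 3" notice. A quick shell check of that decoding:

```shell
# Decode an SPDK core mask: bit n set => a reactor runs on core n.
mask=$((0xE))
cores=""
i=0
while [ "$mask" -gt 0 ]; do
    if [ $((mask & 1)) -eq 1 ]; then
        cores="$cores $i"
    fi
    mask=$((mask >> 1))
    i=$((i + 1))
done
echo "cores:$cores"   # cores: 1 2 3
```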
00:06:39.469 [2024-11-28 08:05:21.685850] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:39.469 [2024-11-28 08:05:21.685939] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:39.469 [2024-11-28 08:05:21.685941] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:39.730 08:05:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:39.730 08:05:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@868 -- # return 0 00:06:39.730 08:05:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:06:39.730 08:05:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:39.730 08:05:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:39.730 08:05:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:39.730 08:05:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:06:39.730 08:05:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:06:39.730 [2024-11-28 08:05:21.992661] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:39.989 08:05:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:06:39.989 08:05:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:40.249 [2024-11-28 08:05:22.390115] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:40.249 08:05:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:06:40.509 08:05:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:06:40.768 Malloc0 00:06:40.768 08:05:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:06:40.768 Delay0 00:06:41.028 08:05:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:41.028 08:05:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:06:41.287 NULL1 00:06:41.287 08:05:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:06:41.547 08:05:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=1190114 00:06:41.547 08:05:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp 
adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:06:41.547 08:05:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1190114 00:06:41.547 08:05:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:41.807 08:05:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:41.807 08:05:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:06:41.807 08:05:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:06:42.067 true 00:06:42.067 08:05:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1190114 00:06:42.067 08:05:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:42.327 08:05:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:42.588 08:05:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:06:42.588 08:05:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:06:42.848 true 00:06:42.848 08:05:24 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1190114 00:06:42.848 08:05:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:43.108 08:05:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:43.108 08:05:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:06:43.108 08:05:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:06:43.367 true 00:06:43.367 08:05:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1190114 00:06:43.367 08:05:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:43.627 08:05:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:43.886 08:05:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:06:43.886 08:05:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:06:44.146 true 00:06:44.146 08:05:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1190114 00:06:44.146 08:05:26 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:44.146 08:05:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:44.405 08:05:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:06:44.405 08:05:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:06:44.664 true 00:06:44.664 08:05:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1190114 00:06:44.664 08:05:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:44.923 08:05:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:45.183 08:05:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:06:45.183 08:05:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:06:45.183 true 00:06:45.443 08:05:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1190114 00:06:45.443 08:05:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:45.443 08:05:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:45.702 08:05:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:06:45.702 08:05:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:06:45.962 true 00:06:45.962 08:05:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1190114 00:06:45.962 08:05:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:46.222 08:05:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:46.482 08:05:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:06:46.482 08:05:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:06:46.482 true 00:06:46.482 08:05:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1190114 00:06:46.482 08:05:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:46.742 
08:05:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:47.002 08:05:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:06:47.002 08:05:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:06:47.261 true 00:06:47.261 08:05:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1190114 00:06:47.261 08:05:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:47.521 08:05:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:47.521 08:05:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:06:47.782 08:05:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:06:47.782 true 00:06:47.782 08:05:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1190114 00:06:47.782 08:05:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:48.043 08:05:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:48.303 08:05:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:06:48.303 08:05:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:06:48.563 true 00:06:48.563 08:05:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1190114 00:06:48.563 08:05:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:48.823 08:05:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:48.823 08:05:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:06:48.823 08:05:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:06:49.082 true 00:06:49.082 08:05:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1190114 00:06:49.082 08:05:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:49.342 08:05:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:49.602 
08:05:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:06:49.602 08:05:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:06:49.861 true 00:06:49.861 08:05:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1190114 00:06:49.861 08:05:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:50.121 08:05:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:50.121 08:05:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:06:50.121 08:05:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:06:50.380 true 00:06:50.380 08:05:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1190114 00:06:50.380 08:05:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:50.639 08:05:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:50.897 08:05:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:06:50.897 08:05:33 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:06:51.156 true 00:06:51.156 08:05:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1190114 00:06:51.156 08:05:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:51.415 08:05:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:51.415 08:05:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:06:51.415 08:05:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:06:51.673 true 00:06:51.673 08:05:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1190114 00:06:51.673 08:05:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:51.931 08:05:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:52.190 08:05:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:06:52.190 08:05:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:06:52.448 true 00:06:52.448 08:05:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1190114 00:06:52.448 08:05:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:52.707 08:05:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:52.985 08:05:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:06:52.986 08:05:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:06:52.986 true 00:06:52.986 08:05:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1190114 00:06:52.986 08:05:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:53.245 08:05:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:53.505 08:05:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:06:53.505 08:05:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:06:53.765 true 00:06:53.765 08:05:35 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1190114 00:06:53.765 08:05:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:54.024 08:05:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:54.024 08:05:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:06:54.024 08:05:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:06:54.284 true 00:06:54.284 08:05:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1190114 00:06:54.284 08:05:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:54.544 08:05:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:54.805 08:05:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:06:54.805 08:05:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:06:55.065 true 00:06:55.065 08:05:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1190114 00:06:55.065 08:05:37 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:55.065 08:05:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:55.324 08:05:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:06:55.324 08:05:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:06:55.584 true 00:06:55.584 08:05:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1190114 00:06:55.584 08:05:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:55.843 08:05:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:56.104 08:05:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:06:56.104 08:05:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:06:56.104 true 00:06:56.364 08:05:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1190114 00:06:56.364 08:05:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:56.364 08:05:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:56.624 08:05:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:06:56.624 08:05:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:06:56.883 true 00:06:56.883 08:05:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1190114 00:06:56.883 08:05:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:57.142 08:05:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:57.402 08:05:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:06:57.402 08:05:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:06:57.402 true 00:06:57.661 08:05:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1190114 00:06:57.661 08:05:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:57.661 
08:05:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:57.921 08:05:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:06:57.921 08:05:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:06:58.181 true 00:06:58.181 08:05:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1190114 00:06:58.181 08:05:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:58.441 08:05:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:58.701 08:05:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:06:58.701 08:05:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:06:58.961 true 00:06:58.961 08:05:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1190114 00:06:58.961 08:05:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:58.961 08:05:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:59.221 08:05:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:06:59.221 08:05:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:06:59.481 true 00:06:59.481 08:05:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1190114 00:06:59.481 08:05:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:59.741 08:05:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:00.001 08:05:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:07:00.001 08:05:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:07:00.001 true 00:07:00.260 08:05:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1190114 00:07:00.260 08:05:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:00.260 08:05:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:00.550 
08:05:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:07:00.550 08:05:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:07:00.809 true 00:07:00.809 08:05:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1190114 00:07:00.809 08:05:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:01.069 08:05:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:01.328 08:05:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1031 00:07:01.328 08:05:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1031 00:07:01.328 true 00:07:01.328 08:05:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1190114 00:07:01.328 08:05:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:01.588 08:05:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:01.848 08:05:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1032 00:07:01.848 08:05:43 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1032 00:07:02.107 true 00:07:02.107 08:05:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1190114 00:07:02.107 08:05:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:02.367 08:05:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:02.367 08:05:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1033 00:07:02.367 08:05:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1033 00:07:02.626 true 00:07:02.626 08:05:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1190114 00:07:02.626 08:05:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:02.886 08:05:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:03.144 08:05:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1034 00:07:03.144 08:05:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1034 00:07:03.402 true 00:07:03.402 08:05:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1190114 00:07:03.402 08:05:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:03.662 08:05:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:03.662 08:05:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1035 00:07:03.662 08:05:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1035 00:07:03.921 true 00:07:03.921 08:05:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1190114 00:07:03.921 08:05:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:04.182 08:05:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:04.442 08:05:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1036 00:07:04.442 08:05:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1036 00:07:04.701 true 00:07:04.701 08:05:46 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1190114 00:07:04.701 08:05:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:04.701 08:05:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:04.961 08:05:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1037 00:07:04.961 08:05:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1037 00:07:05.221 true 00:07:05.221 08:05:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1190114 00:07:05.221 08:05:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:05.481 08:05:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:05.741 08:05:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1038 00:07:05.741 08:05:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1038 00:07:05.741 true 00:07:06.001 08:05:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1190114 00:07:06.001 08:05:48 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:06.001 08:05:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:06.262 08:05:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1039 00:07:06.262 08:05:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1039 00:07:06.522 true 00:07:06.522 08:05:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1190114 00:07:06.522 08:05:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:06.782 08:05:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:07.042 08:05:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1040 00:07:07.042 08:05:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1040 00:07:07.042 true 00:07:07.042 08:05:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1190114 00:07:07.042 08:05:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:07.302 08:05:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:07.562 08:05:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1041 00:07:07.562 08:05:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1041 00:07:07.822 true 00:07:07.822 08:05:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1190114 00:07:07.822 08:05:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:08.081 08:05:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:08.341 08:05:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1042 00:07:08.341 08:05:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1042 00:07:08.341 true 00:07:08.341 08:05:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1190114 00:07:08.341 08:05:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:08.600 
08:05:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:08.860 08:05:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1043 00:07:08.860 08:05:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1043 00:07:09.120 true 00:07:09.120 08:05:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1190114 00:07:09.120 08:05:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:09.381 08:05:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:09.381 08:05:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1044 00:07:09.381 08:05:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1044 00:07:09.640 true 00:07:09.640 08:05:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1190114 00:07:09.640 08:05:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:09.899 08:05:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:10.159 08:05:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1045 00:07:10.159 08:05:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1045 00:07:10.419 true 00:07:10.419 08:05:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1190114 00:07:10.419 08:05:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:10.419 08:05:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:10.679 08:05:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1046 00:07:10.679 08:05:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1046 00:07:10.939 true 00:07:10.939 08:05:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1190114 00:07:10.939 08:05:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:11.199 08:05:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:11.459 
08:05:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1047 00:07:11.459 08:05:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1047 00:07:11.719 true 00:07:11.719 08:05:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1190114 00:07:11.719 08:05:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:11.719 Initializing NVMe Controllers 00:07:11.719 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:07:11.719 Controller IO queue size 128, less than required. 00:07:11.719 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:07:11.719 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:07:11.719 Initialization complete. Launching workers. 
00:07:11.719 ======================================================== 00:07:11.719 Latency(us) 00:07:11.719 Device Information : IOPS MiB/s Average min max 00:07:11.719 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 26593.13 12.98 4813.22 2835.63 8132.35 00:07:11.719 ======================================================== 00:07:11.719 Total : 26593.13 12.98 4813.22 2835.63 8132.35 00:07:11.719 00:07:11.719 08:05:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:11.980 08:05:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1048 00:07:11.980 08:05:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1048 00:07:12.241 true 00:07:12.241 08:05:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1190114 00:07:12.241 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (1190114) - No such process 00:07:12.241 08:05:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 1190114 00:07:12.241 08:05:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:12.501 08:05:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:12.501 08:05:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:07:12.501 08:05:54 
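The repeating remove_ns/add_ns/resize cycle traced above (ns_hotplug_stress.sh lines 44-50) keeps hot-swapping namespace 1 and growing the NULL1 bdev by one block per iteration while the perf process (pid 1190114) is alive. A minimal self-contained sketch of that loop follows; `rpc()` is a stand-in stub for `scripts/rpc.py` (an assumption — the real script issues these RPCs against a running nvmf target), and the iteration count here is illustrative, not the log's.

```shell
#!/bin/sh
# Sketch of the serial hotplug stress cycle seen in the log above.
# rpc() is a hypothetical stub standing in for spdk/scripts/rpc.py.
rpc() { echo "rpc.py $*"; }

null_size=1021
i=0
while [ "$i" -lt 3 ]; do
    # Detach namespace 1, re-attach the Delay0 bdev as a new namespace...
    rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
    rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
    # ...and grow the null bdev by one block, as the log's null_size counter does.
    null_size=$((null_size + 1))
    rpc bdev_null_resize NULL1 "$null_size"
    i=$((i + 1))
done
echo "final null_size=$null_size"
```

In the real script the loop condition is `kill -0 $perf_pid` rather than a fixed count, which is why the log's cycle stops with "No such process" once the perf workload exits.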
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:07:12.501 08:05:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:07:12.501 08:05:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:12.501 08:05:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:07:12.761 null0 00:07:12.761 08:05:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:12.761 08:05:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:12.761 08:05:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:07:13.021 null1 00:07:13.021 08:05:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:13.021 08:05:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:13.021 08:05:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:07:13.279 null2 00:07:13.279 08:05:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:13.279 08:05:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:13.279 08:05:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:07:13.279 null3 
00:07:13.279 08:05:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:13.279 08:05:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:13.279 08:05:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:07:13.538 null4 00:07:13.539 08:05:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:13.539 08:05:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:13.539 08:05:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:07:13.799 null5 00:07:13.799 08:05:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:13.799 08:05:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:13.799 08:05:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:07:14.058 null6 00:07:14.058 08:05:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:14.058 08:05:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:14.058 08:05:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:07:14.319 null7 00:07:14.319 08:05:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:14.319 08:05:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:14.319 08:05:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:07:14.320 08:05:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:14.320 08:05:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:07:14.320 08:05:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:14.320 08:05:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:07:14.320 08:05:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:14.320 08:05:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:07:14.320 08:05:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:14.320 08:05:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:14.320 08:05:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:14.320 08:05:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:07:14.320 08:05:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:14.320 08:05:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:07:14.320 08:05:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:14.320 08:05:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:07:14.320 08:05:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:14.320 08:05:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:14.320 08:05:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:14.320 08:05:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:07:14.320 08:05:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:07:14.320 08:05:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2
00:07:14.320 08:05:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:07:14.320 08:05:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2
00:07:14.320 08:05:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:07:14.320 08:05:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:14.320 08:05:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:07:14.320 08:05:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:07:14.320 08:05:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:07:14.320 08:05:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3
00:07:14.320 08:05:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:07:14.320 08:05:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3
00:07:14.320 08:05:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:07:14.320 08:05:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:14.320 08:05:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:07:14.320 08:05:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:07:14.320 08:05:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:07:14.320 08:05:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:07:14.320 08:05:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4
00:07:14.320 08:05:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4
00:07:14.320 08:05:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:07:14.320 08:05:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:07:14.320 08:05:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:14.320 08:05:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:07:14.320 08:05:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:07:14.320 08:05:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:07:14.320 08:05:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5
00:07:14.320 08:05:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5
00:07:14.320 08:05:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:07:14.320 08:05:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:07:14.320 08:05:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:14.320 08:05:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:07:14.320 08:05:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:07:14.320 08:05:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:07:14.320 08:05:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6
00:07:14.320 08:05:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6
00:07:14.320 08:05:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:07:14.320 08:05:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:07:14.320 08:05:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:14.320 08:05:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:07:14.320 08:05:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:07:14.320 08:05:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7
00:07:14.320 08:05:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:07:14.320 08:05:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 1195776 1195777 1195779 1195781 1195783 1195785 1195787 1195789
00:07:14.320 08:05:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7
00:07:14.320 08:05:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:07:14.320 08:05:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:14.320 08:05:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:07:14.320 08:05:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:07:14.320 08:05:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:14.320 08:05:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:07:14.320 08:05:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:07:14.320 08:05:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:07:14.580 08:05:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:07:14.580 08:05:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:07:14.580 08:05:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:07:14.580 08:05:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:14.580 08:05:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:14.580 08:05:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:07:14.580 08:05:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:14.580 08:05:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:14.580 08:05:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:07:14.580 08:05:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:14.580 08:05:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:14.580 08:05:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:07:14.580 08:05:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:14.580 08:05:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:14.580 08:05:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:07:14.580 08:05:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:14.580 08:05:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:14.580 08:05:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:14.581 08:05:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:07:14.581 08:05:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:14.581 08:05:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:07:14.581 08:05:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:14.581 08:05:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:14.581 08:05:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:07:14.581 08:05:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:14.581 08:05:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:14.581 08:05:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:07:14.840 08:05:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:07:14.841 08:05:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:14.841 08:05:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:07:14.841 08:05:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:07:14.841 08:05:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:07:14.841 08:05:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:07:14.841 08:05:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:07:14.841 08:05:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:07:15.101 08:05:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:15.101 08:05:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:15.101 08:05:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:07:15.101 08:05:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:15.101 08:05:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:15.101 08:05:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:07:15.101 08:05:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:15.101 08:05:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:15.101 08:05:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:07:15.101 08:05:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:15.101 08:05:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:15.101 08:05:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:07:15.101 08:05:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:15.101 08:05:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:15.101 08:05:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:07:15.101 08:05:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:15.101 08:05:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:15.101 08:05:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:07:15.101 08:05:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:15.101 08:05:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:15.101 08:05:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:07:15.101 08:05:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:15.101 08:05:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:15.101 08:05:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:07:15.361 08:05:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:07:15.361 08:05:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:07:15.361 08:05:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:07:15.361 08:05:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:15.361 08:05:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:07:15.361 08:05:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:07:15.361 08:05:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:07:15.361 08:05:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:07:15.361 08:05:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:15.361 08:05:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:15.361 08:05:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:07:15.361 08:05:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:15.361 08:05:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:15.361 08:05:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:07:15.620 08:05:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:15.620 08:05:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:15.620 08:05:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:07:15.620 08:05:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:15.620 08:05:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:15.620 08:05:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:07:15.621 08:05:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:15.621 08:05:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:15.621 08:05:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:07:15.621 08:05:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:15.621 08:05:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:15.621 08:05:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:07:15.621 08:05:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:15.621 08:05:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:15.621 08:05:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:07:15.621 08:05:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:15.621 08:05:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:15.621 08:05:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:07:15.621 08:05:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:07:15.621 08:05:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:07:15.621 08:05:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:07:15.621 08:05:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:07:15.621 08:05:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:15.621 08:05:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:07:15.621 08:05:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:07:15.621 08:05:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:07:15.880 08:05:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:15.880 08:05:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:15.880 08:05:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:07:15.880 08:05:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:15.880 08:05:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:15.881 08:05:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:07:15.881 08:05:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:15.881 08:05:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:15.881 08:05:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:07:15.881 08:05:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:15.881 08:05:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:15.881 08:05:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:07:15.881 08:05:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:15.881 08:05:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:15.881 08:05:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:07:15.881 08:05:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:15.881 08:05:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:15.881 08:05:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:07:15.881 08:05:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:15.881 08:05:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:15.881 08:05:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:07:15.881 08:05:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:15.881 08:05:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:15.881 08:05:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:07:16.140 08:05:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:07:16.140 08:05:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:07:16.140 08:05:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:07:16.140 08:05:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:07:16.140 08:05:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:07:16.140 08:05:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:07:16.140 08:05:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:16.140 08:05:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:07:16.401 08:05:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:16.401 08:05:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:16.401 08:05:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:07:16.401 08:05:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:16.401 08:05:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:16.401 08:05:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:07:16.401 08:05:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:16.401 08:05:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:16.401 08:05:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:07:16.401 08:05:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:16.401 08:05:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:16.401 08:05:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:07:16.401 08:05:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:16.401 08:05:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:16.401 08:05:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:07:16.401 08:05:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:16.401 08:05:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:16.401 08:05:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:07:16.401 08:05:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:16.401 08:05:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:16.401 08:05:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:07:16.401 08:05:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:16.401 08:05:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:16.401 08:05:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:07:16.401 08:05:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:07:16.662 08:05:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:07:16.662 08:05:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:07:16.662 08:05:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:16.662 08:05:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:07:16.662 08:05:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:07:16.662 08:05:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:07:16.662 08:05:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:07:16.662 08:05:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:16.662 08:05:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:16.662 08:05:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:07:16.662 08:05:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:16.662 08:05:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:16.662 08:05:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:07:16.662 08:05:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:16.662 08:05:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:16.662 08:05:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:07:16.662 08:05:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:16.662 08:05:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:16.662 08:05:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:16.662 08:05:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:07:16.662 08:05:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # 
(( i < 10 )) 00:07:16.662 08:05:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:16.662 08:05:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:16.662 08:05:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:16.662 08:05:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:16.662 08:05:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:16.662 08:05:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:16.662 08:05:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:16.662 08:05:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:16.662 08:05:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:16.662 08:05:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:16.921 08:05:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:16.921 08:05:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:16.921 08:05:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:16.921 08:05:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:16.921 08:05:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:16.921 08:05:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:16.921 08:05:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:16.921 08:05:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:17.180 08:05:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:17.180 08:05:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:17.180 08:05:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 
nqn.2016-06.io.spdk:cnode1 null7 00:07:17.180 08:05:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:17.180 08:05:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:17.180 08:05:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:17.180 08:05:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:17.180 08:05:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:17.180 08:05:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:17.180 08:05:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:17.180 08:05:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:17.180 08:05:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:17.180 08:05:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:17.180 08:05:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:17.180 08:05:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:17.180 08:05:59 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:17.180 08:05:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:17.180 08:05:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:17.180 08:05:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:17.180 08:05:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:17.181 08:05:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:17.181 08:05:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:17.181 08:05:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:17.181 08:05:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:17.441 08:05:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:17.441 08:05:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:17.441 08:05:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:17.441 08:05:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:17.441 08:05:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:17.441 08:05:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:17.441 08:05:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:17.441 08:05:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:17.441 08:05:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:17.441 08:05:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:17.441 08:05:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:17.702 08:05:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:17.702 08:05:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:17.702 
08:05:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:17.702 08:05:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:17.702 08:05:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:17.702 08:05:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:17.702 08:05:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:17.702 08:05:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:17.702 08:05:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:17.702 08:05:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:17.702 08:05:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:17.702 08:05:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:17.702 08:05:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:17.702 08:05:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:17.702 08:05:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:17.702 08:05:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:17.702 08:05:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:17.702 08:05:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:17.702 08:05:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:17.702 08:05:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:17.702 08:05:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:17.702 08:05:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:17.702 08:05:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:17.702 08:05:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:17.702 08:05:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:17.702 08:05:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:17.702 08:05:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:17.702 08:05:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:17.702 08:05:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:17.962 08:06:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:17.962 08:06:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:17.962 08:06:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:17.962 08:06:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:17.962 08:06:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:17.962 08:06:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:17.962 08:06:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:17.962 08:06:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:17.962 08:06:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:17.962 08:06:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:17.962 08:06:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:17.962 08:06:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:17.962 08:06:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:17.962 08:06:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:17.962 08:06:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:17.962 08:06:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:17.962 08:06:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:17.962 08:06:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:17.962 08:06:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:17.962 08:06:00 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:17.962 08:06:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:17.962 08:06:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:17.962 08:06:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:17.962 08:06:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:18.221 08:06:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:18.221 08:06:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:18.221 08:06:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:18.221 08:06:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:18.221 08:06:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:18.221 08:06:00 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:18.221 08:06:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:18.221 08:06:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:18.480 08:06:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:18.480 08:06:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:18.480 08:06:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:18.480 08:06:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:18.480 08:06:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:18.480 08:06:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:18.480 08:06:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:18.480 08:06:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:18.480 08:06:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:18.480 08:06:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:18.480 08:06:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
00:07:18.480 08:06:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:18.480 08:06:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:18.480 08:06:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:18.480 08:06:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:18.480 08:06:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:18.480 08:06:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:07:18.480 08:06:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:07:18.480 08:06:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:18.480 08:06:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:07:18.480 08:06:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:18.480 08:06:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:07:18.480 08:06:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:18.480 08:06:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:18.480 rmmod nvme_tcp 00:07:18.480 rmmod nvme_fabrics 00:07:18.480 rmmod nvme_keyring 00:07:18.480 08:06:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:18.480 08:06:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:07:18.480 08:06:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:07:18.480 08:06:00 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 1189778 ']' 00:07:18.480 08:06:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 1189778 00:07:18.481 08:06:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # '[' -z 1189778 ']' 00:07:18.481 08:06:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # kill -0 1189778 00:07:18.481 08:06:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # uname 00:07:18.481 08:06:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:18.481 08:06:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1189778 00:07:18.481 08:06:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:07:18.481 08:06:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:07:18.481 08:06:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1189778' 00:07:18.481 killing process with pid 1189778 00:07:18.481 08:06:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@973 -- # kill 1189778 00:07:18.481 08:06:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@978 -- # wait 1189778 00:07:18.741 08:06:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:18.741 08:06:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:18.741 08:06:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:18.741 08:06:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 
-- # iptr 00:07:18.741 08:06:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-save 00:07:18.741 08:06:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:18.741 08:06:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-restore 00:07:18.741 08:06:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:18.741 08:06:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:18.741 08:06:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:18.741 08:06:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:18.741 08:06:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:21.284 08:06:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:21.284 00:07:21.284 real 0m46.822s 00:07:21.284 user 3m22.623s 00:07:21.284 sys 0m16.814s 00:07:21.284 08:06:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:21.284 08:06:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:21.284 ************************************ 00:07:21.284 END TEST nvmf_ns_hotplug_stress 00:07:21.284 ************************************ 00:07:21.284 08:06:02 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:07:21.284 08:06:02 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:21.284 08:06:02 
nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:21.284 08:06:02 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x
00:07:21.284 ************************************
00:07:21.284 START TEST nvmf_delete_subsystem
00:07:21.284 ************************************
00:07:21.284 08:06:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp
00:07:21.284 * Looking for test storage...
00:07:21.284 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:07:21.284 08:06:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:07:21.284 08:06:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # lcov --version
00:07:21.284 08:06:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:07:21.284 08:06:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:07:21.284 08:06:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:07:21.284 08:06:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l
00:07:21.284 08:06:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l
00:07:21.284 08:06:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-:
00:07:21.284 08:06:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1
00:07:21.284 08:06:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-:
00:07:21.284 08:06:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2
00:07:21.284 08:06:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<'
00:07:21.284 08:06:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2
00:07:21.284 08:06:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1
00:07:21.284 08:06:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:07:21.284 08:06:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in
00:07:21.284 08:06:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1
00:07:21.284 08:06:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 ))
00:07:21.284 08:06:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:07:21.284 08:06:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1
00:07:21.284 08:06:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1
00:07:21.284 08:06:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:07:21.284 08:06:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1
00:07:21.284 08:06:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1
00:07:21.284 08:06:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2
00:07:21.284 08:06:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2
00:07:21.284 08:06:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:07:21.284 08:06:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2
00:07:21.284 08:06:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2
00:07:21.284 08:06:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:07:21.284 08:06:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:07:21.284 08:06:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0
00:07:21.284 08:06:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:07:21.284 08:06:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:07:21.284 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:21.284 --rc genhtml_branch_coverage=1
00:07:21.284 --rc genhtml_function_coverage=1
00:07:21.284 --rc genhtml_legend=1
00:07:21.284 --rc geninfo_all_blocks=1
00:07:21.284 --rc geninfo_unexecuted_blocks=1
00:07:21.284
00:07:21.284 '
00:07:21.284 08:06:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:07:21.284 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:21.285 --rc genhtml_branch_coverage=1
00:07:21.285 --rc genhtml_function_coverage=1
00:07:21.285 --rc genhtml_legend=1
00:07:21.285 --rc geninfo_all_blocks=1
00:07:21.285 --rc geninfo_unexecuted_blocks=1
00:07:21.285
00:07:21.285 '
00:07:21.285 08:06:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov
00:07:21.285 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:21.285 --rc genhtml_branch_coverage=1
00:07:21.285 --rc genhtml_function_coverage=1
00:07:21.285 --rc genhtml_legend=1
00:07:21.285 --rc geninfo_all_blocks=1
00:07:21.285 --rc geninfo_unexecuted_blocks=1
00:07:21.285
00:07:21.285 '
00:07:21.285 08:06:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1707 -- # LCOV='lcov
00:07:21.285 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:21.285 --rc genhtml_branch_coverage=1
00:07:21.285 --rc genhtml_function_coverage=1
00:07:21.285 --rc genhtml_legend=1
00:07:21.285 --rc geninfo_all_blocks=1
00:07:21.285 --rc geninfo_unexecuted_blocks=1
00:07:21.285
00:07:21.285 '
00:07:21.285 08:06:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:07:21.285 08:06:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s
00:07:21.285 08:06:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:07:21.285 08:06:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:07:21.285 08:06:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:07:21.285 08:06:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:07:21.285 08:06:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:07:21.285 08:06:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:07:21.285 08:06:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:07:21.285 08:06:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:07:21.285 08:06:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:07:21.285 08:06:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:07:21.285 08:06:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
00:07:21.285 08:06:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562
00:07:21.285 08:06:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:07:21.285 08:06:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:07:21.285 08:06:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:07:21.285 08:06:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:07:21.285 08:06:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:07:21.285 08:06:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob
00:07:21.285 08:06:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:07:21.285 08:06:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:07:21.285 08:06:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:07:21.285 08:06:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:07:21.285 08:06:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:07:21.285 08:06:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:07:21.285 08:06:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH
00:07:21.285 08:06:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:07:21.285 08:06:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0
00:07:21.285 08:06:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:07:21.285 08:06:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:07:21.285 08:06:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:07:21.285 08:06:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:07:21.285 08:06:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:07:21.285 08:06:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:07:21.285 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:07:21.285 08:06:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:07:21.285 08:06:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:07:21.285 08:06:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0
00:07:21.285 08:06:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit
00:07:21.285 08:06:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:07:21.285 08:06:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:07:21.285 08:06:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs
00:07:21.285 08:06:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no
00:07:21.285 08:06:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns
00:07:21.285 08:06:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:07:21.285 08:06:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:07:21.285 08:06:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:07:21.285 08:06:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ phy != virt ]]
00:07:21.285 08:06:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
00:07:21.285 08:06:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable
00:07:21.285 08:06:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:07:26.571 08:06:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:07:26.571 08:06:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=()
00:07:26.571 08:06:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs
00:07:26.571 08:06:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=()
00:07:26.571 08:06:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:07:26.571 08:06:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=()
00:07:26.571 08:06:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers
00:07:26.571 08:06:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=()
00:07:26.571 08:06:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs
00:07:26.571 08:06:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=()
00:07:26.571 08:06:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810
00:07:26.571 08:06:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=()
00:07:26.571 08:06:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # local -ga x722
00:07:26.571 08:06:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=()
00:07:26.571 08:06:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx
00:07:26.571 08:06:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:07:26.571 08:06:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:07:26.571 08:06:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:07:26.571 08:06:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:07:26.571 08:06:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:07:26.571 08:06:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:07:26.571 08:06:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:07:26.571 08:06:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:07:26.571 08:06:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:07:26.571 08:06:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:07:26.571 08:06:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:07:26.571 08:06:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:07:26.571 08:06:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:07:26.571 08:06:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]]
00:07:26.571 08:06:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]]
00:07:26.571 08:06:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]]
00:07:26.571 08:06:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}")
00:07:26.571 08:06:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:07:26.571 08:06:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:07:26.571 08:06:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)'
00:07:26.571 Found 0000:86:00.0 (0x8086 - 0x159b)
00:07:26.571 08:06:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:07:26.571 08:06:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:07:26.571 08:06:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:07:26.572 08:06:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:07:26.572 08:06:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:07:26.572 08:06:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:07:26.572 08:06:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)'
00:07:26.572 Found 0000:86:00.1 (0x8086 - 0x159b)
00:07:26.572 08:06:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:07:26.572 08:06:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:07:26.572 08:06:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:07:26.572 08:06:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:07:26.572 08:06:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:07:26.572 08:06:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:07:26.572 08:06:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]]
00:07:26.572 08:06:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]]
00:07:26.572 08:06:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:07:26.572 08:06:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:07:26.572 08:06:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:07:26.572 08:06:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:07:26.572 08:06:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]]
00:07:26.572 08:06:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:07:26.572 08:06:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:07:26.572 08:06:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0'
00:07:26.572 Found net devices under 0000:86:00.0: cvl_0_0
00:07:26.572 08:06:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:07:26.572 08:06:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:07:26.572 08:06:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:07:26.572 08:06:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:07:26.572 08:06:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:07:26.572 08:06:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]]
00:07:26.572 08:06:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:07:26.572 08:06:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:07:26.572 08:06:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1'
00:07:26.572 Found net devices under 0000:86:00.1: cvl_0_1
00:07:26.572 08:06:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:07:26.572 08:06:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:07:26.572 08:06:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # is_hw=yes
00:07:26.572 08:06:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:07:26.572 08:06:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]]
00:07:26.572 08:06:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # nvmf_tcp_init
00:07:26.572 08:06:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:07:26.572 08:06:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:07:26.572 08:06:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:07:26.572 08:06:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:07:26.572 08:06:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:07:26.572 08:06:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:07:26.572 08:06:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:07:26.572 08:06:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:07:26.572 08:06:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:07:26.572 08:06:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:07:26.572 08:06:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:07:26.572 08:06:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:07:26.572 08:06:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:07:26.572 08:06:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:07:26.572 08:06:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:07:26.833 08:06:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:07:26.833 08:06:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:07:26.833 08:06:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:07:26.833 08:06:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:07:26.833 08:06:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:07:26.833 08:06:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:07:26.833 08:06:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:07:26.833 08:06:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:07:26.833 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:07:26.833 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.448 ms
00:07:26.833
00:07:26.833 --- 10.0.0.2 ping statistics ---
00:07:26.833 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:07:26.833 rtt min/avg/max/mdev = 0.448/0.448/0.448/0.000 ms
00:07:26.833 08:06:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:07:26.833 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:07:26.833 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.203 ms
00:07:26.833
00:07:26.833 --- 10.0.0.1 ping statistics ---
00:07:26.833 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:07:26.833 rtt min/avg/max/mdev = 0.203/0.203/0.203/0.000 ms
00:07:26.833 08:06:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:07:26.833 08:06:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # return 0
00:07:26.833 08:06:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:07:26.833 08:06:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:07:26.833 08:06:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:07:26.833 08:06:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:07:26.833 08:06:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:07:26.833 08:06:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:07:26.833 08:06:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:07:26.833 08:06:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3
00:07:26.833 08:06:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:07:26.833 08:06:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable
00:07:26.833 08:06:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:07:26.833 08:06:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=1200477
00:07:26.833 08:06:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 1200477
00:07:26.833 08:06:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # '[' -z 1200477 ']'
00:07:26.833 08:06:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:07:26.833 08:06:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # local max_retries=100
00:07:26.833 08:06:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:07:26.833 08:06:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3
00:07:26.833 08:06:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@844 -- # xtrace_disable
00:07:26.833 08:06:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:07:27.094 [2024-11-28 08:06:09.122280] Starting SPDK v25.01-pre git sha1 27aaaa748 / DPDK 24.03.0 initialization...
00:07:27.094 [2024-11-28 08:06:09.122335] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:07:27.094 [2024-11-28 08:06:09.190650] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:07:27.094 [2024-11-28 08:06:09.234491] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:07:27.094 [2024-11-28 08:06:09.234528] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:07:27.094 [2024-11-28 08:06:09.234535] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:07:27.094 [2024-11-28 08:06:09.234541] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:07:27.094 [2024-11-28 08:06:09.234547] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
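[Editor's note] For orientation, the `nvmf_tcp_init` plumbing traced above moves one port of the NIC pair (cvl_0_0) into a network namespace as the target side (10.0.0.2) and leaves the other (cvl_0_1) in the root namespace as the initiator (10.0.0.1). A minimal sketch collected from the xtrace lines in this log; it requires root and these exact interfaces, so treat it as illustrative only, not a runnable snippet:

```shell
# Two-port loopback topology as set up by test/nvmf/common.sh (from the log).
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # allow NVMe/TCP in
ping -c 1 10.0.0.2                                   # root ns -> namespace
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # namespace -> root ns
```

The nvmf_tgt application is then launched inside the namespace (`ip netns exec cvl_0_0_ns_spdk ... nvmf_tgt`), so initiator traffic genuinely crosses the physical link between the two ports.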
00:07:27.094 [2024-11-28 08:06:09.235706] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:07:27.094 [2024-11-28 08:06:09.235710] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:07:27.094 08:06:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:07:27.094 08:06:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@868 -- # return 0
00:07:27.094 08:06:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:07:27.094 08:06:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@732 -- # xtrace_disable
00:07:27.094 08:06:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:07:27.355 08:06:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:07:27.355 08:06:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:07:27.355 08:06:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:27.355 08:06:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:07:27.355 [2024-11-28 08:06:09.373681] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:07:27.355 08:06:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:27.355 08:06:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
00:07:27.355 08:06:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:27.355 08:06:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:07:27.355 08:06:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:27.355 08:06:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:07:27.355 08:06:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:27.355 08:06:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:07:27.355 [2024-11-28 08:06:09.389858] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:07:27.355 08:06:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:27.355 08:06:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512
00:07:27.355 08:06:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:27.355 08:06:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:07:27.355 NULL1
00:07:27.355 08:06:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:27.355 08:06:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
00:07:27.355 08:06:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:27.355 08:06:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:07:27.355 Delay0
00:07:27.355 08:06:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:27.355 08:06:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:07:27.355 08:06:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:27.355 08:06:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:07:27.355 08:06:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:27.355 08:06:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=1200663
00:07:27.355 08:06:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2
00:07:27.355 08:06:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4
[2024-11-28 08:06:09.474567] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release.
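The perf workload above is launched in the background (perf_pid=1200663), and the later trace lines show the script polling that pid with `kill -0` and `sleep 0.5` until the process goes away. A minimal stand-alone sketch of that polling pattern; the helper name `wait_for_exit` and the default cap of 30 checks are illustrative choices, not code taken from the SPDK harness:

```shell
#!/usr/bin/env bash
# Sketch of the "kill -0 + sleep 0.5" polling pattern visible in the trace.
# wait_for_exit is a hypothetical helper name; the harness inlines this loop
# (with caps of 30 and 20 iterations in its two passes).
wait_for_exit() {
    local pid=$1 limit=${2:-30} delay=0
    while kill -0 "$pid" 2>/dev/null; do       # process still alive?
        if (( delay++ >= limit )); then
            return 1                           # gave up waiting
        fi
        sleep 0.5
    done
    return 0                                   # process has exited
}
```

Once the process is reaped, `kill -0` simply fails and the loop ends, which is why the later `kill: (1200663) - No such process` message in the log is harmless.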
00:07:29.265 08:06:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:07:29.265 08:06:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:29.265 08:06:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:07:29.526 Read completed with error (sct=0, sc=8)
00:07:29.526 Write completed with error (sct=0, sc=8)
00:07:29.526 starting I/O failed: -6
00:07:29.526 [... many further interleaved "Read/Write completed with error (sct=0, sc=8)" and "starting I/O failed: -6" lines trimmed ...]
00:07:29.527 [2024-11-28 08:06:11.595942] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x172e4a0 is same with the state(6) to be set
00:07:29.527 [... further "completed with error (sct=0, sc=8)" repeats trimmed ...]
00:07:29.527 [2024-11-28 08:06:11.596446] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f9414000c40 is same with the state(6) to be set
00:07:29.527 [... further "completed with error (sct=0, sc=8)" repeats trimmed ...]
00:07:30.468 [2024-11-28 08:06:12.570210] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x172f9b0 is same with the state(6) to be set
00:07:30.468 [... further "completed with error (sct=0, sc=8)" repeats trimmed ...]
00:07:30.468 [2024-11-28 08:06:12.598543] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x172e680 is same with the state(6) to be set
00:07:30.468 [... further "completed with error (sct=0, sc=8)" repeats trimmed ...]
00:07:30.468 [2024-11-28 08:06:12.598729] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x172e2c0 is same with the state(6) to be set
00:07:30.469 [... further "completed with error (sct=0, sc=8)" repeats trimmed ...]
00:07:30.469 [2024-11-28 08:06:12.599016] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f941400d020 is same with the state(6) to be set
00:07:30.469 [... further "completed with error (sct=0, sc=8)" repeats trimmed ...]
00:07:30.469 [2024-11-28 08:06:12.599684] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f941400d800 is same with the state(6) to be set
00:07:30.469 Initializing NVMe Controllers
00:07:30.469 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:07:30.469 Controller IO queue size 128, less than required.
00:07:30.469 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:07:30.469 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:07:30.469 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:07:30.469 Initialization complete. Launching workers.
00:07:30.469 ========================================================
00:07:30.469 Latency(us)
00:07:30.469 Device Information : IOPS MiB/s Average min max
00:07:30.469 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 187.17 0.09 898543.69 393.24 1011956.20
00:07:30.469 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 165.82 0.08 939519.14 278.35 2001686.71
00:07:30.469 ========================================================
00:07:30.469 Total : 352.99 0.17 917792.36 278.35 2001686.71
00:07:30.469
00:07:30.469 [2024-11-28 08:06:12.600286] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x172f9b0 (9): Bad file descriptor
00:07:30.469 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:07:30.469 08:06:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:30.469 08:06:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0
00:07:30.469 08:06:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 1200663
00:07:30.469 08:06:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5
00:07:31.039 08:06:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 ))
00:07:31.039 08:06:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 1200663
00:07:31.039 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (1200663) - No such process
00:07:31.039 08:06:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 1200663
00:07:31.039 08:06:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # local es=0
00:07:31.039 08:06:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@654 -- # valid_exec_arg wait 1200663
00:07:31.039 08:06:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # local arg=wait
00:07:31.039 08:06:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:07:31.039 08:06:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # type -t wait
00:07:31.039 08:06:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:07:31.039 08:06:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # wait 1200663
00:07:31.039 08:06:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # es=1
00:07:31.039 08:06:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:07:31.040 08:06:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:07:31.040 08:06:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:07:31.040 08:06:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
00:07:31.040 08:06:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:31.040 08:06:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:07:31.040 08:06:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:31.040 08:06:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:07:31.040 08:06:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:31.040 08:06:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:07:31.040 [2024-11-28 08:06:13.127361] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:07:31.040 08:06:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:31.040 08:06:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:07:31.040 08:06:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:31.040 08:06:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:07:31.040 08:06:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:31.040 08:06:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=1201402
00:07:31.040 08:06:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0
00:07:31.040 08:06:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4
00:07:31.040 08:06:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1201402
00:07:31.040 08:06:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
[2024-11-28 08:06:13.196799] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release.
00:07:31.610 08:06:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:07:31.610 08:06:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1201402
00:07:31.610 08:06:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:07:32.180 08:06:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:07:32.180 08:06:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1201402
00:07:32.180 08:06:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:07:32.440 08:06:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:07:32.440 08:06:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1201402
00:07:32.440 08:06:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:07:33.010 08:06:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:07:33.010 08:06:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1201402
00:07:33.010 08:06:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:07:33.579 08:06:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:07:33.579 08:06:15
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1201402
00:07:33.579 08:06:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:07:34.149 08:06:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:07:34.149 08:06:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1201402
00:07:34.149 08:06:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:07:34.149 Initializing NVMe Controllers
00:07:34.149 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:07:34.149 Controller IO queue size 128, less than required.
00:07:34.149 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:07:34.149 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:07:34.149 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:07:34.149 Initialization complete. Launching workers.
00:07:34.149 ======================================================== 00:07:34.149 Latency(us) 00:07:34.149 Device Information : IOPS MiB/s Average min max 00:07:34.150 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1003666.98 1000149.77 1011694.67 00:07:34.150 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1005068.00 1000149.27 1012891.43 00:07:34.150 ======================================================== 00:07:34.150 Total : 256.00 0.12 1004367.49 1000149.27 1012891.43 00:07:34.150 00:07:34.410 08:06:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:34.410 08:06:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1201402 00:07:34.410 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (1201402) - No such process 00:07:34.410 08:06:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 1201402 00:07:34.410 08:06:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:07:34.410 08:06:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:07:34.410 08:06:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:34.410 08:06:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync 00:07:34.669 08:06:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:34.669 08:06:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e 00:07:34.669 08:06:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:34.669 08:06:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r 
nvme-tcp 00:07:34.669 rmmod nvme_tcp 00:07:34.669 rmmod nvme_fabrics 00:07:34.669 rmmod nvme_keyring 00:07:34.669 08:06:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:34.669 08:06:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:07:34.669 08:06:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:07:34.669 08:06:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 1200477 ']' 00:07:34.669 08:06:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 1200477 00:07:34.669 08:06:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # '[' -z 1200477 ']' 00:07:34.669 08:06:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # kill -0 1200477 00:07:34.669 08:06:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # uname 00:07:34.669 08:06:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:34.669 08:06:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1200477 00:07:34.669 08:06:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:34.669 08:06:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:34.669 08:06:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1200477' 00:07:34.669 killing process with pid 1200477 00:07:34.669 08:06:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@973 -- # kill 1200477 00:07:34.669 08:06:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@978 -- # wait 
1200477 00:07:34.929 08:06:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:34.929 08:06:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:34.929 08:06:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:34.929 08:06:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:07:34.929 08:06:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save 00:07:34.929 08:06:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore 00:07:34.929 08:06:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:34.929 08:06:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:34.929 08:06:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:34.929 08:06:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:34.929 08:06:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:34.929 08:06:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:36.841 08:06:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:36.841 00:07:36.841 real 0m16.000s 00:07:36.841 user 0m28.949s 00:07:36.841 sys 0m5.474s 00:07:36.841 08:06:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:36.841 08:06:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:36.841 ************************************ 00:07:36.841 END TEST 
nvmf_delete_subsystem 00:07:36.841 ************************************ 00:07:36.841 08:06:19 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:07:36.841 08:06:19 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:36.841 08:06:19 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:36.841 08:06:19 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:36.841 ************************************ 00:07:36.841 START TEST nvmf_host_management 00:07:36.841 ************************************ 00:07:36.841 08:06:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:07:37.103 * Looking for test storage... 00:07:37.103 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:37.103 08:06:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:37.103 08:06:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1693 -- # lcov --version 00:07:37.103 08:06:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:37.103 08:06:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:37.103 08:06:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:37.103 08:06:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:37.103 08:06:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:37.103 08:06:19 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:07:37.103 08:06:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:07:37.103 08:06:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:07:37.103 08:06:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:07:37.103 08:06:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:07:37.103 08:06:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:07:37.103 08:06:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:07:37.103 08:06:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:37.103 08:06:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:07:37.103 08:06:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:07:37.103 08:06:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:37.103 08:06:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:37.103 08:06:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:07:37.103 08:06:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:07:37.103 08:06:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:37.103 08:06:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:07:37.103 08:06:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:07:37.103 08:06:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:07:37.103 08:06:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:07:37.103 08:06:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:37.103 08:06:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:07:37.103 08:06:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:07:37.103 08:06:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:37.103 08:06:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:37.103 08:06:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:07:37.103 08:06:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:37.103 08:06:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:37.103 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:37.103 --rc genhtml_branch_coverage=1 00:07:37.103 --rc genhtml_function_coverage=1 00:07:37.103 --rc genhtml_legend=1 00:07:37.103 --rc 
geninfo_all_blocks=1 00:07:37.103 --rc geninfo_unexecuted_blocks=1 00:07:37.103 00:07:37.103 ' 00:07:37.103 08:06:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:37.103 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:37.103 --rc genhtml_branch_coverage=1 00:07:37.103 --rc genhtml_function_coverage=1 00:07:37.103 --rc genhtml_legend=1 00:07:37.103 --rc geninfo_all_blocks=1 00:07:37.103 --rc geninfo_unexecuted_blocks=1 00:07:37.103 00:07:37.103 ' 00:07:37.103 08:06:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:37.103 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:37.103 --rc genhtml_branch_coverage=1 00:07:37.103 --rc genhtml_function_coverage=1 00:07:37.103 --rc genhtml_legend=1 00:07:37.103 --rc geninfo_all_blocks=1 00:07:37.103 --rc geninfo_unexecuted_blocks=1 00:07:37.103 00:07:37.103 ' 00:07:37.103 08:06:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:37.103 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:37.103 --rc genhtml_branch_coverage=1 00:07:37.103 --rc genhtml_function_coverage=1 00:07:37.103 --rc genhtml_legend=1 00:07:37.103 --rc geninfo_all_blocks=1 00:07:37.103 --rc geninfo_unexecuted_blocks=1 00:07:37.103 00:07:37.103 ' 00:07:37.103 08:06:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:37.103 08:06:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:07:37.103 08:06:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:37.103 08:06:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:37.103 08:06:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:07:37.103 08:06:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:37.103 08:06:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:37.103 08:06:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:37.103 08:06:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:37.103 08:06:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:37.103 08:06:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:37.103 08:06:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:37.103 08:06:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:07:37.103 08:06:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:07:37.103 08:06:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:37.103 08:06:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:37.103 08:06:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:37.103 08:06:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:37.103 08:06:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:37.103 08:06:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:07:37.103 
08:06:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:37.103 08:06:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:37.103 08:06:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:37.103 08:06:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:37.104 08:06:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:37.104 08:06:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:37.104 08:06:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:07:37.104 08:06:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:37.104 08:06:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:07:37.104 08:06:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:37.104 08:06:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:37.104 08:06:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:37.104 08:06:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
00:07:37.104 08:06:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:37.104 08:06:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:37.104 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:37.104 08:06:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:37.104 08:06:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:37.104 08:06:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:37.104 08:06:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:37.104 08:06:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:37.104 08:06:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:07:37.104 08:06:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:37.104 08:06:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:37.104 08:06:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:37.104 08:06:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:37.104 08:06:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:37.104 08:06:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:37.104 08:06:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:37.104 08:06:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:37.104 08:06:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:37.104 08:06:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:37.104 08:06:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:07:37.104 08:06:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:43.687 08:06:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:43.687 08:06:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:07:43.687 08:06:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:43.687 08:06:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:43.687 08:06:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:43.687 08:06:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:43.687 08:06:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:43.687 08:06:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:07:43.687 08:06:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:43.687 08:06:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:07:43.687 08:06:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # local -ga e810 00:07:43.687 08:06:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:07:43.687 08:06:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 
00:07:43.687 08:06:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:07:43.687 08:06:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:07:43.687 08:06:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:43.687 08:06:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:43.687 08:06:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:43.687 08:06:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:43.687 08:06:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:43.687 08:06:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:43.687 08:06:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:43.687 08:06:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:43.687 08:06:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:43.687 08:06:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:43.687 08:06:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:43.687 08:06:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:43.687 08:06:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@346 -- # 
pci_devs+=("${e810[@]}") 00:07:43.687 08:06:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:43.687 08:06:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:43.687 08:06:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:43.687 08:06:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:43.687 08:06:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:43.687 08:06:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:43.687 08:06:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:07:43.687 Found 0000:86:00.0 (0x8086 - 0x159b) 00:07:43.687 08:06:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:43.687 08:06:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:43.687 08:06:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:43.687 08:06:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:43.687 08:06:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:43.687 08:06:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:43.687 08:06:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:07:43.687 Found 0000:86:00.1 (0x8086 - 0x159b) 00:07:43.687 08:06:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:43.687 08:06:24 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:43.687 08:06:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:43.687 08:06:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:43.687 08:06:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:43.687 08:06:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:43.687 08:06:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:43.687 08:06:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:43.687 08:06:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:43.687 08:06:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:43.687 08:06:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:43.687 08:06:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:43.687 08:06:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:43.687 08:06:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:43.687 08:06:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:43.687 08:06:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:07:43.687 Found net devices under 0000:86:00.0: cvl_0_0 00:07:43.687 08:06:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # 
net_devs+=("${pci_net_devs[@]}") 00:07:43.687 08:06:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:43.687 08:06:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:43.687 08:06:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:43.687 08:06:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:43.687 08:06:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:43.687 08:06:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:43.687 08:06:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:43.687 08:06:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:07:43.687 Found net devices under 0000:86:00.1: cvl_0_1 00:07:43.687 08:06:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:43.687 08:06:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:43.687 08:06:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # is_hw=yes 00:07:43.687 08:06:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:43.687 08:06:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:07:43.687 08:06:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:07:43.687 08:06:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:43.687 08:06:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:43.687 08:06:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:43.687 08:06:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:43.687 08:06:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:43.687 08:06:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:43.687 08:06:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:43.687 08:06:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:43.687 08:06:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:43.687 08:06:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:43.687 08:06:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:43.687 08:06:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:43.687 08:06:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:43.687 08:06:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:43.687 08:06:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:43.687 08:06:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:43.687 08:06:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev 
cvl_0_0 00:07:43.687 08:06:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:43.687 08:06:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:43.687 08:06:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:43.687 08:06:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:43.687 08:06:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:43.687 08:06:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:43.687 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:43.687 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.471 ms 00:07:43.687 00:07:43.687 --- 10.0.0.2 ping statistics --- 00:07:43.687 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:43.687 rtt min/avg/max/mdev = 0.471/0.471/0.471/0.000 ms 00:07:43.688 08:06:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:43.688 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:43.688 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.206 ms 00:07:43.688 00:07:43.688 --- 10.0.0.1 ping statistics --- 00:07:43.688 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:43.688 rtt min/avg/max/mdev = 0.206/0.206/0.206/0.000 ms 00:07:43.688 08:06:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:43.688 08:06:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@450 -- # return 0 00:07:43.688 08:06:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:43.688 08:06:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:43.688 08:06:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:43.688 08:06:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:43.688 08:06:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:43.688 08:06:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:43.688 08:06:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:43.688 08:06:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:07:43.688 08:06:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:07:43.688 08:06:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:07:43.688 08:06:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:43.688 08:06:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:43.688 08:06:25 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:43.688 08:06:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=1205626 00:07:43.688 08:06:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 1205626 00:07:43.688 08:06:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:07:43.688 08:06:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 1205626 ']' 00:07:43.688 08:06:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:43.688 08:06:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:43.688 08:06:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:43.688 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:43.688 08:06:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:43.688 08:06:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:43.688 [2024-11-28 08:06:25.196015] Starting SPDK v25.01-pre git sha1 27aaaa748 / DPDK 24.03.0 initialization... 
00:07:43.688 [2024-11-28 08:06:25.196065] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:43.688 [2024-11-28 08:06:25.263008] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:43.688 [2024-11-28 08:06:25.305861] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:43.688 [2024-11-28 08:06:25.305903] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:43.688 [2024-11-28 08:06:25.305912] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:43.688 [2024-11-28 08:06:25.305920] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:43.688 [2024-11-28 08:06:25.305926] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
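The nvmf_tcp_init steps traced earlier in this log (common.sh@271–291) amount to moving the target-side NIC into its own network namespace, addressing both ends from 10.0.0.0/24, and pinging across. The sketch below is a dry run under assumed names taken from this log (cvl_0_0/cvl_0_1 interfaces, cvl_0_0_ns_spdk namespace); the `run` echo wrapper is an illustrative stand-in so it executes without root or the hardware:

```shell
#!/usr/bin/env bash
# Dry-run sketch of the nvmf_tcp_init sequence recorded above: put the
# target NIC in a private netns, give each side a 10.0.0.0/24 address,
# bring links up, open TCP/4420, then ping. run() echoes rather than
# executes, so this is safe without root; swap its body for "$@" to apply.
ns=cvl_0_0_ns_spdk
target_if=cvl_0_0
initiator_if=cvl_0_1
run() { echo "+ $*"; }   # stand-in executor; replace with "$@" for real use
run ip netns add "$ns"
run ip link set "$target_if" netns "$ns"
run ip addr add 10.0.0.1/24 dev "$initiator_if"
run ip netns exec "$ns" ip addr add 10.0.0.2/24 dev "$target_if"
run ip link set "$initiator_if" up
run ip netns exec "$ns" ip link set "$target_if" up
run ip netns exec "$ns" ip link set lo up
run iptables -I INPUT 1 -i "$initiator_if" -p tcp --dport 4420 -j ACCEPT
run ping -c 1 10.0.0.2   # initiator side reaching the in-namespace target
```

The two pings in the log (10.0.0.2 from the host, 10.0.0.1 from inside the namespace) are the success criteria for this setup before `return 0` at common.sh@450.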
00:07:43.688 [2024-11-28 08:06:25.307466] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:43.688 [2024-11-28 08:06:25.307553] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:43.688 [2024-11-28 08:06:25.307663] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:07:43.688 [2024-11-28 08:06:25.307663] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:43.688 08:06:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:43.688 08:06:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:07:43.688 08:06:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:43.688 08:06:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:43.688 08:06:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:43.688 08:06:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:43.688 08:06:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:43.688 08:06:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:43.688 08:06:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:43.688 [2024-11-28 08:06:25.441324] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:43.688 08:06:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:43.688 08:06:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:07:43.688 08:06:25 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:43.688 08:06:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:43.688 08:06:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:07:43.688 08:06:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:07:43.688 08:06:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:07:43.688 08:06:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:43.688 08:06:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:43.688 Malloc0 00:07:43.688 [2024-11-28 08:06:25.510956] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:43.688 08:06:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:43.688 08:06:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:07:43.688 08:06:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:43.688 08:06:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:43.688 08:06:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=1205672 00:07:43.688 08:06:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 1205672 /var/tmp/bdevperf.sock 00:07:43.688 08:06:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 1205672 ']' 00:07:43.688 08:06:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:43.688 08:06:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:07:43.688 08:06:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:07:43.688 08:06:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:43.688 08:06:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:43.688 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:07:43.688 08:06:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:07:43.688 08:06:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:43.688 08:06:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:07:43.688 08:06:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:43.688 08:06:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:07:43.688 08:06:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:07:43.688 { 00:07:43.688 "params": { 00:07:43.688 "name": "Nvme$subsystem", 00:07:43.688 "trtype": "$TEST_TRANSPORT", 00:07:43.688 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:43.688 "adrfam": "ipv4", 00:07:43.688 "trsvcid": "$NVMF_PORT", 00:07:43.688 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:43.688 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:43.688 "hdgst": ${hdgst:-false}, 
00:07:43.688 "ddgst": ${ddgst:-false} 00:07:43.688 }, 00:07:43.688 "method": "bdev_nvme_attach_controller" 00:07:43.688 } 00:07:43.688 EOF 00:07:43.688 )") 00:07:43.688 08:06:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:07:43.688 08:06:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:07:43.688 08:06:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:07:43.688 08:06:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:07:43.688 "params": { 00:07:43.688 "name": "Nvme0", 00:07:43.688 "trtype": "tcp", 00:07:43.688 "traddr": "10.0.0.2", 00:07:43.688 "adrfam": "ipv4", 00:07:43.688 "trsvcid": "4420", 00:07:43.688 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:43.688 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:07:43.688 "hdgst": false, 00:07:43.688 "ddgst": false 00:07:43.688 }, 00:07:43.688 "method": "bdev_nvme_attach_controller" 00:07:43.688 }' 00:07:43.688 [2024-11-28 08:06:25.606521] Starting SPDK v25.01-pre git sha1 27aaaa748 / DPDK 24.03.0 initialization... 00:07:43.688 [2024-11-28 08:06:25.606567] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1205672 ] 00:07:43.688 [2024-11-28 08:06:25.670371] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:43.688 [2024-11-28 08:06:25.711636] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:43.688 Running I/O for 10 seconds... 
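The JSON that bdevperf receives on /dev/fd/63 above is built by gen_nvmf_target_json from a parameterized heredoc per subsystem. A minimal reproduction of that pattern, with values copied from the rendered Nvme0 entry in this log (the variable names mirror the heredoc shown above):

```shell
#!/usr/bin/env bash
# Reproduce the gen_nvmf_target_json heredoc pattern from the log:
# shell variables expand inside the heredoc, and ${hdgst:-false} /
# ${ddgst:-false} default the digest flags when unset.
subsystem=0
TEST_TRANSPORT=tcp
NVMF_FIRST_TARGET_IP=10.0.0.2
NVMF_PORT=4420
config=$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
)
echo "$config"
```

In the real helper the per-subsystem fragments are accumulated into an array and flattened with `jq .` before being handed to bdevperf's `--json` option, which is exactly the fully-resolved block printed at common.sh@586 above.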
00:07:43.688 08:06:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:43.688 08:06:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:07:43.689 08:06:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:07:43.689 08:06:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:43.689 08:06:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:43.689 08:06:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:43.689 08:06:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:43.689 08:06:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:07:43.689 08:06:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:07:43.689 08:06:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:07:43.689 08:06:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:07:43.689 08:06:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:07:43.689 08:06:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:07:43.689 08:06:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:07:43.689 08:06:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:07:43.689 
08:06:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:07:43.689 08:06:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:43.689 08:06:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:43.949 08:06:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:43.949 08:06:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=78 00:07:43.949 08:06:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 78 -ge 100 ']' 00:07:43.949 08:06:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:07:43.949 08:06:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 00:07:43.949 08:06:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:07:44.211 08:06:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:07:44.211 08:06:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:07:44.211 08:06:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:44.211 08:06:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:44.211 08:06:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:44.211 08:06:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=641 00:07:44.211 08:06:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # 
'[' 641 -ge 100 ']' 00:07:44.211 08:06:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:07:44.211 08:06:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:07:44.211 08:06:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:07:44.211 08:06:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:07:44.211 08:06:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:44.211 08:06:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:44.211 [2024-11-28 08:06:26.269471] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12d90b0 is same with the state(6) to be set 00:07:44.211 [2024-11-28 08:06:26.269514] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12d90b0 is same with the state(6) to be set 00:07:44.211 [2024-11-28 08:06:26.269522] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12d90b0 is same with the state(6) to be set 00:07:44.211 [2024-11-28 08:06:26.269528] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12d90b0 is same with the state(6) to be set 00:07:44.211 [2024-11-28 08:06:26.269535] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12d90b0 is same with the state(6) to be set 00:07:44.211 [2024-11-28 08:06:26.269542] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12d90b0 is same with the state(6) to be set 00:07:44.211 [2024-11-28 08:06:26.269548] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12d90b0 is same with the state(6) to be set 00:07:44.211 [2024-11-28 08:06:26.269555] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12d90b0 is same with the state(6) to be set 00:07:44.212 [2024-11-28 08:06:26.269853] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12d90b0 is same with the state(6) to be set 00:07:44.212 [2024-11-28 08:06:26.269859] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12d90b0 is same with the state(6) to be set 00:07:44.212 [2024-11-28 08:06:26.269866] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12d90b0 is same with the state(6) to be set 00:07:44.212 [2024-11-28 08:06:26.269872] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12d90b0 is same with the state(6) to be set 00:07:44.212 [2024-11-28 08:06:26.269878] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12d90b0 is same with the state(6) to be set 00:07:44.212 [2024-11-28 08:06:26.269884] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12d90b0 is same with the state(6) to be set 00:07:44.212 [2024-11-28 08:06:26.269890] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12d90b0 is same with the state(6) to be set 00:07:44.212 [2024-11-28 08:06:26.269975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:90112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:44.212 [2024-11-28 08:06:26.270008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:44.212 [2024-11-28 08:06:26.270026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:90240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:44.212 [2024-11-28 08:06:26.270034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:44.212 [2024-11-28 
08:06:26.270043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:90368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:44.212 [2024-11-28 08:06:26.270050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:44.212 [... 61 further READ commands (cid:3-63, nsid:1, lba:90496-98176, len:128 each) reported with the identical "ABORTED - SQ DELETION (00/08)" completion status; repeated entries elided ...] 00:07:44.213 [2024-11-28 08:06:26.270993] nvme_tcp.c: 
326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x813430 is same with the state(6) to be set 00:07:44.213 [2024-11-28 08:06:26.271974] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:07:44.213 task offset: 90112 on job bdev=Nvme0n1 fails 00:07:44.213 00:07:44.213 Latency(us) 00:07:44.213 [2024-11-28T07:06:26.482Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:44.213 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:07:44.213 Job: Nvme0n1 ended in about 0.39 seconds with error 00:07:44.213 Verification LBA range: start 0x0 length 0x400 00:07:44.213 Nvme0n1 : 0.39 1793.57 112.10 163.05 0.00 31812.27 3789.69 27810.06 00:07:44.213 [2024-11-28T07:06:26.482Z] =================================================================================================================== 00:07:44.213 [2024-11-28T07:06:26.482Z] Total : 1793.57 112.10 163.05 0.00 31812.27 3789.69 27810.06 00:07:44.213 08:06:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:44.213 08:06:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:07:44.213 08:06:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:44.213 [2024-11-28 08:06:26.274423] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:44.213 [2024-11-28 08:06:26.274446] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5fa510 (9): Bad file descriptor 00:07:44.213 08:06:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:44.213 08:06:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:44.213 08:06:26 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:07:44.214 [2024-11-28 08:06:26.368048] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 00:07:45.154 08:06:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 1205672 00:07:45.154 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (1205672) - No such process 00:07:45.154 08:06:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true 00:07:45.154 08:06:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:07:45.154 08:06:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:07:45.154 08:06:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:07:45.154 08:06:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:07:45.154 08:06:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:07:45.154 08:06:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:07:45.154 08:06:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:07:45.154 { 00:07:45.154 "params": { 00:07:45.154 "name": "Nvme$subsystem", 00:07:45.154 "trtype": "$TEST_TRANSPORT", 00:07:45.154 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:45.154 "adrfam": "ipv4", 00:07:45.154 "trsvcid": "$NVMF_PORT", 00:07:45.154 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:45.154 
"hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:45.154 "hdgst": ${hdgst:-false}, 00:07:45.154 "ddgst": ${ddgst:-false} 00:07:45.154 }, 00:07:45.154 "method": "bdev_nvme_attach_controller" 00:07:45.154 } 00:07:45.155 EOF 00:07:45.155 )") 00:07:45.155 08:06:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:07:45.155 08:06:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:07:45.155 08:06:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:07:45.155 08:06:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:07:45.155 "params": { 00:07:45.155 "name": "Nvme0", 00:07:45.155 "trtype": "tcp", 00:07:45.155 "traddr": "10.0.0.2", 00:07:45.155 "adrfam": "ipv4", 00:07:45.155 "trsvcid": "4420", 00:07:45.155 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:45.155 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:07:45.155 "hdgst": false, 00:07:45.155 "ddgst": false 00:07:45.155 }, 00:07:45.155 "method": "bdev_nvme_attach_controller" 00:07:45.155 }' 00:07:45.155 [2024-11-28 08:06:27.335307] Starting SPDK v25.01-pre git sha1 27aaaa748 / DPDK 24.03.0 initialization... 00:07:45.155 [2024-11-28 08:06:27.335352] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1206005 ] 00:07:45.155 [2024-11-28 08:06:27.398085] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:45.414 [2024-11-28 08:06:27.439273] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:45.675 Running I/O for 1 seconds... 
00:07:46.617 1920.00 IOPS, 120.00 MiB/s 00:07:46.618 Latency(us) 00:07:46.618 [2024-11-28T07:06:28.887Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:46.618 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:07:46.618 Verification LBA range: start 0x0 length 0x400 00:07:46.618 Nvme0n1 : 1.01 1966.38 122.90 0.00 0.00 32035.49 7579.38 27696.08 00:07:46.618 [2024-11-28T07:06:28.887Z] =================================================================================================================== 00:07:46.618 [2024-11-28T07:06:28.887Z] Total : 1966.38 122.90 0.00 0.00 32035.49 7579.38 27696.08 00:07:46.878 08:06:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:07:46.878 08:06:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:07:46.878 08:06:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:07:46.878 08:06:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:07:46.878 08:06:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:07:46.878 08:06:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:46.878 08:06:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:07:46.878 08:06:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:46.878 08:06:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:07:46.878 08:06:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:46.878 08:06:28 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:46.878 rmmod nvme_tcp 00:07:46.878 rmmod nvme_fabrics 00:07:46.878 rmmod nvme_keyring 00:07:46.878 08:06:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:46.878 08:06:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:07:46.878 08:06:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:07:46.878 08:06:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 1205626 ']' 00:07:46.878 08:06:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 1205626 00:07:46.878 08:06:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 1205626 ']' 00:07:46.878 08:06:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 1205626 00:07:46.878 08:06:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # uname 00:07:46.878 08:06:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:46.878 08:06:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1205626 00:07:46.878 08:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:07:46.878 08:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:07:46.878 08:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1205626' 00:07:46.878 killing process with pid 1205626 00:07:46.878 08:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 1205626 00:07:46.878 08:06:29 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 1205626 00:07:47.139 [2024-11-28 08:06:29.169910] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:07:47.139 08:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:47.139 08:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:47.139 08:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:47.139 08:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:07:47.139 08:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:47.139 08:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:07:47.139 08:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:07:47.139 08:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:47.139 08:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:47.139 08:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:47.139 08:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:47.139 08:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:49.052 08:06:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:49.052 08:06:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:07:49.052 00:07:49.052 real 0m12.169s 00:07:49.052 user 0m19.450s 
00:07:49.052 sys 0m5.403s 00:07:49.052 08:06:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:49.052 08:06:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:49.052 ************************************ 00:07:49.052 END TEST nvmf_host_management 00:07:49.052 ************************************ 00:07:49.052 08:06:31 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:07:49.052 08:06:31 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:49.052 08:06:31 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:49.052 08:06:31 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:49.312 ************************************ 00:07:49.312 START TEST nvmf_lvol 00:07:49.312 ************************************ 00:07:49.312 08:06:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:07:49.312 * Looking for test storage... 
00:07:49.312 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:49.312 08:06:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:49.312 08:06:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1693 -- # lcov --version 00:07:49.312 08:06:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:49.312 08:06:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:49.312 08:06:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:49.312 08:06:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:49.312 08:06:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:49.312 08:06:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:07:49.312 08:06:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:07:49.312 08:06:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:07:49.312 08:06:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:07:49.312 08:06:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:07:49.312 08:06:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:07:49.312 08:06:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:07:49.312 08:06:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:49.312 08:06:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:07:49.312 08:06:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:07:49.312 08:06:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:49.312 08:06:31 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:49.312 08:06:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:07:49.312 08:06:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:07:49.312 08:06:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:49.312 08:06:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:07:49.312 08:06:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:07:49.312 08:06:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:07:49.312 08:06:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:07:49.312 08:06:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:49.312 08:06:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:07:49.312 08:06:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:07:49.312 08:06:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:49.312 08:06:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:49.312 08:06:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:07:49.312 08:06:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:49.312 08:06:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:49.312 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:49.312 --rc genhtml_branch_coverage=1 00:07:49.312 --rc genhtml_function_coverage=1 00:07:49.312 --rc genhtml_legend=1 00:07:49.312 --rc geninfo_all_blocks=1 00:07:49.312 --rc geninfo_unexecuted_blocks=1 
00:07:49.312 00:07:49.312 ' 00:07:49.312 08:06:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:49.312 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:49.312 --rc genhtml_branch_coverage=1 00:07:49.312 --rc genhtml_function_coverage=1 00:07:49.312 --rc genhtml_legend=1 00:07:49.312 --rc geninfo_all_blocks=1 00:07:49.312 --rc geninfo_unexecuted_blocks=1 00:07:49.312 00:07:49.312 ' 00:07:49.312 08:06:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:49.312 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:49.312 --rc genhtml_branch_coverage=1 00:07:49.312 --rc genhtml_function_coverage=1 00:07:49.312 --rc genhtml_legend=1 00:07:49.312 --rc geninfo_all_blocks=1 00:07:49.312 --rc geninfo_unexecuted_blocks=1 00:07:49.312 00:07:49.312 ' 00:07:49.312 08:06:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:49.312 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:49.312 --rc genhtml_branch_coverage=1 00:07:49.312 --rc genhtml_function_coverage=1 00:07:49.312 --rc genhtml_legend=1 00:07:49.312 --rc geninfo_all_blocks=1 00:07:49.312 --rc geninfo_unexecuted_blocks=1 00:07:49.312 00:07:49.312 ' 00:07:49.312 08:06:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:49.312 08:06:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:07:49.312 08:06:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:49.312 08:06:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:49.312 08:06:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:49.312 08:06:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:49.312 08:06:31 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:49.312 08:06:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:49.312 08:06:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:49.312 08:06:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:49.312 08:06:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:49.312 08:06:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:49.312 08:06:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:07:49.312 08:06:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:07:49.312 08:06:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:49.313 08:06:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:49.313 08:06:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:49.313 08:06:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:49.313 08:06:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:49.313 08:06:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:07:49.313 08:06:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:49.313 08:06:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:49.313 08:06:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:49.313 08:06:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:49.313 08:06:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:49.313 08:06:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:49.313 08:06:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:07:49.313 08:06:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:49.313 08:06:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:07:49.313 08:06:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:49.313 08:06:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:49.313 08:06:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:49.313 08:06:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:49.313 08:06:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:49.313 08:06:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:49.313 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:49.313 08:06:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:49.313 08:06:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:49.313 08:06:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:49.313 08:06:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:49.313 08:06:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:49.313 08:06:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:07:49.313 08:06:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:07:49.313 08:06:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:49.313 08:06:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:07:49.313 08:06:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:49.313 08:06:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:49.313 08:06:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:49.313 08:06:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:49.313 08:06:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:49.313 08:06:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:49.313 08:06:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:49.313 08:06:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:49.313 08:06:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:49.313 08:06:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:49.313 08:06:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:07:49.313 08:06:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:54.599 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:54.599 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:07:54.599 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:54.599 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:54.599 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:54.599 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:54.599 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:54.599 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:07:54.599 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:54.599 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:07:54.599 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:07:54.599 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:07:54.599 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:07:54.599 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
nvmf/common.sh@322 -- # mlx=() 00:07:54.599 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:07:54.599 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:54.599 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:54.599 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:54.599 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:54.599 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:54.599 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:54.599 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:54.599 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:54.599 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:54.599 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:54.599 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:54.599 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:54.599 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:54.599 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:54.599 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 
00:07:54.599 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:54.599 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:54.599 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:54.599 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:54.599 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:07:54.599 Found 0000:86:00.0 (0x8086 - 0x159b) 00:07:54.599 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:54.599 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:54.599 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:54.599 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:54.599 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:54.599 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:54.599 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:07:54.599 Found 0000:86:00.1 (0x8086 - 0x159b) 00:07:54.599 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:54.599 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:54.599 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:54.599 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:54.600 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:54.600 
08:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:54.600 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:54.600 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:54.600 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:54.600 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:54.600 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:54.600 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:54.600 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:54.600 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:54.600 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:54.600 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:07:54.600 Found net devices under 0000:86:00.0: cvl_0_0 00:07:54.600 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:54.600 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:54.600 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:54.600 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:54.600 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:54.600 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:54.600 08:06:36 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:54.600 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:54.600 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:07:54.600 Found net devices under 0000:86:00.1: cvl_0_1 00:07:54.600 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:54.600 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:54.600 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # is_hw=yes 00:07:54.600 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:54.600 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:07:54.600 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:07:54.600 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:54.600 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:54.600 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:54.600 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:54.600 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:54.600 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:54.600 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:54.600 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:54.600 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@263 
-- # NVMF_SECOND_INITIATOR_IP= 00:07:54.600 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:54.600 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:54.600 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:54.600 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:54.600 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:54.600 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:54.600 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:54.860 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:54.860 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:54.860 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:54.860 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:54.860 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:54.860 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:54.860 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:54.860 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:07:54.860 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.345 ms 00:07:54.860 00:07:54.860 --- 10.0.0.2 ping statistics --- 00:07:54.860 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:54.860 rtt min/avg/max/mdev = 0.345/0.345/0.345/0.000 ms 00:07:54.860 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:54.860 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:54.860 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.198 ms 00:07:54.860 00:07:54.860 --- 10.0.0.1 ping statistics --- 00:07:54.860 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:54.860 rtt min/avg/max/mdev = 0.198/0.198/0.198/0.000 ms 00:07:54.860 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:54.860 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@450 -- # return 0 00:07:54.860 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:54.860 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:54.860 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:54.860 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:54.860 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:54.860 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:54.860 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:54.860 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:07:54.860 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:54.860 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
common/autotest_common.sh@726 -- # xtrace_disable 00:07:54.860 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:54.860 08:06:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:07:54.860 08:06:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=1209845 00:07:54.860 08:06:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 1209845 00:07:54.860 08:06:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 1209845 ']' 00:07:54.860 08:06:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:54.860 08:06:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:54.860 08:06:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:54.860 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:54.860 08:06:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:54.860 08:06:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:54.860 [2024-11-28 08:06:37.043372] Starting SPDK v25.01-pre git sha1 27aaaa748 / DPDK 24.03.0 initialization... 
00:07:54.860 [2024-11-28 08:06:37.043418] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:54.860 [2024-11-28 08:06:37.107283] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:55.120 [2024-11-28 08:06:37.150627] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:55.120 [2024-11-28 08:06:37.150664] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:55.120 [2024-11-28 08:06:37.150671] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:55.120 [2024-11-28 08:06:37.150677] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:55.120 [2024-11-28 08:06:37.150681] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:07:55.120 [2024-11-28 08:06:37.152087] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:55.120 [2024-11-28 08:06:37.152119] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:55.120 [2024-11-28 08:06:37.152122] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:55.120 08:06:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:55.120 08:06:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 00:07:55.120 08:06:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:55.120 08:06:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:55.120 08:06:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:55.120 08:06:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:55.120 08:06:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:07:55.380 [2024-11-28 08:06:37.459221] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:55.380 08:06:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:07:55.640 08:06:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:07:55.640 08:06:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:07:55.640 08:06:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:07:55.640 08:06:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:07:55.900 08:06:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:07:56.160 08:06:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=c92cf5ce-1e38-43b9-b69e-366eb0d69dde 00:07:56.160 08:06:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u c92cf5ce-1e38-43b9-b69e-366eb0d69dde lvol 20 00:07:56.420 08:06:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=a8688865-00d0-40bf-b63a-5f08793da033 00:07:56.420 08:06:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:56.680 08:06:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 a8688865-00d0-40bf-b63a-5f08793da033 00:07:56.680 08:06:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:07:56.940 [2024-11-28 08:06:39.092737] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:56.940 08:06:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:57.200 08:06:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=1210191 00:07:57.200 08:06:39 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:07:57.200 08:06:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:07:58.140 08:06:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot a8688865-00d0-40bf-b63a-5f08793da033 MY_SNAPSHOT 00:07:58.400 08:06:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=96c9068d-e7aa-4274-931f-37805309d13c 00:07:58.400 08:06:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize a8688865-00d0-40bf-b63a-5f08793da033 30 00:07:58.659 08:06:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 96c9068d-e7aa-4274-931f-37805309d13c MY_CLONE 00:07:58.918 08:06:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=a92223d8-7248-4bdd-be95-afc160909a4d 00:07:58.918 08:06:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate a92223d8-7248-4bdd-be95-afc160909a4d 00:07:59.488 08:06:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 1210191 00:08:07.624 Initializing NVMe Controllers 00:08:07.624 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:08:07.624 Controller IO queue size 128, less than required. 00:08:07.624 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:08:07.624 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:08:07.624 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:08:07.624 Initialization complete. Launching workers. 00:08:07.624 ======================================================== 00:08:07.624 Latency(us) 00:08:07.624 Device Information : IOPS MiB/s Average min max 00:08:07.624 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 12062.10 47.12 10613.10 1598.03 51288.41 00:08:07.624 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 11875.40 46.39 10777.18 3425.63 59985.78 00:08:07.624 ======================================================== 00:08:07.624 Total : 23937.50 93.51 10694.50 1598.03 59985.78 00:08:07.624 00:08:07.624 08:06:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:07.624 08:06:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete a8688865-00d0-40bf-b63a-5f08793da033 00:08:07.884 08:06:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u c92cf5ce-1e38-43b9-b69e-366eb0d69dde 00:08:08.143 08:06:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:08:08.144 08:06:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:08:08.144 08:06:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:08:08.144 08:06:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:08.144 08:06:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:08:08.144 08:06:50 nvmf_tcp.nvmf_target_core.nvmf_lvol 
-- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:08.144 08:06:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:08:08.144 08:06:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:08.144 08:06:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:08.144 rmmod nvme_tcp 00:08:08.144 rmmod nvme_fabrics 00:08:08.144 rmmod nvme_keyring 00:08:08.144 08:06:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:08.144 08:06:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:08:08.144 08:06:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:08:08.144 08:06:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 1209845 ']' 00:08:08.144 08:06:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 1209845 00:08:08.144 08:06:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 1209845 ']' 00:08:08.144 08:06:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 1209845 00:08:08.144 08:06:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # uname 00:08:08.144 08:06:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:08.144 08:06:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1209845 00:08:08.144 08:06:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:08.144 08:06:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:08.144 08:06:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1209845' 00:08:08.144 killing process with pid 1209845 00:08:08.144 08:06:50 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@973 -- # kill 1209845 00:08:08.144 08:06:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 1209845 00:08:08.404 08:06:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:08.404 08:06:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:08.404 08:06:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:08.404 08:06:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:08:08.404 08:06:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:08:08.404 08:06:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:08.404 08:06:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:08:08.404 08:06:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:08.404 08:06:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:08.404 08:06:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:08.404 08:06:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:08.404 08:06:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:10.947 08:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:10.947 00:08:10.947 real 0m21.340s 00:08:10.947 user 1m2.785s 00:08:10.947 sys 0m7.197s 00:08:10.947 08:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:10.947 08:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:10.947 ************************************ 00:08:10.947 END TEST 
nvmf_lvol 00:08:10.947 ************************************ 00:08:10.947 08:06:52 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:08:10.947 08:06:52 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:10.947 08:06:52 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:10.947 08:06:52 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:10.947 ************************************ 00:08:10.947 START TEST nvmf_lvs_grow 00:08:10.947 ************************************ 00:08:10.947 08:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:08:10.947 * Looking for test storage... 00:08:10.947 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:10.947 08:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:10.947 08:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lcov --version 00:08:10.947 08:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:10.947 08:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:10.947 08:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:10.947 08:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:10.947 08:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:10.947 08:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:08:10.947 08:06:52 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:08:10.947 08:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:08:10.947 08:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:08:10.947 08:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:08:10.947 08:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:08:10.947 08:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:08:10.947 08:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:10.947 08:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:08:10.947 08:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:08:10.947 08:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:10.947 08:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:10.947 08:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:08:10.947 08:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:08:10.947 08:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:10.947 08:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:08:10.947 08:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:08:10.947 08:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:08:10.947 08:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:08:10.947 08:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:10.947 08:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:08:10.947 08:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:08:10.947 08:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:10.947 08:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:10.947 08:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:08:10.947 08:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:10.947 08:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:10.947 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:10.947 --rc genhtml_branch_coverage=1 00:08:10.947 --rc genhtml_function_coverage=1 00:08:10.947 --rc genhtml_legend=1 00:08:10.947 --rc geninfo_all_blocks=1 00:08:10.947 --rc geninfo_unexecuted_blocks=1 00:08:10.947 00:08:10.947 ' 
00:08:10.947 08:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:10.947 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:10.947 --rc genhtml_branch_coverage=1 00:08:10.947 --rc genhtml_function_coverage=1 00:08:10.947 --rc genhtml_legend=1 00:08:10.947 --rc geninfo_all_blocks=1 00:08:10.947 --rc geninfo_unexecuted_blocks=1 00:08:10.947 00:08:10.947 ' 00:08:10.947 08:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:10.947 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:10.947 --rc genhtml_branch_coverage=1 00:08:10.947 --rc genhtml_function_coverage=1 00:08:10.947 --rc genhtml_legend=1 00:08:10.947 --rc geninfo_all_blocks=1 00:08:10.947 --rc geninfo_unexecuted_blocks=1 00:08:10.947 00:08:10.947 ' 00:08:10.947 08:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:10.947 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:10.947 --rc genhtml_branch_coverage=1 00:08:10.947 --rc genhtml_function_coverage=1 00:08:10.947 --rc genhtml_legend=1 00:08:10.947 --rc geninfo_all_blocks=1 00:08:10.947 --rc geninfo_unexecuted_blocks=1 00:08:10.947 00:08:10.947 ' 00:08:10.947 08:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:10.947 08:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:08:10.947 08:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:10.947 08:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:10.947 08:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:10.947 08:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:10.947 08:06:52 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:10.947 08:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:10.947 08:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:10.947 08:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:10.947 08:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:10.947 08:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:10.947 08:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:08:10.947 08:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:08:10.947 08:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:10.947 08:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:10.947 08:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:10.947 08:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:10.947 08:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:10.947 08:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:08:10.947 08:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:10.947 08:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:10.947 
08:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:10.947 08:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:10.948 08:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:10.948 08:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:10.948 08:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:08:10.948 08:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:10.948 08:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:08:10.948 08:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:10.948 08:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:10.948 08:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:10.948 08:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:10.948 08:06:52 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:10.948 08:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:10.948 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:10.948 08:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:10.948 08:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:10.948 08:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:10.948 08:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:10.948 08:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:08:10.948 08:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:08:10.948 08:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:10.948 08:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:10.948 08:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:10.948 08:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:10.948 08:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:10.948 08:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:10.948 08:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:10.948 08:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:10.948 
08:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:10.948 08:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:10.948 08:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:08:10.948 08:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:16.230 08:06:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:16.230 08:06:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:08:16.230 08:06:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:16.230 08:06:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:16.230 08:06:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:16.230 08:06:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:16.230 08:06:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:16.230 08:06:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:08:16.230 08:06:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:16.230 08:06:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:08:16.230 08:06:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:08:16.230 08:06:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:08:16.230 08:06:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:08:16.230 08:06:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:08:16.230 08:06:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local 
-ga mlx 00:08:16.230 08:06:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:16.230 08:06:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:16.230 08:06:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:16.230 08:06:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:16.230 08:06:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:16.230 08:06:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:16.230 08:06:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:16.230 08:06:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:16.230 08:06:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:16.230 08:06:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:16.230 08:06:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:16.230 08:06:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:16.230 08:06:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:16.230 08:06:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:16.230 08:06:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:16.230 08:06:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:16.230 08:06:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:16.230 08:06:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:16.230 08:06:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:16.230 08:06:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:08:16.230 Found 0000:86:00.0 (0x8086 - 0x159b) 00:08:16.230 08:06:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:16.230 08:06:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:16.230 08:06:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:16.230 08:06:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:16.230 08:06:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:16.230 08:06:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:16.230 08:06:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:08:16.230 Found 0000:86:00.1 (0x8086 - 0x159b) 00:08:16.230 08:06:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:16.230 08:06:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:16.230 08:06:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:16.230 08:06:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:16.230 08:06:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:16.230 
08:06:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:16.230 08:06:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:16.230 08:06:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:16.230 08:06:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:16.230 08:06:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:16.230 08:06:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:16.230 08:06:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:16.230 08:06:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:16.230 08:06:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:16.231 08:06:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:16.231 08:06:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:08:16.231 Found net devices under 0000:86:00.0: cvl_0_0 00:08:16.231 08:06:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:16.231 08:06:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:16.231 08:06:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:16.231 08:06:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:16.231 08:06:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:16.231 08:06:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@418 -- # [[ up == up ]] 00:08:16.231 08:06:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:16.231 08:06:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:16.231 08:06:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:08:16.231 Found net devices under 0000:86:00.1: cvl_0_1 00:08:16.231 08:06:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:16.231 08:06:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:16.231 08:06:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # is_hw=yes 00:08:16.231 08:06:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:16.231 08:06:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:16.231 08:06:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:16.231 08:06:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:16.231 08:06:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:16.231 08:06:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:16.231 08:06:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:16.231 08:06:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:16.231 08:06:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:16.231 08:06:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:16.231 08:06:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:16.231 08:06:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:16.231 08:06:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:16.231 08:06:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:16.231 08:06:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:16.231 08:06:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:16.231 08:06:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:16.231 08:06:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:16.231 08:06:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:16.231 08:06:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:16.231 08:06:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:16.231 08:06:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:16.231 08:06:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:16.231 08:06:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:16.231 08:06:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:16.231 08:06:58 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:16.231 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:16.231 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.480 ms 00:08:16.231 00:08:16.231 --- 10.0.0.2 ping statistics --- 00:08:16.231 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:16.231 rtt min/avg/max/mdev = 0.480/0.480/0.480/0.000 ms 00:08:16.231 08:06:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:16.231 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:16.231 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.205 ms 00:08:16.231 00:08:16.231 --- 10.0.0.1 ping statistics --- 00:08:16.231 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:16.231 rtt min/avg/max/mdev = 0.205/0.205/0.205/0.000 ms 00:08:16.231 08:06:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:16.231 08:06:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@450 -- # return 0 00:08:16.231 08:06:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:16.231 08:06:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:16.231 08:06:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:16.231 08:06:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:16.231 08:06:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:16.231 08:06:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:16.231 08:06:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:16.231 08:06:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # 
nvmfappstart -m 0x1 00:08:16.231 08:06:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:16.231 08:06:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:16.231 08:06:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:16.231 08:06:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=1215562 00:08:16.231 08:06:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:08:16.231 08:06:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 1215562 00:08:16.231 08:06:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 1215562 ']' 00:08:16.231 08:06:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:16.231 08:06:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:16.231 08:06:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:16.231 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:16.231 08:06:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:16.231 08:06:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:16.231 [2024-11-28 08:06:58.487322] Starting SPDK v25.01-pre git sha1 27aaaa748 / DPDK 24.03.0 initialization... 
00:08:16.231 [2024-11-28 08:06:58.487370] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:16.492 [2024-11-28 08:06:58.553037] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:16.492 [2024-11-28 08:06:58.594673] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:16.492 [2024-11-28 08:06:58.594707] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:16.492 [2024-11-28 08:06:58.594714] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:16.492 [2024-11-28 08:06:58.594720] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:16.492 [2024-11-28 08:06:58.594725] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
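The nvmf_tgt instance above is launched with `-m 0x1` and reports a single reactor on core 0, while the bdevperf client later in this run uses `-m 0x2` (core 1). As a rough illustration of what those hex core masks select (this sketch is not taken from the SPDK or DPDK source; the function name is made up for illustration):

```shell
#!/bin/bash
# Decode an SPDK/DPDK-style hex core mask into the list of CPU cores it
# selects. Illustrative only: it mirrors what "-m 0x1" means in the log
# above (core 0 only), it is not how SPDK itself parses the mask.
decode_core_mask() {
    local mask=$(( $1 ))        # accepts 0x-prefixed hex
    local cores="" i=0
    while [ "$i" -lt 32 ]; do
        if [ $(( (mask >> i) & 1 )) -ne 0 ]; then
            cores="${cores:+$cores }$i"
        fi
        i=$(( i + 1 ))
    done
    echo "$cores"
}

decode_core_mask 0x1    # -> 0   (the single nvmf_tgt reactor above)
decode_core_mask 0x2    # -> 1   (the bdevperf "-m 0x2" instance below)
```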
00:08:16.492 [2024-11-28 08:06:58.595293] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:16.492 08:06:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:16.492 08:06:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 00:08:16.492 08:06:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:16.492 08:06:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:16.492 08:06:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:16.492 08:06:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:16.492 08:06:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:08:16.753 [2024-11-28 08:06:58.897085] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:16.753 08:06:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:08:16.753 08:06:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:16.753 08:06:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:16.753 08:06:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:16.753 ************************************ 00:08:16.753 START TEST lvs_grow_clean 00:08:16.753 ************************************ 00:08:16.753 08:06:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # lvs_grow 00:08:16.753 08:06:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local 
aio_bdev lvs lvol 00:08:16.753 08:06:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:08:16.753 08:06:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:08:16.753 08:06:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:08:16.753 08:06:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:08:16.753 08:06:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:08:16.753 08:06:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:16.753 08:06:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:16.753 08:06:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:17.013 08:06:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:08:17.013 08:06:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:08:17.274 08:06:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=a45a3c93-0bd9-46db-8857-dd9a3b3e390e 00:08:17.274 08:06:59 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a45a3c93-0bd9-46db-8857-dd9a3b3e390e 00:08:17.274 08:06:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:08:17.534 08:06:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:08:17.534 08:06:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:08:17.534 08:06:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u a45a3c93-0bd9-46db-8857-dd9a3b3e390e lvol 150 00:08:17.534 08:06:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=b727f8d2-07d3-4c55-8715-789caeac40bd 00:08:17.534 08:06:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:17.534 08:06:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:08:17.794 [2024-11-28 08:06:59.946888] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:08:17.794 [2024-11-28 08:06:59.946939] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:08:17.794 true 00:08:17.794 08:06:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a45a3c93-0bd9-46db-8857-dd9a3b3e390e 00:08:17.794 08:06:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:08:18.054 08:07:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:08:18.054 08:07:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:18.314 08:07:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 b727f8d2-07d3-4c55-8715-789caeac40bd 00:08:18.314 08:07:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:08:18.574 [2024-11-28 08:07:00.721228] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:18.574 08:07:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:18.834 08:07:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1216068 00:08:18.834 08:07:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:08:18.834 08:07:00 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:18.834 08:07:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1216068 /var/tmp/bdevperf.sock 00:08:18.834 08:07:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 1216068 ']' 00:08:18.834 08:07:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:18.834 08:07:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:18.834 08:07:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:18.834 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:18.834 08:07:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:18.834 08:07:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:08:18.834 [2024-11-28 08:07:00.973454] Starting SPDK v25.01-pre git sha1 27aaaa748 / DPDK 24.03.0 initialization... 
00:08:18.834 [2024-11-28 08:07:00.973516] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1216068 ] 00:08:18.834 [2024-11-28 08:07:01.034156] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:18.834 [2024-11-28 08:07:01.074428] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:19.094 08:07:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:19.094 08:07:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 00:08:19.094 08:07:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:08:19.354 Nvme0n1 00:08:19.354 08:07:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:08:19.614 [ 00:08:19.614 { 00:08:19.614 "name": "Nvme0n1", 00:08:19.614 "aliases": [ 00:08:19.614 "b727f8d2-07d3-4c55-8715-789caeac40bd" 00:08:19.614 ], 00:08:19.614 "product_name": "NVMe disk", 00:08:19.614 "block_size": 4096, 00:08:19.614 "num_blocks": 38912, 00:08:19.614 "uuid": "b727f8d2-07d3-4c55-8715-789caeac40bd", 00:08:19.614 "numa_id": 1, 00:08:19.614 "assigned_rate_limits": { 00:08:19.614 "rw_ios_per_sec": 0, 00:08:19.614 "rw_mbytes_per_sec": 0, 00:08:19.614 "r_mbytes_per_sec": 0, 00:08:19.614 "w_mbytes_per_sec": 0 00:08:19.614 }, 00:08:19.614 "claimed": false, 00:08:19.614 "zoned": false, 00:08:19.614 "supported_io_types": { 00:08:19.614 "read": true, 
00:08:19.614 "write": true, 00:08:19.614 "unmap": true, 00:08:19.614 "flush": true, 00:08:19.614 "reset": true, 00:08:19.615 "nvme_admin": true, 00:08:19.615 "nvme_io": true, 00:08:19.615 "nvme_io_md": false, 00:08:19.615 "write_zeroes": true, 00:08:19.615 "zcopy": false, 00:08:19.615 "get_zone_info": false, 00:08:19.615 "zone_management": false, 00:08:19.615 "zone_append": false, 00:08:19.615 "compare": true, 00:08:19.615 "compare_and_write": true, 00:08:19.615 "abort": true, 00:08:19.615 "seek_hole": false, 00:08:19.615 "seek_data": false, 00:08:19.615 "copy": true, 00:08:19.615 "nvme_iov_md": false 00:08:19.615 }, 00:08:19.615 "memory_domains": [ 00:08:19.615 { 00:08:19.615 "dma_device_id": "system", 00:08:19.615 "dma_device_type": 1 00:08:19.615 } 00:08:19.615 ], 00:08:19.615 "driver_specific": { 00:08:19.615 "nvme": [ 00:08:19.615 { 00:08:19.615 "trid": { 00:08:19.615 "trtype": "TCP", 00:08:19.615 "adrfam": "IPv4", 00:08:19.615 "traddr": "10.0.0.2", 00:08:19.615 "trsvcid": "4420", 00:08:19.615 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:08:19.615 }, 00:08:19.615 "ctrlr_data": { 00:08:19.615 "cntlid": 1, 00:08:19.615 "vendor_id": "0x8086", 00:08:19.615 "model_number": "SPDK bdev Controller", 00:08:19.615 "serial_number": "SPDK0", 00:08:19.615 "firmware_revision": "25.01", 00:08:19.615 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:19.615 "oacs": { 00:08:19.615 "security": 0, 00:08:19.615 "format": 0, 00:08:19.615 "firmware": 0, 00:08:19.615 "ns_manage": 0 00:08:19.615 }, 00:08:19.615 "multi_ctrlr": true, 00:08:19.615 "ana_reporting": false 00:08:19.615 }, 00:08:19.615 "vs": { 00:08:19.615 "nvme_version": "1.3" 00:08:19.615 }, 00:08:19.615 "ns_data": { 00:08:19.615 "id": 1, 00:08:19.615 "can_share": true 00:08:19.615 } 00:08:19.615 } 00:08:19.615 ], 00:08:19.615 "mp_policy": "active_passive" 00:08:19.615 } 00:08:19.615 } 00:08:19.615 ] 00:08:19.615 08:07:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # 
run_test_pid=1216173 00:08:19.615 08:07:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:08:19.615 08:07:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:19.615 Running I/O for 10 seconds... 00:08:20.555 Latency(us) 00:08:20.555 [2024-11-28T07:07:02.824Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:20.555 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:20.555 Nvme0n1 : 1.00 22560.00 88.12 0.00 0.00 0.00 0.00 0.00 00:08:20.555 [2024-11-28T07:07:02.824Z] =================================================================================================================== 00:08:20.555 [2024-11-28T07:07:02.824Z] Total : 22560.00 88.12 0.00 0.00 0.00 0.00 0.00 00:08:20.555 00:08:21.495 08:07:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u a45a3c93-0bd9-46db-8857-dd9a3b3e390e 00:08:21.754 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:21.754 Nvme0n1 : 2.00 22699.00 88.67 0.00 0.00 0.00 0.00 0.00 00:08:21.754 [2024-11-28T07:07:04.023Z] =================================================================================================================== 00:08:21.754 [2024-11-28T07:07:04.023Z] Total : 22699.00 88.67 0.00 0.00 0.00 0.00 0.00 00:08:21.754 00:08:21.754 true 00:08:21.754 08:07:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:08:21.754 08:07:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 
a45a3c93-0bd9-46db-8857-dd9a3b3e390e 00:08:22.014 08:07:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:08:22.014 08:07:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:08:22.014 08:07:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 1216173 00:08:22.583 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:22.583 Nvme0n1 : 3.00 22745.67 88.85 0.00 0.00 0.00 0.00 0.00 00:08:22.583 [2024-11-28T07:07:04.852Z] =================================================================================================================== 00:08:22.583 [2024-11-28T07:07:04.852Z] Total : 22745.67 88.85 0.00 0.00 0.00 0.00 0.00 00:08:22.583 00:08:23.522 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:23.522 Nvme0n1 : 4.00 22798.50 89.06 0.00 0.00 0.00 0.00 0.00 00:08:23.522 [2024-11-28T07:07:05.791Z] =================================================================================================================== 00:08:23.522 [2024-11-28T07:07:05.792Z] Total : 22798.50 89.06 0.00 0.00 0.00 0.00 0.00 00:08:23.523 00:08:24.904 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:24.904 Nvme0n1 : 5.00 22853.00 89.27 0.00 0.00 0.00 0.00 0.00 00:08:24.904 [2024-11-28T07:07:07.173Z] =================================================================================================================== 00:08:24.904 [2024-11-28T07:07:07.174Z] Total : 22853.00 89.27 0.00 0.00 0.00 0.00 0.00 00:08:24.905 00:08:25.844 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:25.844 Nvme0n1 : 6.00 22875.33 89.36 0.00 0.00 0.00 0.00 0.00 00:08:25.844 [2024-11-28T07:07:08.113Z] =================================================================================================================== 
00:08:25.844 [2024-11-28T07:07:08.113Z] Total : 22875.33 89.36 0.00 0.00 0.00 0.00 0.00 00:08:25.844 00:08:26.785 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:26.785 Nvme0n1 : 7.00 22909.43 89.49 0.00 0.00 0.00 0.00 0.00 00:08:26.785 [2024-11-28T07:07:09.054Z] =================================================================================================================== 00:08:26.785 [2024-11-28T07:07:09.054Z] Total : 22909.43 89.49 0.00 0.00 0.00 0.00 0.00 00:08:26.785 00:08:27.725 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:27.725 Nvme0n1 : 8.00 22923.25 89.54 0.00 0.00 0.00 0.00 0.00 00:08:27.725 [2024-11-28T07:07:09.994Z] =================================================================================================================== 00:08:27.725 [2024-11-28T07:07:09.994Z] Total : 22923.25 89.54 0.00 0.00 0.00 0.00 0.00 00:08:27.725 00:08:28.663 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:28.663 Nvme0n1 : 9.00 22883.56 89.39 0.00 0.00 0.00 0.00 0.00 00:08:28.663 [2024-11-28T07:07:10.932Z] =================================================================================================================== 00:08:28.663 [2024-11-28T07:07:10.932Z] Total : 22883.56 89.39 0.00 0.00 0.00 0.00 0.00 00:08:28.663 00:08:29.603 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:29.603 Nvme0n1 : 10.00 22896.60 89.44 0.00 0.00 0.00 0.00 0.00 00:08:29.603 [2024-11-28T07:07:11.873Z] =================================================================================================================== 00:08:29.604 [2024-11-28T07:07:11.873Z] Total : 22896.60 89.44 0.00 0.00 0.00 0.00 0.00 00:08:29.604 00:08:29.604 00:08:29.604 Latency(us) 00:08:29.604 [2024-11-28T07:07:11.873Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:29.604 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 
4096) 00:08:29.604 Nvme0n1 : 10.00 22898.84 89.45 0.00 0.00 5586.73 3419.27 10542.75 00:08:29.604 [2024-11-28T07:07:11.873Z] =================================================================================================================== 00:08:29.604 [2024-11-28T07:07:11.873Z] Total : 22898.84 89.45 0.00 0.00 5586.73 3419.27 10542.75 00:08:29.604 { 00:08:29.604 "results": [ 00:08:29.604 { 00:08:29.604 "job": "Nvme0n1", 00:08:29.604 "core_mask": "0x2", 00:08:29.604 "workload": "randwrite", 00:08:29.604 "status": "finished", 00:08:29.604 "queue_depth": 128, 00:08:29.604 "io_size": 4096, 00:08:29.604 "runtime": 10.00461, 00:08:29.604 "iops": 22898.843633085147, 00:08:29.604 "mibps": 89.44860794173886, 00:08:29.604 "io_failed": 0, 00:08:29.604 "io_timeout": 0, 00:08:29.604 "avg_latency_us": 5586.732522613654, 00:08:29.604 "min_latency_us": 3419.269565217391, 00:08:29.604 "max_latency_us": 10542.747826086956 00:08:29.604 } 00:08:29.604 ], 00:08:29.604 "core_count": 1 00:08:29.604 } 00:08:29.604 08:07:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1216068 00:08:29.604 08:07:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 1216068 ']' 00:08:29.604 08:07:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 1216068 00:08:29.604 08:07:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 00:08:29.604 08:07:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:29.604 08:07:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1216068 00:08:29.864 08:07:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:08:29.864 08:07:11 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:08:29.864 08:07:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1216068' 00:08:29.864 killing process with pid 1216068 00:08:29.864 08:07:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 1216068 00:08:29.864 Received shutdown signal, test time was about 10.000000 seconds 00:08:29.864 00:08:29.864 Latency(us) 00:08:29.864 [2024-11-28T07:07:12.133Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:29.864 [2024-11-28T07:07:12.133Z] =================================================================================================================== 00:08:29.864 [2024-11-28T07:07:12.133Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:29.864 08:07:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 1216068 00:08:29.864 08:07:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:30.123 08:07:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:30.383 08:07:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a45a3c93-0bd9-46db-8857-dd9a3b3e390e 00:08:30.383 08:07:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:08:30.383 08:07:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- 
# free_clusters=61 00:08:30.383 08:07:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:08:30.383 08:07:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:30.643 [2024-11-28 08:07:12.796428] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:08:30.643 08:07:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a45a3c93-0bd9-46db-8857-dd9a3b3e390e 00:08:30.643 08:07:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0 00:08:30.643 08:07:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a45a3c93-0bd9-46db-8857-dd9a3b3e390e 00:08:30.643 08:07:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:30.643 08:07:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:30.643 08:07:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:30.643 08:07:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:30.643 08:07:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:30.643 
08:07:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:30.643 08:07:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:30.643 08:07:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:08:30.643 08:07:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a45a3c93-0bd9-46db-8857-dd9a3b3e390e 00:08:30.903 request: 00:08:30.903 { 00:08:30.903 "uuid": "a45a3c93-0bd9-46db-8857-dd9a3b3e390e", 00:08:30.903 "method": "bdev_lvol_get_lvstores", 00:08:30.903 "req_id": 1 00:08:30.903 } 00:08:30.903 Got JSON-RPC error response 00:08:30.903 response: 00:08:30.903 { 00:08:30.903 "code": -19, 00:08:30.903 "message": "No such device" 00:08:30.903 } 00:08:30.903 08:07:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1 00:08:30.903 08:07:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:30.903 08:07:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:30.903 08:07:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:30.903 08:07:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:31.163 aio_bdev 00:08:31.163 08:07:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- 
target/nvmf_lvs_grow.sh@87 -- # waitforbdev b727f8d2-07d3-4c55-8715-789caeac40bd 00:08:31.163 08:07:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=b727f8d2-07d3-4c55-8715-789caeac40bd 00:08:31.163 08:07:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:31.163 08:07:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 00:08:31.163 08:07:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:31.163 08:07:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:31.163 08:07:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:31.163 08:07:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b b727f8d2-07d3-4c55-8715-789caeac40bd -t 2000 00:08:31.423 [ 00:08:31.423 { 00:08:31.423 "name": "b727f8d2-07d3-4c55-8715-789caeac40bd", 00:08:31.423 "aliases": [ 00:08:31.423 "lvs/lvol" 00:08:31.423 ], 00:08:31.423 "product_name": "Logical Volume", 00:08:31.423 "block_size": 4096, 00:08:31.423 "num_blocks": 38912, 00:08:31.423 "uuid": "b727f8d2-07d3-4c55-8715-789caeac40bd", 00:08:31.423 "assigned_rate_limits": { 00:08:31.423 "rw_ios_per_sec": 0, 00:08:31.423 "rw_mbytes_per_sec": 0, 00:08:31.423 "r_mbytes_per_sec": 0, 00:08:31.423 "w_mbytes_per_sec": 0 00:08:31.423 }, 00:08:31.423 "claimed": false, 00:08:31.423 "zoned": false, 00:08:31.423 "supported_io_types": { 00:08:31.423 "read": true, 00:08:31.423 "write": true, 00:08:31.423 "unmap": true, 00:08:31.423 "flush": false, 00:08:31.423 "reset": true, 00:08:31.423 
"nvme_admin": false, 00:08:31.423 "nvme_io": false, 00:08:31.423 "nvme_io_md": false, 00:08:31.423 "write_zeroes": true, 00:08:31.423 "zcopy": false, 00:08:31.423 "get_zone_info": false, 00:08:31.423 "zone_management": false, 00:08:31.423 "zone_append": false, 00:08:31.423 "compare": false, 00:08:31.423 "compare_and_write": false, 00:08:31.423 "abort": false, 00:08:31.423 "seek_hole": true, 00:08:31.423 "seek_data": true, 00:08:31.423 "copy": false, 00:08:31.423 "nvme_iov_md": false 00:08:31.423 }, 00:08:31.423 "driver_specific": { 00:08:31.423 "lvol": { 00:08:31.423 "lvol_store_uuid": "a45a3c93-0bd9-46db-8857-dd9a3b3e390e", 00:08:31.423 "base_bdev": "aio_bdev", 00:08:31.423 "thin_provision": false, 00:08:31.423 "num_allocated_clusters": 38, 00:08:31.423 "snapshot": false, 00:08:31.423 "clone": false, 00:08:31.423 "esnap_clone": false 00:08:31.423 } 00:08:31.423 } 00:08:31.423 } 00:08:31.423 ] 00:08:31.423 08:07:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:08:31.423 08:07:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a45a3c93-0bd9-46db-8857-dd9a3b3e390e 00:08:31.423 08:07:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:08:31.682 08:07:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:08:31.682 08:07:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a45a3c93-0bd9-46db-8857-dd9a3b3e390e 00:08:31.682 08:07:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:08:31.942 08:07:13 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:08:31.942 08:07:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete b727f8d2-07d3-4c55-8715-789caeac40bd 00:08:31.942 08:07:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u a45a3c93-0bd9-46db-8857-dd9a3b3e390e 00:08:32.201 08:07:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:32.461 08:07:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:32.461 00:08:32.461 real 0m15.633s 00:08:32.461 user 0m15.133s 00:08:32.461 sys 0m1.545s 00:08:32.461 08:07:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:32.461 08:07:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:08:32.461 ************************************ 00:08:32.461 END TEST lvs_grow_clean 00:08:32.461 ************************************ 00:08:32.461 08:07:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:08:32.461 08:07:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:32.461 08:07:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:32.461 08:07:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:32.461 ************************************ 
00:08:32.461 START TEST lvs_grow_dirty 00:08:32.461 ************************************ 00:08:32.461 08:07:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:08:32.461 08:07:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:08:32.461 08:07:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:08:32.461 08:07:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:08:32.461 08:07:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:08:32.461 08:07:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:08:32.461 08:07:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:08:32.461 08:07:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:32.461 08:07:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:32.461 08:07:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:32.720 08:07:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:08:32.720 08:07:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:08:32.978 08:07:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=bcce84d4-449b-45b0-977b-a839c914332f 00:08:32.978 08:07:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u bcce84d4-449b-45b0-977b-a839c914332f 00:08:32.978 08:07:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:08:33.236 08:07:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:08:33.236 08:07:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:08:33.236 08:07:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u bcce84d4-449b-45b0-977b-a839c914332f lvol 150 00:08:33.236 08:07:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=21024f50-6e60-40fd-a595-265a1debfc1d 00:08:33.236 08:07:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:33.236 08:07:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:08:33.495 [2024-11-28 08:07:15.629898] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 
102400 00:08:33.495 [2024-11-28 08:07:15.629959] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:08:33.495 true 00:08:33.495 08:07:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u bcce84d4-449b-45b0-977b-a839c914332f 00:08:33.495 08:07:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:08:33.754 08:07:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:08:33.754 08:07:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:33.754 08:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 21024f50-6e60-40fd-a595-265a1debfc1d 00:08:34.014 08:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:08:34.273 [2024-11-28 08:07:16.376113] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:34.273 08:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:34.533 08:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1218670 00:08:34.533 08:07:16 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:08:34.533 08:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:34.533 08:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1218670 /var/tmp/bdevperf.sock 00:08:34.533 08:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 1218670 ']' 00:08:34.533 08:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:34.533 08:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:34.533 08:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:34.533 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:34.533 08:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:34.533 08:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:34.533 [2024-11-28 08:07:16.611642] Starting SPDK v25.01-pre git sha1 27aaaa748 / DPDK 24.03.0 initialization... 
00:08:34.533 [2024-11-28 08:07:16.611688] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1218670 ] 00:08:34.533 [2024-11-28 08:07:16.673723] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:34.533 [2024-11-28 08:07:16.714247] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:34.793 08:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:34.793 08:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:08:34.793 08:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:08:34.793 Nvme0n1 00:08:35.053 08:07:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:08:35.053 [ 00:08:35.053 { 00:08:35.053 "name": "Nvme0n1", 00:08:35.053 "aliases": [ 00:08:35.053 "21024f50-6e60-40fd-a595-265a1debfc1d" 00:08:35.053 ], 00:08:35.053 "product_name": "NVMe disk", 00:08:35.053 "block_size": 4096, 00:08:35.053 "num_blocks": 38912, 00:08:35.053 "uuid": "21024f50-6e60-40fd-a595-265a1debfc1d", 00:08:35.053 "numa_id": 1, 00:08:35.053 "assigned_rate_limits": { 00:08:35.053 "rw_ios_per_sec": 0, 00:08:35.053 "rw_mbytes_per_sec": 0, 00:08:35.053 "r_mbytes_per_sec": 0, 00:08:35.053 "w_mbytes_per_sec": 0 00:08:35.053 }, 00:08:35.053 "claimed": false, 00:08:35.053 "zoned": false, 00:08:35.053 "supported_io_types": { 00:08:35.053 "read": true, 
00:08:35.053 "write": true, 00:08:35.053 "unmap": true, 00:08:35.053 "flush": true, 00:08:35.053 "reset": true, 00:08:35.053 "nvme_admin": true, 00:08:35.053 "nvme_io": true, 00:08:35.053 "nvme_io_md": false, 00:08:35.053 "write_zeroes": true, 00:08:35.053 "zcopy": false, 00:08:35.053 "get_zone_info": false, 00:08:35.053 "zone_management": false, 00:08:35.053 "zone_append": false, 00:08:35.053 "compare": true, 00:08:35.053 "compare_and_write": true, 00:08:35.053 "abort": true, 00:08:35.053 "seek_hole": false, 00:08:35.053 "seek_data": false, 00:08:35.053 "copy": true, 00:08:35.053 "nvme_iov_md": false 00:08:35.053 }, 00:08:35.053 "memory_domains": [ 00:08:35.053 { 00:08:35.053 "dma_device_id": "system", 00:08:35.053 "dma_device_type": 1 00:08:35.053 } 00:08:35.053 ], 00:08:35.053 "driver_specific": { 00:08:35.053 "nvme": [ 00:08:35.053 { 00:08:35.053 "trid": { 00:08:35.053 "trtype": "TCP", 00:08:35.053 "adrfam": "IPv4", 00:08:35.053 "traddr": "10.0.0.2", 00:08:35.053 "trsvcid": "4420", 00:08:35.053 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:08:35.053 }, 00:08:35.053 "ctrlr_data": { 00:08:35.053 "cntlid": 1, 00:08:35.053 "vendor_id": "0x8086", 00:08:35.053 "model_number": "SPDK bdev Controller", 00:08:35.053 "serial_number": "SPDK0", 00:08:35.053 "firmware_revision": "25.01", 00:08:35.053 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:35.053 "oacs": { 00:08:35.053 "security": 0, 00:08:35.053 "format": 0, 00:08:35.053 "firmware": 0, 00:08:35.053 "ns_manage": 0 00:08:35.053 }, 00:08:35.053 "multi_ctrlr": true, 00:08:35.053 "ana_reporting": false 00:08:35.053 }, 00:08:35.053 "vs": { 00:08:35.053 "nvme_version": "1.3" 00:08:35.053 }, 00:08:35.053 "ns_data": { 00:08:35.053 "id": 1, 00:08:35.053 "can_share": true 00:08:35.053 } 00:08:35.053 } 00:08:35.053 ], 00:08:35.053 "mp_policy": "active_passive" 00:08:35.053 } 00:08:35.053 } 00:08:35.053 ] 00:08:35.053 08:07:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # 
run_test_pid=1218900 00:08:35.053 08:07:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:08:35.053 08:07:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:35.313 Running I/O for 10 seconds... 00:08:36.254 Latency(us) 00:08:36.254 [2024-11-28T07:07:18.523Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:36.254 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:36.254 Nvme0n1 : 1.00 22477.00 87.80 0.00 0.00 0.00 0.00 0.00 00:08:36.254 [2024-11-28T07:07:18.523Z] =================================================================================================================== 00:08:36.254 [2024-11-28T07:07:18.523Z] Total : 22477.00 87.80 0.00 0.00 0.00 0.00 0.00 00:08:36.254 00:08:37.194 08:07:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u bcce84d4-449b-45b0-977b-a839c914332f 00:08:37.194 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:37.194 Nvme0n1 : 2.00 22681.50 88.60 0.00 0.00 0.00 0.00 0.00 00:08:37.194 [2024-11-28T07:07:19.463Z] =================================================================================================================== 00:08:37.194 [2024-11-28T07:07:19.463Z] Total : 22681.50 88.60 0.00 0.00 0.00 0.00 0.00 00:08:37.194 00:08:37.194 true 00:08:37.194 08:07:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u bcce84d4-449b-45b0-977b-a839c914332f 00:08:37.194 08:07:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq 
-r '.[0].total_data_clusters' 00:08:37.454 08:07:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:08:37.454 08:07:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:08:37.454 08:07:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 1218900 00:08:38.393 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:38.393 Nvme0n1 : 3.00 22615.67 88.34 0.00 0.00 0.00 0.00 0.00 00:08:38.393 [2024-11-28T07:07:20.662Z] =================================================================================================================== 00:08:38.393 [2024-11-28T07:07:20.662Z] Total : 22615.67 88.34 0.00 0.00 0.00 0.00 0.00 00:08:38.393 00:08:39.333 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:39.333 Nvme0n1 : 4.00 22647.50 88.47 0.00 0.00 0.00 0.00 0.00 00:08:39.333 [2024-11-28T07:07:21.602Z] =================================================================================================================== 00:08:39.333 [2024-11-28T07:07:21.602Z] Total : 22647.50 88.47 0.00 0.00 0.00 0.00 0.00 00:08:39.333 00:08:40.273 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:40.273 Nvme0n1 : 5.00 22719.40 88.75 0.00 0.00 0.00 0.00 0.00 00:08:40.273 [2024-11-28T07:07:22.542Z] =================================================================================================================== 00:08:40.273 [2024-11-28T07:07:22.542Z] Total : 22719.40 88.75 0.00 0.00 0.00 0.00 0.00 00:08:40.273 00:08:41.212 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:41.212 Nvme0n1 : 6.00 22785.83 89.01 0.00 0.00 0.00 0.00 0.00 00:08:41.212 [2024-11-28T07:07:23.481Z] =================================================================================================================== 00:08:41.212 
[2024-11-28T07:07:23.481Z] Total : 22785.83 89.01 0.00 0.00 0.00 0.00 0.00 00:08:41.212 00:08:42.150 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:42.150 Nvme0n1 : 7.00 22817.57 89.13 0.00 0.00 0.00 0.00 0.00 00:08:42.150 [2024-11-28T07:07:24.419Z] =================================================================================================================== 00:08:42.150 [2024-11-28T07:07:24.419Z] Total : 22817.57 89.13 0.00 0.00 0.00 0.00 0.00 00:08:42.150 00:08:43.088 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:43.088 Nvme0n1 : 8.00 22855.88 89.28 0.00 0.00 0.00 0.00 0.00 00:08:43.088 [2024-11-28T07:07:25.357Z] =================================================================================================================== 00:08:43.088 [2024-11-28T07:07:25.357Z] Total : 22855.88 89.28 0.00 0.00 0.00 0.00 0.00 00:08:43.088 00:08:44.471 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:44.471 Nvme0n1 : 9.00 22876.22 89.36 0.00 0.00 0.00 0.00 0.00 00:08:44.471 [2024-11-28T07:07:26.740Z] =================================================================================================================== 00:08:44.471 [2024-11-28T07:07:26.740Z] Total : 22876.22 89.36 0.00 0.00 0.00 0.00 0.00 00:08:44.471 00:08:45.413 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:45.413 Nvme0n1 : 10.00 22907.50 89.48 0.00 0.00 0.00 0.00 0.00 00:08:45.413 [2024-11-28T07:07:27.682Z] =================================================================================================================== 00:08:45.413 [2024-11-28T07:07:27.682Z] Total : 22907.50 89.48 0.00 0.00 0.00 0.00 0.00 00:08:45.413 00:08:45.413 00:08:45.413 Latency(us) 00:08:45.413 [2024-11-28T07:07:27.682Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:45.413 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 
00:08:45.413 Nvme0n1 : 10.00 22905.45 89.47 0.00 0.00 5584.52 3234.06 11055.64 00:08:45.413 [2024-11-28T07:07:27.682Z] =================================================================================================================== 00:08:45.413 [2024-11-28T07:07:27.682Z] Total : 22905.45 89.47 0.00 0.00 5584.52 3234.06 11055.64 00:08:45.413 { 00:08:45.413 "results": [ 00:08:45.413 { 00:08:45.413 "job": "Nvme0n1", 00:08:45.413 "core_mask": "0x2", 00:08:45.413 "workload": "randwrite", 00:08:45.413 "status": "finished", 00:08:45.413 "queue_depth": 128, 00:08:45.413 "io_size": 4096, 00:08:45.413 "runtime": 10.003731, 00:08:45.413 "iops": 22905.453975121884, 00:08:45.413 "mibps": 89.47442959031986, 00:08:45.413 "io_failed": 0, 00:08:45.413 "io_timeout": 0, 00:08:45.413 "avg_latency_us": 5584.522657801761, 00:08:45.413 "min_latency_us": 3234.0591304347827, 00:08:45.413 "max_latency_us": 11055.638260869566 00:08:45.413 } 00:08:45.413 ], 00:08:45.413 "core_count": 1 00:08:45.413 } 00:08:45.413 08:07:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1218670 00:08:45.413 08:07:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 1218670 ']' 00:08:45.413 08:07:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 1218670 00:08:45.413 08:07:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 00:08:45.413 08:07:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:45.413 08:07:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1218670 00:08:45.413 08:07:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:08:45.413 08:07:27 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:08:45.413 08:07:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1218670' 00:08:45.413 killing process with pid 1218670 00:08:45.413 08:07:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 1218670 00:08:45.413 Received shutdown signal, test time was about 10.000000 seconds 00:08:45.413 00:08:45.413 Latency(us) 00:08:45.413 [2024-11-28T07:07:27.682Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:45.413 [2024-11-28T07:07:27.682Z] =================================================================================================================== 00:08:45.413 [2024-11-28T07:07:27.682Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:45.413 08:07:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 1218670 00:08:45.413 08:07:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:45.674 08:07:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:45.934 08:07:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u bcce84d4-449b-45b0-977b-a839c914332f 00:08:45.934 08:07:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:08:46.195 08:07:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- 
# free_clusters=61 00:08:46.195 08:07:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:08:46.195 08:07:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 1215562 00:08:46.195 08:07:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 1215562 00:08:46.195 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 1215562 Killed "${NVMF_APP[@]}" "$@" 00:08:46.195 08:07:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:08:46.195 08:07:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:08:46.195 08:07:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:46.195 08:07:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:46.195 08:07:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:46.195 08:07:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=1220743 00:08:46.195 08:07:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 1220743 00:08:46.195 08:07:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:08:46.195 08:07:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 1220743 ']' 00:08:46.195 08:07:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:46.195 08:07:28 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:46.195 08:07:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:46.195 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:46.195 08:07:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:46.195 08:07:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:46.195 [2024-11-28 08:07:28.288749] Starting SPDK v25.01-pre git sha1 27aaaa748 / DPDK 24.03.0 initialization... 00:08:46.195 [2024-11-28 08:07:28.288798] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:46.195 [2024-11-28 08:07:28.355297] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:46.195 [2024-11-28 08:07:28.396385] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:46.195 [2024-11-28 08:07:28.396422] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:46.195 [2024-11-28 08:07:28.396429] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:46.195 [2024-11-28 08:07:28.396435] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:46.195 [2024-11-28 08:07:28.396440] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:08:46.195 [2024-11-28 08:07:28.396991] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:46.455 08:07:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:46.455 08:07:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:08:46.455 08:07:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:46.455 08:07:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:46.455 08:07:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:46.455 08:07:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:46.455 08:07:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:46.455 [2024-11-28 08:07:28.703791] blobstore.c:4896:bs_recover: *NOTICE*: Performing recovery on blobstore 00:08:46.455 [2024-11-28 08:07:28.703874] blobstore.c:4843:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:08:46.455 [2024-11-28 08:07:28.703899] blobstore.c:4843:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:08:46.715 08:07:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:08:46.715 08:07:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 21024f50-6e60-40fd-a595-265a1debfc1d 00:08:46.715 08:07:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=21024f50-6e60-40fd-a595-265a1debfc1d 
00:08:46.715 08:07:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:46.715 08:07:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:08:46.715 08:07:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:46.715 08:07:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:46.715 08:07:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:46.715 08:07:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 21024f50-6e60-40fd-a595-265a1debfc1d -t 2000 00:08:46.975 [ 00:08:46.975 { 00:08:46.975 "name": "21024f50-6e60-40fd-a595-265a1debfc1d", 00:08:46.975 "aliases": [ 00:08:46.975 "lvs/lvol" 00:08:46.975 ], 00:08:46.975 "product_name": "Logical Volume", 00:08:46.975 "block_size": 4096, 00:08:46.975 "num_blocks": 38912, 00:08:46.975 "uuid": "21024f50-6e60-40fd-a595-265a1debfc1d", 00:08:46.975 "assigned_rate_limits": { 00:08:46.975 "rw_ios_per_sec": 0, 00:08:46.975 "rw_mbytes_per_sec": 0, 00:08:46.975 "r_mbytes_per_sec": 0, 00:08:46.975 "w_mbytes_per_sec": 0 00:08:46.975 }, 00:08:46.975 "claimed": false, 00:08:46.975 "zoned": false, 00:08:46.975 "supported_io_types": { 00:08:46.975 "read": true, 00:08:46.975 "write": true, 00:08:46.975 "unmap": true, 00:08:46.975 "flush": false, 00:08:46.975 "reset": true, 00:08:46.975 "nvme_admin": false, 00:08:46.975 "nvme_io": false, 00:08:46.975 "nvme_io_md": false, 00:08:46.975 "write_zeroes": true, 00:08:46.975 "zcopy": false, 00:08:46.975 "get_zone_info": false, 00:08:46.975 "zone_management": false, 00:08:46.975 "zone_append": 
false, 00:08:46.975 "compare": false, 00:08:46.975 "compare_and_write": false, 00:08:46.975 "abort": false, 00:08:46.975 "seek_hole": true, 00:08:46.975 "seek_data": true, 00:08:46.975 "copy": false, 00:08:46.975 "nvme_iov_md": false 00:08:46.975 }, 00:08:46.975 "driver_specific": { 00:08:46.975 "lvol": { 00:08:46.975 "lvol_store_uuid": "bcce84d4-449b-45b0-977b-a839c914332f", 00:08:46.975 "base_bdev": "aio_bdev", 00:08:46.975 "thin_provision": false, 00:08:46.975 "num_allocated_clusters": 38, 00:08:46.975 "snapshot": false, 00:08:46.975 "clone": false, 00:08:46.975 "esnap_clone": false 00:08:46.975 } 00:08:46.975 } 00:08:46.975 } 00:08:46.975 ] 00:08:46.975 08:07:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:08:46.975 08:07:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u bcce84d4-449b-45b0-977b-a839c914332f 00:08:46.975 08:07:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:08:47.235 08:07:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:08:47.235 08:07:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u bcce84d4-449b-45b0-977b-a839c914332f 00:08:47.235 08:07:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:08:47.235 08:07:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:08:47.235 08:07:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_aio_delete aio_bdev 00:08:47.495 [2024-11-28 08:07:29.656618] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:08:47.495 08:07:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u bcce84d4-449b-45b0-977b-a839c914332f 00:08:47.495 08:07:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:08:47.495 08:07:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u bcce84d4-449b-45b0-977b-a839c914332f 00:08:47.495 08:07:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:47.495 08:07:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:47.495 08:07:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:47.495 08:07:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:47.495 08:07:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:47.495 08:07:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:47.495 08:07:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:47.495 08:07:29 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:08:47.495 08:07:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u bcce84d4-449b-45b0-977b-a839c914332f 00:08:47.755 request: 00:08:47.755 { 00:08:47.755 "uuid": "bcce84d4-449b-45b0-977b-a839c914332f", 00:08:47.755 "method": "bdev_lvol_get_lvstores", 00:08:47.755 "req_id": 1 00:08:47.755 } 00:08:47.756 Got JSON-RPC error response 00:08:47.756 response: 00:08:47.756 { 00:08:47.756 "code": -19, 00:08:47.756 "message": "No such device" 00:08:47.756 } 00:08:47.756 08:07:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 00:08:47.756 08:07:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:47.756 08:07:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:47.756 08:07:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:47.756 08:07:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:48.017 aio_bdev 00:08:48.017 08:07:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 21024f50-6e60-40fd-a595-265a1debfc1d 00:08:48.017 08:07:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=21024f50-6e60-40fd-a595-265a1debfc1d 00:08:48.017 08:07:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:48.017 08:07:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:08:48.017 08:07:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:48.017 08:07:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:48.017 08:07:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:48.278 08:07:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 21024f50-6e60-40fd-a595-265a1debfc1d -t 2000 00:08:48.278 [ 00:08:48.278 { 00:08:48.278 "name": "21024f50-6e60-40fd-a595-265a1debfc1d", 00:08:48.278 "aliases": [ 00:08:48.278 "lvs/lvol" 00:08:48.278 ], 00:08:48.278 "product_name": "Logical Volume", 00:08:48.278 "block_size": 4096, 00:08:48.278 "num_blocks": 38912, 00:08:48.278 "uuid": "21024f50-6e60-40fd-a595-265a1debfc1d", 00:08:48.278 "assigned_rate_limits": { 00:08:48.278 "rw_ios_per_sec": 0, 00:08:48.278 "rw_mbytes_per_sec": 0, 00:08:48.278 "r_mbytes_per_sec": 0, 00:08:48.278 "w_mbytes_per_sec": 0 00:08:48.278 }, 00:08:48.278 "claimed": false, 00:08:48.278 "zoned": false, 00:08:48.278 "supported_io_types": { 00:08:48.278 "read": true, 00:08:48.278 "write": true, 00:08:48.278 "unmap": true, 00:08:48.278 "flush": false, 00:08:48.278 "reset": true, 00:08:48.278 "nvme_admin": false, 00:08:48.278 "nvme_io": false, 00:08:48.278 "nvme_io_md": false, 00:08:48.278 "write_zeroes": true, 00:08:48.278 "zcopy": false, 00:08:48.278 "get_zone_info": false, 00:08:48.278 "zone_management": false, 00:08:48.278 "zone_append": false, 00:08:48.278 "compare": false, 00:08:48.278 "compare_and_write": false, 
00:08:48.278 "abort": false, 00:08:48.278 "seek_hole": true, 00:08:48.278 "seek_data": true, 00:08:48.278 "copy": false, 00:08:48.278 "nvme_iov_md": false 00:08:48.278 }, 00:08:48.278 "driver_specific": { 00:08:48.278 "lvol": { 00:08:48.278 "lvol_store_uuid": "bcce84d4-449b-45b0-977b-a839c914332f", 00:08:48.278 "base_bdev": "aio_bdev", 00:08:48.278 "thin_provision": false, 00:08:48.278 "num_allocated_clusters": 38, 00:08:48.278 "snapshot": false, 00:08:48.278 "clone": false, 00:08:48.278 "esnap_clone": false 00:08:48.278 } 00:08:48.278 } 00:08:48.278 } 00:08:48.278 ] 00:08:48.278 08:07:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:08:48.278 08:07:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u bcce84d4-449b-45b0-977b-a839c914332f 00:08:48.278 08:07:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:08:48.538 08:07:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:08:48.538 08:07:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u bcce84d4-449b-45b0-977b-a839c914332f 00:08:48.538 08:07:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:08:48.798 08:07:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:08:48.798 08:07:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 21024f50-6e60-40fd-a595-265a1debfc1d 00:08:49.058 08:07:31 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u bcce84d4-449b-45b0-977b-a839c914332f 00:08:49.058 08:07:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:49.319 08:07:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:49.319 00:08:49.319 real 0m16.857s 00:08:49.319 user 0m43.520s 00:08:49.319 sys 0m3.799s 00:08:49.319 08:07:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:49.319 08:07:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:49.319 ************************************ 00:08:49.319 END TEST lvs_grow_dirty 00:08:49.319 ************************************ 00:08:49.319 08:07:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:08:49.319 08:07:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 00:08:49.319 08:07:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:08:49.319 08:07:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:08:49.319 08:07:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:08:49.319 08:07:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:08:49.319 08:07:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:08:49.319 08:07:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow 
-- common/autotest_common.sh@824 -- # for n in $shm_files 00:08:49.319 08:07:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:08:49.319 nvmf_trace.0 00:08:49.580 08:07:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:08:49.580 08:07:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:08:49.580 08:07:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:49.580 08:07:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:08:49.580 08:07:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:49.580 08:07:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:08:49.580 08:07:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:49.580 08:07:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:49.580 rmmod nvme_tcp 00:08:49.580 rmmod nvme_fabrics 00:08:49.580 rmmod nvme_keyring 00:08:49.580 08:07:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:49.580 08:07:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:08:49.580 08:07:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:08:49.580 08:07:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 1220743 ']' 00:08:49.580 08:07:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 1220743 00:08:49.580 08:07:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 1220743 ']' 00:08:49.580 08:07:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 1220743 
00:08:49.580 08:07:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:08:49.580 08:07:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:49.580 08:07:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1220743 00:08:49.580 08:07:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:49.580 08:07:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:49.580 08:07:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1220743' 00:08:49.580 killing process with pid 1220743 00:08:49.580 08:07:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 1220743 00:08:49.580 08:07:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 1220743 00:08:49.840 08:07:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:49.840 08:07:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:49.840 08:07:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:49.840 08:07:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:08:49.840 08:07:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:08:49.840 08:07:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:49.840 08:07:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:08:49.840 08:07:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:49.840 08:07:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@302 -- # 
remove_spdk_ns 00:08:49.840 08:07:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:49.840 08:07:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:49.840 08:07:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:51.751 08:07:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:51.751 00:08:51.751 real 0m41.207s 00:08:51.751 user 1m4.212s 00:08:51.751 sys 0m9.899s 00:08:51.751 08:07:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:51.751 08:07:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:51.751 ************************************ 00:08:51.751 END TEST nvmf_lvs_grow 00:08:51.751 ************************************ 00:08:51.751 08:07:34 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:08:51.751 08:07:34 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:51.751 08:07:34 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:51.751 08:07:34 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:52.012 ************************************ 00:08:52.012 START TEST nvmf_bdev_io_wait 00:08:52.012 ************************************ 00:08:52.012 08:07:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:08:52.012 * Looking for test storage... 
00:08:52.012 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:52.012 08:07:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:52.012 08:07:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lcov --version 00:08:52.012 08:07:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:52.012 08:07:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:52.012 08:07:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:52.012 08:07:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:52.012 08:07:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:52.012 08:07:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:08:52.012 08:07:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:08:52.012 08:07:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:08:52.012 08:07:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:08:52.012 08:07:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:08:52.012 08:07:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:08:52.012 08:07:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:08:52.012 08:07:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:52.012 08:07:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:08:52.012 08:07:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # 
: 1 00:08:52.012 08:07:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:52.012 08:07:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:52.012 08:07:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:08:52.012 08:07:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:08:52.012 08:07:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:52.012 08:07:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:08:52.012 08:07:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:08:52.012 08:07:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:08:52.012 08:07:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:08:52.012 08:07:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:52.012 08:07:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:08:52.012 08:07:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:08:52.012 08:07:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:52.012 08:07:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:52.012 08:07:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:08:52.012 08:07:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:52.012 08:07:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:52.012 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:52.012 --rc genhtml_branch_coverage=1 00:08:52.012 --rc genhtml_function_coverage=1 00:08:52.012 --rc genhtml_legend=1 00:08:52.012 --rc geninfo_all_blocks=1 00:08:52.012 --rc geninfo_unexecuted_blocks=1 00:08:52.012 00:08:52.012 ' 00:08:52.012 08:07:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:52.012 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:52.012 --rc genhtml_branch_coverage=1 00:08:52.012 --rc genhtml_function_coverage=1 00:08:52.012 --rc genhtml_legend=1 00:08:52.012 --rc geninfo_all_blocks=1 00:08:52.012 --rc geninfo_unexecuted_blocks=1 00:08:52.012 00:08:52.012 ' 00:08:52.012 08:07:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:52.012 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:52.012 --rc genhtml_branch_coverage=1 00:08:52.012 --rc genhtml_function_coverage=1 00:08:52.012 --rc genhtml_legend=1 00:08:52.012 --rc geninfo_all_blocks=1 00:08:52.012 --rc geninfo_unexecuted_blocks=1 00:08:52.012 00:08:52.012 ' 00:08:52.012 08:07:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:52.012 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:52.012 --rc genhtml_branch_coverage=1 00:08:52.012 --rc genhtml_function_coverage=1 00:08:52.012 --rc genhtml_legend=1 00:08:52.012 --rc geninfo_all_blocks=1 00:08:52.012 --rc geninfo_unexecuted_blocks=1 00:08:52.012 00:08:52.012 ' 00:08:52.012 08:07:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:52.012 08:07:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:08:52.012 08:07:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:52.012 08:07:34 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:52.012 08:07:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:52.012 08:07:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:52.012 08:07:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:52.013 08:07:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:52.013 08:07:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:52.013 08:07:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:52.013 08:07:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:52.013 08:07:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:52.013 08:07:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:08:52.013 08:07:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:08:52.013 08:07:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:52.013 08:07:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:52.013 08:07:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:52.013 08:07:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:52.013 08:07:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:52.013 08:07:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:08:52.013 08:07:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:52.013 08:07:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:52.013 08:07:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:52.013 08:07:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:52.013 08:07:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:52.013 08:07:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:52.013 08:07:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:08:52.013 08:07:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:52.013 08:07:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:08:52.013 08:07:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:52.013 08:07:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:52.013 08:07:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:52.013 08:07:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
00:08:52.013 08:07:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:52.013 08:07:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:52.013 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:52.013 08:07:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:52.013 08:07:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:52.013 08:07:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:52.013 08:07:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:52.013 08:07:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:52.013 08:07:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:08:52.013 08:07:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:52.013 08:07:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:52.013 08:07:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:52.013 08:07:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:52.013 08:07:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:52.013 08:07:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:52.013 08:07:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:52.013 08:07:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 
00:08:52.013 08:07:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:52.013 08:07:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:52.013 08:07:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:08:52.013 08:07:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:57.297 08:07:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:57.297 08:07:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:08:57.297 08:07:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:57.297 08:07:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:57.297 08:07:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:57.297 08:07:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:57.297 08:07:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:57.297 08:07:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:08:57.297 08:07:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:57.297 08:07:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:08:57.297 08:07:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:08:57.297 08:07:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:08:57.297 08:07:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:08:57.297 08:07:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 
00:08:57.297 08:07:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:08:57.297 08:07:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:57.297 08:07:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:57.297 08:07:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:57.297 08:07:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:57.297 08:07:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:57.297 08:07:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:57.297 08:07:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:57.297 08:07:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:57.297 08:07:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:57.297 08:07:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:57.297 08:07:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:57.297 08:07:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:57.297 08:07:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:57.297 08:07:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:57.297 08:07:39 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:57.297 08:07:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:57.297 08:07:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:57.297 08:07:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:57.297 08:07:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:57.297 08:07:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:08:57.297 Found 0000:86:00.0 (0x8086 - 0x159b) 00:08:57.297 08:07:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:57.297 08:07:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:57.297 08:07:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:57.297 08:07:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:57.297 08:07:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:57.297 08:07:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:57.297 08:07:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:08:57.297 Found 0000:86:00.1 (0x8086 - 0x159b) 00:08:57.297 08:07:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:57.297 08:07:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:57.297 08:07:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:57.297 08:07:39 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:57.297 08:07:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:57.297 08:07:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:57.297 08:07:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:57.298 08:07:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:57.298 08:07:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:57.298 08:07:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:57.298 08:07:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:57.298 08:07:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:57.298 08:07:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:57.298 08:07:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:57.298 08:07:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:57.298 08:07:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:08:57.298 Found net devices under 0000:86:00.0: cvl_0_0 00:08:57.298 08:07:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:57.298 08:07:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:57.298 08:07:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:57.298 
08:07:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:57.298 08:07:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:57.298 08:07:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:57.298 08:07:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:57.298 08:07:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:57.298 08:07:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:08:57.298 Found net devices under 0000:86:00.1: cvl_0_1 00:08:57.298 08:07:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:57.298 08:07:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:57.298 08:07:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # is_hw=yes 00:08:57.298 08:07:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:57.298 08:07:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:57.298 08:07:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:57.298 08:07:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:57.298 08:07:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:57.298 08:07:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:57.298 08:07:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:57.298 08:07:39 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:57.298 08:07:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:57.298 08:07:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:57.298 08:07:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:57.298 08:07:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:57.298 08:07:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:57.298 08:07:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:57.298 08:07:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:57.298 08:07:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:57.298 08:07:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:57.298 08:07:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:57.559 08:07:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:57.559 08:07:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:57.559 08:07:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:57.559 08:07:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:57.559 08:07:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns 
exec cvl_0_0_ns_spdk ip link set lo up 00:08:57.559 08:07:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:57.559 08:07:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:57.559 08:07:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:57.559 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:57.559 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.400 ms 00:08:57.559 00:08:57.559 --- 10.0.0.2 ping statistics --- 00:08:57.559 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:57.559 rtt min/avg/max/mdev = 0.400/0.400/0.400/0.000 ms 00:08:57.559 08:07:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:57.559 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:57.559 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.137 ms 00:08:57.559 00:08:57.559 --- 10.0.0.1 ping statistics --- 00:08:57.559 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:57.559 rtt min/avg/max/mdev = 0.137/0.137/0.137/0.000 ms 00:08:57.559 08:07:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:57.559 08:07:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # return 0 00:08:57.559 08:07:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:57.559 08:07:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:57.559 08:07:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:57.559 08:07:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:57.559 08:07:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:57.559 08:07:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:57.559 08:07:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:57.559 08:07:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:08:57.559 08:07:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:57.559 08:07:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:57.559 08:07:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:57.559 08:07:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=1224804 00:08:57.559 08:07:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:08:57.559 08:07:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 1224804 00:08:57.559 08:07:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 1224804 ']' 00:08:57.559 08:07:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:57.559 08:07:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:57.559 08:07:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:57.559 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:57.559 08:07:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:57.559 08:07:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:57.559 [2024-11-28 08:07:39.822422] Starting SPDK v25.01-pre git sha1 27aaaa748 / DPDK 24.03.0 initialization... 00:08:57.559 [2024-11-28 08:07:39.822478] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:57.820 [2024-11-28 08:07:39.890923] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:57.820 [2024-11-28 08:07:39.936122] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:57.820 [2024-11-28 08:07:39.936160] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:08:57.820 [2024-11-28 08:07:39.936168] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:57.820 [2024-11-28 08:07:39.936174] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:57.820 [2024-11-28 08:07:39.936180] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:57.820 [2024-11-28 08:07:39.937707] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:57.820 [2024-11-28 08:07:39.937821] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:57.820 [2024-11-28 08:07:39.937906] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:57.820 [2024-11-28 08:07:39.937908] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:57.820 08:07:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:57.820 08:07:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:08:57.820 08:07:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:57.820 08:07:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:57.820 08:07:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:57.820 08:07:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:57.820 08:07:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:08:57.820 08:07:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:57.820 08:07:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:57.820 08:07:40 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:57.820 08:07:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:08:57.820 08:07:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:57.820 08:07:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:57.820 08:07:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:57.820 08:07:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:57.820 08:07:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:57.820 08:07:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:57.820 [2024-11-28 08:07:40.082441] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:58.080 08:07:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:58.080 08:07:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:08:58.080 08:07:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:58.080 08:07:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:58.080 Malloc0 00:08:58.080 08:07:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:58.080 08:07:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:58.080 08:07:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:58.080 
08:07:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:58.080 08:07:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:58.080 08:07:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:58.080 08:07:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:58.080 08:07:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:58.080 08:07:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:58.080 08:07:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:58.080 08:07:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:58.080 08:07:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:58.080 [2024-11-28 08:07:40.137860] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:58.080 08:07:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:58.080 08:07:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=1224877 00:08:58.080 08:07:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:08:58.080 08:07:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:08:58.080 08:07:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=1224879 
00:08:58.080 08:07:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:08:58.080 08:07:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:08:58.080 08:07:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:58.080 08:07:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:58.080 { 00:08:58.080 "params": { 00:08:58.080 "name": "Nvme$subsystem", 00:08:58.080 "trtype": "$TEST_TRANSPORT", 00:08:58.080 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:58.080 "adrfam": "ipv4", 00:08:58.080 "trsvcid": "$NVMF_PORT", 00:08:58.080 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:58.080 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:58.080 "hdgst": ${hdgst:-false}, 00:08:58.080 "ddgst": ${ddgst:-false} 00:08:58.080 }, 00:08:58.080 "method": "bdev_nvme_attach_controller" 00:08:58.080 } 00:08:58.080 EOF 00:08:58.080 )") 00:08:58.080 08:07:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:08:58.080 08:07:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:08:58.081 08:07:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=1224882 00:08:58.081 08:07:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:08:58.081 08:07:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:08:58.081 08:07:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:58.081 08:07:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:08:58.081 08:07:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:08:58.081 08:07:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:58.081 { 00:08:58.081 "params": { 00:08:58.081 "name": "Nvme$subsystem", 00:08:58.081 "trtype": "$TEST_TRANSPORT", 00:08:58.081 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:58.081 "adrfam": "ipv4", 00:08:58.081 "trsvcid": "$NVMF_PORT", 00:08:58.081 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:58.081 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:58.081 "hdgst": ${hdgst:-false}, 00:08:58.081 "ddgst": ${ddgst:-false} 00:08:58.081 }, 00:08:58.081 "method": "bdev_nvme_attach_controller" 00:08:58.081 } 00:08:58.081 EOF 00:08:58.081 )") 00:08:58.081 08:07:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=1224887 00:08:58.081 08:07:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:08:58.081 08:07:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:08:58.081 08:07:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:08:58.081 08:07:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:08:58.081 08:07:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:58.081 08:07:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:58.081 { 00:08:58.081 "params": { 00:08:58.081 "name": "Nvme$subsystem", 00:08:58.081 "trtype": "$TEST_TRANSPORT", 00:08:58.081 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:58.081 "adrfam": "ipv4", 00:08:58.081 "trsvcid": "$NVMF_PORT", 00:08:58.081 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:08:58.081 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:58.081 "hdgst": ${hdgst:-false}, 00:08:58.081 "ddgst": ${ddgst:-false} 00:08:58.081 }, 00:08:58.081 "method": "bdev_nvme_attach_controller" 00:08:58.081 } 00:08:58.081 EOF 00:08:58.081 )") 00:08:58.081 08:07:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:08:58.081 08:07:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:08:58.081 08:07:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:08:58.081 08:07:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:08:58.081 08:07:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:08:58.081 08:07:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:58.081 08:07:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:58.081 { 00:08:58.081 "params": { 00:08:58.081 "name": "Nvme$subsystem", 00:08:58.081 "trtype": "$TEST_TRANSPORT", 00:08:58.081 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:58.081 "adrfam": "ipv4", 00:08:58.081 "trsvcid": "$NVMF_PORT", 00:08:58.081 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:58.081 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:58.081 "hdgst": ${hdgst:-false}, 00:08:58.081 "ddgst": ${ddgst:-false} 00:08:58.081 }, 00:08:58.081 "method": "bdev_nvme_attach_controller" 00:08:58.081 } 00:08:58.081 EOF 00:08:58.081 )") 00:08:58.081 08:07:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:08:58.081 08:07:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 1224877 00:08:58.081 08:07:40 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:08:58.081 08:07:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:08:58.081 08:07:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:08:58.081 08:07:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:08:58.081 08:07:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:08:58.081 08:07:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:58.081 "params": { 00:08:58.081 "name": "Nvme1", 00:08:58.081 "trtype": "tcp", 00:08:58.081 "traddr": "10.0.0.2", 00:08:58.081 "adrfam": "ipv4", 00:08:58.081 "trsvcid": "4420", 00:08:58.081 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:58.081 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:58.081 "hdgst": false, 00:08:58.081 "ddgst": false 00:08:58.081 }, 00:08:58.081 "method": "bdev_nvme_attach_controller" 00:08:58.081 }' 00:08:58.081 08:07:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
00:08:58.081 08:07:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:08:58.081 08:07:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:58.081 "params": { 00:08:58.081 "name": "Nvme1", 00:08:58.081 "trtype": "tcp", 00:08:58.081 "traddr": "10.0.0.2", 00:08:58.081 "adrfam": "ipv4", 00:08:58.081 "trsvcid": "4420", 00:08:58.081 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:58.081 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:58.081 "hdgst": false, 00:08:58.081 "ddgst": false 00:08:58.081 }, 00:08:58.081 "method": "bdev_nvme_attach_controller" 00:08:58.081 }' 00:08:58.081 08:07:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:08:58.081 08:07:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:58.081 "params": { 00:08:58.081 "name": "Nvme1", 00:08:58.081 "trtype": "tcp", 00:08:58.081 "traddr": "10.0.0.2", 00:08:58.081 "adrfam": "ipv4", 00:08:58.081 "trsvcid": "4420", 00:08:58.081 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:58.081 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:58.081 "hdgst": false, 00:08:58.081 "ddgst": false 00:08:58.081 }, 00:08:58.081 "method": "bdev_nvme_attach_controller" 00:08:58.081 }' 00:08:58.081 08:07:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:08:58.081 08:07:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:58.081 "params": { 00:08:58.081 "name": "Nvme1", 00:08:58.081 "trtype": "tcp", 00:08:58.081 "traddr": "10.0.0.2", 00:08:58.081 "adrfam": "ipv4", 00:08:58.081 "trsvcid": "4420", 00:08:58.081 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:58.081 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:58.081 "hdgst": false, 00:08:58.081 "ddgst": false 00:08:58.081 }, 00:08:58.081 "method": "bdev_nvme_attach_controller" 00:08:58.081 }' 00:08:58.081 [2024-11-28 08:07:40.190161] Starting SPDK v25.01-pre git sha1 
27aaaa748 / DPDK 24.03.0 initialization... 00:08:58.081 [2024-11-28 08:07:40.190215] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:08:58.081 [2024-11-28 08:07:40.191852] Starting SPDK v25.01-pre git sha1 27aaaa748 / DPDK 24.03.0 initialization... 00:08:58.081 [2024-11-28 08:07:40.191897] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:08:58.081 [2024-11-28 08:07:40.193136] Starting SPDK v25.01-pre git sha1 27aaaa748 / DPDK 24.03.0 initialization... 00:08:58.081 [2024-11-28 08:07:40.193181] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:08:58.081 [2024-11-28 08:07:40.195508] Starting SPDK v25.01-pre git sha1 27aaaa748 / DPDK 24.03.0 initialization... 
00:08:58.081 [2024-11-28 08:07:40.195552] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:08:58.342 [2024-11-28 08:07:40.379538] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:58.342 [2024-11-28 08:07:40.422838] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:08:58.342 [2024-11-28 08:07:40.470447] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:58.342 [2024-11-28 08:07:40.513612] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:08:58.342 [2024-11-28 08:07:40.571571] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:58.602 [2024-11-28 08:07:40.632287] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:08:58.602 [2024-11-28 08:07:40.632401] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:58.602 [2024-11-28 08:07:40.675267] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:08:58.602 Running I/O for 1 seconds... 00:08:58.602 Running I/O for 1 seconds... 00:08:58.602 Running I/O for 1 seconds... 00:08:58.861 Running I/O for 1 seconds... 
00:08:59.801 13501.00 IOPS, 52.74 MiB/s 00:08:59.801 Latency(us) 00:08:59.801 [2024-11-28T07:07:42.070Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:59.801 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:08:59.801 Nvme1n1 : 1.01 13558.96 52.96 0.00 0.00 9410.66 4957.94 16754.42 00:08:59.801 [2024-11-28T07:07:42.070Z] =================================================================================================================== 00:08:59.801 [2024-11-28T07:07:42.070Z] Total : 13558.96 52.96 0.00 0.00 9410.66 4957.94 16754.42 00:08:59.801 6520.00 IOPS, 25.47 MiB/s 00:08:59.801 Latency(us) 00:08:59.801 [2024-11-28T07:07:42.070Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:59.801 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:08:59.801 Nvme1n1 : 1.01 6563.27 25.64 0.00 0.00 19384.02 7921.31 29633.67 00:08:59.801 [2024-11-28T07:07:42.070Z] =================================================================================================================== 00:08:59.801 [2024-11-28T07:07:42.070Z] Total : 6563.27 25.64 0.00 0.00 19384.02 7921.31 29633.67 00:08:59.801 234880.00 IOPS, 917.50 MiB/s 00:08:59.801 Latency(us) 00:08:59.801 [2024-11-28T07:07:42.070Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:59.801 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:08:59.801 Nvme1n1 : 1.00 234517.70 916.08 0.00 0.00 542.65 227.95 1538.67 00:08:59.801 [2024-11-28T07:07:42.070Z] =================================================================================================================== 00:08:59.801 [2024-11-28T07:07:42.070Z] Total : 234517.70 916.08 0.00 0.00 542.65 227.95 1538.67 00:08:59.801 08:07:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 1224879 00:08:59.801 6873.00 IOPS, 26.85 MiB/s 00:08:59.801 Latency(us) 00:08:59.801 
[2024-11-28T07:07:42.070Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:59.801 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:08:59.801 Nvme1n1 : 1.01 6964.14 27.20 0.00 0.00 18323.41 4843.97 43994.60 00:08:59.801 [2024-11-28T07:07:42.070Z] =================================================================================================================== 00:08:59.801 [2024-11-28T07:07:42.070Z] Total : 6964.14 27.20 0.00 0.00 18323.41 4843.97 43994.60 00:08:59.801 08:07:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 1224882 00:08:59.801 08:07:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 1224887 00:08:59.801 08:07:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:59.801 08:07:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.801 08:07:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:59.801 08:07:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.801 08:07:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:08:59.801 08:07:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:08:59.801 08:07:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:59.801 08:07:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:08:59.801 08:07:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:59.801 08:07:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:08:59.801 08:07:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i 
in {1..20} 00:08:59.801 08:07:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:59.801 rmmod nvme_tcp 00:09:00.061 rmmod nvme_fabrics 00:09:00.061 rmmod nvme_keyring 00:09:00.061 08:07:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:00.061 08:07:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:09:00.061 08:07:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:09:00.061 08:07:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 1224804 ']' 00:09:00.061 08:07:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 1224804 00:09:00.062 08:07:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 1224804 ']' 00:09:00.062 08:07:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 1224804 00:09:00.062 08:07:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 00:09:00.062 08:07:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:00.062 08:07:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1224804 00:09:00.062 08:07:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:00.062 08:07:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:00.062 08:07:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1224804' 00:09:00.062 killing process with pid 1224804 00:09:00.062 08:07:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 1224804 00:09:00.062 08:07:42 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 1224804 00:09:00.322 08:07:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:00.322 08:07:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:00.322 08:07:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:00.322 08:07:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:09:00.322 08:07:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:09:00.322 08:07:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:09:00.322 08:07:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:00.322 08:07:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:00.322 08:07:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:00.322 08:07:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:00.322 08:07:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:00.322 08:07:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:02.232 08:07:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:02.232 00:09:02.232 real 0m10.358s 00:09:02.232 user 0m16.190s 00:09:02.232 sys 0m5.853s 00:09:02.232 08:07:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:02.232 08:07:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:02.232 ************************************ 
00:09:02.232 END TEST nvmf_bdev_io_wait 00:09:02.232 ************************************ 00:09:02.232 08:07:44 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:09:02.232 08:07:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:02.232 08:07:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:02.232 08:07:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:02.232 ************************************ 00:09:02.232 START TEST nvmf_queue_depth 00:09:02.232 ************************************ 00:09:02.232 08:07:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:09:02.491 * Looking for test storage... 00:09:02.491 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:02.491 08:07:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:02.491 08:07:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lcov --version 00:09:02.491 08:07:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:02.491 08:07:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:02.491 08:07:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:02.491 08:07:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:02.491 08:07:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:02.491 08:07:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # 
IFS=.-: 00:09:02.491 08:07:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:09:02.491 08:07:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:09:02.491 08:07:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:09:02.491 08:07:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:09:02.491 08:07:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:09:02.491 08:07:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:09:02.491 08:07:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:02.491 08:07:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:09:02.491 08:07:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:09:02.491 08:07:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:02.491 08:07:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:02.491 08:07:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:09:02.491 08:07:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:09:02.491 08:07:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:02.491 08:07:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:09:02.491 08:07:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:09:02.491 08:07:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:09:02.491 08:07:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:09:02.491 08:07:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:02.491 08:07:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:09:02.491 08:07:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:09:02.491 08:07:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:02.491 08:07:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:02.491 08:07:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:09:02.491 08:07:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:02.491 08:07:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:02.491 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:02.491 --rc genhtml_branch_coverage=1 00:09:02.491 --rc genhtml_function_coverage=1 00:09:02.491 --rc genhtml_legend=1 00:09:02.491 --rc geninfo_all_blocks=1 00:09:02.491 --rc 
geninfo_unexecuted_blocks=1 00:09:02.491 00:09:02.491 ' 00:09:02.491 08:07:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:02.491 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:02.491 --rc genhtml_branch_coverage=1 00:09:02.491 --rc genhtml_function_coverage=1 00:09:02.491 --rc genhtml_legend=1 00:09:02.491 --rc geninfo_all_blocks=1 00:09:02.491 --rc geninfo_unexecuted_blocks=1 00:09:02.491 00:09:02.491 ' 00:09:02.491 08:07:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:02.491 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:02.492 --rc genhtml_branch_coverage=1 00:09:02.492 --rc genhtml_function_coverage=1 00:09:02.492 --rc genhtml_legend=1 00:09:02.492 --rc geninfo_all_blocks=1 00:09:02.492 --rc geninfo_unexecuted_blocks=1 00:09:02.492 00:09:02.492 ' 00:09:02.492 08:07:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:02.492 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:02.492 --rc genhtml_branch_coverage=1 00:09:02.492 --rc genhtml_function_coverage=1 00:09:02.492 --rc genhtml_legend=1 00:09:02.492 --rc geninfo_all_blocks=1 00:09:02.492 --rc geninfo_unexecuted_blocks=1 00:09:02.492 00:09:02.492 ' 00:09:02.492 08:07:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:02.492 08:07:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:09:02.492 08:07:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:02.492 08:07:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:02.492 08:07:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:02.492 08:07:44 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:02.492 08:07:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:02.492 08:07:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:02.492 08:07:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:02.492 08:07:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:02.492 08:07:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:02.492 08:07:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:02.492 08:07:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:09:02.492 08:07:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:09:02.492 08:07:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:02.492 08:07:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:02.492 08:07:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:02.492 08:07:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:02.492 08:07:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:02.492 08:07:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:09:02.492 08:07:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e 
/bin/wpdk_common.sh ]] 00:09:02.492 08:07:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:02.492 08:07:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:02.492 08:07:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:02.492 08:07:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:02.492 08:07:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:02.492 08:07:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:09:02.492 08:07:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:02.492 08:07:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:09:02.492 08:07:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:02.492 08:07:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:02.492 08:07:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:02.492 08:07:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:02.492 08:07:44 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:02.492 08:07:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:02.492 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:02.492 08:07:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:02.492 08:07:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:02.492 08:07:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:02.492 08:07:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:09:02.492 08:07:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:09:02.492 08:07:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:09:02.492 08:07:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:09:02.492 08:07:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:02.492 08:07:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:02.492 08:07:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:02.492 08:07:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:02.492 08:07:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:02.492 08:07:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:02.492 08:07:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:02.492 08:07:44 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:02.492 08:07:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:02.492 08:07:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:02.492 08:07:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:09:02.492 08:07:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:07.773 08:07:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:07.773 08:07:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:09:07.773 08:07:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:07.773 08:07:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:07.773 08:07:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:07.773 08:07:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:07.773 08:07:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:07.773 08:07:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:09:07.773 08:07:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:07.773 08:07:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:09:07.773 08:07:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:09:07.773 08:07:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:09:07.773 08:07:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:09:07.773 08:07:50 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:09:07.773 08:07:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:09:07.773 08:07:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:07.773 08:07:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:07.773 08:07:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:07.773 08:07:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:07.773 08:07:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:07.773 08:07:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:07.773 08:07:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:07.773 08:07:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:07.773 08:07:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:07.773 08:07:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:07.773 08:07:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:07.773 08:07:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:07.773 08:07:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:07.773 08:07:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:07.773 08:07:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:07.773 08:07:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:07.773 08:07:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:07.773 08:07:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:07.773 08:07:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:07.773 08:07:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:09:07.773 Found 0000:86:00.0 (0x8086 - 0x159b) 00:09:07.773 08:07:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:07.773 08:07:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:07.773 08:07:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:07.773 08:07:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:07.773 08:07:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:07.773 08:07:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:07.773 08:07:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:09:07.773 Found 0000:86:00.1 (0x8086 - 0x159b) 00:09:07.773 08:07:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:07.773 08:07:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:07.773 08:07:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 
0x159b == \0\x\1\0\1\7 ]] 00:09:07.773 08:07:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:07.773 08:07:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:07.773 08:07:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:07.773 08:07:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:07.773 08:07:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:07.773 08:07:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:07.773 08:07:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:07.773 08:07:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:07.773 08:07:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:07.773 08:07:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:07.773 08:07:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:07.773 08:07:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:07.773 08:07:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:09:07.773 Found net devices under 0000:86:00.0: cvl_0_0 00:09:07.773 08:07:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:07.773 08:07:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:07.774 08:07:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:07.774 08:07:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:07.774 08:07:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:07.774 08:07:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:07.774 08:07:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:07.774 08:07:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:07.774 08:07:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:09:07.774 Found net devices under 0000:86:00.1: cvl_0_1 00:09:07.774 08:07:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:07.774 08:07:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:07.774 08:07:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # is_hw=yes 00:09:07.774 08:07:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:07.774 08:07:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:07.774 08:07:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:07.774 08:07:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:07.774 08:07:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:07.774 08:07:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:07.774 08:07:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:07.774 
08:07:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:07.774 08:07:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:07.774 08:07:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:07.774 08:07:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:07.774 08:07:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:07.774 08:07:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:07.774 08:07:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:07.774 08:07:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:07.774 08:07:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:07.774 08:07:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:07.774 08:07:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:08.035 08:07:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:08.035 08:07:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:08.035 08:07:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:08.035 08:07:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:08.035 08:07:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set lo up 00:09:08.035 08:07:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:08.035 08:07:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:08.035 08:07:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:08.035 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:08.035 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.450 ms 00:09:08.035 00:09:08.035 --- 10.0.0.2 ping statistics --- 00:09:08.035 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:08.035 rtt min/avg/max/mdev = 0.450/0.450/0.450/0.000 ms 00:09:08.035 08:07:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:08.035 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:08.035 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.197 ms 00:09:08.035 00:09:08.035 --- 10.0.0.1 ping statistics --- 00:09:08.035 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:08.035 rtt min/avg/max/mdev = 0.197/0.197/0.197/0.000 ms 00:09:08.035 08:07:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:08.035 08:07:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@450 -- # return 0 00:09:08.035 08:07:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:08.035 08:07:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:08.035 08:07:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:08.035 08:07:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:08.035 08:07:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:08.035 08:07:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:08.035 08:07:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:08.035 08:07:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:09:08.035 08:07:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:08.035 08:07:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:08.035 08:07:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:08.035 08:07:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=1228845 00:09:08.035 08:07:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec 
cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:09:08.035 08:07:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 1228845 00:09:08.035 08:07:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 1228845 ']' 00:09:08.035 08:07:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:08.035 08:07:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:08.035 08:07:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:08.035 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:08.035 08:07:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:08.035 08:07:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:08.296 [2024-11-28 08:07:50.355786] Starting SPDK v25.01-pre git sha1 27aaaa748 / DPDK 24.03.0 initialization... 00:09:08.296 [2024-11-28 08:07:50.355836] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:08.296 [2024-11-28 08:07:50.426755] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:08.296 [2024-11-28 08:07:50.469114] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:08.296 [2024-11-28 08:07:50.469153] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:09:08.296 [2024-11-28 08:07:50.469163] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:08.296 [2024-11-28 08:07:50.469171] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:08.296 [2024-11-28 08:07:50.469177] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:08.296 [2024-11-28 08:07:50.469760] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:08.556 08:07:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:08.556 08:07:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:09:08.556 08:07:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:08.556 08:07:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:08.556 08:07:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:08.556 08:07:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:08.556 08:07:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:08.556 08:07:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:08.556 08:07:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:08.556 [2024-11-28 08:07:50.606311] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:08.556 08:07:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:08.556 08:07:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 
00:09:08.556 08:07:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:08.556 08:07:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:08.556 Malloc0 00:09:08.556 08:07:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:08.556 08:07:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:08.556 08:07:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:08.556 08:07:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:08.556 08:07:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:08.556 08:07:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:08.556 08:07:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:08.556 08:07:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:08.556 08:07:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:08.556 08:07:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:08.556 08:07:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:08.556 08:07:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:08.556 [2024-11-28 08:07:50.656534] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:08.556 08:07:50 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:08.556 08:07:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=1228870 00:09:08.556 08:07:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:09:08.556 08:07:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:08.556 08:07:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 1228870 /var/tmp/bdevperf.sock 00:09:08.556 08:07:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 1228870 ']' 00:09:08.556 08:07:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:08.556 08:07:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:08.556 08:07:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:08.556 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:09:08.556 08:07:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:08.556 08:07:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:08.556 [2024-11-28 08:07:50.706043] Starting SPDK v25.01-pre git sha1 27aaaa748 / DPDK 24.03.0 initialization... 
00:09:08.557 [2024-11-28 08:07:50.706086] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1228870 ] 00:09:08.557 [2024-11-28 08:07:50.767407] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:08.557 [2024-11-28 08:07:50.810522] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:08.816 08:07:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:08.816 08:07:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:09:08.816 08:07:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:09:08.816 08:07:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:08.816 08:07:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:09.076 NVMe0n1 00:09:09.076 08:07:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:09.076 08:07:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:09:09.076 Running I/O for 10 seconds... 
00:09:10.955 11264.00 IOPS, 44.00 MiB/s [2024-11-28T07:07:54.606Z] 11730.00 IOPS, 45.82 MiB/s [2024-11-28T07:07:55.546Z] 11760.67 IOPS, 45.94 MiB/s [2024-11-28T07:07:56.483Z] 11793.50 IOPS, 46.07 MiB/s [2024-11-28T07:07:57.423Z] 11878.60 IOPS, 46.40 MiB/s [2024-11-28T07:07:58.363Z] 11943.00 IOPS, 46.65 MiB/s [2024-11-28T07:07:59.303Z] 11990.86 IOPS, 46.84 MiB/s [2024-11-28T07:08:00.242Z] 12013.12 IOPS, 46.93 MiB/s [2024-11-28T07:08:01.623Z] 12063.67 IOPS, 47.12 MiB/s [2024-11-28T07:08:01.623Z] 12079.90 IOPS, 47.19 MiB/s 00:09:19.354 Latency(us) 00:09:19.354 [2024-11-28T07:08:01.623Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:19.354 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:09:19.354 Verification LBA range: start 0x0 length 0x4000 00:09:19.354 NVMe0n1 : 10.05 12109.84 47.30 0.00 0.00 84270.87 12537.32 56303.97 00:09:19.354 [2024-11-28T07:08:01.623Z] =================================================================================================================== 00:09:19.354 [2024-11-28T07:08:01.623Z] Total : 12109.84 47.30 0.00 0.00 84270.87 12537.32 56303.97 00:09:19.354 { 00:09:19.354 "results": [ 00:09:19.354 { 00:09:19.354 "job": "NVMe0n1", 00:09:19.354 "core_mask": "0x1", 00:09:19.354 "workload": "verify", 00:09:19.354 "status": "finished", 00:09:19.354 "verify_range": { 00:09:19.354 "start": 0, 00:09:19.354 "length": 16384 00:09:19.354 }, 00:09:19.354 "queue_depth": 1024, 00:09:19.354 "io_size": 4096, 00:09:19.354 "runtime": 10.050089, 00:09:19.354 "iops": 12109.843007360432, 00:09:19.354 "mibps": 47.30407424750169, 00:09:19.354 "io_failed": 0, 00:09:19.354 "io_timeout": 0, 00:09:19.354 "avg_latency_us": 84270.86828225771, 00:09:19.354 "min_latency_us": 12537.321739130435, 00:09:19.354 "max_latency_us": 56303.97217391304 00:09:19.354 } 00:09:19.354 ], 00:09:19.354 "core_count": 1 00:09:19.354 } 00:09:19.354 08:08:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- 
# killprocess 1228870 00:09:19.354 08:08:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 1228870 ']' 00:09:19.354 08:08:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 1228870 00:09:19.354 08:08:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:09:19.354 08:08:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:19.354 08:08:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1228870 00:09:19.354 08:08:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:19.354 08:08:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:19.354 08:08:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1228870' 00:09:19.354 killing process with pid 1228870 00:09:19.354 08:08:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 1228870 00:09:19.354 Received shutdown signal, test time was about 10.000000 seconds 00:09:19.354 00:09:19.354 Latency(us) 00:09:19.354 [2024-11-28T07:08:01.624Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:19.355 [2024-11-28T07:08:01.624Z] =================================================================================================================== 00:09:19.355 [2024-11-28T07:08:01.624Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:09:19.355 08:08:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 1228870 00:09:19.355 08:08:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:09:19.355 08:08:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # 
nvmftestfini 00:09:19.355 08:08:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:19.355 08:08:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:09:19.355 08:08:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:19.355 08:08:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:09:19.355 08:08:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:19.355 08:08:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:19.355 rmmod nvme_tcp 00:09:19.355 rmmod nvme_fabrics 00:09:19.355 rmmod nvme_keyring 00:09:19.355 08:08:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:19.355 08:08:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:09:19.355 08:08:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:09:19.355 08:08:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 1228845 ']' 00:09:19.355 08:08:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 1228845 00:09:19.355 08:08:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 1228845 ']' 00:09:19.355 08:08:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 1228845 00:09:19.355 08:08:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:09:19.355 08:08:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:19.355 08:08:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1228845 00:09:19.355 08:08:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # 
process_name=reactor_1 00:09:19.355 08:08:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:09:19.355 08:08:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1228845' 00:09:19.355 killing process with pid 1228845 00:09:19.355 08:08:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 1228845 00:09:19.355 08:08:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 1228845 00:09:19.615 08:08:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:19.615 08:08:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:19.615 08:08:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:19.615 08:08:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:09:19.615 08:08:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:09:19.615 08:08:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:19.615 08:08:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 00:09:19.615 08:08:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:19.615 08:08:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:19.615 08:08:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:19.615 08:08:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:19.615 08:08:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:22.157 08:08:03 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:22.157 00:09:22.157 real 0m19.402s 00:09:22.157 user 0m23.173s 00:09:22.157 sys 0m5.698s 00:09:22.157 08:08:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:22.157 08:08:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:22.157 ************************************ 00:09:22.157 END TEST nvmf_queue_depth 00:09:22.157 ************************************ 00:09:22.157 08:08:03 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:09:22.157 08:08:03 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:22.157 08:08:03 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:22.158 08:08:03 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:22.158 ************************************ 00:09:22.158 START TEST nvmf_target_multipath 00:09:22.158 ************************************ 00:09:22.158 08:08:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:09:22.158 * Looking for test storage... 
00:09:22.158 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:22.158 08:08:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:22.158 08:08:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lcov --version 00:09:22.158 08:08:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:22.158 08:08:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:22.158 08:08:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:22.158 08:08:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:22.158 08:08:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:22.158 08:08:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:09:22.158 08:08:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:09:22.158 08:08:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:09:22.158 08:08:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:09:22.158 08:08:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:09:22.158 08:08:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:09:22.158 08:08:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:09:22.158 08:08:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:22.158 08:08:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:09:22.158 08:08:04 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:09:22.158 08:08:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:22.158 08:08:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:22.158 08:08:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:09:22.158 08:08:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:09:22.158 08:08:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:22.158 08:08:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:09:22.158 08:08:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:09:22.158 08:08:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:09:22.158 08:08:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:09:22.158 08:08:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:22.158 08:08:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:09:22.158 08:08:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:09:22.158 08:08:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:22.158 08:08:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:22.158 08:08:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:09:22.158 08:08:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 
00:09:22.158 08:08:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:22.158 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:22.158 --rc genhtml_branch_coverage=1 00:09:22.158 --rc genhtml_function_coverage=1 00:09:22.158 --rc genhtml_legend=1 00:09:22.158 --rc geninfo_all_blocks=1 00:09:22.158 --rc geninfo_unexecuted_blocks=1 00:09:22.158 00:09:22.158 ' 00:09:22.158 08:08:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:22.158 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:22.158 --rc genhtml_branch_coverage=1 00:09:22.158 --rc genhtml_function_coverage=1 00:09:22.158 --rc genhtml_legend=1 00:09:22.158 --rc geninfo_all_blocks=1 00:09:22.158 --rc geninfo_unexecuted_blocks=1 00:09:22.158 00:09:22.158 ' 00:09:22.158 08:08:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:22.158 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:22.158 --rc genhtml_branch_coverage=1 00:09:22.158 --rc genhtml_function_coverage=1 00:09:22.158 --rc genhtml_legend=1 00:09:22.158 --rc geninfo_all_blocks=1 00:09:22.158 --rc geninfo_unexecuted_blocks=1 00:09:22.158 00:09:22.158 ' 00:09:22.158 08:08:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:22.158 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:22.158 --rc genhtml_branch_coverage=1 00:09:22.158 --rc genhtml_function_coverage=1 00:09:22.158 --rc genhtml_legend=1 00:09:22.158 --rc geninfo_all_blocks=1 00:09:22.158 --rc geninfo_unexecuted_blocks=1 00:09:22.158 00:09:22.158 ' 00:09:22.158 08:08:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:22.158 08:08:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 
-- # uname -s 00:09:22.158 08:08:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:22.158 08:08:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:22.158 08:08:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:22.158 08:08:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:22.158 08:08:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:22.158 08:08:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:22.158 08:08:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:22.158 08:08:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:22.158 08:08:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:22.158 08:08:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:22.158 08:08:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:09:22.158 08:08:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:09:22.158 08:08:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:22.158 08:08:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:22.158 08:08:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:22.158 08:08:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:22.158 08:08:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:22.158 08:08:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:09:22.158 08:08:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:22.158 08:08:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:22.159 08:08:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:22.159 08:08:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:22.159 08:08:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:22.159 08:08:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:22.159 08:08:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:09:22.159 08:08:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:22.159 08:08:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:09:22.159 08:08:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:22.159 08:08:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:22.159 08:08:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:22.159 08:08:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:22.159 08:08:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:22.159 08:08:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:22.159 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:22.159 08:08:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:22.159 08:08:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:22.159 08:08:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:22.159 08:08:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # 
MALLOC_BDEV_SIZE=64 00:09:22.159 08:08:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:22.159 08:08:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:09:22.159 08:08:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:22.159 08:08:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:09:22.159 08:08:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:22.159 08:08:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:22.159 08:08:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:22.159 08:08:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:22.159 08:08:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:22.159 08:08:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:22.159 08:08:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:22.159 08:08:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:22.159 08:08:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:22.159 08:08:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:22.159 08:08:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:09:22.159 08:08:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
common/autotest_common.sh@10 -- # set +x 00:09:27.436 08:08:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:27.436 08:08:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:09:27.436 08:08:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:27.436 08:08:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:27.436 08:08:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:27.436 08:08:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:27.436 08:08:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:27.436 08:08:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # net_devs=() 00:09:27.436 08:08:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:27.436 08:08:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:09:27.436 08:08:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:09:27.436 08:08:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:09:27.436 08:08:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:09:27.436 08:08:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:09:27.436 08:08:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:09:27.436 08:08:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:27.436 08:08:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:27.436 08:08:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:27.436 08:08:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:27.437 08:08:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:27.437 08:08:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:27.437 08:08:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:27.437 08:08:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:27.437 08:08:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:27.437 08:08:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:27.437 08:08:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:27.437 08:08:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:27.437 08:08:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:27.437 08:08:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:27.437 08:08:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:27.437 08:08:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:27.437 08:08:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:27.437 08:08:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:27.437 08:08:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:27.437 08:08:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:09:27.437 Found 0000:86:00.0 (0x8086 - 0x159b) 00:09:27.437 08:08:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:27.437 08:08:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:27.437 08:08:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:27.437 08:08:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:27.437 08:08:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:27.437 08:08:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:27.437 08:08:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:09:27.437 Found 0000:86:00.1 (0x8086 - 0x159b) 00:09:27.437 08:08:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:27.437 08:08:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:27.437 08:08:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:27.437 08:08:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:27.437 08:08:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 
00:09:27.437 08:08:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:27.437 08:08:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:27.437 08:08:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:27.437 08:08:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:27.437 08:08:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:27.437 08:08:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:27.437 08:08:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:27.437 08:08:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:27.437 08:08:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:27.437 08:08:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:27.437 08:08:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:09:27.437 Found net devices under 0000:86:00.0: cvl_0_0 00:09:27.437 08:08:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:27.437 08:08:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:27.437 08:08:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:27.437 08:08:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:27.437 08:08:08 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:27.437 08:08:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:27.437 08:08:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:27.437 08:08:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:27.437 08:08:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:09:27.437 Found net devices under 0000:86:00.1: cvl_0_1 00:09:27.437 08:08:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:27.437 08:08:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:27.437 08:08:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # is_hw=yes 00:09:27.437 08:08:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:27.437 08:08:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:27.437 08:08:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:27.437 08:08:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:27.437 08:08:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:27.437 08:08:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:27.437 08:08:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:27.437 08:08:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 
00:09:27.437 08:08:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:27.437 08:08:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:27.437 08:08:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:27.437 08:08:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:27.437 08:08:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:27.437 08:08:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:27.437 08:08:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:27.437 08:08:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:27.437 08:08:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:27.437 08:08:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:27.437 08:08:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:27.437 08:08:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:27.437 08:08:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:27.437 08:08:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:27.437 08:08:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip 
link set lo up 00:09:27.437 08:08:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:27.437 08:08:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:27.437 08:08:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:27.437 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:27.437 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.286 ms 00:09:27.437 00:09:27.437 --- 10.0.0.2 ping statistics --- 00:09:27.437 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:27.437 rtt min/avg/max/mdev = 0.286/0.286/0.286/0.000 ms 00:09:27.437 08:08:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:27.437 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:27.437 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.212 ms 00:09:27.437 00:09:27.437 --- 10.0.0.1 ping statistics --- 00:09:27.437 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:27.437 rtt min/avg/max/mdev = 0.212/0.212/0.212/0.000 ms 00:09:27.437 08:08:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:27.437 08:08:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@450 -- # return 0 00:09:27.437 08:08:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:27.437 08:08:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:27.437 08:08:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:27.437 08:08:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:27.437 08:08:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:27.437 08:08:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:27.437 08:08:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:27.437 08:08:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:09:27.437 08:08:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:09:27.437 only one NIC for nvmf test 00:09:27.437 08:08:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:09:27.437 08:08:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:27.437 08:08:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:09:27.437 08:08:09 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:27.437 08:08:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:09:27.438 08:08:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:27.438 08:08:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:27.438 rmmod nvme_tcp 00:09:27.438 rmmod nvme_fabrics 00:09:27.438 rmmod nvme_keyring 00:09:27.438 08:08:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:27.438 08:08:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:09:27.438 08:08:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:09:27.438 08:08:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:09:27.438 08:08:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:27.438 08:08:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:27.438 08:08:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:27.438 08:08:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:09:27.438 08:08:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:09:27.438 08:08:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:27.438 08:08:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:09:27.438 08:08:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:27.438 08:08:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@302 -- # remove_spdk_ns 00:09:27.438 08:08:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:27.438 08:08:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:27.438 08:08:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:29.348 08:08:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:29.348 08:08:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:09:29.348 08:08:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:09:29.348 08:08:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:29.348 08:08:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:09:29.348 08:08:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:29.348 08:08:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:09:29.348 08:08:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:29.348 08:08:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:29.348 08:08:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:29.348 08:08:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:09:29.348 08:08:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:09:29.348 08:08:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:09:29.348 08:08:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' 
'' == iso ']' 00:09:29.348 08:08:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:29.348 08:08:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:29.348 08:08:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:09:29.348 08:08:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:09:29.348 08:08:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:29.348 08:08:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:09:29.348 08:08:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:29.348 08:08:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:29.348 08:08:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:29.348 08:08:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:29.348 08:08:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:29.348 08:08:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:29.348 00:09:29.348 real 0m7.362s 00:09:29.348 user 0m1.418s 00:09:29.348 sys 0m3.877s 00:09:29.348 08:08:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:29.348 08:08:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:09:29.348 ************************************ 00:09:29.348 END TEST nvmf_target_multipath 00:09:29.348 ************************************ 00:09:29.348 08:08:11 nvmf_tcp.nvmf_target_core 
-- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:09:29.348 08:08:11 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:29.348 08:08:11 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:29.348 08:08:11 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:29.348 ************************************ 00:09:29.348 START TEST nvmf_zcopy 00:09:29.348 ************************************ 00:09:29.348 08:08:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:09:29.348 * Looking for test storage... 00:09:29.348 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:29.348 08:08:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:29.348 08:08:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lcov --version 00:09:29.348 08:08:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:29.348 08:08:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:29.348 08:08:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:29.348 08:08:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:29.348 08:08:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:29.348 08:08:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:09:29.348 08:08:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:09:29.348 08:08:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 
00:09:29.348 08:08:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:09:29.348 08:08:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:09:29.348 08:08:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:09:29.348 08:08:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:09:29.348 08:08:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:29.348 08:08:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:09:29.348 08:08:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:09:29.348 08:08:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:29.348 08:08:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:29.348 08:08:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:09:29.348 08:08:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:09:29.348 08:08:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:29.348 08:08:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:09:29.348 08:08:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:09:29.348 08:08:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:09:29.348 08:08:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:09:29.348 08:08:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:29.348 08:08:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:09:29.348 08:08:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:09:29.349 08:08:11 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:29.349 08:08:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:29.349 08:08:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:09:29.349 08:08:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:29.349 08:08:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:29.349 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:29.349 --rc genhtml_branch_coverage=1 00:09:29.349 --rc genhtml_function_coverage=1 00:09:29.349 --rc genhtml_legend=1 00:09:29.349 --rc geninfo_all_blocks=1 00:09:29.349 --rc geninfo_unexecuted_blocks=1 00:09:29.349 00:09:29.349 ' 00:09:29.349 08:08:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:29.349 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:29.349 --rc genhtml_branch_coverage=1 00:09:29.349 --rc genhtml_function_coverage=1 00:09:29.349 --rc genhtml_legend=1 00:09:29.349 --rc geninfo_all_blocks=1 00:09:29.349 --rc geninfo_unexecuted_blocks=1 00:09:29.349 00:09:29.349 ' 00:09:29.349 08:08:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:29.349 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:29.349 --rc genhtml_branch_coverage=1 00:09:29.349 --rc genhtml_function_coverage=1 00:09:29.349 --rc genhtml_legend=1 00:09:29.349 --rc geninfo_all_blocks=1 00:09:29.349 --rc geninfo_unexecuted_blocks=1 00:09:29.349 00:09:29.349 ' 00:09:29.349 08:08:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:29.349 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:29.349 --rc genhtml_branch_coverage=1 00:09:29.349 --rc 
genhtml_function_coverage=1 00:09:29.349 --rc genhtml_legend=1 00:09:29.349 --rc geninfo_all_blocks=1 00:09:29.349 --rc geninfo_unexecuted_blocks=1 00:09:29.349 00:09:29.349 ' 00:09:29.349 08:08:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:29.349 08:08:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:09:29.349 08:08:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:29.349 08:08:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:29.349 08:08:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:29.349 08:08:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:29.349 08:08:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:29.349 08:08:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:29.349 08:08:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:29.349 08:08:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:29.349 08:08:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:29.349 08:08:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:29.349 08:08:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:09:29.349 08:08:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:09:29.349 08:08:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:29.349 08:08:11 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:29.349 08:08:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:29.349 08:08:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:29.349 08:08:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:29.349 08:08:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:09:29.349 08:08:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:29.349 08:08:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:29.349 08:08:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:29.349 08:08:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:29.349 08:08:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:29.349 08:08:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:29.349 08:08:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:09:29.349 08:08:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:29.349 08:08:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:09:29.349 08:08:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:29.349 08:08:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:29.349 08:08:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:29.349 08:08:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:29.349 08:08:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:29.349 08:08:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:29.349 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:29.349 08:08:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:29.349 08:08:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:29.349 08:08:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:29.349 08:08:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:09:29.349 08:08:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:29.349 08:08:11 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:29.349 08:08:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:29.349 08:08:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:29.349 08:08:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:29.349 08:08:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:29.349 08:08:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:29.349 08:08:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:29.349 08:08:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:29.349 08:08:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:29.349 08:08:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:09:29.349 08:08:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:35.929 08:08:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:35.929 08:08:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:09:35.929 08:08:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:35.929 08:08:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:35.929 08:08:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:35.929 08:08:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:35.929 08:08:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:35.929 08:08:17 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:09:35.929 08:08:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:35.929 08:08:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:09:35.929 08:08:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:09:35.929 08:08:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:09:35.929 08:08:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:09:35.929 08:08:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:09:35.929 08:08:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:09:35.929 08:08:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:35.929 08:08:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:35.929 08:08:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:35.929 08:08:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:35.929 08:08:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:35.929 08:08:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:35.929 08:08:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:35.929 08:08:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:35.929 08:08:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:35.929 08:08:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:35.929 08:08:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:35.929 08:08:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:35.929 08:08:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:35.929 08:08:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:35.929 08:08:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:35.929 08:08:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:35.929 08:08:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:35.929 08:08:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:35.929 08:08:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:35.929 08:08:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:09:35.929 Found 0000:86:00.0 (0x8086 - 0x159b) 00:09:35.929 08:08:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:35.929 08:08:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:35.929 08:08:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:35.929 08:08:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:35.929 08:08:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:35.929 08:08:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:35.929 08:08:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 
-- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:09:35.929 Found 0000:86:00.1 (0x8086 - 0x159b) 00:09:35.929 08:08:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:35.929 08:08:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:35.929 08:08:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:35.929 08:08:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:35.929 08:08:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:35.929 08:08:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:35.929 08:08:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:35.930 08:08:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:35.930 08:08:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:35.930 08:08:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:35.930 08:08:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:35.930 08:08:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:35.930 08:08:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:35.930 08:08:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:35.930 08:08:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:35.930 08:08:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:09:35.930 Found net devices under 0000:86:00.0: cvl_0_0 00:09:35.930 08:08:17 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:35.930 08:08:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:35.930 08:08:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:35.930 08:08:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:35.930 08:08:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:35.930 08:08:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:35.930 08:08:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:35.930 08:08:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:35.930 08:08:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:09:35.930 Found net devices under 0000:86:00.1: cvl_0_1 00:09:35.930 08:08:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:35.930 08:08:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:35.930 08:08:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # is_hw=yes 00:09:35.930 08:08:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:35.930 08:08:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:35.930 08:08:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:35.930 08:08:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:35.930 08:08:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:35.930 08:08:17 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:35.930 08:08:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:35.930 08:08:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:35.930 08:08:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:35.930 08:08:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:35.930 08:08:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:35.930 08:08:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:35.930 08:08:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:35.930 08:08:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:35.930 08:08:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:35.930 08:08:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:35.930 08:08:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:35.930 08:08:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:35.930 08:08:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:35.930 08:08:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:35.930 08:08:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:35.930 08:08:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:09:35.930 08:08:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:09:35.930 08:08:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:09:35.930 08:08:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:09:35.930 08:08:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:09:35.930 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:09:35.930 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.412 ms
00:09:35.930
00:09:35.930 --- 10.0.0.2 ping statistics ---
00:09:35.930 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:09:35.930 rtt min/avg/max/mdev = 0.412/0.412/0.412/0.000 ms
00:09:35.930 08:08:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:09:35.930 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:09:35.930 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.186 ms
00:09:35.930
00:09:35.930 --- 10.0.0.1 ping statistics ---
00:09:35.930 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:09:35.930 rtt min/avg/max/mdev = 0.186/0.186/0.186/0.000 ms
00:09:35.930 08:08:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:09:35.930 08:08:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@450 -- # return 0
00:09:35.930 08:08:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:09:35.930 08:08:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:09:35.930 08:08:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:09:35.930 08:08:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:09:35.930 08:08:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:09:35.930 08:08:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:09:35.930 08:08:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:09:35.930 08:08:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2
00:09:35.930 08:08:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:09:35.930 08:08:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable
00:09:35.930 08:08:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:09:35.930 08:08:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=1237680
00:09:35.930 08:08:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 1237680
00:09:35.930 08:08:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns
exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:09:35.930 08:08:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 1237680 ']' 00:09:35.930 08:08:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:35.930 08:08:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:35.930 08:08:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:35.930 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:35.930 08:08:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:35.930 08:08:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:35.930 [2024-11-28 08:08:17.368796] Starting SPDK v25.01-pre git sha1 27aaaa748 / DPDK 24.03.0 initialization... 00:09:35.930 [2024-11-28 08:08:17.368839] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:35.930 [2024-11-28 08:08:17.435069] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:35.930 [2024-11-28 08:08:17.476246] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:35.930 [2024-11-28 08:08:17.476282] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:09:35.930 [2024-11-28 08:08:17.476289] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:35.930 [2024-11-28 08:08:17.476295] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:35.930 [2024-11-28 08:08:17.476301] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:35.930 [2024-11-28 08:08:17.476873] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:35.930 08:08:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:35.930 08:08:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0 00:09:35.930 08:08:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:35.930 08:08:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:35.930 08:08:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:35.930 08:08:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:35.930 08:08:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:09:35.930 08:08:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:09:35.930 08:08:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.930 08:08:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:35.930 [2024-11-28 08:08:17.609698] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:35.930 08:08:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.930 08:08:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:09:35.930 08:08:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.930 08:08:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:35.930 08:08:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.930 08:08:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:35.930 08:08:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.930 08:08:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:35.930 [2024-11-28 08:08:17.629885] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:35.931 08:08:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.931 08:08:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:35.931 08:08:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.931 08:08:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:35.931 08:08:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.931 08:08:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:09:35.931 08:08:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.931 08:08:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:35.931 malloc0 00:09:35.931 08:08:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]]
00:09:35.931 08:08:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
00:09:35.931 08:08:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:35.931 08:08:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:09:35.931 08:08:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:35.931 08:08:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192
00:09:35.931 08:08:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json
00:09:35.931 08:08:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=()
00:09:35.931 08:08:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config
00:09:35.931 08:08:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:09:35.931 08:08:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:09:35.931 {
00:09:35.931 "params": {
00:09:35.931 "name": "Nvme$subsystem",
00:09:35.931 "trtype": "$TEST_TRANSPORT",
00:09:35.931 "traddr": "$NVMF_FIRST_TARGET_IP",
00:09:35.931 "adrfam": "ipv4",
00:09:35.931 "trsvcid": "$NVMF_PORT",
00:09:35.931 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:09:35.931 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:09:35.931 "hdgst": ${hdgst:-false},
00:09:35.931 "ddgst": ${ddgst:-false}
00:09:35.931 },
00:09:35.931 "method": "bdev_nvme_attach_controller"
00:09:35.931 }
00:09:35.931 EOF
00:09:35.931 )")
00:09:35.931 08:08:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat
00:09:35.931 08:08:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq .
00:09:35.931 08:08:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=,
00:09:35.931 08:08:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{
00:09:35.931 "params": {
00:09:35.931 "name": "Nvme1",
00:09:35.931 "trtype": "tcp",
00:09:35.931 "traddr": "10.0.0.2",
00:09:35.931 "adrfam": "ipv4",
00:09:35.931 "trsvcid": "4420",
00:09:35.931 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:09:35.931 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:09:35.931 "hdgst": false,
00:09:35.931 "ddgst": false
00:09:35.931 },
00:09:35.931 "method": "bdev_nvme_attach_controller"
00:09:35.931 }'
00:09:35.931 [2024-11-28 08:08:17.710627] Starting SPDK v25.01-pre git sha1 27aaaa748 / DPDK 24.03.0 initialization...
00:09:35.931 [2024-11-28 08:08:17.710672] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1237770 ]
00:09:35.931 [2024-11-28 08:08:17.771854] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:35.931 [2024-11-28 08:08:17.814185] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:09:35.931 Running I/O for 10 seconds...
00:09:37.882 8372.00 IOPS, 65.41 MiB/s [2024-11-28T07:08:21.534Z]
8437.00 IOPS, 65.91 MiB/s [2024-11-28T07:08:22.474Z]
8442.33 IOPS, 65.96 MiB/s [2024-11-28T07:08:23.415Z]
8466.00 IOPS, 66.14 MiB/s [2024-11-28T07:08:24.355Z]
8482.40 IOPS, 66.27 MiB/s [2024-11-28T07:08:25.294Z]
8486.33 IOPS, 66.30 MiB/s [2024-11-28T07:08:26.231Z]
8491.57 IOPS, 66.34 MiB/s [2024-11-28T07:08:27.171Z]
8498.75 IOPS, 66.40 MiB/s [2024-11-28T07:08:28.554Z]
8507.11 IOPS, 66.46 MiB/s [2024-11-28T07:08:28.554Z]
8513.50 IOPS, 66.51 MiB/s
00:09:46.285 Latency(us)
00:09:46.285 [2024-11-28T07:08:28.554Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:09:46.285 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192)
00:09:46.285 Verification LBA range: start 0x0 length 0x1000
00:09:46.285 Nvme1n1 : 10.01 8515.01 66.52 0.00 0.00 14988.86 1182.50 24846.69
00:09:46.285 [2024-11-28T07:08:28.554Z] ===================================================================================================================
00:09:46.285 [2024-11-28T07:08:28.554Z] Total : 8515.01 66.52 0.00 0.00 14988.86 1182.50 24846.69
00:09:46.285 08:08:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=1239500
00:09:46.285 08:08:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable
00:09:46.285 08:08:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:09:46.285 08:08:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192
00:09:46.285 08:08:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json
00:09:46.285 08:08:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=()
00:09:46.285 08:08:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config
00:09:46.285 08:08:28
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:46.285 08:08:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:46.285 { 00:09:46.285 "params": { 00:09:46.285 "name": "Nvme$subsystem", 00:09:46.285 "trtype": "$TEST_TRANSPORT", 00:09:46.285 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:46.285 "adrfam": "ipv4", 00:09:46.285 "trsvcid": "$NVMF_PORT", 00:09:46.285 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:46.285 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:46.285 "hdgst": ${hdgst:-false}, 00:09:46.285 "ddgst": ${ddgst:-false} 00:09:46.285 }, 00:09:46.285 "method": "bdev_nvme_attach_controller" 00:09:46.285 } 00:09:46.285 EOF 00:09:46.285 )") 00:09:46.285 08:08:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:09:46.285 [2024-11-28 08:08:28.340269] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.285 [2024-11-28 08:08:28.340301] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.285 08:08:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
00:09:46.285 08:08:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=,
00:09:46.285 [2024-11-28 08:08:28.348264] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:46.285 [2024-11-28 08:08:28.348277] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:46.285 08:08:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{
00:09:46.285 "params": {
00:09:46.285 "name": "Nvme1",
00:09:46.285 "trtype": "tcp",
00:09:46.285 "traddr": "10.0.0.2",
00:09:46.285 "adrfam": "ipv4",
00:09:46.285 "trsvcid": "4420",
00:09:46.285 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:09:46.285 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:09:46.285 "hdgst": false,
00:09:46.285 "ddgst": false
00:09:46.285 },
00:09:46.285 "method": "bdev_nvme_attach_controller"
00:09:46.285 }'
00:09:46.285 [2024-11-28 08:08:28.356280] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:46.285 [2024-11-28 08:08:28.356291] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:46.285 [2024-11-28 08:08:28.364300] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:46.285 [2024-11-28 08:08:28.364310] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:46.285 [2024-11-28 08:08:28.372319] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:46.285 [2024-11-28 08:08:28.372329] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:46.285 [2024-11-28 08:08:28.379536] Starting SPDK v25.01-pre git sha1 27aaaa748 / DPDK 24.03.0 initialization...
00:09:46.285 [2024-11-28 08:08:28.379577] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1239500 ]
00:09:46.285 [2024-11-28 08:08:28.384355] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:46.285 [2024-11-28 08:08:28.384366] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:46.285 [2024-11-28 08:08:28.392376] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:46.285 [2024-11-28 08:08:28.392386] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:46.285 [2024-11-28 08:08:28.400403] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:46.285 [2024-11-28 08:08:28.400418] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:46.285 [2024-11-28 08:08:28.408422] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:46.285 [2024-11-28 08:08:28.408432] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:46.285 [2024-11-28 08:08:28.416441] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:46.285 [2024-11-28 08:08:28.416466] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:46.285 [2024-11-28 08:08:28.424463] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:46.285 [2024-11-28 08:08:28.424473] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:46.285 [2024-11-28 08:08:28.432485] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:46.285 [2024-11-28 08:08:28.432494] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:46.285 [2024-11-28 08:08:28.440505] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:46.285 [2024-11-28 08:08:28.440514] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:46.285 [2024-11-28 08:08:28.440516] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:46.285 [2024-11-28 08:08:28.452544] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:46.285 [2024-11-28 08:08:28.452562] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:46.285 [2024-11-28 08:08:28.464570] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:46.285 [2024-11-28 08:08:28.464580] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:46.285 [2024-11-28 08:08:28.476604] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:46.285 [2024-11-28 08:08:28.476616] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:46.285 [2024-11-28 08:08:28.483010] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:09:46.286 [2024-11-28 08:08:28.488636] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:46.286 [2024-11-28 08:08:28.488648] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:46.286 [2024-11-28 08:08:28.500675] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:46.286 [2024-11-28 08:08:28.500696] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:46.286 [2024-11-28 08:08:28.512708] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:46.286 [2024-11-28 08:08:28.512724] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:46.286 [2024-11-28 08:08:28.524739] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:46.286 [2024-11-28 08:08:28.524752] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:46.286 [2024-11-28 08:08:28.536769] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:46.286 [2024-11-28 08:08:28.536780] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:46.286 [2024-11-28 08:08:28.548806] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:46.547 [2024-11-28 08:08:28.548818] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:46.547 [2024-11-28 08:08:28.560836] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:46.547 [2024-11-28 08:08:28.560847] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:46.547 [2024-11-28 08:08:28.573026] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:46.547 [2024-11-28 08:08:28.573048] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:46.547 [2024-11-28 08:08:28.585033] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:46.547 [2024-11-28 08:08:28.585048] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:46.547 [2024-11-28 08:08:28.597067] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:46.547 [2024-11-28 08:08:28.597082] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:46.547 [2024-11-28 08:08:28.609096] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:46.547 [2024-11-28 08:08:28.609106] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:46.547 [2024-11-28 08:08:28.621126] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:46.547 [2024-11-28 08:08:28.621135] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:46.547 [2024-11-28 08:08:28.633159] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:46.547 [2024-11-28 08:08:28.633169] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:46.547 [2024-11-28 08:08:28.645198] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:46.547 [2024-11-28 08:08:28.645212] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:46.547 [2024-11-28 08:08:28.657232] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:46.547 [2024-11-28 08:08:28.657243] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:46.547 [2024-11-28 08:08:28.669263] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:46.547 [2024-11-28 08:08:28.669272] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:46.547 [2024-11-28 08:08:28.681314] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:46.547 [2024-11-28 08:08:28.681326] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:46.547 [2024-11-28 08:08:28.693332] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:46.547 [2024-11-28 08:08:28.693345] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:46.547 [2024-11-28 08:08:28.705364] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:46.547 [2024-11-28 08:08:28.705373] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:46.547 [2024-11-28 08:08:28.717399] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:46.547 [2024-11-28 08:08:28.717408] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:46.547 [2024-11-28 08:08:28.729432] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:46.547 [2024-11-28 08:08:28.729442] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:46.547 [2024-11-28 08:08:28.741468] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:46.547 [2024-11-28 08:08:28.741481] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:46.547 [2024-11-28 08:08:28.753500] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:46.547 [2024-11-28 08:08:28.753510] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:46.547 [2024-11-28 08:08:28.765535] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:46.547 [2024-11-28 08:08:28.765544] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:46.547 [2024-11-28 08:08:28.777571] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:46.547 [2024-11-28 08:08:28.777582] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:46.547 [2024-11-28 08:08:28.789610] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:46.547 [2024-11-28 08:08:28.789625] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:46.808 [2024-11-28 08:08:28.831241] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:46.808 [2024-11-28 08:08:28.831258] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:46.808 [2024-11-28 08:08:28.841751] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:46.808 [2024-11-28 08:08:28.841762] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:46.808 Running I/O for 5 seconds...
00:09:46.808 [2024-11-28 08:08:28.858066] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:46.808 [2024-11-28 08:08:28.858084] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:46.808 [2024-11-28 08:08:28.873288] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:46.808 [2024-11-28 08:08:28.873307] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:46.808 [2024-11-28 08:08:28.888137] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:46.808 [2024-11-28 08:08:28.888155] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:46.808 [2024-11-28 08:08:28.903532] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:46.808 [2024-11-28 08:08:28.903550] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:46.808 [2024-11-28 08:08:28.918433] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:46.808 [2024-11-28 08:08:28.918451] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:46.808 [2024-11-28 08:08:28.933881] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:46.808 [2024-11-28 08:08:28.933899] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:46.808 [2024-11-28 08:08:28.948208] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:46.808 [2024-11-28 08:08:28.948227] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:46.808 [2024-11-28 08:08:28.962136] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:46.808 [2024-11-28 08:08:28.962154] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:46.808 [2024-11-28 08:08:28.976411] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:46.808 [2024-11-28 08:08:28.976429] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:46.808 [2024-11-28 08:08:28.990830] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:46.808 [2024-11-28 08:08:28.990849] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:46.808 [2024-11-28 08:08:29.001571] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:46.808 [2024-11-28 08:08:29.001589] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:46.808 [2024-11-28 08:08:29.016313] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:46.808 [2024-11-28 08:08:29.016332] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:46.808 [2024-11-28 08:08:29.030468] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:46.808 [2024-11-28 08:08:29.030488] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:46.808 [2024-11-28 08:08:29.045085] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:46.808 [2024-11-28 08:08:29.045104] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:46.808 [2024-11-28 08:08:29.055979] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:46.808 [2024-11-28 08:08:29.055997] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:46.808 [2024-11-28 08:08:29.070916] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:46.808 [2024-11-28 08:08:29.070935] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:47.068 [2024-11-28 08:08:29.081855] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:47.068 [2024-11-28 08:08:29.081875] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:47.068 [2024-11-28 08:08:29.096303] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:47.068 [2024-11-28 08:08:29.096323] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:47.068 [2024-11-28 08:08:29.110211] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:47.068 [2024-11-28 08:08:29.110230] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:47.068 [2024-11-28 08:08:29.124561] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:47.068 [2024-11-28 08:08:29.124580] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:47.068 [2024-11-28 08:08:29.135403] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:47.068 [2024-11-28 08:08:29.135421] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:47.068 [2024-11-28 08:08:29.149776] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:47.068 [2024-11-28 08:08:29.149795] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:47.068 [2024-11-28 08:08:29.164333] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:47.068 [2024-11-28 08:08:29.164352] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:47.068 [2024-11-28 08:08:29.175295] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:47.068 [2024-11-28 08:08:29.175314] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:47.068 [2024-11-28 08:08:29.189943] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:47.068 [2024-11-28 08:08:29.189966] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:47.068 [2024-11-28 08:08:29.203524] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:47.068 [2024-11-28 08:08:29.203543] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:47.068 [2024-11-28 08:08:29.217340] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:47.068 [2024-11-28 08:08:29.217359] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:47.068 [2024-11-28 08:08:29.231549] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:47.068 [2024-11-28 08:08:29.231568] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:47.068 [2024-11-28 08:08:29.245486] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:47.068 [2024-11-28 08:08:29.245504] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:47.068 [2024-11-28 08:08:29.259510] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:47.068 [2024-11-28 08:08:29.259528] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:47.068 [2024-11-28 08:08:29.273867] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:47.068 [2024-11-28 08:08:29.273886] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:47.068 [2024-11-28 08:08:29.287842] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:47.068 [2024-11-28 08:08:29.287861] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:47.068 [2024-11-28 08:08:29.301869] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:47.068 [2024-11-28 08:08:29.301888] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:47.068 [2024-11-28 08:08:29.315766] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:47.068 [2024-11-28 08:08:29.315785] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:47.068 [2024-11-28 08:08:29.329715] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:47.068 [2024-11-28 08:08:29.329733] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:47.340 [2024-11-28 08:08:29.343817] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:47.340 [2024-11-28 08:08:29.343835] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:47.340 [2024-11-28 08:08:29.357790] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:47.340 [2024-11-28 08:08:29.357809] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:47.340 [2024-11-28 08:08:29.371868] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:47.340 [2024-11-28 08:08:29.371886] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:47.340 [2024-11-28 08:08:29.385935] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:47.340 [2024-11-28 08:08:29.385961] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:47.340 [2024-11-28 08:08:29.399935] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:47.340 [2024-11-28 08:08:29.399961] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:47.340 [2024-11-28 08:08:29.414260] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:47.340 [2024-11-28 08:08:29.414280] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:47.340 [2024-11-28 08:08:29.428127] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:47.340 [2024-11-28 08:08:29.428145] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:47.340 [2024-11-28 08:08:29.442499] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:47.340 [2024-11-28 08:08:29.442517] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:47.340 [2024-11-28 08:08:29.456387] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:47.340 [2024-11-28 08:08:29.456410] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:47.340 [2024-11-28 08:08:29.470819] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:47.340 [2024-11-28 08:08:29.470837] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:47.340 [2024-11-28 08:08:29.481844] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:47.340 [2024-11-28 08:08:29.481862] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:47.340 [2024-11-28 08:08:29.496193] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:47.340 [2024-11-28 08:08:29.496211] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:47.340 [2024-11-28 08:08:29.510293] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:47.340 [2024-11-28 08:08:29.510311] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:47.340 [2024-11-28 08:08:29.524590] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:47.340 [2024-11-28 08:08:29.524610] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:47.340 [2024-11-28 08:08:29.535559] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:47.340 [2024-11-28 08:08:29.535578] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:47.340 [2024-11-28 08:08:29.550473] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:47.340 [2024-11-28 08:08:29.550492] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:47.340 [2024-11-28 08:08:29.565837] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:47.340 [2024-11-28 08:08:29.565856] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:47.340 [2024-11-28 08:08:29.580431] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:47.340 [2024-11-28 08:08:29.580451] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:47.341 [2024-11-28 08:08:29.591416] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:47.341 [2024-11-28 08:08:29.591436] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:47.341 [2024-11-28 08:08:29.606056] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:47.341 [2024-11-28 08:08:29.606076] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:47.601 [2024-11-28 08:08:29.620366] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:47.601 [2024-11-28 08:08:29.620385] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:47.601 [2024-11-28 08:08:29.634172] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:47.601 [2024-11-28 08:08:29.634191] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:47.601 [2024-11-28 08:08:29.647927] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:47.601 [2024-11-28 08:08:29.647952] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:47.601 [2024-11-28 08:08:29.662170] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:47.601 [2024-11-28 08:08:29.662189] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:47.601 [2024-11-28 08:08:29.676207] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:47.601 [2024-11-28 08:08:29.676227] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:47.601 [2024-11-28 08:08:29.687403] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:47.601 [2024-11-28 08:08:29.687423] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:47.601 [2024-11-28 08:08:29.701884] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:47.601 [2024-11-28 08:08:29.701902] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:47.601 [2024-11-28 08:08:29.715507] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:47.601 [2024-11-28 08:08:29.715534] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:47.601 [2024-11-28 08:08:29.729787] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:47.601 [2024-11-28 08:08:29.729807] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:47.601 [2024-11-28 08:08:29.744037] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:47.601 [2024-11-28 08:08:29.744056] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:47.601 [2024-11-28 08:08:29.758189] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:47.601 [2024-11-28 08:08:29.758224] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:47.601 [2024-11-28 08:08:29.772191] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:47.601 [2024-11-28 08:08:29.772210] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:47.601 [2024-11-28 08:08:29.786237] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:47.601 [2024-11-28 08:08:29.786256] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:47.601 [2024-11-28 08:08:29.800609] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:47.601 [2024-11-28 08:08:29.800629] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:47.601 [2024-11-28 08:08:29.811584] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:47.601 [2024-11-28 08:08:29.811603] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:47.601 [2024-11-28 08:08:29.826125] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:47.601 [2024-11-28 08:08:29.826145] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:47.601 [2024-11-28 08:08:29.839959] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:47.601 [2024-11-28 08:08:29.839978] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:47.601 16393.00 IOPS, 128.07 MiB/s
[2024-11-28T07:08:29.870Z] [2024-11-28 08:08:29.853959] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:47.601 [2024-11-28 08:08:29.853993] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:47.601 [2024-11-28 08:08:29.868344] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:47.861 [2024-11-28 08:08:29.868363] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:47.861 [2024-11-28 08:08:29.879415] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:47.861 [2024-11-28 08:08:29.879436] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:47.861 [2024-11-28 08:08:29.894119] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:47.861 [2024-11-28 08:08:29.894138] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:47.861 [2024-11-28 08:08:29.908278] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:47.861 [2024-11-28 08:08:29.908296] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:47.861 [2024-11-28 08:08:29.919261] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:47.861 [2024-11-28 08:08:29.919281] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:47.861 [2024-11-28 08:08:29.933924] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:47.861 [2024-11-28 08:08:29.933943] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:47.861 [2024-11-28 08:08:29.948022] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:47.861 [2024-11-28 08:08:29.948042] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:47.861 [2024-11-28 08:08:29.958682] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:47.861 [2024-11-28 08:08:29.958701] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:47.861 [2024-11-28 08:08:29.973250] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:47.861 [2024-11-28 08:08:29.973272] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:47.861 [2024-11-28 08:08:29.986976] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:47.861 [2024-11-28 08:08:29.986995] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:47.861 [2024-11-28 08:08:30.001251] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:47.861 [2024-11-28 08:08:30.001269] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:47.861 [2024-11-28 08:08:30.015043] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:47.861 [2024-11-28 08:08:30.015064] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:47.861 [2024-11-28 08:08:30.029430] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:47.861 [2024-11-28 08:08:30.029449] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:47.861 [2024-11-28 08:08:30.041240] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:47.861 [2024-11-28 08:08:30.041259] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:47.861 [2024-11-28 08:08:30.055869] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:47.861 [2024-11-28 08:08:30.055889] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:47.861 [2024-11-28 08:08:30.067327] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:47.861 [2024-11-28 08:08:30.067347] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:47.861 [2024-11-28 08:08:30.082171] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:47.861 [2024-11-28 08:08:30.082191] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:47.861 [2024-11-28 08:08:30.092710] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:47.861 [2024-11-28 08:08:30.092730] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:47.861 [2024-11-28 08:08:30.107593] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:47.861 [2024-11-28 08:08:30.107613] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:47.861 [2024-11-28 08:08:30.123133] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:47.861 [2024-11-28 08:08:30.123152] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:48.120 [2024-11-28 08:08:30.137485] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:48.120 [2024-11-28 08:08:30.137506] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:48.120 [2024-11-28 08:08:30.152037] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:48.120 [2024-11-28 08:08:30.152057] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:48.120 [2024-11-28 08:08:30.163041] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:48.120 [2024-11-28 08:08:30.163061] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:48.120 [2024-11-28 08:08:30.177939] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:48.120 [2024-11-28 08:08:30.177967] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:48.120 [2024-11-28 08:08:30.189671] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:48.121 [2024-11-28 08:08:30.189691]
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.121 [2024-11-28 08:08:30.204426] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.121 [2024-11-28 08:08:30.204445] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.121 [2024-11-28 08:08:30.215177] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.121 [2024-11-28 08:08:30.215196] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.121 [2024-11-28 08:08:30.230212] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.121 [2024-11-28 08:08:30.230230] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.121 [2024-11-28 08:08:30.245776] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.121 [2024-11-28 08:08:30.245795] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.121 [2024-11-28 08:08:30.260041] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.121 [2024-11-28 08:08:30.260059] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.121 [2024-11-28 08:08:30.271781] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.121 [2024-11-28 08:08:30.271800] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.121 [2024-11-28 08:08:30.286153] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.121 [2024-11-28 08:08:30.286172] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.121 [2024-11-28 08:08:30.300509] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.121 [2024-11-28 08:08:30.300528] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:09:48.121 [2024-11-28 08:08:30.314457] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.121 [2024-11-28 08:08:30.314476] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.121 [2024-11-28 08:08:30.328866] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.121 [2024-11-28 08:08:30.328885] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.121 [2024-11-28 08:08:30.339690] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.121 [2024-11-28 08:08:30.339709] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.121 [2024-11-28 08:08:30.354822] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.121 [2024-11-28 08:08:30.354841] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.121 [2024-11-28 08:08:30.370382] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.121 [2024-11-28 08:08:30.370402] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.121 [2024-11-28 08:08:30.385238] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.121 [2024-11-28 08:08:30.385258] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.380 [2024-11-28 08:08:30.396062] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.380 [2024-11-28 08:08:30.396082] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.380 [2024-11-28 08:08:30.410800] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.380 [2024-11-28 08:08:30.410819] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.380 [2024-11-28 08:08:30.421788] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.380 [2024-11-28 08:08:30.421808] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.380 [2024-11-28 08:08:30.436600] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.380 [2024-11-28 08:08:30.436619] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.380 [2024-11-28 08:08:30.447792] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.380 [2024-11-28 08:08:30.447811] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.380 [2024-11-28 08:08:30.462533] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.380 [2024-11-28 08:08:30.462552] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.380 [2024-11-28 08:08:30.473910] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.380 [2024-11-28 08:08:30.473930] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.380 [2024-11-28 08:08:30.488680] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.380 [2024-11-28 08:08:30.488699] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.380 [2024-11-28 08:08:30.504773] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.380 [2024-11-28 08:08:30.504793] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.380 [2024-11-28 08:08:30.519717] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.380 [2024-11-28 08:08:30.519735] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.380 [2024-11-28 08:08:30.535282] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:09:48.381 [2024-11-28 08:08:30.535301] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.381 [2024-11-28 08:08:30.549671] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.381 [2024-11-28 08:08:30.549690] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.381 [2024-11-28 08:08:30.564032] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.381 [2024-11-28 08:08:30.564050] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.381 [2024-11-28 08:08:30.579424] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.381 [2024-11-28 08:08:30.579443] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.381 [2024-11-28 08:08:30.594056] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.381 [2024-11-28 08:08:30.594075] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.381 [2024-11-28 08:08:30.605137] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.381 [2024-11-28 08:08:30.605155] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.381 [2024-11-28 08:08:30.619881] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.381 [2024-11-28 08:08:30.619900] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.381 [2024-11-28 08:08:30.631210] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.381 [2024-11-28 08:08:30.631228] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.381 [2024-11-28 08:08:30.645777] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.381 
[2024-11-28 08:08:30.645796] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.641 [2024-11-28 08:08:30.659441] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.641 [2024-11-28 08:08:30.659461] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.641 [2024-11-28 08:08:30.673706] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.641 [2024-11-28 08:08:30.673725] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.641 [2024-11-28 08:08:30.688111] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.641 [2024-11-28 08:08:30.688131] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.641 [2024-11-28 08:08:30.698908] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.641 [2024-11-28 08:08:30.698927] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.641 [2024-11-28 08:08:30.713625] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.641 [2024-11-28 08:08:30.713645] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.641 [2024-11-28 08:08:30.727792] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.641 [2024-11-28 08:08:30.727813] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.641 [2024-11-28 08:08:30.742277] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.641 [2024-11-28 08:08:30.742297] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.641 [2024-11-28 08:08:30.756352] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.641 [2024-11-28 08:08:30.756372] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.641 [2024-11-28 08:08:30.770506] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.641 [2024-11-28 08:08:30.770525] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.641 [2024-11-28 08:08:30.784691] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.641 [2024-11-28 08:08:30.784711] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.641 [2024-11-28 08:08:30.799524] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.641 [2024-11-28 08:08:30.799543] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.641 [2024-11-28 08:08:30.814939] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.641 [2024-11-28 08:08:30.814963] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.641 [2024-11-28 08:08:30.829546] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.641 [2024-11-28 08:08:30.829565] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.641 [2024-11-28 08:08:30.841029] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.641 [2024-11-28 08:08:30.841048] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.641 16325.00 IOPS, 127.54 MiB/s [2024-11-28T07:08:30.910Z] [2024-11-28 08:08:30.855541] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.641 [2024-11-28 08:08:30.855560] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.641 [2024-11-28 08:08:30.869881] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.641 [2024-11-28 08:08:30.869900] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.641 [2024-11-28 08:08:30.885143] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.641 [2024-11-28 08:08:30.885162] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.641 [2024-11-28 08:08:30.899805] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.641 [2024-11-28 08:08:30.899825] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.901 [2024-11-28 08:08:30.914121] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.901 [2024-11-28 08:08:30.914141] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.901 [2024-11-28 08:08:30.928212] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.901 [2024-11-28 08:08:30.928232] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.901 [2024-11-28 08:08:30.942467] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.901 [2024-11-28 08:08:30.942487] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.901 [2024-11-28 08:08:30.954005] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.901 [2024-11-28 08:08:30.954026] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.901 [2024-11-28 08:08:30.969084] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.901 [2024-11-28 08:08:30.969103] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.901 [2024-11-28 08:08:30.984520] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.901 [2024-11-28 08:08:30.984540] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:09:48.901 [2024-11-28 08:08:30.999446] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.901 [2024-11-28 08:08:30.999465] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.901 [2024-11-28 08:08:31.015396] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.901 [2024-11-28 08:08:31.015422] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.901 [2024-11-28 08:08:31.029863] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.901 [2024-11-28 08:08:31.029883] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.901 [2024-11-28 08:08:31.041111] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.901 [2024-11-28 08:08:31.041130] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.901 [2024-11-28 08:08:31.055615] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.901 [2024-11-28 08:08:31.055634] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.901 [2024-11-28 08:08:31.070062] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.901 [2024-11-28 08:08:31.070091] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.901 [2024-11-28 08:08:31.081170] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.901 [2024-11-28 08:08:31.081190] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.901 [2024-11-28 08:08:31.095769] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.901 [2024-11-28 08:08:31.095788] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.901 [2024-11-28 08:08:31.109972] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.901 [2024-11-28 08:08:31.109991] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.901 [2024-11-28 08:08:31.123710] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.901 [2024-11-28 08:08:31.123730] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.901 [2024-11-28 08:08:31.138204] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.901 [2024-11-28 08:08:31.138224] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.902 [2024-11-28 08:08:31.152627] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.902 [2024-11-28 08:08:31.152647] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.902 [2024-11-28 08:08:31.163096] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.902 [2024-11-28 08:08:31.163116] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.161 [2024-11-28 08:08:31.177829] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.161 [2024-11-28 08:08:31.177850] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.161 [2024-11-28 08:08:31.191101] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.161 [2024-11-28 08:08:31.191121] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.161 [2024-11-28 08:08:31.205601] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.161 [2024-11-28 08:08:31.205620] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.161 [2024-11-28 08:08:31.220024] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:09:49.161 [2024-11-28 08:08:31.220044] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.161 [2024-11-28 08:08:31.231465] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.161 [2024-11-28 08:08:31.231484] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.161 [2024-11-28 08:08:31.246454] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.161 [2024-11-28 08:08:31.246474] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.161 [2024-11-28 08:08:31.262401] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.161 [2024-11-28 08:08:31.262421] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.161 [2024-11-28 08:08:31.276239] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.161 [2024-11-28 08:08:31.276264] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.161 [2024-11-28 08:08:31.290944] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.161 [2024-11-28 08:08:31.290970] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.161 [2024-11-28 08:08:31.306417] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.161 [2024-11-28 08:08:31.306437] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.161 [2024-11-28 08:08:31.320606] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.161 [2024-11-28 08:08:31.320626] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.161 [2024-11-28 08:08:31.335141] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.161 
[2024-11-28 08:08:31.335160] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.161 [2024-11-28 08:08:31.350875] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.161 [2024-11-28 08:08:31.350895] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.161 [2024-11-28 08:08:31.365498] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.161 [2024-11-28 08:08:31.365517] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.161 [2024-11-28 08:08:31.376661] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.161 [2024-11-28 08:08:31.376681] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.161 [2024-11-28 08:08:31.391249] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.161 [2024-11-28 08:08:31.391268] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.161 [2024-11-28 08:08:31.405602] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.161 [2024-11-28 08:08:31.405620] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.161 [2024-11-28 08:08:31.420643] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.161 [2024-11-28 08:08:31.420662] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.421 [2024-11-28 08:08:31.434848] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.421 [2024-11-28 08:08:31.434868] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.421 [2024-11-28 08:08:31.449166] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.421 [2024-11-28 08:08:31.449186] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.421 [2024-11-28 08:08:31.463596] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.421 [2024-11-28 08:08:31.463615] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.421 [2024-11-28 08:08:31.474283] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.421 [2024-11-28 08:08:31.474302] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.421 [2024-11-28 08:08:31.489242] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.421 [2024-11-28 08:08:31.489261] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.421 [2024-11-28 08:08:31.500365] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.421 [2024-11-28 08:08:31.500384] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.421 [2024-11-28 08:08:31.515560] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.421 [2024-11-28 08:08:31.515579] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.421 [2024-11-28 08:08:31.530955] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.421 [2024-11-28 08:08:31.530989] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.421 [2024-11-28 08:08:31.545510] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.422 [2024-11-28 08:08:31.545533] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.422 [2024-11-28 08:08:31.559668] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.422 [2024-11-28 08:08:31.559688] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:09:49.422 [2024-11-28 08:08:31.570790] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.422 [2024-11-28 08:08:31.570809] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.422 [2024-11-28 08:08:31.585674] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.422 [2024-11-28 08:08:31.585693] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.422 [2024-11-28 08:08:31.596687] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.422 [2024-11-28 08:08:31.596707] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.422 [2024-11-28 08:08:31.611219] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.422 [2024-11-28 08:08:31.611238] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.422 [2024-11-28 08:08:31.625415] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.422 [2024-11-28 08:08:31.625434] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.422 [2024-11-28 08:08:31.639331] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.422 [2024-11-28 08:08:31.639350] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.422 [2024-11-28 08:08:31.653489] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.422 [2024-11-28 08:08:31.653509] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.422 [2024-11-28 08:08:31.667723] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.422 [2024-11-28 08:08:31.667742] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.422 [2024-11-28 08:08:31.681848] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.422 [2024-11-28 08:08:31.681867] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.682 [2024-11-28 08:08:31.695769] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.682 [2024-11-28 08:08:31.695790] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.682 [2024-11-28 08:08:31.709896] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.682 [2024-11-28 08:08:31.709917] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.682 [2024-11-28 08:08:31.723880] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.682 [2024-11-28 08:08:31.723900] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.682 [2024-11-28 08:08:31.737621] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.682 [2024-11-28 08:08:31.737640] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.682 [2024-11-28 08:08:31.751925] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.682 [2024-11-28 08:08:31.751944] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.682 [2024-11-28 08:08:31.765801] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.682 [2024-11-28 08:08:31.765819] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.682 [2024-11-28 08:08:31.780310] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.682 [2024-11-28 08:08:31.780330] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.682 [2024-11-28 08:08:31.791244] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use
00:09:49.682 [2024-11-28 08:08:31.791263] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:49.682 [... the same "Requested NSID 1 already in use" / "Unable to add namespace" error pair repeated for each subsequent add-namespace attempt ...]
00:09:49.682 16339.33 IOPS, 127.65 MiB/s [2024-11-28T07:08:31.951Z]
00:09:49.682 [... identical error pairs repeated ...]
00:09:50.723 16337.50 IOPS, 127.64 MiB/s [2024-11-28T07:08:32.992Z]
00:09:50.723 [... identical error pairs repeated ...]
00:09:51.770 [2024-11-28 08:08:33.838882] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:51.770 
[2024-11-28 08:08:33.838902] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.770 [2024-11-28 08:08:33.853810] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.770 [2024-11-28 08:08:33.853829] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.770 16328.60 IOPS, 127.57 MiB/s [2024-11-28T07:08:34.039Z] [2024-11-28 08:08:33.865781] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.770 [2024-11-28 08:08:33.865800] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.770 00:09:51.770 Latency(us) 00:09:51.770 [2024-11-28T07:08:34.039Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:51.770 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:09:51.770 Nvme1n1 : 5.01 16331.55 127.59 0.00 0.00 7829.57 3704.21 16184.54 00:09:51.770 [2024-11-28T07:08:34.039Z] =================================================================================================================== 00:09:51.770 [2024-11-28T07:08:34.039Z] Total : 16331.55 127.59 0.00 0.00 7829.57 3704.21 16184.54 00:09:51.770 [2024-11-28 08:08:33.877805] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.770 [2024-11-28 08:08:33.877823] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.770 [2024-11-28 08:08:33.889841] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.770 [2024-11-28 08:08:33.889857] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.770 [2024-11-28 08:08:33.901878] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.770 [2024-11-28 08:08:33.901900] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.770 [2024-11-28 08:08:33.913905] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.770 [2024-11-28 08:08:33.913921] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.770 [2024-11-28 08:08:33.925938] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.770 [2024-11-28 08:08:33.925960] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.770 [2024-11-28 08:08:33.937971] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.770 [2024-11-28 08:08:33.937986] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.770 [2024-11-28 08:08:33.950003] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.770 [2024-11-28 08:08:33.950019] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.770 [2024-11-28 08:08:33.962033] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.770 [2024-11-28 08:08:33.962049] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.770 [2024-11-28 08:08:33.974079] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.770 [2024-11-28 08:08:33.974095] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.770 [2024-11-28 08:08:33.986104] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.770 [2024-11-28 08:08:33.986118] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.770 [2024-11-28 08:08:33.998142] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.770 [2024-11-28 08:08:33.998156] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.770 [2024-11-28 08:08:34.010174] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:09:51.771 [2024-11-28 08:08:34.010191] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.771 [2024-11-28 08:08:34.022203] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.771 [2024-11-28 08:08:34.022215] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.771 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (1239500) - No such process 00:09:51.771 08:08:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 1239500 00:09:51.771 08:08:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:51.771 08:08:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.771 08:08:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:52.031 08:08:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:52.031 08:08:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:09:52.031 08:08:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:52.031 08:08:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:52.031 delay0 00:09:52.031 08:08:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:52.031 08:08:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:09:52.031 08:08:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:52.031 08:08:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 
00:09:52.031 08:08:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:52.031 08:08:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:09:52.031 [2024-11-28 08:08:34.115878] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:09:58.606 [2024-11-28 08:08:40.271286] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a6070 is same with the state(6) to be set 00:09:58.606 [2024-11-28 08:08:40.271327] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a6070 is same with the state(6) to be set 00:09:58.606 Initializing NVMe Controllers 00:09:58.606 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:09:58.606 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:09:58.606 Initialization complete. Launching workers. 
00:09:58.606 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 820 00:09:58.606 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 1107, failed to submit 33 00:09:58.606 success 910, unsuccessful 197, failed 0 00:09:58.606 08:08:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:09:58.606 08:08:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:09:58.606 08:08:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:58.606 08:08:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:09:58.606 08:08:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:58.606 08:08:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:09:58.606 08:08:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:58.606 08:08:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:58.606 rmmod nvme_tcp 00:09:58.606 rmmod nvme_fabrics 00:09:58.606 rmmod nvme_keyring 00:09:58.606 08:08:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:58.606 08:08:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:09:58.606 08:08:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:09:58.606 08:08:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 1237680 ']' 00:09:58.606 08:08:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 1237680 00:09:58.606 08:08:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # '[' -z 1237680 ']' 00:09:58.607 08:08:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 1237680 00:09:58.607 08:08:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
common/autotest_common.sh@959 -- # uname 00:09:58.607 08:08:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:58.607 08:08:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1237680 00:09:58.607 08:08:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:09:58.607 08:08:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:09:58.607 08:08:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1237680' 00:09:58.607 killing process with pid 1237680 00:09:58.607 08:08:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 1237680 00:09:58.607 08:08:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 1237680 00:09:58.607 08:08:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:58.607 08:08:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:58.607 08:08:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:58.607 08:08:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:09:58.607 08:08:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save 00:09:58.607 08:08:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:58.607 08:08:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore 00:09:58.607 08:08:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:58.607 08:08:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:58.607 08:08:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd 
_remove_spdk_ns 00:09:58.607 08:08:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:58.607 08:08:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:00.517 08:08:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:00.517 00:10:00.517 real 0m31.243s 00:10:00.517 user 0m42.225s 00:10:00.517 sys 0m10.799s 00:10:00.517 08:08:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:00.517 08:08:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:00.517 ************************************ 00:10:00.517 END TEST nvmf_zcopy 00:10:00.517 ************************************ 00:10:00.517 08:08:42 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:10:00.517 08:08:42 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:00.517 08:08:42 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:00.517 08:08:42 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:00.517 ************************************ 00:10:00.517 START TEST nvmf_nmic 00:10:00.517 ************************************ 00:10:00.517 08:08:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:10:00.517 * Looking for test storage... 
00:10:00.517 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:00.517 08:08:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:00.517 08:08:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1693 -- # lcov --version 00:10:00.517 08:08:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:00.778 08:08:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:00.778 08:08:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:00.778 08:08:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:00.778 08:08:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:00.778 08:08:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:10:00.778 08:08:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:10:00.778 08:08:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:10:00.778 08:08:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:10:00.778 08:08:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:10:00.778 08:08:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:10:00.778 08:08:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:10:00.778 08:08:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:00.778 08:08:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:10:00.778 08:08:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:10:00.778 08:08:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:00.778 08:08:42 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:00.778 08:08:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:10:00.778 08:08:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:10:00.778 08:08:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:00.778 08:08:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:10:00.778 08:08:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:10:00.778 08:08:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:10:00.778 08:08:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:10:00.778 08:08:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:00.778 08:08:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:10:00.778 08:08:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:10:00.778 08:08:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:00.778 08:08:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:00.778 08:08:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:10:00.778 08:08:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:00.778 08:08:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:00.778 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:00.778 --rc genhtml_branch_coverage=1 00:10:00.778 --rc genhtml_function_coverage=1 00:10:00.778 --rc genhtml_legend=1 00:10:00.778 --rc geninfo_all_blocks=1 00:10:00.778 --rc geninfo_unexecuted_blocks=1 
00:10:00.778 00:10:00.778 ' 00:10:00.778 08:08:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:00.778 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:00.778 --rc genhtml_branch_coverage=1 00:10:00.778 --rc genhtml_function_coverage=1 00:10:00.778 --rc genhtml_legend=1 00:10:00.778 --rc geninfo_all_blocks=1 00:10:00.778 --rc geninfo_unexecuted_blocks=1 00:10:00.778 00:10:00.778 ' 00:10:00.778 08:08:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:00.778 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:00.778 --rc genhtml_branch_coverage=1 00:10:00.778 --rc genhtml_function_coverage=1 00:10:00.778 --rc genhtml_legend=1 00:10:00.778 --rc geninfo_all_blocks=1 00:10:00.778 --rc geninfo_unexecuted_blocks=1 00:10:00.778 00:10:00.778 ' 00:10:00.778 08:08:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:00.778 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:00.778 --rc genhtml_branch_coverage=1 00:10:00.778 --rc genhtml_function_coverage=1 00:10:00.778 --rc genhtml_legend=1 00:10:00.778 --rc geninfo_all_blocks=1 00:10:00.778 --rc geninfo_unexecuted_blocks=1 00:10:00.778 00:10:00.778 ' 00:10:00.778 08:08:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:00.778 08:08:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:10:00.778 08:08:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:00.778 08:08:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:00.778 08:08:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:00.778 08:08:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:00.778 08:08:42 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:00.779 08:08:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:00.779 08:08:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:00.779 08:08:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:00.779 08:08:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:00.779 08:08:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:00.779 08:08:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:10:00.779 08:08:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:10:00.779 08:08:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:00.779 08:08:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:00.779 08:08:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:00.779 08:08:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:00.779 08:08:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:00.779 08:08:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:10:00.779 08:08:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:00.779 08:08:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:00.779 08:08:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:00.779 08:08:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:00.779 08:08:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:00.779 08:08:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:00.779 08:08:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:10:00.779 08:08:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:00.779 08:08:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:10:00.779 08:08:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:00.779 08:08:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:00.779 08:08:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:00.779 08:08:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:00.779 08:08:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:00.779 08:08:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:00.779 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:00.779 08:08:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:00.779 08:08:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:00.779 08:08:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:00.779 08:08:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:00.779 08:08:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:00.779 08:08:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:10:00.779 08:08:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:00.779 08:08:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:00.779 08:08:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:00.779 08:08:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:00.779 08:08:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:00.779 08:08:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:00.779 08:08:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:00.779 08:08:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:00.779 08:08:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:00.779 08:08:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:00.779 
08:08:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:10:00.779 08:08:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:06.060 08:08:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:06.060 08:08:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:10:06.060 08:08:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:06.060 08:08:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:06.060 08:08:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:06.060 08:08:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:06.060 08:08:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:06.060 08:08:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:10:06.060 08:08:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:06.060 08:08:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:10:06.060 08:08:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:10:06.060 08:08:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:10:06.060 08:08:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:10:06.060 08:08:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:10:06.060 08:08:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:10:06.060 08:08:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:06.060 08:08:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:06.060 08:08:48 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:06.060 08:08:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:06.060 08:08:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:06.060 08:08:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:06.060 08:08:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:06.060 08:08:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:06.060 08:08:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:06.060 08:08:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:06.060 08:08:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:06.060 08:08:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:06.060 08:08:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:06.060 08:08:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:06.060 08:08:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:06.060 08:08:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:06.060 08:08:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:06.060 08:08:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:06.060 08:08:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in 
"${pci_devs[@]}" 00:10:06.060 08:08:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:10:06.060 Found 0000:86:00.0 (0x8086 - 0x159b) 00:10:06.060 08:08:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:06.060 08:08:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:06.060 08:08:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:06.060 08:08:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:06.060 08:08:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:06.060 08:08:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:06.060 08:08:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:10:06.060 Found 0000:86:00.1 (0x8086 - 0x159b) 00:10:06.060 08:08:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:06.060 08:08:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:06.060 08:08:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:06.060 08:08:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:06.060 08:08:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:06.060 08:08:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:06.060 08:08:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:06.060 08:08:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:06.060 08:08:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 
00:10:06.060 08:08:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:06.060 08:08:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:06.060 08:08:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:06.060 08:08:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:06.060 08:08:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:06.060 08:08:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:06.060 08:08:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:10:06.060 Found net devices under 0000:86:00.0: cvl_0_0 00:10:06.060 08:08:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:06.060 08:08:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:06.060 08:08:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:06.060 08:08:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:06.060 08:08:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:06.060 08:08:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:06.060 08:08:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:06.060 08:08:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:06.061 08:08:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:10:06.061 Found net devices under 0000:86:00.1: cvl_0_1 00:10:06.061 
08:08:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:06.061 08:08:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:06.061 08:08:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # is_hw=yes 00:10:06.061 08:08:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:06.061 08:08:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:06.061 08:08:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:06.061 08:08:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:06.061 08:08:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:06.061 08:08:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:06.061 08:08:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:06.061 08:08:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:06.061 08:08:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:06.061 08:08:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:06.061 08:08:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:06.061 08:08:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:06.061 08:08:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:06.061 08:08:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:06.061 08:08:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 
00:10:06.061 08:08:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:06.061 08:08:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:06.061 08:08:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:06.061 08:08:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:06.061 08:08:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:06.061 08:08:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:06.061 08:08:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:06.321 08:08:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:06.321 08:08:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:06.321 08:08:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:06.321 08:08:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:06.321 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:06.321 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.515 ms 00:10:06.321 00:10:06.321 --- 10.0.0.2 ping statistics --- 00:10:06.321 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:06.321 rtt min/avg/max/mdev = 0.515/0.515/0.515/0.000 ms 00:10:06.321 08:08:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:06.321 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:06.321 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.207 ms 00:10:06.321 00:10:06.321 --- 10.0.0.1 ping statistics --- 00:10:06.321 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:06.321 rtt min/avg/max/mdev = 0.207/0.207/0.207/0.000 ms 00:10:06.321 08:08:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:06.321 08:08:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@450 -- # return 0 00:10:06.321 08:08:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:06.321 08:08:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:06.321 08:08:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:06.321 08:08:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:06.321 08:08:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:06.321 08:08:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:06.321 08:08:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:06.321 08:08:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:10:06.321 08:08:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:06.321 08:08:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:06.321 08:08:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:06.321 08:08:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=1244985 00:10:06.321 08:08:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 1244985 00:10:06.321 08:08:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec 
cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:06.321 08:08:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 1244985 ']' 00:10:06.321 08:08:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:06.321 08:08:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:06.321 08:08:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:06.321 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:06.321 08:08:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:06.321 08:08:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:06.321 [2024-11-28 08:08:48.453085] Starting SPDK v25.01-pre git sha1 27aaaa748 / DPDK 24.03.0 initialization... 00:10:06.321 [2024-11-28 08:08:48.453137] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:06.321 [2024-11-28 08:08:48.520030] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:06.321 [2024-11-28 08:08:48.564602] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:06.321 [2024-11-28 08:08:48.564638] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:10:06.321 [2024-11-28 08:08:48.564649] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:06.321 [2024-11-28 08:08:48.564656] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:06.321 [2024-11-28 08:08:48.564661] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:06.321 [2024-11-28 08:08:48.566187] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:06.321 [2024-11-28 08:08:48.566286] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:06.321 [2024-11-28 08:08:48.566369] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:06.321 [2024-11-28 08:08:48.566371] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:06.581 08:08:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:06.581 08:08:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:10:06.581 08:08:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:06.582 08:08:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:06.582 08:08:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:06.582 08:08:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:06.582 08:08:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:06.582 08:08:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.582 08:08:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:06.582 [2024-11-28 08:08:48.705559] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:06.582 
08:08:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.582 08:08:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:06.582 08:08:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.582 08:08:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:06.582 Malloc0 00:10:06.582 08:08:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.582 08:08:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:06.582 08:08:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.582 08:08:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:06.582 08:08:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.582 08:08:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:06.582 08:08:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.582 08:08:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:06.582 08:08:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.582 08:08:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:06.582 08:08:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.582 08:08:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:06.582 [2024-11-28 08:08:48.780483] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** 
NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:06.582 08:08:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.582 08:08:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:10:06.582 test case1: single bdev can't be used in multiple subsystems 00:10:06.582 08:08:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:10:06.582 08:08:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.582 08:08:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:06.582 08:08:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.582 08:08:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:10:06.582 08:08:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.582 08:08:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:06.582 08:08:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.582 08:08:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:10:06.582 08:08:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:10:06.582 08:08:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.582 08:08:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:06.582 [2024-11-28 08:08:48.808414] bdev.c:8515:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:10:06.582 [2024-11-28 
08:08:48.808434] subsystem.c:2156:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:10:06.582 [2024-11-28 08:08:48.808441] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.582 request: 00:10:06.582 { 00:10:06.582 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:10:06.582 "namespace": { 00:10:06.582 "bdev_name": "Malloc0", 00:10:06.582 "no_auto_visible": false, 00:10:06.582 "hide_metadata": false 00:10:06.582 }, 00:10:06.582 "method": "nvmf_subsystem_add_ns", 00:10:06.582 "req_id": 1 00:10:06.582 } 00:10:06.582 Got JSON-RPC error response 00:10:06.582 response: 00:10:06.582 { 00:10:06.582 "code": -32602, 00:10:06.582 "message": "Invalid parameters" 00:10:06.582 } 00:10:06.582 08:08:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:10:06.582 08:08:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:10:06.582 08:08:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:10:06.582 08:08:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:10:06.582 Adding namespace failed - expected result. 
00:10:06.582 08:08:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:10:06.582 test case2: host connect to nvmf target in multiple paths 00:10:06.582 08:08:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:10:06.582 08:08:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.582 08:08:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:06.582 [2024-11-28 08:08:48.820547] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:10:06.582 08:08:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.582 08:08:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:07.966 08:08:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:10:08.902 08:08:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:10:08.902 08:08:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 00:10:08.902 08:08:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:10:08.902 08:08:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:10:08.902 08:08:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2 
00:10:11.440 08:08:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:10:11.440 08:08:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:10:11.440 08:08:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:10:11.440 08:08:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:10:11.440 08:08:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:10:11.440 08:08:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0 00:10:11.440 08:08:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:10:11.440 [global] 00:10:11.440 thread=1 00:10:11.440 invalidate=1 00:10:11.440 rw=write 00:10:11.440 time_based=1 00:10:11.440 runtime=1 00:10:11.440 ioengine=libaio 00:10:11.440 direct=1 00:10:11.440 bs=4096 00:10:11.440 iodepth=1 00:10:11.440 norandommap=0 00:10:11.440 numjobs=1 00:10:11.440 00:10:11.440 verify_dump=1 00:10:11.440 verify_backlog=512 00:10:11.440 verify_state_save=0 00:10:11.440 do_verify=1 00:10:11.440 verify=crc32c-intel 00:10:11.440 [job0] 00:10:11.440 filename=/dev/nvme0n1 00:10:11.440 Could not set queue depth (nvme0n1) 00:10:11.440 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:11.440 fio-3.35 00:10:11.440 Starting 1 thread 00:10:12.378 00:10:12.378 job0: (groupid=0, jobs=1): err= 0: pid=1246057: Thu Nov 28 08:08:54 2024 00:10:12.378 read: IOPS=2081, BW=8328KiB/s (8528kB/s)(8336KiB/1001msec) 00:10:12.378 slat (nsec): min=7366, max=37022, avg=8444.05, stdev=1314.76 00:10:12.378 clat (usec): min=190, max=647, avg=233.55, stdev=22.42 00:10:12.378 lat (usec): min=198, max=656, avg=242.00, 
stdev=22.63 00:10:12.378 clat percentiles (usec): 00:10:12.378 | 1.00th=[ 204], 5.00th=[ 210], 10.00th=[ 215], 20.00th=[ 217], 00:10:12.378 | 30.00th=[ 219], 40.00th=[ 223], 50.00th=[ 227], 60.00th=[ 239], 00:10:12.378 | 70.00th=[ 249], 80.00th=[ 253], 90.00th=[ 260], 95.00th=[ 265], 00:10:12.378 | 99.00th=[ 277], 99.50th=[ 289], 99.90th=[ 474], 99.95th=[ 502], 00:10:12.378 | 99.99th=[ 652] 00:10:12.378 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:10:12.378 slat (usec): min=9, max=25725, avg=21.98, stdev=508.22 00:10:12.378 clat (usec): min=124, max=357, avg=165.87, stdev=20.52 00:10:12.378 lat (usec): min=135, max=26023, avg=187.84, stdev=511.24 00:10:12.378 clat percentiles (usec): 00:10:12.378 | 1.00th=[ 130], 5.00th=[ 139], 10.00th=[ 151], 20.00th=[ 157], 00:10:12.378 | 30.00th=[ 159], 40.00th=[ 161], 50.00th=[ 163], 60.00th=[ 165], 00:10:12.378 | 70.00th=[ 167], 80.00th=[ 172], 90.00th=[ 178], 95.00th=[ 202], 00:10:12.378 | 99.00th=[ 245], 99.50th=[ 245], 99.90th=[ 297], 99.95th=[ 355], 00:10:12.378 | 99.99th=[ 359] 00:10:12.378 bw ( KiB/s): min= 9608, max= 9608, per=93.92%, avg=9608.00, stdev= 0.00, samples=1 00:10:12.378 iops : min= 2402, max= 2402, avg=2402.00, stdev= 0.00, samples=1 00:10:12.378 lat (usec) : 250=87.73%, 500=12.23%, 750=0.04% 00:10:12.378 cpu : usr=5.10%, sys=6.10%, ctx=4649, majf=0, minf=1 00:10:12.378 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:12.378 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:12.378 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:12.378 issued rwts: total=2084,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:12.378 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:12.378 00:10:12.378 Run status group 0 (all jobs): 00:10:12.378 READ: bw=8328KiB/s (8528kB/s), 8328KiB/s-8328KiB/s (8528kB/s-8528kB/s), io=8336KiB (8536kB), run=1001-1001msec 00:10:12.378 WRITE: bw=9.99MiB/s 
(10.5MB/s), 9.99MiB/s-9.99MiB/s (10.5MB/s-10.5MB/s), io=10.0MiB (10.5MB), run=1001-1001msec 00:10:12.378 00:10:12.378 Disk stats (read/write): 00:10:12.378 nvme0n1: ios=2067/2048, merge=0/0, ticks=1431/311, in_queue=1742, util=98.40% 00:10:12.378 08:08:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:12.639 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:10:12.639 08:08:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:12.639 08:08:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 00:10:12.639 08:08:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:10:12.639 08:08:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:12.639 08:08:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:10:12.639 08:08:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:12.639 08:08:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 00:10:12.639 08:08:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:10:12.639 08:08:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:10:12.639 08:08:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:12.639 08:08:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:10:12.639 08:08:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:12.639 08:08:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:10:12.639 08:08:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:12.639 08:08:54 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:12.639 rmmod nvme_tcp 00:10:12.639 rmmod nvme_fabrics 00:10:12.639 rmmod nvme_keyring 00:10:12.639 08:08:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:12.639 08:08:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:10:12.639 08:08:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:10:12.639 08:08:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 1244985 ']' 00:10:12.639 08:08:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 1244985 00:10:12.639 08:08:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 1244985 ']' 00:10:12.639 08:08:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 1244985 00:10:12.639 08:08:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 00:10:12.639 08:08:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:12.639 08:08:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1244985 00:10:12.639 08:08:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:12.639 08:08:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:12.639 08:08:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1244985' 00:10:12.639 killing process with pid 1244985 00:10:12.639 08:08:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@973 -- # kill 1244985 00:10:12.639 08:08:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@978 -- # wait 1244985 00:10:12.900 08:08:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == 
iso ']' 00:10:12.900 08:08:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:12.900 08:08:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:12.900 08:08:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:10:12.900 08:08:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:10:12.900 08:08:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:12.900 08:08:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:10:12.900 08:08:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:12.900 08:08:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:12.900 08:08:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:12.900 08:08:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:12.900 08:08:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:15.438 08:08:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:15.438 00:10:15.438 real 0m14.435s 00:10:15.438 user 0m32.536s 00:10:15.438 sys 0m4.991s 00:10:15.438 08:08:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:15.438 08:08:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:15.438 ************************************ 00:10:15.438 END TEST nvmf_nmic 00:10:15.438 ************************************ 00:10:15.438 08:08:57 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:10:15.438 08:08:57 
nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:15.438 08:08:57 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:15.438 08:08:57 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:15.438 ************************************ 00:10:15.438 START TEST nvmf_fio_target 00:10:15.438 ************************************ 00:10:15.438 08:08:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:10:15.438 * Looking for test storage... 00:10:15.438 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:15.438 08:08:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:15.438 08:08:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lcov --version 00:10:15.438 08:08:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:15.438 08:08:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:15.438 08:08:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:15.438 08:08:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:15.438 08:08:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:15.438 08:08:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:10:15.438 08:08:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:10:15.438 08:08:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:10:15.438 08:08:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:10:15.438 
08:08:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:10:15.438 08:08:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:10:15.438 08:08:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:10:15.438 08:08:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:15.438 08:08:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:10:15.438 08:08:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:10:15.438 08:08:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:15.438 08:08:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:15.438 08:08:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:10:15.438 08:08:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:10:15.438 08:08:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:15.438 08:08:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:10:15.438 08:08:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:10:15.438 08:08:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:10:15.438 08:08:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:10:15.438 08:08:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:15.438 08:08:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:10:15.438 08:08:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:10:15.438 08:08:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:15.438 08:08:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:15.438 08:08:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:10:15.438 08:08:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:15.438 08:08:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:15.438 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:15.438 --rc genhtml_branch_coverage=1 00:10:15.438 --rc genhtml_function_coverage=1 00:10:15.438 --rc genhtml_legend=1 00:10:15.438 --rc geninfo_all_blocks=1 00:10:15.438 --rc geninfo_unexecuted_blocks=1 00:10:15.438 00:10:15.438 ' 00:10:15.439 08:08:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:15.439 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:15.439 --rc genhtml_branch_coverage=1 00:10:15.439 --rc genhtml_function_coverage=1 00:10:15.439 --rc genhtml_legend=1 00:10:15.439 --rc geninfo_all_blocks=1 00:10:15.439 --rc geninfo_unexecuted_blocks=1 00:10:15.439 00:10:15.439 ' 00:10:15.439 08:08:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:15.439 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:15.439 --rc genhtml_branch_coverage=1 00:10:15.439 --rc genhtml_function_coverage=1 00:10:15.439 --rc genhtml_legend=1 00:10:15.439 --rc geninfo_all_blocks=1 00:10:15.439 --rc geninfo_unexecuted_blocks=1 00:10:15.439 00:10:15.439 ' 00:10:15.439 08:08:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:15.439 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:15.439 --rc genhtml_branch_coverage=1 00:10:15.439 --rc 
genhtml_function_coverage=1 00:10:15.439 --rc genhtml_legend=1 00:10:15.439 --rc geninfo_all_blocks=1 00:10:15.439 --rc geninfo_unexecuted_blocks=1 00:10:15.439 00:10:15.439 ' 00:10:15.439 08:08:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:15.439 08:08:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:10:15.439 08:08:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:15.439 08:08:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:15.439 08:08:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:15.439 08:08:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:15.439 08:08:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:15.439 08:08:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:15.439 08:08:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:15.439 08:08:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:15.439 08:08:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:15.439 08:08:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:15.439 08:08:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:10:15.439 08:08:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:10:15.439 08:08:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:15.439 08:08:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:15.439 08:08:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:15.439 08:08:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:15.439 08:08:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:15.439 08:08:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:10:15.439 08:08:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:15.439 08:08:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:15.439 08:08:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:15.439 08:08:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:15.439 08:08:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:15.439 08:08:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:15.439 08:08:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:10:15.439 08:08:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:15.439 08:08:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:10:15.439 08:08:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:15.439 08:08:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:15.439 08:08:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:15.439 08:08:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:15.439 08:08:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:15.439 08:08:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:15.439 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:15.439 08:08:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:15.439 08:08:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:15.439 08:08:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:15.439 08:08:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:15.439 08:08:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target 
-- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:15.439 08:08:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:15.439 08:08:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:10:15.439 08:08:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:15.439 08:08:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:15.439 08:08:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:15.439 08:08:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:15.439 08:08:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:15.439 08:08:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:15.439 08:08:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:15.439 08:08:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:15.439 08:08:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:15.439 08:08:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:15.439 08:08:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:10:15.439 08:08:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:20.721 08:09:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:20.721 08:09:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:10:20.721 08:09:02 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:20.721 08:09:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:20.721 08:09:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:20.721 08:09:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:20.721 08:09:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:20.721 08:09:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:10:20.721 08:09:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:20.721 08:09:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:10:20.721 08:09:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:10:20.721 08:09:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:10:20.721 08:09:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:10:20.721 08:09:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:10:20.721 08:09:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:10:20.721 08:09:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:20.721 08:09:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:20.721 08:09:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:20.721 08:09:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:20.721 08:09:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@332 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:20.721 08:09:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:20.721 08:09:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:20.721 08:09:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:20.721 08:09:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:20.721 08:09:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:20.721 08:09:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:20.721 08:09:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:20.721 08:09:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:20.721 08:09:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:20.721 08:09:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:20.721 08:09:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:20.721 08:09:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:20.721 08:09:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:20.721 08:09:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:20.721 08:09:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:10:20.721 Found 0000:86:00.0 (0x8086 - 0x159b) 00:10:20.721 08:09:02 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:20.721 08:09:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:20.721 08:09:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:20.721 08:09:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:20.721 08:09:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:20.721 08:09:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:20.721 08:09:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:10:20.721 Found 0000:86:00.1 (0x8086 - 0x159b) 00:10:20.721 08:09:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:20.721 08:09:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:20.721 08:09:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:20.721 08:09:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:20.721 08:09:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:20.721 08:09:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:20.721 08:09:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:20.721 08:09:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:20.721 08:09:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:20.721 08:09:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:20.721 08:09:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:20.721 08:09:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:20.721 08:09:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:20.721 08:09:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:20.721 08:09:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:20.721 08:09:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:10:20.721 Found net devices under 0000:86:00.0: cvl_0_0 00:10:20.721 08:09:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:20.721 08:09:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:20.721 08:09:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:20.721 08:09:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:20.721 08:09:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:20.721 08:09:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:20.721 08:09:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:20.721 08:09:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:20.721 08:09:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:10:20.721 Found net devices under 0000:86:00.1: cvl_0_1 
00:10:20.721 08:09:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:20.721 08:09:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:20.721 08:09:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # is_hw=yes 00:10:20.721 08:09:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:20.721 08:09:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:20.721 08:09:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:20.721 08:09:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:20.721 08:09:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:20.721 08:09:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:20.722 08:09:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:20.722 08:09:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:20.722 08:09:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:20.722 08:09:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:20.722 08:09:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:20.722 08:09:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:20.722 08:09:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:20.722 08:09:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:10:20.722 08:09:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:20.722 08:09:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:20.722 08:09:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:20.722 08:09:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:20.722 08:09:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:20.722 08:09:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:20.722 08:09:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:20.722 08:09:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:20.722 08:09:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:20.722 08:09:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:20.722 08:09:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:20.722 08:09:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:20.722 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:20.722 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.449 ms 00:10:20.722 00:10:20.722 --- 10.0.0.2 ping statistics --- 00:10:20.722 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:20.722 rtt min/avg/max/mdev = 0.449/0.449/0.449/0.000 ms 00:10:20.722 08:09:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:20.722 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:20.722 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.199 ms 00:10:20.722 00:10:20.722 --- 10.0.0.1 ping statistics --- 00:10:20.722 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:20.722 rtt min/avg/max/mdev = 0.199/0.199/0.199/0.000 ms 00:10:20.722 08:09:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:20.722 08:09:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@450 -- # return 0 00:10:20.722 08:09:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:20.722 08:09:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:20.722 08:09:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:20.722 08:09:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:20.722 08:09:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:20.722 08:09:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:20.722 08:09:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:20.722 08:09:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:10:20.722 08:09:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 
00:10:20.722 08:09:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:20.722 08:09:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:20.722 08:09:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:20.722 08:09:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=1249792 00:10:20.722 08:09:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 1249792 00:10:20.722 08:09:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 1249792 ']' 00:10:20.722 08:09:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:20.722 08:09:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:20.722 08:09:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:20.722 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:20.722 08:09:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:20.722 08:09:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:20.722 [2024-11-28 08:09:02.911417] Starting SPDK v25.01-pre git sha1 27aaaa748 / DPDK 24.03.0 initialization... 
00:10:20.722 [2024-11-28 08:09:02.911463] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:20.722 [2024-11-28 08:09:02.982035] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:20.982 [2024-11-28 08:09:03.025378] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:20.983 [2024-11-28 08:09:03.025414] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:20.983 [2024-11-28 08:09:03.025422] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:20.983 [2024-11-28 08:09:03.025429] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:20.983 [2024-11-28 08:09:03.025435] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:10:20.983 [2024-11-28 08:09:03.026885] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:20.983 [2024-11-28 08:09:03.026907] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:20.983 [2024-11-28 08:09:03.026979] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:20.983 [2024-11-28 08:09:03.026981] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:20.983 08:09:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:20.983 08:09:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 00:10:20.983 08:09:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:20.983 08:09:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:20.983 08:09:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:20.983 08:09:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:20.983 08:09:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:10:21.242 [2024-11-28 08:09:03.338733] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:21.242 08:09:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:21.501 08:09:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:10:21.501 08:09:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:21.761 08:09:03 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:10:21.761 08:09:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:21.761 08:09:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:10:21.761 08:09:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:22.021 08:09:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:10:22.021 08:09:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:10:22.281 08:09:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:22.541 08:09:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:10:22.541 08:09:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:22.800 08:09:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:10:22.800 08:09:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:23.062 08:09:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:10:23.062 08:09:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 
'Malloc4 Malloc5 Malloc6' 00:10:23.062 08:09:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:23.322 08:09:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:10:23.322 08:09:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:23.581 08:09:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:10:23.581 08:09:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:23.841 08:09:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:23.841 [2024-11-28 08:09:06.079022] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:23.841 08:09:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:10:24.101 08:09:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:10:24.360 08:09:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 
00:10:25.741 08:09:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:10:25.741 08:09:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0 00:10:25.741 08:09:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:10:25.741 08:09:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]] 00:10:25.741 08:09:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4 00:10:25.741 08:09:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2 00:10:27.649 08:09:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:10:27.649 08:09:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:10:27.649 08:09:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:10:27.649 08:09:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4 00:10:27.649 08:09:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:10:27.649 08:09:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # return 0 00:10:27.649 08:09:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:10:27.649 [global] 00:10:27.649 thread=1 00:10:27.649 invalidate=1 00:10:27.649 rw=write 00:10:27.649 time_based=1 00:10:27.649 runtime=1 00:10:27.649 ioengine=libaio 00:10:27.649 direct=1 00:10:27.649 bs=4096 00:10:27.649 iodepth=1 00:10:27.649 norandommap=0 00:10:27.649 numjobs=1 00:10:27.649 00:10:27.649 
verify_dump=1 00:10:27.649 verify_backlog=512 00:10:27.649 verify_state_save=0 00:10:27.649 do_verify=1 00:10:27.649 verify=crc32c-intel 00:10:27.649 [job0] 00:10:27.649 filename=/dev/nvme0n1 00:10:27.649 [job1] 00:10:27.649 filename=/dev/nvme0n2 00:10:27.649 [job2] 00:10:27.649 filename=/dev/nvme0n3 00:10:27.649 [job3] 00:10:27.649 filename=/dev/nvme0n4 00:10:27.649 Could not set queue depth (nvme0n1) 00:10:27.649 Could not set queue depth (nvme0n2) 00:10:27.649 Could not set queue depth (nvme0n3) 00:10:27.649 Could not set queue depth (nvme0n4) 00:10:27.909 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:27.909 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:27.909 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:27.909 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:27.909 fio-3.35 00:10:27.909 Starting 4 threads 00:10:29.339 00:10:29.339 job0: (groupid=0, jobs=1): err= 0: pid=1251664: Thu Nov 28 08:09:11 2024 00:10:29.339 read: IOPS=325, BW=1303KiB/s (1334kB/s)(1304KiB/1001msec) 00:10:29.339 slat (nsec): min=7447, max=40912, avg=9473.46, stdev=3977.47 00:10:29.339 clat (usec): min=178, max=41151, avg=2718.63, stdev=9793.93 00:10:29.339 lat (usec): min=186, max=41162, avg=2728.10, stdev=9797.21 00:10:29.339 clat percentiles (usec): 00:10:29.339 | 1.00th=[ 192], 5.00th=[ 198], 10.00th=[ 202], 20.00th=[ 206], 00:10:29.339 | 30.00th=[ 210], 40.00th=[ 215], 50.00th=[ 219], 60.00th=[ 225], 00:10:29.339 | 70.00th=[ 229], 80.00th=[ 235], 90.00th=[ 245], 95.00th=[41157], 00:10:29.339 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:10:29.339 | 99.99th=[41157] 00:10:29.339 write: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec); 0 zone resets 00:10:29.339 slat (nsec): min=10779, max=46643, 
avg=12223.34, stdev=2485.63 00:10:29.339 clat (usec): min=129, max=364, avg=192.89, stdev=36.44 00:10:29.339 lat (usec): min=140, max=404, avg=205.11, stdev=36.78 00:10:29.339 clat percentiles (usec): 00:10:29.339 | 1.00th=[ 135], 5.00th=[ 143], 10.00th=[ 151], 20.00th=[ 163], 00:10:29.339 | 30.00th=[ 172], 40.00th=[ 178], 50.00th=[ 186], 60.00th=[ 194], 00:10:29.339 | 70.00th=[ 212], 80.00th=[ 229], 90.00th=[ 245], 95.00th=[ 253], 00:10:29.339 | 99.00th=[ 293], 99.50th=[ 322], 99.90th=[ 363], 99.95th=[ 363], 00:10:29.339 | 99.99th=[ 363] 00:10:29.339 bw ( KiB/s): min= 4096, max= 4096, per=25.18%, avg=4096.00, stdev= 0.00, samples=1 00:10:29.339 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:29.339 lat (usec) : 250=93.44%, 500=4.18% 00:10:29.339 lat (msec) : 50=2.39% 00:10:29.339 cpu : usr=0.40%, sys=1.70%, ctx=839, majf=0, minf=1 00:10:29.339 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:29.339 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:29.339 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:29.339 issued rwts: total=326,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:29.339 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:29.339 job1: (groupid=0, jobs=1): err= 0: pid=1251679: Thu Nov 28 08:09:11 2024 00:10:29.339 read: IOPS=2454, BW=9818KiB/s (10.1MB/s)(9828KiB/1001msec) 00:10:29.339 slat (nsec): min=6393, max=27554, avg=7338.46, stdev=981.56 00:10:29.339 clat (usec): min=167, max=1288, avg=220.19, stdev=39.20 00:10:29.339 lat (usec): min=174, max=1295, avg=227.52, stdev=39.22 00:10:29.339 clat percentiles (usec): 00:10:29.339 | 1.00th=[ 176], 5.00th=[ 182], 10.00th=[ 186], 20.00th=[ 190], 00:10:29.339 | 30.00th=[ 196], 40.00th=[ 202], 50.00th=[ 208], 60.00th=[ 219], 00:10:29.339 | 70.00th=[ 245], 80.00th=[ 255], 90.00th=[ 265], 95.00th=[ 273], 00:10:29.339 | 99.00th=[ 289], 99.50th=[ 297], 99.90th=[ 416], 99.95th=[ 424], 
00:10:29.339 | 99.99th=[ 1287] 00:10:29.339 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:10:29.339 slat (nsec): min=9269, max=40533, avg=10603.26, stdev=1526.93 00:10:29.339 clat (usec): min=119, max=398, avg=156.25, stdev=38.59 00:10:29.339 lat (usec): min=129, max=439, avg=166.85, stdev=38.89 00:10:29.339 clat percentiles (usec): 00:10:29.339 | 1.00th=[ 122], 5.00th=[ 126], 10.00th=[ 128], 20.00th=[ 131], 00:10:29.339 | 30.00th=[ 133], 40.00th=[ 135], 50.00th=[ 139], 60.00th=[ 143], 00:10:29.339 | 70.00th=[ 157], 80.00th=[ 182], 90.00th=[ 239], 95.00th=[ 243], 00:10:29.339 | 99.00th=[ 251], 99.50th=[ 253], 99.90th=[ 367], 99.95th=[ 379], 00:10:29.339 | 99.99th=[ 400] 00:10:29.339 bw ( KiB/s): min=11232, max=11232, per=69.03%, avg=11232.00, stdev= 0.00, samples=1 00:10:29.339 iops : min= 2808, max= 2808, avg=2808.00, stdev= 0.00, samples=1 00:10:29.339 lat (usec) : 250=86.76%, 500=13.22% 00:10:29.339 lat (msec) : 2=0.02% 00:10:29.339 cpu : usr=2.40%, sys=4.60%, ctx=5018, majf=0, minf=1 00:10:29.340 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:29.340 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:29.340 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:29.340 issued rwts: total=2457,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:29.340 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:29.340 job2: (groupid=0, jobs=1): err= 0: pid=1251688: Thu Nov 28 08:09:11 2024 00:10:29.340 read: IOPS=20, BW=83.7KiB/s (85.7kB/s)(84.0KiB/1004msec) 00:10:29.340 slat (nsec): min=11614, max=24327, avg=16721.00, stdev=2972.45 00:10:29.340 clat (usec): min=40652, max=41064, avg=40960.57, stdev=78.36 00:10:29.340 lat (usec): min=40664, max=41088, avg=40977.29, stdev=79.48 00:10:29.340 clat percentiles (usec): 00:10:29.340 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:10:29.340 | 30.00th=[41157], 40.00th=[41157], 
50.00th=[41157], 60.00th=[41157], 00:10:29.340 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:10:29.340 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:10:29.340 | 99.99th=[41157] 00:10:29.340 write: IOPS=509, BW=2040KiB/s (2089kB/s)(2048KiB/1004msec); 0 zone resets 00:10:29.340 slat (usec): min=11, max=18814, avg=50.95, stdev=830.88 00:10:29.340 clat (usec): min=146, max=367, avg=219.05, stdev=28.43 00:10:29.340 lat (usec): min=159, max=19107, avg=270.01, stdev=834.63 00:10:29.340 clat percentiles (usec): 00:10:29.340 | 1.00th=[ 161], 5.00th=[ 172], 10.00th=[ 178], 20.00th=[ 188], 00:10:29.340 | 30.00th=[ 202], 40.00th=[ 219], 50.00th=[ 233], 60.00th=[ 237], 00:10:29.340 | 70.00th=[ 239], 80.00th=[ 239], 90.00th=[ 243], 95.00th=[ 249], 00:10:29.340 | 99.00th=[ 285], 99.50th=[ 293], 99.90th=[ 367], 99.95th=[ 367], 00:10:29.340 | 99.99th=[ 367] 00:10:29.340 bw ( KiB/s): min= 4096, max= 4096, per=25.18%, avg=4096.00, stdev= 0.00, samples=1 00:10:29.340 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:29.340 lat (usec) : 250=91.37%, 500=4.69% 00:10:29.340 lat (msec) : 50=3.94% 00:10:29.340 cpu : usr=0.90%, sys=0.60%, ctx=535, majf=0, minf=1 00:10:29.340 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:29.340 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:29.340 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:29.340 issued rwts: total=21,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:29.340 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:29.340 job3: (groupid=0, jobs=1): err= 0: pid=1251690: Thu Nov 28 08:09:11 2024 00:10:29.340 read: IOPS=46, BW=187KiB/s (191kB/s)(188KiB/1007msec) 00:10:29.340 slat (nsec): min=8995, max=25927, avg=15723.70, stdev=6343.42 00:10:29.340 clat (usec): min=206, max=41281, avg=18470.61, stdev=20444.56 00:10:29.340 lat (usec): min=216, max=41293, avg=18486.33, 
stdev=20449.82 00:10:29.340 clat percentiles (usec): 00:10:29.340 | 1.00th=[ 206], 5.00th=[ 227], 10.00th=[ 241], 20.00th=[ 258], 00:10:29.340 | 30.00th=[ 269], 40.00th=[ 281], 50.00th=[ 461], 60.00th=[41157], 00:10:29.340 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:10:29.340 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:10:29.340 | 99.99th=[41157] 00:10:29.340 write: IOPS=508, BW=2034KiB/s (2083kB/s)(2048KiB/1007msec); 0 zone resets 00:10:29.340 slat (usec): min=12, max=18891, avg=55.43, stdev=834.12 00:10:29.340 clat (usec): min=146, max=348, avg=202.78, stdev=25.52 00:10:29.340 lat (usec): min=172, max=19132, avg=258.21, stdev=836.21 00:10:29.340 clat percentiles (usec): 00:10:29.340 | 1.00th=[ 163], 5.00th=[ 174], 10.00th=[ 178], 20.00th=[ 182], 00:10:29.340 | 30.00th=[ 188], 40.00th=[ 194], 50.00th=[ 198], 60.00th=[ 204], 00:10:29.340 | 70.00th=[ 212], 80.00th=[ 223], 90.00th=[ 235], 95.00th=[ 245], 00:10:29.340 | 99.00th=[ 285], 99.50th=[ 338], 99.90th=[ 351], 99.95th=[ 351], 00:10:29.340 | 99.99th=[ 351] 00:10:29.340 bw ( KiB/s): min= 4096, max= 4096, per=25.18%, avg=4096.00, stdev= 0.00, samples=1 00:10:29.340 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:29.340 lat (usec) : 250=89.80%, 500=6.44% 00:10:29.340 lat (msec) : 50=3.76% 00:10:29.340 cpu : usr=0.30%, sys=1.49%, ctx=561, majf=0, minf=1 00:10:29.340 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:29.340 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:29.340 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:29.340 issued rwts: total=47,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:29.340 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:29.340 00:10:29.340 Run status group 0 (all jobs): 00:10:29.340 READ: bw=11.1MiB/s (11.6MB/s), 83.7KiB/s-9818KiB/s (85.7kB/s-10.1MB/s), io=11.1MiB (11.7MB), run=1001-1007msec 
00:10:29.340 WRITE: bw=15.9MiB/s (16.7MB/s), 2034KiB/s-9.99MiB/s (2083kB/s-10.5MB/s), io=16.0MiB (16.8MB), run=1001-1007msec 00:10:29.340 00:10:29.340 Disk stats (read/write): 00:10:29.340 nvme0n1: ios=45/512, merge=0/0, ticks=1730/89, in_queue=1819, util=97.70% 00:10:29.340 nvme0n2: ios=2073/2263, merge=0/0, ticks=1417/343, in_queue=1760, util=97.97% 00:10:29.340 nvme0n3: ios=41/512, merge=0/0, ticks=1648/103, in_queue=1751, util=97.91% 00:10:29.340 nvme0n4: ios=40/512, merge=0/0, ticks=1607/95, in_queue=1702, util=97.89% 00:10:29.340 08:09:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:10:29.340 [global] 00:10:29.340 thread=1 00:10:29.340 invalidate=1 00:10:29.340 rw=randwrite 00:10:29.340 time_based=1 00:10:29.340 runtime=1 00:10:29.340 ioengine=libaio 00:10:29.340 direct=1 00:10:29.340 bs=4096 00:10:29.340 iodepth=1 00:10:29.340 norandommap=0 00:10:29.340 numjobs=1 00:10:29.340 00:10:29.340 verify_dump=1 00:10:29.340 verify_backlog=512 00:10:29.340 verify_state_save=0 00:10:29.340 do_verify=1 00:10:29.340 verify=crc32c-intel 00:10:29.340 [job0] 00:10:29.340 filename=/dev/nvme0n1 00:10:29.340 [job1] 00:10:29.340 filename=/dev/nvme0n2 00:10:29.340 [job2] 00:10:29.340 filename=/dev/nvme0n3 00:10:29.340 [job3] 00:10:29.340 filename=/dev/nvme0n4 00:10:29.340 Could not set queue depth (nvme0n1) 00:10:29.340 Could not set queue depth (nvme0n2) 00:10:29.340 Could not set queue depth (nvme0n3) 00:10:29.340 Could not set queue depth (nvme0n4) 00:10:29.598 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:29.598 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:29.598 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:29.598 job3: (g=0): 
rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:29.598 fio-3.35 00:10:29.598 Starting 4 threads 00:10:30.967 00:10:30.967 job0: (groupid=0, jobs=1): err= 0: pid=1252064: Thu Nov 28 08:09:12 2024 00:10:30.967 read: IOPS=2194, BW=8779KiB/s (8990kB/s)(8788KiB/1001msec) 00:10:30.967 slat (nsec): min=7338, max=38953, avg=8820.24, stdev=1654.77 00:10:30.967 clat (usec): min=202, max=1262, avg=243.17, stdev=27.94 00:10:30.967 lat (usec): min=211, max=1270, avg=251.99, stdev=28.17 00:10:30.967 clat percentiles (usec): 00:10:30.967 | 1.00th=[ 210], 5.00th=[ 219], 10.00th=[ 223], 20.00th=[ 229], 00:10:30.967 | 30.00th=[ 235], 40.00th=[ 237], 50.00th=[ 241], 60.00th=[ 245], 00:10:30.967 | 70.00th=[ 249], 80.00th=[ 255], 90.00th=[ 265], 95.00th=[ 273], 00:10:30.967 | 99.00th=[ 297], 99.50th=[ 314], 99.90th=[ 371], 99.95th=[ 416], 00:10:30.967 | 99.99th=[ 1270] 00:10:30.967 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:10:30.967 slat (nsec): min=10467, max=42666, avg=11695.27, stdev=1894.53 00:10:30.967 clat (usec): min=124, max=248, avg=157.19, stdev=20.21 00:10:30.967 lat (usec): min=137, max=262, avg=168.88, stdev=20.34 00:10:30.967 clat percentiles (usec): 00:10:30.967 | 1.00th=[ 135], 5.00th=[ 139], 10.00th=[ 141], 20.00th=[ 143], 00:10:30.967 | 30.00th=[ 145], 40.00th=[ 147], 50.00th=[ 149], 60.00th=[ 151], 00:10:30.967 | 70.00th=[ 157], 80.00th=[ 176], 90.00th=[ 192], 95.00th=[ 200], 00:10:30.967 | 99.00th=[ 217], 99.50th=[ 221], 99.90th=[ 245], 99.95th=[ 247], 00:10:30.967 | 99.99th=[ 249] 00:10:30.967 bw ( KiB/s): min=10296, max=10296, per=42.52%, avg=10296.00, stdev= 0.00, samples=1 00:10:30.967 iops : min= 2574, max= 2574, avg=2574.00, stdev= 0.00, samples=1 00:10:30.967 lat (usec) : 250=86.55%, 500=13.43% 00:10:30.967 lat (msec) : 2=0.02% 00:10:30.967 cpu : usr=3.80%, sys=7.90%, ctx=4760, majf=0, minf=1 00:10:30.967 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 
>=64=0.0% 00:10:30.967 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:30.967 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:30.967 issued rwts: total=2197,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:30.967 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:30.967 job1: (groupid=0, jobs=1): err= 0: pid=1252065: Thu Nov 28 08:09:12 2024 00:10:30.967 read: IOPS=53, BW=213KiB/s (218kB/s)(216KiB/1015msec) 00:10:30.967 slat (nsec): min=8128, max=36539, avg=15727.85, stdev=7805.39 00:10:30.967 clat (usec): min=204, max=41985, avg=16906.93, stdev=20270.85 00:10:30.967 lat (usec): min=221, max=42008, avg=16922.66, stdev=20276.22 00:10:30.967 clat percentiles (usec): 00:10:30.967 | 1.00th=[ 204], 5.00th=[ 219], 10.00th=[ 223], 20.00th=[ 233], 00:10:30.967 | 30.00th=[ 241], 40.00th=[ 243], 50.00th=[ 260], 60.00th=[40633], 00:10:30.967 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41681], 00:10:30.967 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:10:30.967 | 99.99th=[42206] 00:10:30.967 write: IOPS=504, BW=2018KiB/s (2066kB/s)(2048KiB/1015msec); 0 zone resets 00:10:30.967 slat (nsec): min=9515, max=37318, avg=10691.91, stdev=1705.04 00:10:30.967 clat (usec): min=148, max=380, avg=182.91, stdev=16.46 00:10:30.967 lat (usec): min=159, max=417, avg=193.60, stdev=17.19 00:10:30.967 clat percentiles (usec): 00:10:30.967 | 1.00th=[ 153], 5.00th=[ 161], 10.00th=[ 165], 20.00th=[ 172], 00:10:30.967 | 30.00th=[ 178], 40.00th=[ 180], 50.00th=[ 184], 60.00th=[ 186], 00:10:30.967 | 70.00th=[ 190], 80.00th=[ 194], 90.00th=[ 198], 95.00th=[ 204], 00:10:30.967 | 99.00th=[ 219], 99.50th=[ 269], 99.90th=[ 379], 99.95th=[ 379], 00:10:30.967 | 99.99th=[ 379] 00:10:30.967 bw ( KiB/s): min= 4096, max= 4096, per=16.92%, avg=4096.00, stdev= 0.00, samples=1 00:10:30.967 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:30.967 lat (usec) : 250=94.17%, 500=1.77%, 
750=0.18% 00:10:30.967 lat (msec) : 50=3.89% 00:10:30.967 cpu : usr=0.30%, sys=0.59%, ctx=567, majf=0, minf=1 00:10:30.967 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:30.967 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:30.967 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:30.967 issued rwts: total=54,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:30.968 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:30.968 job2: (groupid=0, jobs=1): err= 0: pid=1252066: Thu Nov 28 08:09:12 2024 00:10:30.968 read: IOPS=2199, BW=8799KiB/s (9010kB/s)(8808KiB/1001msec) 00:10:30.968 slat (nsec): min=6439, max=26018, avg=7490.32, stdev=845.25 00:10:30.968 clat (usec): min=182, max=396, avg=243.46, stdev=15.35 00:10:30.968 lat (usec): min=190, max=404, avg=250.95, stdev=15.35 00:10:30.968 clat percentiles (usec): 00:10:30.968 | 1.00th=[ 212], 5.00th=[ 223], 10.00th=[ 227], 20.00th=[ 231], 00:10:30.968 | 30.00th=[ 235], 40.00th=[ 239], 50.00th=[ 243], 60.00th=[ 247], 00:10:30.968 | 70.00th=[ 251], 80.00th=[ 255], 90.00th=[ 262], 95.00th=[ 269], 00:10:30.968 | 99.00th=[ 285], 99.50th=[ 306], 99.90th=[ 326], 99.95th=[ 334], 00:10:30.968 | 99.99th=[ 396] 00:10:30.968 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:10:30.968 slat (nsec): min=8835, max=41011, avg=10086.11, stdev=1122.99 00:10:30.968 clat (usec): min=122, max=250, avg=160.65, stdev=22.82 00:10:30.968 lat (usec): min=132, max=262, avg=170.74, stdev=22.85 00:10:30.968 clat percentiles (usec): 00:10:30.968 | 1.00th=[ 135], 5.00th=[ 141], 10.00th=[ 143], 20.00th=[ 147], 00:10:30.968 | 30.00th=[ 147], 40.00th=[ 149], 50.00th=[ 151], 60.00th=[ 153], 00:10:30.968 | 70.00th=[ 161], 80.00th=[ 180], 90.00th=[ 196], 95.00th=[ 206], 00:10:30.968 | 99.00th=[ 243], 99.50th=[ 245], 99.90th=[ 245], 99.95th=[ 249], 00:10:30.968 | 99.99th=[ 251] 00:10:30.968 bw ( KiB/s): min=10296, max=10296, 
per=42.52%, avg=10296.00, stdev= 0.00, samples=1 00:10:30.968 iops : min= 2574, max= 2574, avg=2574.00, stdev= 0.00, samples=1 00:10:30.968 lat (usec) : 250=85.85%, 500=14.15% 00:10:30.968 cpu : usr=2.40%, sys=4.20%, ctx=4762, majf=0, minf=2 00:10:30.968 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:30.968 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:30.968 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:30.968 issued rwts: total=2202,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:30.968 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:30.968 job3: (groupid=0, jobs=1): err= 0: pid=1252067: Thu Nov 28 08:09:12 2024 00:10:30.968 read: IOPS=28, BW=115KiB/s (118kB/s)(116KiB/1007msec) 00:10:30.968 slat (nsec): min=8183, max=26098, avg=20033.48, stdev=6473.36 00:10:30.968 clat (usec): min=251, max=41325, avg=31136.79, stdev=17718.33 00:10:30.968 lat (usec): min=259, max=41336, avg=31156.83, stdev=17722.03 00:10:30.968 clat percentiles (usec): 00:10:30.968 | 1.00th=[ 251], 5.00th=[ 251], 10.00th=[ 260], 20.00th=[ 289], 00:10:30.968 | 30.00th=[40633], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:10:30.968 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:10:30.968 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:10:30.968 | 99.99th=[41157] 00:10:30.968 write: IOPS=508, BW=2034KiB/s (2083kB/s)(2048KiB/1007msec); 0 zone resets 00:10:30.968 slat (nsec): min=10163, max=38608, avg=11911.41, stdev=1834.18 00:10:30.968 clat (usec): min=157, max=315, avg=185.61, stdev=15.72 00:10:30.968 lat (usec): min=169, max=354, avg=197.53, stdev=16.44 00:10:30.968 clat percentiles (usec): 00:10:30.968 | 1.00th=[ 161], 5.00th=[ 165], 10.00th=[ 169], 20.00th=[ 174], 00:10:30.968 | 30.00th=[ 178], 40.00th=[ 182], 50.00th=[ 184], 60.00th=[ 188], 00:10:30.968 | 70.00th=[ 190], 80.00th=[ 196], 90.00th=[ 204], 95.00th=[ 215], 
00:10:30.968 | 99.00th=[ 229], 99.50th=[ 249], 99.90th=[ 318], 99.95th=[ 318], 00:10:30.968 | 99.99th=[ 318] 00:10:30.968 bw ( KiB/s): min= 4096, max= 4096, per=16.92%, avg=4096.00, stdev= 0.00, samples=1 00:10:30.968 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:30.968 lat (usec) : 250=94.27%, 500=1.66% 00:10:30.968 lat (msec) : 50=4.07% 00:10:30.968 cpu : usr=1.19%, sys=0.20%, ctx=541, majf=0, minf=1 00:10:30.968 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:30.968 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:30.968 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:30.968 issued rwts: total=29,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:30.968 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:30.968 00:10:30.968 Run status group 0 (all jobs): 00:10:30.968 READ: bw=17.2MiB/s (18.1MB/s), 115KiB/s-8799KiB/s (118kB/s-9010kB/s), io=17.5MiB (18.4MB), run=1001-1015msec 00:10:30.968 WRITE: bw=23.6MiB/s (24.8MB/s), 2018KiB/s-9.99MiB/s (2066kB/s-10.5MB/s), io=24.0MiB (25.2MB), run=1001-1015msec 00:10:30.968 00:10:30.968 Disk stats (read/write): 00:10:30.968 nvme0n1: ios=1964/2048, merge=0/0, ticks=1445/291, in_queue=1736, util=98.00% 00:10:30.968 nvme0n2: ios=74/512, merge=0/0, ticks=1734/89, in_queue=1823, util=98.37% 00:10:30.968 nvme0n3: ios=1942/2048, merge=0/0, ticks=460/331, in_queue=791, util=89.05% 00:10:30.968 nvme0n4: ios=24/512, merge=0/0, ticks=740/90, in_queue=830, util=89.71% 00:10:30.968 08:09:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:10:30.968 [global] 00:10:30.968 thread=1 00:10:30.968 invalidate=1 00:10:30.968 rw=write 00:10:30.968 time_based=1 00:10:30.968 runtime=1 00:10:30.968 ioengine=libaio 00:10:30.968 direct=1 00:10:30.968 bs=4096 00:10:30.968 iodepth=128 00:10:30.968 
norandommap=0 00:10:30.968 numjobs=1 00:10:30.968 00:10:30.968 verify_dump=1 00:10:30.968 verify_backlog=512 00:10:30.968 verify_state_save=0 00:10:30.968 do_verify=1 00:10:30.968 verify=crc32c-intel 00:10:30.968 [job0] 00:10:30.968 filename=/dev/nvme0n1 00:10:30.968 [job1] 00:10:30.968 filename=/dev/nvme0n2 00:10:30.968 [job2] 00:10:30.968 filename=/dev/nvme0n3 00:10:30.968 [job3] 00:10:30.968 filename=/dev/nvme0n4 00:10:30.968 Could not set queue depth (nvme0n1) 00:10:30.968 Could not set queue depth (nvme0n2) 00:10:30.968 Could not set queue depth (nvme0n3) 00:10:30.968 Could not set queue depth (nvme0n4) 00:10:30.968 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:30.968 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:30.968 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:30.968 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:30.968 fio-3.35 00:10:30.968 Starting 4 threads 00:10:32.340 00:10:32.340 job0: (groupid=0, jobs=1): err= 0: pid=1252441: Thu Nov 28 08:09:14 2024 00:10:32.340 read: IOPS=4286, BW=16.7MiB/s (17.6MB/s)(17.5MiB/1044msec) 00:10:32.340 slat (nsec): min=1225, max=11906k, avg=105031.40, stdev=633217.78 00:10:32.340 clat (usec): min=5073, max=55718, avg=14574.89, stdev=7309.42 00:10:32.340 lat (usec): min=5081, max=57720, avg=14679.92, stdev=7335.44 00:10:32.340 clat percentiles (usec): 00:10:32.340 | 1.00th=[ 7373], 5.00th=[ 8979], 10.00th=[ 9896], 20.00th=[10290], 00:10:32.340 | 30.00th=[11600], 40.00th=[12518], 50.00th=[13042], 60.00th=[13173], 00:10:32.340 | 70.00th=[14091], 80.00th=[16319], 90.00th=[20055], 95.00th=[23462], 00:10:32.340 | 99.00th=[55313], 99.50th=[55313], 99.90th=[55837], 99.95th=[55837], 00:10:32.340 | 99.99th=[55837] 00:10:32.340 write: IOPS=4413, BW=17.2MiB/s 
(18.1MB/s)(18.0MiB/1044msec); 0 zone resets 00:10:32.340 slat (nsec): min=1971, max=10484k, avg=108191.57, stdev=549170.17 00:10:32.340 clat (usec): min=5482, max=38451, avg=14440.91, stdev=5352.44 00:10:32.340 lat (usec): min=5499, max=38459, avg=14549.11, stdev=5391.60 00:10:32.340 clat percentiles (usec): 00:10:32.340 | 1.00th=[ 7046], 5.00th=[ 8717], 10.00th=[ 9896], 20.00th=[10159], 00:10:32.340 | 30.00th=[10421], 40.00th=[11469], 50.00th=[12518], 60.00th=[13304], 00:10:32.340 | 70.00th=[17957], 80.00th=[20055], 90.00th=[20317], 95.00th=[21103], 00:10:32.340 | 99.00th=[34866], 99.50th=[36439], 99.90th=[38536], 99.95th=[38536], 00:10:32.340 | 99.99th=[38536] 00:10:32.340 bw ( KiB/s): min=18344, max=18520, per=26.83%, avg=18432.00, stdev=124.45, samples=2 00:10:32.340 iops : min= 4586, max= 4630, avg=4608.00, stdev=31.11, samples=2 00:10:32.340 lat (msec) : 10=13.27%, 20=72.02%, 50=14.02%, 100=0.69% 00:10:32.340 cpu : usr=2.11%, sys=7.09%, ctx=452, majf=0, minf=1 00:10:32.340 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:10:32.340 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:32.340 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:32.340 issued rwts: total=4475,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:32.340 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:32.340 job1: (groupid=0, jobs=1): err= 0: pid=1252442: Thu Nov 28 08:09:14 2024 00:10:32.341 read: IOPS=3059, BW=12.0MiB/s (12.5MB/s)(12.0MiB/1004msec) 00:10:32.341 slat (nsec): min=1082, max=23484k, avg=156724.81, stdev=1041445.56 00:10:32.341 clat (usec): min=7723, max=83222, avg=17872.43, stdev=12794.99 00:10:32.341 lat (usec): min=7969, max=83230, avg=18029.16, stdev=12889.84 00:10:32.341 clat percentiles (usec): 00:10:32.341 | 1.00th=[ 8848], 5.00th=[ 9634], 10.00th=[10028], 20.00th=[10552], 00:10:32.341 | 30.00th=[10945], 40.00th=[11338], 50.00th=[11863], 60.00th=[12387], 00:10:32.341 | 
70.00th=[17695], 80.00th=[22938], 90.00th=[40633], 95.00th=[46924], 00:10:32.341 | 99.00th=[59507], 99.50th=[83362], 99.90th=[83362], 99.95th=[83362], 00:10:32.341 | 99.99th=[83362] 00:10:32.341 write: IOPS=3513, BW=13.7MiB/s (14.4MB/s)(13.8MiB/1004msec); 0 zone resets 00:10:32.341 slat (nsec): min=1883, max=22967k, avg=142464.69, stdev=903430.14 00:10:32.341 clat (usec): min=2457, max=96408, avg=19574.45, stdev=16842.01 00:10:32.341 lat (usec): min=4095, max=96416, avg=19716.91, stdev=16920.82 00:10:32.341 clat percentiles (usec): 00:10:32.341 | 1.00th=[ 8029], 5.00th=[ 8455], 10.00th=[ 9765], 20.00th=[10290], 00:10:32.341 | 30.00th=[10814], 40.00th=[11338], 50.00th=[11994], 60.00th=[13435], 00:10:32.341 | 70.00th=[19530], 80.00th=[22152], 90.00th=[42206], 95.00th=[55313], 00:10:32.341 | 99.00th=[93848], 99.50th=[95945], 99.90th=[95945], 99.95th=[95945], 00:10:32.341 | 99.99th=[95945] 00:10:32.341 bw ( KiB/s): min= 7760, max=19448, per=19.80%, avg=13604.00, stdev=8264.66, samples=2 00:10:32.341 iops : min= 1940, max= 4862, avg=3401.00, stdev=2066.17, samples=2 00:10:32.341 lat (msec) : 4=0.02%, 10=10.80%, 20=62.33%, 50=22.56%, 100=4.29% 00:10:32.341 cpu : usr=1.99%, sys=2.99%, ctx=395, majf=0, minf=1 00:10:32.341 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:10:32.341 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:32.341 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:32.341 issued rwts: total=3072,3528,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:32.341 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:32.341 job2: (groupid=0, jobs=1): err= 0: pid=1252443: Thu Nov 28 08:09:14 2024 00:10:32.341 read: IOPS=4428, BW=17.3MiB/s (18.1MB/s)(18.1MiB/1049msec) 00:10:32.341 slat (nsec): min=1324, max=11912k, avg=105619.72, stdev=763091.41 00:10:32.341 clat (usec): min=4211, max=49476, avg=13443.04, stdev=4869.42 00:10:32.341 lat (usec): min=4218, max=49487, avg=13548.66, 
stdev=4916.00 00:10:32.341 clat percentiles (usec): 00:10:32.341 | 1.00th=[ 6718], 5.00th=[ 8979], 10.00th=[10028], 20.00th=[10945], 00:10:32.341 | 30.00th=[11207], 40.00th=[11600], 50.00th=[12387], 60.00th=[13435], 00:10:32.341 | 70.00th=[13960], 80.00th=[14484], 90.00th=[17957], 95.00th=[20055], 00:10:32.341 | 99.00th=[30802], 99.50th=[49021], 99.90th=[49546], 99.95th=[49546], 00:10:32.341 | 99.99th=[49546] 00:10:32.341 write: IOPS=4880, BW=19.1MiB/s (20.0MB/s)(20.0MiB/1049msec); 0 zone resets 00:10:32.341 slat (usec): min=2, max=21399, avg=93.20, stdev=585.38 00:10:32.341 clat (usec): min=527, max=51171, avg=13415.41, stdev=7513.46 00:10:32.341 lat (usec): min=542, max=59949, avg=13508.61, stdev=7549.49 00:10:32.341 clat percentiles (usec): 00:10:32.341 | 1.00th=[ 3851], 5.00th=[ 6128], 10.00th=[ 8029], 20.00th=[10552], 00:10:32.341 | 30.00th=[10814], 40.00th=[11207], 50.00th=[11338], 60.00th=[11600], 00:10:32.341 | 70.00th=[12649], 80.00th=[14222], 90.00th=[20317], 95.00th=[33817], 00:10:32.341 | 99.00th=[50594], 99.50th=[50594], 99.90th=[51119], 99.95th=[51119], 00:10:32.341 | 99.99th=[51119] 00:10:32.341 bw ( KiB/s): min=17216, max=23016, per=29.28%, avg=20116.00, stdev=4101.22, samples=2 00:10:32.341 iops : min= 4304, max= 5754, avg=5029.00, stdev=1025.30, samples=2 00:10:32.341 lat (usec) : 750=0.03% 00:10:32.341 lat (msec) : 4=0.61%, 10=13.20%, 20=77.74%, 50=7.76%, 100=0.66% 00:10:32.341 cpu : usr=3.82%, sys=5.25%, ctx=529, majf=0, minf=1 00:10:32.341 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:10:32.341 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:32.341 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:32.341 issued rwts: total=4645,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:32.341 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:32.341 job3: (groupid=0, jobs=1): err= 0: pid=1252444: Thu Nov 28 08:09:14 2024 00:10:32.341 read: IOPS=4585, 
BW=17.9MiB/s (18.8MB/s)(18.0MiB/1005msec) 00:10:32.341 slat (nsec): min=1327, max=16540k, avg=110461.57, stdev=676501.20 00:10:32.341 clat (usec): min=7464, max=48255, avg=13718.47, stdev=4943.69 00:10:32.341 lat (usec): min=7735, max=48260, avg=13828.93, stdev=5004.13 00:10:32.341 clat percentiles (usec): 00:10:32.341 | 1.00th=[ 8455], 5.00th=[ 9241], 10.00th=[10290], 20.00th=[11207], 00:10:32.341 | 30.00th=[11600], 40.00th=[12125], 50.00th=[12518], 60.00th=[12780], 00:10:32.341 | 70.00th=[13435], 80.00th=[14353], 90.00th=[18220], 95.00th=[25035], 00:10:32.341 | 99.00th=[36439], 99.50th=[41681], 99.90th=[48497], 99.95th=[48497], 00:10:32.341 | 99.99th=[48497] 00:10:32.341 write: IOPS=4736, BW=18.5MiB/s (19.4MB/s)(18.6MiB/1005msec); 0 zone resets 00:10:32.341 slat (nsec): min=1833, max=10877k, avg=97048.24, stdev=518771.38 00:10:32.341 clat (usec): min=4966, max=49483, avg=13470.49, stdev=4889.32 00:10:32.341 lat (usec): min=4970, max=54138, avg=13567.54, stdev=4917.41 00:10:32.341 clat percentiles (usec): 00:10:32.341 | 1.00th=[ 7373], 5.00th=[ 9765], 10.00th=[10945], 20.00th=[11338], 00:10:32.341 | 30.00th=[11600], 40.00th=[11994], 50.00th=[12256], 60.00th=[12780], 00:10:32.341 | 70.00th=[13435], 80.00th=[14746], 90.00th=[16909], 95.00th=[19530], 00:10:32.341 | 99.00th=[44827], 99.50th=[47449], 99.90th=[49546], 99.95th=[49546], 00:10:32.341 | 99.99th=[49546] 00:10:32.341 bw ( KiB/s): min=16384, max=20680, per=26.98%, avg=18532.00, stdev=3037.73, samples=2 00:10:32.341 iops : min= 4096, max= 5170, avg=4633.00, stdev=759.43, samples=2 00:10:32.341 lat (msec) : 10=6.96%, 20=86.74%, 50=6.30% 00:10:32.341 cpu : usr=3.78%, sys=5.47%, ctx=518, majf=0, minf=1 00:10:32.341 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:10:32.341 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:32.341 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:32.341 issued rwts: total=4608,4760,0,0 short=0,0,0,0 
dropped=0,0,0,0 00:10:32.341 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:32.341 00:10:32.341 Run status group 0 (all jobs): 00:10:32.341 READ: bw=62.6MiB/s (65.6MB/s), 12.0MiB/s-17.9MiB/s (12.5MB/s-18.8MB/s), io=65.6MiB (68.8MB), run=1004-1049msec 00:10:32.341 WRITE: bw=67.1MiB/s (70.3MB/s), 13.7MiB/s-19.1MiB/s (14.4MB/s-20.0MB/s), io=70.4MiB (73.8MB), run=1004-1049msec 00:10:32.341 00:10:32.341 Disk stats (read/write): 00:10:32.341 nvme0n1: ios=3825/4096, merge=0/0, ticks=27881/32090, in_queue=59971, util=97.70% 00:10:32.341 nvme0n2: ios=2822/3072, merge=0/0, ticks=14216/12677, in_queue=26893, util=98.27% 00:10:32.341 nvme0n3: ios=3865/4096, merge=0/0, ticks=45704/42961, in_queue=88665, util=99.06% 00:10:32.341 nvme0n4: ios=3677/4096, merge=0/0, ticks=26097/25456, in_queue=51553, util=89.61% 00:10:32.341 08:09:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:10:32.341 [global] 00:10:32.341 thread=1 00:10:32.341 invalidate=1 00:10:32.341 rw=randwrite 00:10:32.341 time_based=1 00:10:32.341 runtime=1 00:10:32.341 ioengine=libaio 00:10:32.341 direct=1 00:10:32.341 bs=4096 00:10:32.341 iodepth=128 00:10:32.341 norandommap=0 00:10:32.341 numjobs=1 00:10:32.341 00:10:32.341 verify_dump=1 00:10:32.341 verify_backlog=512 00:10:32.341 verify_state_save=0 00:10:32.341 do_verify=1 00:10:32.341 verify=crc32c-intel 00:10:32.341 [job0] 00:10:32.341 filename=/dev/nvme0n1 00:10:32.341 [job1] 00:10:32.341 filename=/dev/nvme0n2 00:10:32.341 [job2] 00:10:32.341 filename=/dev/nvme0n3 00:10:32.341 [job3] 00:10:32.341 filename=/dev/nvme0n4 00:10:32.341 Could not set queue depth (nvme0n1) 00:10:32.341 Could not set queue depth (nvme0n2) 00:10:32.341 Could not set queue depth (nvme0n3) 00:10:32.341 Could not set queue depth (nvme0n4) 00:10:32.600 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, 
ioengine=libaio, iodepth=128 00:10:32.600 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:32.600 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:32.600 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:32.600 fio-3.35 00:10:32.600 Starting 4 threads 00:10:33.972 00:10:33.972 job0: (groupid=0, jobs=1): err= 0: pid=1252814: Thu Nov 28 08:09:16 2024 00:10:33.972 read: IOPS=4571, BW=17.9MiB/s (18.7MB/s)(18.0MiB/1008msec) 00:10:33.972 slat (nsec): min=1351, max=8504.7k, avg=87123.19, stdev=551050.60 00:10:33.972 clat (usec): min=3578, max=21949, avg=10580.11, stdev=2750.90 00:10:33.972 lat (usec): min=3584, max=21957, avg=10667.23, stdev=2797.47 00:10:33.972 clat percentiles (usec): 00:10:33.972 | 1.00th=[ 5276], 5.00th=[ 7242], 10.00th=[ 7898], 20.00th=[ 8225], 00:10:33.972 | 30.00th=[ 8717], 40.00th=[ 9765], 50.00th=[10159], 60.00th=[10552], 00:10:33.972 | 70.00th=[11600], 80.00th=[13173], 90.00th=[14353], 95.00th=[15795], 00:10:33.972 | 99.00th=[18744], 99.50th=[19792], 99.90th=[21890], 99.95th=[21890], 00:10:33.972 | 99.99th=[21890] 00:10:33.972 write: IOPS=4887, BW=19.1MiB/s (20.0MB/s)(19.2MiB/1008msec); 0 zone resets 00:10:33.972 slat (nsec): min=1994, max=10299k, avg=115313.81, stdev=644142.43 00:10:33.972 clat (usec): min=358, max=107367, avg=16060.78, stdev=16970.77 00:10:33.972 lat (usec): min=1030, max=107382, avg=16176.09, stdev=17072.83 00:10:33.973 clat percentiles (msec): 00:10:33.973 | 1.00th=[ 3], 5.00th=[ 6], 10.00th=[ 8], 20.00th=[ 9], 00:10:33.973 | 30.00th=[ 9], 40.00th=[ 10], 50.00th=[ 11], 60.00th=[ 11], 00:10:33.973 | 70.00th=[ 13], 80.00th=[ 18], 90.00th=[ 35], 95.00th=[ 55], 00:10:33.973 | 99.00th=[ 95], 99.50th=[ 104], 99.90th=[ 108], 99.95th=[ 108], 00:10:33.973 | 99.99th=[ 108] 00:10:33.973 bw ( KiB/s): min=11696, max=26704, per=28.45%, 
avg=19200.00, stdev=10612.26, samples=2 00:10:33.973 iops : min= 2924, max= 6676, avg=4800.00, stdev=2653.06, samples=2 00:10:33.973 lat (usec) : 500=0.01% 00:10:33.973 lat (msec) : 2=0.29%, 4=0.94%, 10=44.72%, 20=45.72%, 50=4.92% 00:10:33.973 lat (msec) : 100=2.99%, 250=0.41% 00:10:33.973 cpu : usr=2.48%, sys=6.26%, ctx=574, majf=0, minf=1 00:10:33.973 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:10:33.973 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:33.973 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:33.973 issued rwts: total=4608,4927,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:33.973 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:33.973 job1: (groupid=0, jobs=1): err= 0: pid=1252815: Thu Nov 28 08:09:16 2024 00:10:33.973 read: IOPS=5059, BW=19.8MiB/s (20.7MB/s)(20.0MiB/1012msec) 00:10:33.973 slat (nsec): min=1059, max=12618k, avg=85776.47, stdev=641141.75 00:10:33.973 clat (usec): min=3668, max=30763, avg=11374.87, stdev=3636.62 00:10:33.973 lat (usec): min=3688, max=30770, avg=11460.64, stdev=3686.03 00:10:33.973 clat percentiles (usec): 00:10:33.973 | 1.00th=[ 3916], 5.00th=[ 6718], 10.00th=[ 7701], 20.00th=[ 8848], 00:10:33.973 | 30.00th=[10159], 40.00th=[10552], 50.00th=[11076], 60.00th=[11338], 00:10:33.973 | 70.00th=[12256], 80.00th=[12911], 90.00th=[15139], 95.00th=[18220], 00:10:33.973 | 99.00th=[26870], 99.50th=[28705], 99.90th=[30802], 99.95th=[30802], 00:10:33.973 | 99.99th=[30802] 00:10:33.973 write: IOPS=5223, BW=20.4MiB/s (21.4MB/s)(20.6MiB/1012msec); 0 zone resets 00:10:33.973 slat (nsec): min=1777, max=13928k, avg=95314.57, stdev=571112.19 00:10:33.973 clat (usec): min=2658, max=41063, avg=13218.50, stdev=7781.92 00:10:33.973 lat (usec): min=2668, max=41071, avg=13313.82, stdev=7839.09 00:10:33.973 clat percentiles (usec): 00:10:33.973 | 1.00th=[ 4817], 5.00th=[ 7308], 10.00th=[ 8029], 20.00th=[ 8717], 00:10:33.973 | 30.00th=[ 9503], 
40.00th=[ 9896], 50.00th=[10290], 60.00th=[10421], 00:10:33.973 | 70.00th=[11076], 80.00th=[18482], 90.00th=[25297], 95.00th=[33424], 00:10:33.973 | 99.00th=[39584], 99.50th=[40633], 99.90th=[41157], 99.95th=[41157], 00:10:33.973 | 99.99th=[41157] 00:10:33.973 bw ( KiB/s): min=16696, max=24576, per=30.58%, avg=20636.00, stdev=5572.00, samples=2 00:10:33.973 iops : min= 4174, max= 6144, avg=5159.00, stdev=1393.00, samples=2 00:10:33.973 lat (msec) : 4=0.85%, 10=35.49%, 20=53.99%, 50=9.68% 00:10:33.973 cpu : usr=3.96%, sys=5.14%, ctx=463, majf=0, minf=1 00:10:33.973 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:10:33.973 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:33.973 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:33.973 issued rwts: total=5120,5286,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:33.973 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:33.973 job2: (groupid=0, jobs=1): err= 0: pid=1252816: Thu Nov 28 08:09:16 2024 00:10:33.973 read: IOPS=2794, BW=10.9MiB/s (11.4MB/s)(11.4MiB/1043msec) 00:10:33.973 slat (nsec): min=1185, max=34671k, avg=134523.12, stdev=1147726.07 00:10:33.973 clat (usec): min=5276, max=67368, avg=19339.76, stdev=14095.56 00:10:33.973 lat (usec): min=5289, max=67380, avg=19474.29, stdev=14158.15 00:10:33.973 clat percentiles (usec): 00:10:33.973 | 1.00th=[ 8455], 5.00th=[10290], 10.00th=[10814], 20.00th=[10945], 00:10:33.973 | 30.00th=[11076], 40.00th=[11600], 50.00th=[12125], 60.00th=[13042], 00:10:33.973 | 70.00th=[22152], 80.00th=[23987], 90.00th=[44827], 95.00th=[53740], 00:10:33.973 | 99.00th=[67634], 99.50th=[67634], 99.90th=[67634], 99.95th=[67634], 00:10:33.973 | 99.99th=[67634] 00:10:33.973 write: IOPS=2947, BW=11.5MiB/s (12.1MB/s)(12.0MiB/1043msec); 0 zone resets 00:10:33.973 slat (usec): min=2, max=27119, avg=189.76, stdev=1574.78 00:10:33.973 clat (usec): min=3839, max=84104, avg=23205.59, stdev=20996.77 00:10:33.973 
lat (usec): min=3850, max=84125, avg=23395.35, stdev=21121.34 00:10:33.973 clat percentiles (usec): 00:10:33.973 | 1.00th=[ 6652], 5.00th=[ 7635], 10.00th=[ 8717], 20.00th=[ 9896], 00:10:33.973 | 30.00th=[10814], 40.00th=[11076], 50.00th=[11994], 60.00th=[15139], 00:10:33.973 | 70.00th=[22676], 80.00th=[32637], 90.00th=[60031], 95.00th=[72877], 00:10:33.973 | 99.00th=[84411], 99.50th=[84411], 99.90th=[84411], 99.95th=[84411], 00:10:33.973 | 99.99th=[84411] 00:10:33.973 bw ( KiB/s): min= 7176, max=17416, per=18.22%, avg=12296.00, stdev=7240.77, samples=2 00:10:33.973 iops : min= 1794, max= 4354, avg=3074.00, stdev=1810.19, samples=2 00:10:33.973 lat (msec) : 4=0.10%, 10=11.62%, 20=54.58%, 50=22.49%, 100=11.20% 00:10:33.973 cpu : usr=3.07%, sys=3.93%, ctx=143, majf=0, minf=1 00:10:33.973 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=98.9% 00:10:33.973 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:33.973 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:33.973 issued rwts: total=2915,3074,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:33.973 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:33.973 job3: (groupid=0, jobs=1): err= 0: pid=1252817: Thu Nov 28 08:09:16 2024 00:10:33.973 read: IOPS=4055, BW=15.8MiB/s (16.6MB/s)(16.0MiB/1010msec) 00:10:33.973 slat (nsec): min=1423, max=11961k, avg=110159.55, stdev=788301.66 00:10:33.973 clat (usec): min=4250, max=30774, avg=13615.46, stdev=3410.00 00:10:33.973 lat (usec): min=4260, max=30777, avg=13725.62, stdev=3471.83 00:10:33.973 clat percentiles (usec): 00:10:33.973 | 1.00th=[ 6587], 5.00th=[ 8717], 10.00th=[10159], 20.00th=[11338], 00:10:33.973 | 30.00th=[11600], 40.00th=[12125], 50.00th=[13304], 60.00th=[14222], 00:10:33.973 | 70.00th=[15008], 80.00th=[16057], 90.00th=[17433], 95.00th=[19792], 00:10:33.973 | 99.00th=[25297], 99.50th=[29230], 99.90th=[30802], 99.95th=[30802], 00:10:33.973 | 99.99th=[30802] 00:10:33.973 write: 
IOPS=4267, BW=16.7MiB/s (17.5MB/s)(16.8MiB/1010msec); 0 zone resets 00:10:33.973 slat (usec): min=2, max=11942, avg=121.18, stdev=751.59 00:10:33.973 clat (usec): min=1606, max=102789, avg=16789.68, stdev=15998.17 00:10:33.973 lat (usec): min=1620, max=102802, avg=16910.85, stdev=16114.38 00:10:33.973 clat percentiles (msec): 00:10:33.973 | 1.00th=[ 4], 5.00th=[ 6], 10.00th=[ 8], 20.00th=[ 10], 00:10:33.973 | 30.00th=[ 11], 40.00th=[ 12], 50.00th=[ 12], 60.00th=[ 13], 00:10:33.973 | 70.00th=[ 15], 80.00th=[ 20], 90.00th=[ 25], 95.00th=[ 55], 00:10:33.973 | 99.00th=[ 93], 99.50th=[ 100], 99.90th=[ 104], 99.95th=[ 104], 00:10:33.973 | 99.99th=[ 104] 00:10:33.973 bw ( KiB/s): min=14128, max=19336, per=24.79%, avg=16732.00, stdev=3682.61, samples=2 00:10:33.973 iops : min= 3532, max= 4834, avg=4183.00, stdev=920.65, samples=2 00:10:33.973 lat (msec) : 2=0.02%, 4=0.64%, 10=15.70%, 20=71.13%, 50=9.85% 00:10:33.973 lat (msec) : 100=2.40%, 250=0.25% 00:10:33.973 cpu : usr=3.37%, sys=5.55%, ctx=421, majf=0, minf=1 00:10:33.973 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:10:33.973 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:33.973 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:33.973 issued rwts: total=4096,4310,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:33.973 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:33.973 00:10:33.973 Run status group 0 (all jobs): 00:10:33.973 READ: bw=62.7MiB/s (65.7MB/s), 10.9MiB/s-19.8MiB/s (11.4MB/s-20.7MB/s), io=65.4MiB (68.6MB), run=1008-1043msec 00:10:33.973 WRITE: bw=65.9MiB/s (69.1MB/s), 11.5MiB/s-20.4MiB/s (12.1MB/s-21.4MB/s), io=68.7MiB (72.1MB), run=1008-1043msec 00:10:33.973 00:10:33.974 Disk stats (read/write): 00:10:33.974 nvme0n1: ios=4146/4511, merge=0/0, ticks=28714/40053, in_queue=68767, util=95.29% 00:10:33.974 nvme0n2: ios=4621/4695, merge=0/0, ticks=36433/38740, in_queue=75173, util=87.02% 00:10:33.974 nvme0n3: 
ios=2065/2560, merge=0/0, ticks=23045/27153, in_queue=50198, util=100.00% 00:10:33.974 nvme0n4: ios=3115/3559, merge=0/0, ticks=41816/63643, in_queue=105459, util=93.30% 00:10:33.974 08:09:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:10:33.974 08:09:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=1253049 00:10:33.974 08:09:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:10:33.974 08:09:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:10:33.974 [global] 00:10:33.974 thread=1 00:10:33.974 invalidate=1 00:10:33.974 rw=read 00:10:33.974 time_based=1 00:10:33.974 runtime=10 00:10:33.974 ioengine=libaio 00:10:33.974 direct=1 00:10:33.974 bs=4096 00:10:33.974 iodepth=1 00:10:33.974 norandommap=1 00:10:33.974 numjobs=1 00:10:33.974 00:10:33.974 [job0] 00:10:33.974 filename=/dev/nvme0n1 00:10:33.974 [job1] 00:10:33.974 filename=/dev/nvme0n2 00:10:33.974 [job2] 00:10:33.974 filename=/dev/nvme0n3 00:10:33.974 [job3] 00:10:33.974 filename=/dev/nvme0n4 00:10:33.974 Could not set queue depth (nvme0n1) 00:10:33.974 Could not set queue depth (nvme0n2) 00:10:33.974 Could not set queue depth (nvme0n3) 00:10:33.974 Could not set queue depth (nvme0n4) 00:10:34.231 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:34.231 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:34.231 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:34.231 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:34.231 fio-3.35 00:10:34.231 Starting 4 threads 00:10:37.510 08:09:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:10:37.510 08:09:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:10:37.510 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=5750784, buflen=4096 00:10:37.510 fio: pid=1253195, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:37.510 08:09:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:37.510 08:09:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:10:37.510 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=2646016, buflen=4096 00:10:37.510 fio: pid=1253194, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:37.510 08:09:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:37.510 08:09:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:10:37.510 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=315392, buflen=4096 00:10:37.510 fio: pid=1253192, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:37.775 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=5636096, buflen=4096 00:10:37.775 fio: pid=1253193, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:37.775 08:09:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:37.775 08:09:19 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:10:37.775 00:10:37.775 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1253192: Thu Nov 28 08:09:19 2024 00:10:37.775 read: IOPS=24, BW=98.2KiB/s (101kB/s)(308KiB/3138msec) 00:10:37.775 slat (nsec): min=10151, max=67161, avg=22831.33, stdev=6610.22 00:10:37.775 clat (usec): min=396, max=41101, avg=40439.98, stdev=4623.85 00:10:37.775 lat (usec): min=452, max=41123, avg=40462.83, stdev=4620.05 00:10:37.775 clat percentiles (usec): 00:10:37.775 | 1.00th=[ 396], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:10:37.775 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:10:37.775 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:10:37.775 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:10:37.775 | 99.99th=[41157] 00:10:37.775 bw ( KiB/s): min= 93, max= 104, per=2.34%, avg=98.17, stdev= 4.67, samples=6 00:10:37.775 iops : min= 23, max= 26, avg=24.50, stdev= 1.22, samples=6 00:10:37.775 lat (usec) : 500=1.28% 00:10:37.775 lat (msec) : 50=97.44% 00:10:37.775 cpu : usr=0.13%, sys=0.00%, ctx=80, majf=0, minf=1 00:10:37.775 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:37.775 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:37.775 complete : 0=1.3%, 4=98.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:37.775 issued rwts: total=78,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:37.775 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:37.775 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1253193: Thu Nov 28 08:09:19 2024 00:10:37.775 read: IOPS=411, BW=1645KiB/s (1684kB/s)(5504KiB/3346msec) 00:10:37.775 slat (usec): min=6, max=12705, avg=17.54, 
stdev=342.20 00:10:37.775 clat (usec): min=193, max=42028, avg=2397.03, stdev=9142.66 00:10:37.775 lat (usec): min=200, max=53981, avg=2414.56, stdev=9191.45 00:10:37.775 clat percentiles (usec): 00:10:37.775 | 1.00th=[ 212], 5.00th=[ 229], 10.00th=[ 235], 20.00th=[ 239], 00:10:37.775 | 30.00th=[ 245], 40.00th=[ 247], 50.00th=[ 251], 60.00th=[ 253], 00:10:37.775 | 70.00th=[ 258], 80.00th=[ 262], 90.00th=[ 269], 95.00th=[40633], 00:10:37.775 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:10:37.775 | 99.99th=[42206] 00:10:37.775 bw ( KiB/s): min= 93, max=10440, per=43.53%, avg=1823.50, stdev=4221.21, samples=6 00:10:37.775 iops : min= 23, max= 2610, avg=455.83, stdev=1055.32, samples=6 00:10:37.775 lat (usec) : 250=48.51%, 500=46.11%, 750=0.07% 00:10:37.775 lat (msec) : 50=5.23% 00:10:37.775 cpu : usr=0.21%, sys=0.30%, ctx=1379, majf=0, minf=2 00:10:37.775 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:37.775 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:37.775 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:37.775 issued rwts: total=1377,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:37.775 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:37.775 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1253194: Thu Nov 28 08:09:19 2024 00:10:37.775 read: IOPS=220, BW=879KiB/s (900kB/s)(2584KiB/2939msec) 00:10:37.775 slat (nsec): min=6650, max=35025, avg=9240.41, stdev=5028.29 00:10:37.775 clat (usec): min=198, max=42030, avg=4505.19, stdev=12550.79 00:10:37.775 lat (usec): min=206, max=42053, avg=4514.41, stdev=12555.50 00:10:37.775 clat percentiles (usec): 00:10:37.775 | 1.00th=[ 204], 5.00th=[ 210], 10.00th=[ 215], 20.00th=[ 223], 00:10:37.775 | 30.00th=[ 229], 40.00th=[ 235], 50.00th=[ 241], 60.00th=[ 247], 00:10:37.775 | 70.00th=[ 258], 80.00th=[ 265], 90.00th=[40633], 95.00th=[41157], 
00:10:37.775 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:10:37.775 | 99.99th=[42206] 00:10:37.775 bw ( KiB/s): min= 96, max= 4696, per=24.29%, avg=1017.60, stdev=2056.29, samples=5 00:10:37.775 iops : min= 24, max= 1174, avg=254.40, stdev=514.07, samples=5 00:10:37.775 lat (usec) : 250=62.60%, 500=26.89% 00:10:37.775 lat (msec) : 50=10.36% 00:10:37.775 cpu : usr=0.14%, sys=0.17%, ctx=647, majf=0, minf=2 00:10:37.775 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:37.775 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:37.775 complete : 0=0.2%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:37.775 issued rwts: total=647,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:37.775 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:37.775 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1253195: Thu Nov 28 08:09:19 2024 00:10:37.775 read: IOPS=514, BW=2056KiB/s (2106kB/s)(5616KiB/2731msec) 00:10:37.775 slat (nsec): min=6761, max=30498, avg=8552.10, stdev=3210.94 00:10:37.775 clat (usec): min=195, max=42006, avg=1916.99, stdev=8161.77 00:10:37.776 lat (usec): min=205, max=42028, avg=1925.53, stdev=8164.48 00:10:37.776 clat percentiles (usec): 00:10:37.776 | 1.00th=[ 200], 5.00th=[ 206], 10.00th=[ 208], 20.00th=[ 212], 00:10:37.776 | 30.00th=[ 217], 40.00th=[ 219], 50.00th=[ 223], 60.00th=[ 227], 00:10:37.776 | 70.00th=[ 231], 80.00th=[ 235], 90.00th=[ 243], 95.00th=[ 258], 00:10:37.776 | 99.00th=[41681], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:10:37.776 | 99.99th=[42206] 00:10:37.776 bw ( KiB/s): min= 96, max= 9488, per=47.19%, avg=1976.00, stdev=4199.34, samples=5 00:10:37.776 iops : min= 24, max= 2372, avg=494.00, stdev=1049.83, samples=5 00:10:37.776 lat (usec) : 250=93.88%, 500=1.92% 00:10:37.776 lat (msec) : 50=4.13% 00:10:37.776 cpu : usr=0.11%, sys=0.55%, ctx=1408, majf=0, minf=2 
00:10:37.776 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:37.776 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:37.776 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:37.776 issued rwts: total=1405,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:37.776 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:37.776 00:10:37.776 Run status group 0 (all jobs): 00:10:37.776 READ: bw=4188KiB/s (4288kB/s), 98.2KiB/s-2056KiB/s (101kB/s-2106kB/s), io=13.7MiB (14.3MB), run=2731-3346msec 00:10:37.776 00:10:37.776 Disk stats (read/write): 00:10:37.776 nvme0n1: ios=76/0, merge=0/0, ticks=3075/0, in_queue=3075, util=95.75% 00:10:37.776 nvme0n2: ios=1370/0, merge=0/0, ticks=3050/0, in_queue=3050, util=95.76% 00:10:37.776 nvme0n3: ios=644/0, merge=0/0, ticks=2828/0, in_queue=2828, util=96.52% 00:10:37.776 nvme0n4: ios=1448/0, merge=0/0, ticks=3770/0, in_queue=3770, util=99.22% 00:10:38.034 08:09:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:38.034 08:09:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:10:38.292 08:09:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:38.292 08:09:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:10:38.550 08:09:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:38.550 08:09:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete 
Malloc5 00:10:38.550 08:09:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:38.550 08:09:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:10:38.808 08:09:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:10:38.808 08:09:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 1253049 00:10:38.808 08:09:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:10:38.808 08:09:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:39.066 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:39.066 08:09:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:39.066 08:09:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 00:10:39.066 08:09:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:10:39.066 08:09:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:39.066 08:09:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:10:39.066 08:09:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:39.066 08:09:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 00:10:39.066 08:09:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:10:39.067 08:09:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 
00:10:39.067 nvmf hotplug test: fio failed as expected 00:10:39.067 08:09:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:39.326 08:09:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:10:39.326 08:09:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:10:39.326 08:09:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:10:39.326 08:09:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:10:39.326 08:09:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:10:39.326 08:09:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:39.326 08:09:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:10:39.326 08:09:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:39.326 08:09:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:10:39.326 08:09:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:39.327 08:09:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:39.327 rmmod nvme_tcp 00:10:39.327 rmmod nvme_fabrics 00:10:39.327 rmmod nvme_keyring 00:10:39.327 08:09:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:39.327 08:09:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:10:39.327 08:09:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:10:39.327 08:09:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 1249792 ']' 
00:10:39.327 08:09:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 1249792 00:10:39.327 08:09:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 1249792 ']' 00:10:39.327 08:09:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 1249792 00:10:39.327 08:09:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname 00:10:39.327 08:09:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:39.327 08:09:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1249792 00:10:39.327 08:09:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:39.327 08:09:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:39.327 08:09:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1249792' 00:10:39.327 killing process with pid 1249792 00:10:39.327 08:09:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 1249792 00:10:39.327 08:09:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 1249792 00:10:39.586 08:09:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:39.586 08:09:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:39.586 08:09:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:39.586 08:09:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:10:39.586 08:09:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 00:10:39.586 08:09:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 
-- # grep -v SPDK_NVMF 00:10:39.586 08:09:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:10:39.586 08:09:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:39.586 08:09:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:39.586 08:09:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:39.586 08:09:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:39.586 08:09:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:41.485 08:09:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:41.485 00:10:41.485 real 0m26.496s 00:10:41.485 user 1m47.113s 00:10:41.485 sys 0m7.759s 00:10:41.485 08:09:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:41.485 08:09:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:41.485 ************************************ 00:10:41.485 END TEST nvmf_fio_target 00:10:41.485 ************************************ 00:10:41.485 08:09:23 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:10:41.485 08:09:23 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:41.485 08:09:23 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:41.485 08:09:23 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:41.745 ************************************ 00:10:41.745 START TEST nvmf_bdevio 00:10:41.745 ************************************ 00:10:41.745 08:09:23 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:10:41.745 * Looking for test storage... 00:10:41.745 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:41.745 08:09:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:41.745 08:09:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lcov --version 00:10:41.745 08:09:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:41.745 08:09:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:41.745 08:09:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:41.745 08:09:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:41.745 08:09:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:41.745 08:09:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:10:41.745 08:09:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:10:41.745 08:09:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:10:41.745 08:09:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:10:41.745 08:09:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:10:41.745 08:09:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:10:41.745 08:09:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:10:41.745 08:09:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:41.745 08:09:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
scripts/common.sh@344 -- # case "$op" in 00:10:41.745 08:09:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:10:41.745 08:09:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:41.745 08:09:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:41.745 08:09:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:10:41.745 08:09:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:10:41.745 08:09:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:41.745 08:09:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:10:41.745 08:09:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:10:41.745 08:09:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:10:41.745 08:09:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:10:41.745 08:09:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:41.745 08:09:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:10:41.745 08:09:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:10:41.745 08:09:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:41.745 08:09:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:41.745 08:09:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:10:41.745 08:09:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:41.745 08:09:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1706 -- # export 
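The `lt`/`cmp_versions` trace above splits each dotted version on `.` into an array and compares field by field, padding the shorter one with zeros. A minimal standalone sketch of the same idea (the function name `ver_lt` is ours, not SPDK's):

```shell
#!/usr/bin/env bash
# Numeric field-by-field comparison of dotted version strings.
# Returns 0 (true) when $1 < $2 -- mirrors the cmp_versions logic traced above.
ver_lt() {
    local IFS=.
    local -a a=($1) b=($2)                  # split on dots via IFS
    local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
    for (( i = 0; i < n; i++ )); do
        local x=${a[i]:-0} y=${b[i]:-0}     # missing fields count as 0
        (( x < y )) && return 0
        (( x > y )) && return 1
    done
    return 1                                # equal is not "less than"
}

ver_lt 1.15 2 && echo "1.15 < 2"
ver_lt 2.1 2.0 || echo "2.1 >= 2.0"
```

Comparing numerically per field (rather than as strings) is what makes `1.9 < 1.10` come out true, which a plain lexical comparison would get wrong.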
'LCOV_OPTS= 00:10:41.745 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:41.745 --rc genhtml_branch_coverage=1 00:10:41.745 --rc genhtml_function_coverage=1 00:10:41.745 --rc genhtml_legend=1 00:10:41.745 --rc geninfo_all_blocks=1 00:10:41.745 --rc geninfo_unexecuted_blocks=1 00:10:41.745 00:10:41.745 ' 00:10:41.745 08:09:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:41.745 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:41.745 --rc genhtml_branch_coverage=1 00:10:41.745 --rc genhtml_function_coverage=1 00:10:41.745 --rc genhtml_legend=1 00:10:41.745 --rc geninfo_all_blocks=1 00:10:41.745 --rc geninfo_unexecuted_blocks=1 00:10:41.745 00:10:41.745 ' 00:10:41.745 08:09:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:41.745 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:41.745 --rc genhtml_branch_coverage=1 00:10:41.745 --rc genhtml_function_coverage=1 00:10:41.745 --rc genhtml_legend=1 00:10:41.745 --rc geninfo_all_blocks=1 00:10:41.745 --rc geninfo_unexecuted_blocks=1 00:10:41.745 00:10:41.745 ' 00:10:41.745 08:09:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:41.745 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:41.745 --rc genhtml_branch_coverage=1 00:10:41.745 --rc genhtml_function_coverage=1 00:10:41.745 --rc genhtml_legend=1 00:10:41.745 --rc geninfo_all_blocks=1 00:10:41.745 --rc geninfo_unexecuted_blocks=1 00:10:41.745 00:10:41.745 ' 00:10:41.745 08:09:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:41.745 08:09:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:10:41.745 08:09:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:41.745 08:09:23 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:41.745 08:09:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:41.745 08:09:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:41.745 08:09:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:41.745 08:09:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:41.745 08:09:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:41.745 08:09:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:41.745 08:09:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:41.745 08:09:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:41.745 08:09:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:10:41.745 08:09:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:10:41.745 08:09:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:41.745 08:09:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:41.745 08:09:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:41.745 08:09:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:41.745 08:09:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:41.745 08:09:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@15 -- # 
shopt -s extglob 00:10:41.745 08:09:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:41.745 08:09:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:41.746 08:09:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:41.746 08:09:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:41.746 08:09:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:41.746 08:09:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:41.746 08:09:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:10:41.746 08:09:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:41.746 08:09:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:10:41.746 08:09:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:41.746 08:09:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:41.746 08:09:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:41.746 08:09:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:41.746 08:09:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
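The `paths/export.sh` lines above re-prepend the same toolchain directories every time the script is sourced, which is why the exported PATH contains each `/opt/...` entry many times over. A hedged sketch of deduplicating such a PATH while preserving first-occurrence order (the helper `dedup_path` is illustrative, not part of SPDK):

```shell
# Remove duplicate entries from a PATH-like colon-separated string,
# keeping the first occurrence of each entry.
dedup_path() {
    local entry out= seen=:
    local IFS=:
    for entry in $1; do
        [[ $seen == *":$entry:"* ]] && continue   # already emitted
        seen+="$entry:"
        out+="${out:+:}$entry"                    # join with ':' after the first
    done
    printf '%s\n' "$out"
}

dedup_path "/usr/bin:/usr/local/bin:/usr/bin:/bin"
```

Wrapping each entry in colons before the substring test avoids false matches between entries like `/bin` and `/usr/bin`.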
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:41.746 08:09:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:41.746 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:41.746 08:09:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:41.746 08:09:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:41.746 08:09:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:41.746 08:09:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:41.746 08:09:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:41.746 08:09:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:10:41.746 08:09:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:41.746 08:09:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:41.746 08:09:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:41.746 08:09:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:41.746 08:09:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:41.746 08:09:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:41.746 08:09:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:41.746 08:09:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:41.746 08:09:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:41.746 08:09:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:41.746 08:09:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:10:41.746 08:09:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:47.009 08:09:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:47.009 08:09:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:10:47.009 08:09:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:47.009 08:09:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:47.009 08:09:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:47.009 08:09:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:47.009 08:09:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:47.009 08:09:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:10:47.009 08:09:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:47.009 08:09:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:10:47.009 08:09:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:10:47.009 08:09:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:10:47.009 08:09:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:10:47.009 08:09:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:10:47.009 08:09:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:10:47.009 08:09:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:47.009 08:09:29 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:47.009 08:09:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:47.009 08:09:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:47.009 08:09:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:47.009 08:09:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:47.009 08:09:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:47.009 08:09:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:47.009 08:09:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:47.009 08:09:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:47.009 08:09:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:47.009 08:09:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:47.009 08:09:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:47.009 08:09:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:47.009 08:09:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:47.009 08:09:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:47.009 08:09:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:47.009 08:09:29 
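`gather_supported_nvmf_pci_devs` above matches known Intel (e810/x722) and Mellanox device IDs, then resolves each PCI address to its kernel network interface through `/sys/bus/pci/devices/$pci/net/`. A minimal sketch of that sysfs lookup (the second parameter overriding the sysfs root is our addition, for testability; the path layout is standard Linux sysfs):

```shell
# List the network interface names bound to a PCI address, using the same
# /sys/bus/pci/devices/$pci/net/* glob as the trace above.
pci_net_ifaces() {
    local pci=$1 root=${2:-/sys/bus/pci/devices} dev
    for dev in "$root/$pci/net/"*; do
        [[ -e $dev ]] || continue        # glob matched nothing
        printf '%s\n' "${dev##*/}"       # keep only the interface name
    done
}

# Example (on this test rig): pci_net_ifaces 0000:86:00.0  ->  cvl_0_0
```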
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:47.009 08:09:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:47.009 08:09:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:10:47.009 Found 0000:86:00.0 (0x8086 - 0x159b) 00:10:47.009 08:09:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:47.009 08:09:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:47.009 08:09:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:47.009 08:09:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:47.009 08:09:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:47.009 08:09:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:47.009 08:09:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:10:47.009 Found 0000:86:00.1 (0x8086 - 0x159b) 00:10:47.009 08:09:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:47.009 08:09:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:47.009 08:09:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:47.009 08:09:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:47.009 08:09:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:47.009 08:09:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:47.009 08:09:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:47.009 
08:09:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:47.009 08:09:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:47.009 08:09:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:47.009 08:09:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:47.009 08:09:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:47.009 08:09:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:47.010 08:09:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:47.010 08:09:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:47.010 08:09:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:10:47.010 Found net devices under 0000:86:00.0: cvl_0_0 00:10:47.010 08:09:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:47.010 08:09:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:47.010 08:09:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:47.010 08:09:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:47.010 08:09:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:47.010 08:09:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:47.010 08:09:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:47.010 08:09:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:47.010 08:09:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:10:47.010 Found net devices under 0000:86:00.1: cvl_0_1 00:10:47.010 08:09:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:47.010 08:09:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:47.010 08:09:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # is_hw=yes 00:10:47.010 08:09:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:47.010 08:09:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:47.010 08:09:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:47.010 08:09:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:47.010 08:09:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:47.010 08:09:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:47.010 08:09:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:47.010 08:09:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:47.010 08:09:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:47.010 08:09:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:47.010 08:09:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:47.010 08:09:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:47.010 08:09:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@265 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:47.010 08:09:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:47.010 08:09:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:47.010 08:09:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:47.010 08:09:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:47.010 08:09:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:47.268 08:09:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:47.268 08:09:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:47.268 08:09:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:47.268 08:09:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:47.268 08:09:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:47.268 08:09:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:47.268 08:09:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:47.268 08:09:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:47.268 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:47.268 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.407 ms 00:10:47.268 00:10:47.268 --- 10.0.0.2 ping statistics --- 00:10:47.268 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:47.268 rtt min/avg/max/mdev = 0.407/0.407/0.407/0.000 ms 00:10:47.268 08:09:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:47.268 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:47.268 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.176 ms 00:10:47.268 00:10:47.268 --- 10.0.0.1 ping statistics --- 00:10:47.268 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:47.268 rtt min/avg/max/mdev = 0.176/0.176/0.176/0.000 ms 00:10:47.268 08:09:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:47.268 08:09:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@450 -- # return 0 00:10:47.268 08:09:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:47.268 08:09:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:47.268 08:09:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:47.268 08:09:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:47.268 08:09:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:47.268 08:09:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:47.268 08:09:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:47.268 08:09:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:10:47.268 08:09:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:47.268 08:09:29 
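The `nvmf_tcp_init` sequence traced at 08:09:29 above builds the test topology: the target NIC is moved into its own network namespace, each side gets a 10.0.0.x/24 address, links are brought up, port 4420 is opened in iptables, and both directions are pinged. Condensed into the underlying commands (requires root and the physical interfaces from this rig; names and addresses taken directly from the log):

```shell
# Recreate the target/initiator split from the log (run as root).
ip netns add cvl_0_0_ns_spdk                        # target namespace
ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # move target NIC into it
ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator side (host)
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                  # host -> namespace
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # namespace -> host
```

The two pings are the readiness check: both must succeed before `nvmf_tgt` is launched inside the namespace.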
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:47.268 08:09:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:47.268 08:09:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=1257494 00:10:47.268 08:09:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 1257494 00:10:47.268 08:09:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:10:47.268 08:09:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 1257494 ']' 00:10:47.268 08:09:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:47.268 08:09:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:47.268 08:09:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:47.268 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:47.268 08:09:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:47.268 08:09:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:47.268 [2024-11-28 08:09:29.506362] Starting SPDK v25.01-pre git sha1 27aaaa748 / DPDK 24.03.0 initialization... 
00:10:47.268 [2024-11-28 08:09:29.506410] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:47.525 [2024-11-28 08:09:29.574822] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:47.525 [2024-11-28 08:09:29.617576] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:47.526 [2024-11-28 08:09:29.617615] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:47.526 [2024-11-28 08:09:29.617623] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:47.526 [2024-11-28 08:09:29.617629] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:47.526 [2024-11-28 08:09:29.617636] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:10:47.526 [2024-11-28 08:09:29.619252] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:10:47.526 [2024-11-28 08:09:29.619361] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:10:47.526 [2024-11-28 08:09:29.619468] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:47.526 [2024-11-28 08:09:29.619469] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:10:47.526 08:09:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:47.526 08:09:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:10:47.526 08:09:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:47.526 08:09:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:47.526 08:09:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:47.526 08:09:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:47.526 08:09:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:47.526 08:09:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.526 08:09:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:47.526 [2024-11-28 08:09:29.756828] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:47.526 08:09:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.526 08:09:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:47.526 08:09:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.526 08:09:29 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:47.782 Malloc0 00:10:47.782 08:09:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.782 08:09:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:47.782 08:09:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.782 08:09:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:47.782 08:09:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.782 08:09:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:47.782 08:09:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.782 08:09:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:47.782 08:09:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.782 08:09:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:47.783 08:09:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.783 08:09:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:47.783 [2024-11-28 08:09:29.820296] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:47.783 08:09:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.783 08:09:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio 
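The bdevio target above is assembled with four `rpc_cmd` calls: create a TCP transport, a 64 MiB malloc bdev, a subsystem, attach the namespace, and add a listener. Against a running `nvmf_tgt`, the equivalent `scripts/rpc.py` invocations would be (a sketch; the RPC names and arguments are exactly those in the log, and we assume `rpc.py` is on PATH):

```shell
# Equivalent rpc.py calls for the target built in the log above
# (assumes an SPDK nvmf_tgt is already running in the namespace).
rpc.py nvmf_create_transport -t tcp -o -u 8192
rpc.py bdev_malloc_create 64 512 -b Malloc0
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
```

In the actual test these run via `ip netns exec cvl_0_0_ns_spdk`, since the target and its RPC socket live inside the namespace.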
--json /dev/fd/62 00:10:47.783 08:09:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:10:47.783 08:09:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:10:47.783 08:09:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:10:47.783 08:09:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:10:47.783 08:09:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:10:47.783 { 00:10:47.783 "params": { 00:10:47.783 "name": "Nvme$subsystem", 00:10:47.783 "trtype": "$TEST_TRANSPORT", 00:10:47.783 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:47.783 "adrfam": "ipv4", 00:10:47.783 "trsvcid": "$NVMF_PORT", 00:10:47.783 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:47.783 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:47.783 "hdgst": ${hdgst:-false}, 00:10:47.783 "ddgst": ${ddgst:-false} 00:10:47.783 }, 00:10:47.783 "method": "bdev_nvme_attach_controller" 00:10:47.783 } 00:10:47.783 EOF 00:10:47.783 )") 00:10:47.783 08:09:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:10:47.783 08:09:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 
00:10:47.783 08:09:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:10:47.783 08:09:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:10:47.783 "params": { 00:10:47.783 "name": "Nvme1", 00:10:47.783 "trtype": "tcp", 00:10:47.783 "traddr": "10.0.0.2", 00:10:47.783 "adrfam": "ipv4", 00:10:47.783 "trsvcid": "4420", 00:10:47.783 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:47.783 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:47.783 "hdgst": false, 00:10:47.783 "ddgst": false 00:10:47.783 }, 00:10:47.783 "method": "bdev_nvme_attach_controller" 00:10:47.783 }' 00:10:47.783 [2024-11-28 08:09:29.872449] Starting SPDK v25.01-pre git sha1 27aaaa748 / DPDK 24.03.0 initialization... 00:10:47.783 [2024-11-28 08:09:29.872491] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1257685 ] 00:10:47.783 [2024-11-28 08:09:29.935790] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:47.783 [2024-11-28 08:09:29.979893] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:47.783 [2024-11-28 08:09:29.979994] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:47.783 [2024-11-28 08:09:29.979997] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:48.040 I/O targets: 00:10:48.040 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:10:48.040 00:10:48.040 00:10:48.040 CUnit - A unit testing framework for C - Version 2.1-3 00:10:48.040 http://cunit.sourceforge.net/ 00:10:48.040 00:10:48.040 00:10:48.040 Suite: bdevio tests on: Nvme1n1 00:10:48.297 Test: blockdev write read block ...passed 00:10:48.297 Test: blockdev write zeroes read block ...passed 00:10:48.297 Test: blockdev write zeroes read no split ...passed 00:10:48.297 Test: blockdev write zeroes read split 
...passed 00:10:48.297 Test: blockdev write zeroes read split partial ...passed 00:10:48.297 Test: blockdev reset ...[2024-11-28 08:09:30.453601] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:10:48.298 [2024-11-28 08:09:30.453665] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13de350 (9): Bad file descriptor 00:10:48.298 [2024-11-28 08:09:30.468426] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 00:10:48.298 passed 00:10:48.298 Test: blockdev write read 8 blocks ...passed 00:10:48.298 Test: blockdev write read size > 128k ...passed 00:10:48.298 Test: blockdev write read invalid size ...passed 00:10:48.298 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:48.298 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:48.298 Test: blockdev write read max offset ...passed 00:10:48.555 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:48.555 Test: blockdev writev readv 8 blocks ...passed 00:10:48.555 Test: blockdev writev readv 30 x 1block ...passed 00:10:48.555 Test: blockdev writev readv block ...passed 00:10:48.555 Test: blockdev writev readv size > 128k ...passed 00:10:48.555 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:48.555 Test: blockdev comparev and writev ...[2024-11-28 08:09:30.679782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:48.555 [2024-11-28 08:09:30.679809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:10:48.555 [2024-11-28 08:09:30.679823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:48.555 [2024-11-28 
08:09:30.679831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:10:48.555 [2024-11-28 08:09:30.680084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:48.555 [2024-11-28 08:09:30.680095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:10:48.555 [2024-11-28 08:09:30.680107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:48.555 [2024-11-28 08:09:30.680114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:10:48.555 [2024-11-28 08:09:30.680346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:48.555 [2024-11-28 08:09:30.680361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:10:48.555 [2024-11-28 08:09:30.680373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:48.555 [2024-11-28 08:09:30.680380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:10:48.555 [2024-11-28 08:09:30.680621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:48.555 [2024-11-28 08:09:30.680631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:10:48.555 [2024-11-28 08:09:30.680643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL 
DATA BLOCK OFFSET 0x0 len:0x200 00:10:48.555 [2024-11-28 08:09:30.680650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:10:48.555 passed 00:10:48.555 Test: blockdev nvme passthru rw ...passed 00:10:48.555 Test: blockdev nvme passthru vendor specific ...[2024-11-28 08:09:30.763374] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:48.555 [2024-11-28 08:09:30.763391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:10:48.555 [2024-11-28 08:09:30.763506] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:48.555 [2024-11-28 08:09:30.763517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:10:48.555 [2024-11-28 08:09:30.763628] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:48.555 [2024-11-28 08:09:30.763637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:10:48.556 [2024-11-28 08:09:30.763750] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:48.556 [2024-11-28 08:09:30.763760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:10:48.556 passed 00:10:48.556 Test: blockdev nvme admin passthru ...passed 00:10:48.556 Test: blockdev copy ...passed 00:10:48.556 00:10:48.556 Run Summary: Type Total Ran Passed Failed Inactive 00:10:48.556 suites 1 1 n/a 0 0 00:10:48.556 tests 23 23 23 0 0 00:10:48.556 asserts 152 152 152 0 n/a 00:10:48.556 00:10:48.556 Elapsed time = 1.031 seconds 
00:10:48.814 08:09:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:48.814 08:09:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.814 08:09:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:48.814 08:09:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.814 08:09:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:10:48.814 08:09:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:10:48.814 08:09:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:48.814 08:09:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:10:48.814 08:09:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:48.814 08:09:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:10:48.814 08:09:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:48.814 08:09:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:48.814 rmmod nvme_tcp 00:10:48.814 rmmod nvme_fabrics 00:10:48.814 rmmod nvme_keyring 00:10:48.814 08:09:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:48.814 08:09:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:10:48.814 08:09:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:10:48.814 08:09:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@517 -- # '[' -n 1257494 ']' 00:10:48.814 08:09:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 1257494 00:10:48.814 08:09:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 
-- # '[' -z 1257494 ']' 00:10:48.814 08:09:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 1257494 00:10:48.814 08:09:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 00:10:48.814 08:09:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:48.814 08:09:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1257494 00:10:49.074 08:09:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:10:49.074 08:09:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:10:49.074 08:09:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1257494' 00:10:49.074 killing process with pid 1257494 00:10:49.074 08:09:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 1257494 00:10:49.074 08:09:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 1257494 00:10:49.074 08:09:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:49.074 08:09:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:49.074 08:09:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:49.074 08:09:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:10:49.074 08:09:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:10:49.074 08:09:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:49.074 08:09:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:10:49.074 08:09:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k 
]] 00:10:49.074 08:09:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:49.074 08:09:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:49.074 08:09:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:49.074 08:09:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:51.614 08:09:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:51.614 00:10:51.614 real 0m9.577s 00:10:51.614 user 0m10.575s 00:10:51.614 sys 0m4.587s 00:10:51.614 08:09:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:51.614 08:09:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:51.614 ************************************ 00:10:51.614 END TEST nvmf_bdevio 00:10:51.614 ************************************ 00:10:51.614 08:09:33 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:10:51.614 00:10:51.614 real 4m28.437s 00:10:51.614 user 10m23.102s 00:10:51.614 sys 1m34.297s 00:10:51.614 08:09:33 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:51.614 08:09:33 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:51.614 ************************************ 00:10:51.614 END TEST nvmf_target_core 00:10:51.614 ************************************ 00:10:51.614 08:09:33 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:10:51.614 08:09:33 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:51.614 08:09:33 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:51.614 08:09:33 nvmf_tcp -- common/autotest_common.sh@10 -- # set 
+x 00:10:51.614 ************************************ 00:10:51.614 START TEST nvmf_target_extra 00:10:51.614 ************************************ 00:10:51.614 08:09:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:10:51.614 * Looking for test storage... 00:10:51.614 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:10:51.614 08:09:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:51.614 08:09:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1693 -- # lcov --version 00:10:51.614 08:09:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:51.614 08:09:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:51.614 08:09:33 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:51.614 08:09:33 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:51.614 08:09:33 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:51.614 08:09:33 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # IFS=.-: 00:10:51.614 08:09:33 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # read -ra ver1 00:10:51.614 08:09:33 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # IFS=.-: 00:10:51.614 08:09:33 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # read -ra ver2 00:10:51.614 08:09:33 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@338 -- # local 'op=<' 00:10:51.614 08:09:33 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@340 -- # ver1_l=2 00:10:51.614 08:09:33 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@341 -- # ver2_l=1 00:10:51.614 08:09:33 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:51.614 08:09:33 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@344 -- # case "$op" in 
00:10:51.614 08:09:33 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@345 -- # : 1 00:10:51.614 08:09:33 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:51.614 08:09:33 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:51.614 08:09:33 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # decimal 1 00:10:51.614 08:09:33 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=1 00:10:51.614 08:09:33 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:51.614 08:09:33 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 1 00:10:51.614 08:09:33 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # ver1[v]=1 00:10:51.614 08:09:33 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # decimal 2 00:10:51.614 08:09:33 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=2 00:10:51.614 08:09:33 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:51.614 08:09:33 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 2 00:10:51.614 08:09:33 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # ver2[v]=2 00:10:51.614 08:09:33 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:51.614 08:09:33 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:51.614 08:09:33 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # return 0 00:10:51.614 08:09:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:51.614 08:09:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:51.614 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:51.614 --rc genhtml_branch_coverage=1 00:10:51.614 --rc genhtml_function_coverage=1 00:10:51.614 --rc genhtml_legend=1 00:10:51.614 --rc geninfo_all_blocks=1 
00:10:51.614 --rc geninfo_unexecuted_blocks=1 00:10:51.614 00:10:51.614 ' 00:10:51.614 08:09:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:51.614 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:51.614 --rc genhtml_branch_coverage=1 00:10:51.614 --rc genhtml_function_coverage=1 00:10:51.614 --rc genhtml_legend=1 00:10:51.614 --rc geninfo_all_blocks=1 00:10:51.614 --rc geninfo_unexecuted_blocks=1 00:10:51.614 00:10:51.614 ' 00:10:51.614 08:09:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:51.614 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:51.614 --rc genhtml_branch_coverage=1 00:10:51.614 --rc genhtml_function_coverage=1 00:10:51.614 --rc genhtml_legend=1 00:10:51.614 --rc geninfo_all_blocks=1 00:10:51.614 --rc geninfo_unexecuted_blocks=1 00:10:51.614 00:10:51.614 ' 00:10:51.615 08:09:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:51.615 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:51.615 --rc genhtml_branch_coverage=1 00:10:51.615 --rc genhtml_function_coverage=1 00:10:51.615 --rc genhtml_legend=1 00:10:51.615 --rc geninfo_all_blocks=1 00:10:51.615 --rc geninfo_unexecuted_blocks=1 00:10:51.615 00:10:51.615 ' 00:10:51.615 08:09:33 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:51.615 08:09:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:10:51.615 08:09:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:51.615 08:09:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:51.615 08:09:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:51.615 08:09:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:51.615 08:09:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 
-- # NVMF_IP_PREFIX=192.168.100 00:10:51.615 08:09:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:51.615 08:09:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:51.615 08:09:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:51.615 08:09:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:51.615 08:09:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:51.615 08:09:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:10:51.615 08:09:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:10:51.615 08:09:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:51.615 08:09:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:51.615 08:09:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:51.615 08:09:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:51.615 08:09:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:51.615 08:09:33 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@15 -- # shopt -s extglob 00:10:51.615 08:09:33 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:51.615 08:09:33 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:51.615 08:09:33 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:51.615 08:09:33 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:51.615 08:09:33 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:51.615 08:09:33 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:51.615 08:09:33 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:10:51.615 08:09:33 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:51.615 08:09:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # : 0 00:10:51.615 08:09:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:51.615 08:09:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:51.615 08:09:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:51.615 08:09:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:51.615 08:09:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:51.615 08:09:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:51.615 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:51.615 08:09:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:51.615 08:09:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:51.615 08:09:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:51.615 08:09:33 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:10:51.615 08:09:33 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:10:51.615 08:09:33 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 0 -eq 0 ]] 00:10:51.615 08:09:33 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@16 -- # run_test nvmf_example 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:10:51.615 08:09:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:51.615 08:09:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:51.615 08:09:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:51.615 ************************************ 00:10:51.615 START TEST nvmf_example 00:10:51.615 ************************************ 00:10:51.615 08:09:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:10:51.615 * Looking for test storage... 00:10:51.615 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:51.615 08:09:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:51.615 08:09:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1693 -- # lcov --version 00:10:51.615 08:09:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:51.615 08:09:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:51.615 08:09:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:51.615 08:09:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:51.615 08:09:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:51.615 08:09:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # IFS=.-: 00:10:51.615 08:09:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # read -ra ver1 00:10:51.615 08:09:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # IFS=.-: 00:10:51.615 
08:09:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # read -ra ver2 00:10:51.615 08:09:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@338 -- # local 'op=<' 00:10:51.615 08:09:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@340 -- # ver1_l=2 00:10:51.615 08:09:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@341 -- # ver2_l=1 00:10:51.615 08:09:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:51.615 08:09:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@344 -- # case "$op" in 00:10:51.615 08:09:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@345 -- # : 1 00:10:51.615 08:09:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:51.615 08:09:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:51.615 08:09:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # decimal 1 00:10:51.615 08:09:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=1 00:10:51.615 08:09:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:51.615 08:09:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 1 00:10:51.615 08:09:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # ver1[v]=1 00:10:51.615 08:09:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # decimal 2 00:10:51.615 08:09:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=2 00:10:51.615 08:09:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:51.615 08:09:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 2 00:10:51.615 08:09:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # ver2[v]=2 
00:10:51.615 08:09:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:51.615 08:09:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:51.615 08:09:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # return 0 00:10:51.616 08:09:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:51.616 08:09:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:51.616 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:51.616 --rc genhtml_branch_coverage=1 00:10:51.616 --rc genhtml_function_coverage=1 00:10:51.616 --rc genhtml_legend=1 00:10:51.616 --rc geninfo_all_blocks=1 00:10:51.616 --rc geninfo_unexecuted_blocks=1 00:10:51.616 00:10:51.616 ' 00:10:51.616 08:09:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:51.616 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:51.616 --rc genhtml_branch_coverage=1 00:10:51.616 --rc genhtml_function_coverage=1 00:10:51.616 --rc genhtml_legend=1 00:10:51.616 --rc geninfo_all_blocks=1 00:10:51.616 --rc geninfo_unexecuted_blocks=1 00:10:51.616 00:10:51.616 ' 00:10:51.616 08:09:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:51.616 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:51.616 --rc genhtml_branch_coverage=1 00:10:51.616 --rc genhtml_function_coverage=1 00:10:51.616 --rc genhtml_legend=1 00:10:51.616 --rc geninfo_all_blocks=1 00:10:51.616 --rc geninfo_unexecuted_blocks=1 00:10:51.616 00:10:51.616 ' 00:10:51.616 08:09:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:51.616 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:51.616 --rc 
genhtml_branch_coverage=1 00:10:51.616 --rc genhtml_function_coverage=1 00:10:51.616 --rc genhtml_legend=1 00:10:51.616 --rc geninfo_all_blocks=1 00:10:51.616 --rc geninfo_unexecuted_blocks=1 00:10:51.616 00:10:51.616 ' 00:10:51.616 08:09:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:51.616 08:09:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:10:51.616 08:09:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:51.616 08:09:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:51.616 08:09:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:51.616 08:09:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:51.616 08:09:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:51.616 08:09:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:51.616 08:09:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:51.616 08:09:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:51.616 08:09:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:51.616 08:09:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:51.616 08:09:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:10:51.616 08:09:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:10:51.616 08:09:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:51.616 08:09:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:51.616 08:09:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:51.616 08:09:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:51.616 08:09:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:51.616 08:09:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@15 -- # shopt -s extglob 00:10:51.616 08:09:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:51.616 08:09:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:51.616 08:09:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:51.616 08:09:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:51.616 08:09:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:51.616 08:09:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:51.616 08:09:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@5 -- # export PATH 00:10:51.616 08:09:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:51.616 08:09:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@51 -- # : 0 00:10:51.616 08:09:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:51.616 08:09:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:51.616 08:09:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:51.616 08:09:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:51.616 08:09:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:51.616 08:09:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:51.616 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:51.616 08:09:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:51.616 08:09:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:51.616 08:09:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:51.616 08:09:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:10:51.616 08:09:33 
nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:10:51.616 08:09:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:10:51.616 08:09:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:10:51.616 08:09:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:10:51.616 08:09:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:10:51.616 08:09:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:10:51.616 08:09:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:10:51.616 08:09:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:51.616 08:09:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:51.616 08:09:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:10:51.616 08:09:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:51.616 08:09:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:51.875 08:09:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:51.875 08:09:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:51.875 08:09:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:51.875 08:09:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:51.875 08:09:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:51.875 
08:09:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:51.875 08:09:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:51.875 08:09:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:51.875 08:09:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@309 -- # xtrace_disable 00:10:51.875 08:09:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:57.148 08:09:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:57.148 08:09:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # pci_devs=() 00:10:57.148 08:09:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:57.148 08:09:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:57.148 08:09:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:57.148 08:09:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:57.148 08:09:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:57.148 08:09:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # net_devs=() 00:10:57.148 08:09:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:57.149 08:09:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # e810=() 00:10:57.149 08:09:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # local -ga e810 00:10:57.149 08:09:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # x722=() 00:10:57.149 08:09:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # local -ga x722 00:10:57.149 08:09:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- 
nvmf/common.sh@322 -- # mlx=() 00:10:57.149 08:09:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # local -ga mlx 00:10:57.149 08:09:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:57.149 08:09:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:57.149 08:09:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:57.149 08:09:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:57.149 08:09:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:57.149 08:09:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:57.149 08:09:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:57.149 08:09:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:57.149 08:09:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:57.149 08:09:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:57.149 08:09:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:57.149 08:09:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:57.149 08:09:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:57.149 08:09:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:57.149 08:09:38 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:57.149 08:09:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:57.149 08:09:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:57.149 08:09:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:57.149 08:09:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:57.149 08:09:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:10:57.149 Found 0000:86:00.0 (0x8086 - 0x159b) 00:10:57.149 08:09:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:57.149 08:09:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:57.149 08:09:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:57.149 08:09:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:57.149 08:09:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:57.149 08:09:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:57.149 08:09:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:10:57.149 Found 0000:86:00.1 (0x8086 - 0x159b) 00:10:57.149 08:09:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:57.149 08:09:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:57.149 08:09:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:57.149 08:09:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # 
[[ 0x159b == \0\x\1\0\1\9 ]] 00:10:57.149 08:09:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:57.149 08:09:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:57.149 08:09:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:57.149 08:09:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:57.149 08:09:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:57.149 08:09:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:57.149 08:09:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:57.149 08:09:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:57.149 08:09:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:57.149 08:09:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:57.149 08:09:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:57.149 08:09:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:10:57.149 Found net devices under 0000:86:00.0: cvl_0_0 00:10:57.149 08:09:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:57.149 08:09:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:57.149 08:09:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:57.149 08:09:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:57.149 08:09:38 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:57.149 08:09:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:57.149 08:09:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:57.149 08:09:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:57.149 08:09:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:10:57.149 Found net devices under 0000:86:00.1: cvl_0_1 00:10:57.149 08:09:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:57.149 08:09:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:57.149 08:09:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # is_hw=yes 00:10:57.149 08:09:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:57.149 08:09:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:57.149 08:09:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:57.149 08:09:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:57.149 08:09:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:57.149 08:09:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:57.149 08:09:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:57.149 08:09:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:57.149 08:09:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:57.149 
08:09:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:57.149 08:09:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:57.149 08:09:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:57.149 08:09:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:57.149 08:09:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:57.149 08:09:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:57.149 08:09:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:57.149 08:09:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:57.149 08:09:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:57.149 08:09:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:57.149 08:09:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:57.149 08:09:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:57.150 08:09:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:57.150 08:09:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:57.150 08:09:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:57.150 08:09:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@790 -- # 
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:57.150 08:09:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:57.150 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:57.150 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.389 ms 00:10:57.150 00:10:57.150 --- 10.0.0.2 ping statistics --- 00:10:57.150 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:57.150 rtt min/avg/max/mdev = 0.389/0.389/0.389/0.000 ms 00:10:57.150 08:09:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:57.150 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:57.150 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.136 ms 00:10:57.150 00:10:57.150 --- 10.0.0.1 ping statistics --- 00:10:57.150 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:57.150 rtt min/avg/max/mdev = 0.136/0.136/0.136/0.000 ms 00:10:57.150 08:09:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:57.150 08:09:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@450 -- # return 0 00:10:57.150 08:09:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:57.150 08:09:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:57.150 08:09:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:57.150 08:09:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:57.150 08:09:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:57.150 08:09:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:57.150 08:09:38 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:57.150 08:09:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:10:57.150 08:09:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:10:57.150 08:09:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:57.150 08:09:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:57.150 08:09:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:10:57.150 08:09:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:10:57.150 08:09:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=1261284 00:10:57.150 08:09:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:10:57.150 08:09:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 1261284 00:10:57.150 08:09:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@835 -- # '[' -z 1261284 ']' 00:10:57.150 08:09:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:57.150 08:09:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:57.150 08:09:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:57.150 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:10:57.150 08:09:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:57.150 08:09:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:57.150 08:09:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:10:57.715 08:09:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:57.715 08:09:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@868 -- # return 0 00:10:57.715 08:09:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:10:57.715 08:09:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:57.715 08:09:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:57.715 08:09:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:57.715 08:09:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.715 08:09:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:57.715 08:09:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.715 08:09:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:10:57.715 08:09:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.715 08:09:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:57.973 08:09:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.973 08:09:40 
nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:10:57.973 08:09:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:57.973 08:09:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.973 08:09:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:57.973 08:09:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.973 08:09:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:10:57.973 08:09:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:57.973 08:09:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.973 08:09:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:57.973 08:09:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.973 08:09:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:57.973 08:09:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.973 08:09:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:57.973 08:09:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.973 08:09:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:10:57.973 08:09:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- 
target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:11:07.940 Initializing NVMe Controllers 00:11:07.940 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:11:07.940 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:11:07.940 Initialization complete. Launching workers. 00:11:07.940 ======================================================== 00:11:07.940 Latency(us) 00:11:07.940 Device Information : IOPS MiB/s Average min max 00:11:07.940 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 17528.44 68.47 3650.44 610.52 15492.53 00:11:07.940 ======================================================== 00:11:07.940 Total : 17528.44 68.47 3650.44 610.52 15492.53 00:11:07.940 00:11:08.200 08:09:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:11:08.200 08:09:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:11:08.200 08:09:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:08.200 08:09:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@121 -- # sync 00:11:08.200 08:09:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:08.200 08:09:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@124 -- # set +e 00:11:08.200 08:09:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:08.200 08:09:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:08.200 rmmod nvme_tcp 00:11:08.200 rmmod nvme_fabrics 00:11:08.200 rmmod nvme_keyring 00:11:08.200 08:09:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@127 -- # modprobe -v 
-r nvme-fabrics 00:11:08.200 08:09:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@128 -- # set -e 00:11:08.200 08:09:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@129 -- # return 0 00:11:08.200 08:09:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@517 -- # '[' -n 1261284 ']' 00:11:08.200 08:09:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@518 -- # killprocess 1261284 00:11:08.200 08:09:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@954 -- # '[' -z 1261284 ']' 00:11:08.200 08:09:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@958 -- # kill -0 1261284 00:11:08.200 08:09:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@959 -- # uname 00:11:08.200 08:09:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:08.200 08:09:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1261284 00:11:08.200 08:09:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # process_name=nvmf 00:11:08.200 08:09:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@964 -- # '[' nvmf = sudo ']' 00:11:08.200 08:09:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1261284' 00:11:08.200 killing process with pid 1261284 00:11:08.200 08:09:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@973 -- # kill 1261284 00:11:08.200 08:09:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@978 -- # wait 1261284 00:11:08.460 nvmf threads initialize successfully 00:11:08.460 bdev subsystem init successfully 00:11:08.460 created a nvmf target service 00:11:08.460 create targets's poll groups done 00:11:08.460 all subsystems of target started 00:11:08.460 nvmf target is running 00:11:08.460 all subsystems of target stopped 00:11:08.460 
destroy targets's poll groups done 00:11:08.460 destroyed the nvmf target service 00:11:08.460 bdev subsystem finish successfully 00:11:08.460 nvmf threads destroy successfully 00:11:08.460 08:09:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:08.460 08:09:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:08.460 08:09:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:08.460 08:09:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@297 -- # iptr 00:11:08.460 08:09:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:08.460 08:09:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-save 00:11:08.460 08:09:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-restore 00:11:08.460 08:09:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:08.460 08:09:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:08.460 08:09:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:08.460 08:09:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:08.460 08:09:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:10.366 08:09:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:10.366 08:09:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:11:10.366 08:09:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:10.366 08:09:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 
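The `killprocess` trace above follows a common teardown pattern: probe that the pid is still alive, confirm via `ps -o comm=` that it still names the expected process (and is not a `sudo` wrapper), then `kill` and `wait` to reap it. A minimal standalone sketch of that pattern; the name `safe_kill` is illustrative, not SPDK's own helper:

```shell
# Hedged sketch of the killprocess pattern traced above: confirm the pid
# still exists and is not an elevated sudo wrapper, then kill and reap it.
safe_kill() {
    local pid=$1
    [ -n "$pid" ] || return 0
    # kill -0 probes for existence without sending a signal
    kill -0 "$pid" 2>/dev/null || return 0
    local name
    name=$(ps --no-headers -o comm= "$pid")
    # never signal an elevated sudo wrapper by mistake
    [ "$name" = "sudo" ] && return 1
    kill "$pid" 2>/dev/null
    # reap the child so its exit status is collected before returning
    wait "$pid" 2>/dev/null || true
    return 0
}
```

The `wait` matters: it guarantees the process is fully gone before the caller proceeds to resource cleanup, which is why the trace shows `kill` followed by `wait` on the same pid.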
00:11:10.626 00:11:10.626 real 0m18.971s 00:11:10.626 user 0m45.689s 00:11:10.626 sys 0m5.433s 00:11:10.626 08:09:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:10.627 08:09:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:10.627 ************************************ 00:11:10.627 END TEST nvmf_example 00:11:10.627 ************************************ 00:11:10.627 08:09:52 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@17 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:11:10.627 08:09:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:10.627 08:09:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:10.627 08:09:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:10.627 ************************************ 00:11:10.627 START TEST nvmf_filesystem 00:11:10.627 ************************************ 00:11:10.627 08:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:11:10.627 * Looking for test storage... 
00:11:10.627 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:10.627 08:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:10.627 08:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lcov --version 00:11:10.627 08:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:10.627 08:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:10.627 08:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:10.627 08:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:10.627 08:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:10.627 08:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:11:10.627 08:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:11:10.627 08:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:11:10.627 08:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:11:10.627 08:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:11:10.627 08:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:11:10.627 08:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:11:10.627 08:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:10.627 08:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:11:10.627 08:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:11:10.627 
08:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:10.627 08:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:10.627 08:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:11:10.627 08:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:11:10.627 08:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:10.627 08:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:11:10.627 08:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:11:10.627 08:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:11:10.627 08:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:11:10.627 08:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:10.627 08:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:11:10.627 08:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:11:10.627 08:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:10.627 08:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:10.627 08:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:11:10.627 08:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:10.627 08:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:10.627 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:11:10.627 --rc genhtml_branch_coverage=1 00:11:10.627 --rc genhtml_function_coverage=1 00:11:10.627 --rc genhtml_legend=1 00:11:10.627 --rc geninfo_all_blocks=1 00:11:10.627 --rc geninfo_unexecuted_blocks=1 00:11:10.627 00:11:10.627 ' 00:11:10.627 08:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:10.627 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:10.627 --rc genhtml_branch_coverage=1 00:11:10.627 --rc genhtml_function_coverage=1 00:11:10.627 --rc genhtml_legend=1 00:11:10.627 --rc geninfo_all_blocks=1 00:11:10.627 --rc geninfo_unexecuted_blocks=1 00:11:10.627 00:11:10.627 ' 00:11:10.627 08:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:10.627 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:10.627 --rc genhtml_branch_coverage=1 00:11:10.627 --rc genhtml_function_coverage=1 00:11:10.627 --rc genhtml_legend=1 00:11:10.627 --rc geninfo_all_blocks=1 00:11:10.627 --rc geninfo_unexecuted_blocks=1 00:11:10.627 00:11:10.627 ' 00:11:10.627 08:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:10.627 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:10.627 --rc genhtml_branch_coverage=1 00:11:10.627 --rc genhtml_function_coverage=1 00:11:10.627 --rc genhtml_legend=1 00:11:10.627 --rc geninfo_all_blocks=1 00:11:10.627 --rc geninfo_unexecuted_blocks=1 00:11:10.627 00:11:10.627 ' 00:11:10.627 08:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:11:10.627 08:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:11:10.627 08:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:11:10.627 08:09:52 
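The `lt 1.15 2` / `cmp_versions` trace above splits each version on `.` and compares field by field, treating missing fields as 0 (so `1.15` is shorter than but comparable to `2`). A self-contained sketch of that logic under the same rules; the function name `ver_lt` is illustrative, not the script's:

```shell
# Field-by-field dotted-version compare: returns 0 (true) when $1 < $2.
ver_lt() {
    local IFS=.
    # word-splitting on IFS=. turns "1.15" into the array (1 15)
    local -a a=($1) b=($2)
    local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
    for (( i = 0; i < n; i++ )); do
        # a missing field compares as 0, so 1.15 == 1.15.0
        local x=${a[i]:-0} y=${b[i]:-0}
        (( x < y )) && return 0
        (( x > y )) && return 1
    done
    return 1   # equal is not less-than
}
```

This mirrors the traced loop bound `(( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))`: iteration runs to the longer of the two field counts, which is what makes `1.15 < 2` come out true on the first field.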
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:11:10.627 08:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:11:10.627 08:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:11:10.627 08:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:11:10.627 08:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:11:10.627 08:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:11:10.627 08:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:11:10.627 08:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:11:10.627 08:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:11:10.627 08:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:11:10.627 08:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:11:10.627 08:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:11:10.627 08:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:11:10.627 08:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:11:10.627 08:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:11:10.627 08:09:52 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:11:10.627 08:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:11:10.627 08:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:11:10.627 08:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:11:10.628 08:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:11:10.628 08:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:11:10.628 08:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:11:10.628 08:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:11:10.628 08:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:11:10.628 08:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:11:10.628 08:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:11:10.628 08:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:11:10.628 08:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:11:10.628 08:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_CET=n 00:11:10.628 08:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:11:10.628 08:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:11:10.628 08:09:52 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:11:10.628 08:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:11:10.628 08:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:11:10.628 08:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:11:10.628 08:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:11:10.628 08:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:11:10.628 08:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:11:10.628 08:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:11:10.628 08:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:11:10.628 08:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:11:10.628 08:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB= 00:11:10.628 08:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_FUZZER=n 00:11:10.628 08:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 00:11:10.628 08:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:11:10.628 08:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n 00:11:10.628 08:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:11:10.628 08:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/build_config.sh@42 -- # CONFIG_VHOST=y 00:11:10.628 08:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:11:10.628 08:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR= 00:11:10.628 08:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:11:10.628 08:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:11:10.628 08:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:11:10.628 08:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:11:10.628 08:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:11:10.628 08:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_COVERAGE=y 00:11:10.628 08:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:11:10.628 08:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:11:10.628 08:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:11:10.628 08:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:11:10.628 08:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:11:10.628 08:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_XNVME=n 00:11:10.628 08:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=y 00:11:10.628 08:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_ARCH=native 00:11:10.628 08:09:52 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:11:10.628 08:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=n 00:11:10.628 08:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:11:10.628 08:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:11:10.628 08:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:11:10.628 08:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:11:10.628 08:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:11:10.628 08:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:11:10.628 08:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:11:10.628 08:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:11:10.628 08:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR= 00:11:10.628 08:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:11:10.628 08:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:11:10.628 08:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_SHARED=y 00:11:10.628 08:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:11:10.628 08:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:11:10.628 08:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@75 -- # 
CONFIG_DPDK_PKG_CONFIG=n 00:11:10.628 08:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_FC=n 00:11:10.628 08:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:11:10.628 08:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:11:10.628 08:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:11:10.628 08:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:11:10.628 08:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:11:10.628 08:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:11:10.628 08:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:11:10.628 08:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:11:10.628 08:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:11:10.891 08:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:11:10.891 08:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:11:10.891 08:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:11:10.891 08:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:11:10.891 08:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@90 -- # CONFIG_URING=n 00:11:10.891 08:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:11:10.891 08:09:52 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:11:10.892 08:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:11:10.892 08:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:11:10.892 08:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:11:10.892 08:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:11:10.892 08:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:11:10.892 08:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:11:10.892 08:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:11:10.892 08:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:11:10.892 08:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:11:10.892 08:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:11:10.892 08:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:11:10.892 08:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:11:10.892 
08:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:11:10.892 08:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:11:10.892 #define SPDK_CONFIG_H 00:11:10.892 #define SPDK_CONFIG_AIO_FSDEV 1 00:11:10.892 #define SPDK_CONFIG_APPS 1 00:11:10.892 #define SPDK_CONFIG_ARCH native 00:11:10.892 #undef SPDK_CONFIG_ASAN 00:11:10.892 #undef SPDK_CONFIG_AVAHI 00:11:10.892 #undef SPDK_CONFIG_CET 00:11:10.892 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:11:10.892 #define SPDK_CONFIG_COVERAGE 1 00:11:10.892 #define SPDK_CONFIG_CROSS_PREFIX 00:11:10.892 #undef SPDK_CONFIG_CRYPTO 00:11:10.892 #undef SPDK_CONFIG_CRYPTO_MLX5 00:11:10.892 #undef SPDK_CONFIG_CUSTOMOCF 00:11:10.892 #undef SPDK_CONFIG_DAOS 00:11:10.892 #define SPDK_CONFIG_DAOS_DIR 00:11:10.892 #define SPDK_CONFIG_DEBUG 1 00:11:10.892 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:11:10.892 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:11:10.892 #define SPDK_CONFIG_DPDK_INC_DIR 00:11:10.892 #define SPDK_CONFIG_DPDK_LIB_DIR 00:11:10.892 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:11:10.892 #undef SPDK_CONFIG_DPDK_UADK 00:11:10.892 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:11:10.892 #define SPDK_CONFIG_EXAMPLES 1 00:11:10.892 #undef SPDK_CONFIG_FC 00:11:10.892 #define SPDK_CONFIG_FC_PATH 00:11:10.892 #define SPDK_CONFIG_FIO_PLUGIN 1 00:11:10.892 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:11:10.892 #define SPDK_CONFIG_FSDEV 1 00:11:10.892 #undef SPDK_CONFIG_FUSE 00:11:10.892 #undef SPDK_CONFIG_FUZZER 00:11:10.892 #define SPDK_CONFIG_FUZZER_LIB 00:11:10.892 #undef SPDK_CONFIG_GOLANG 00:11:10.892 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:11:10.892 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:11:10.892 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:11:10.892 #define 
SPDK_CONFIG_HAVE_KEYUTILS 1 00:11:10.892 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:11:10.892 #undef SPDK_CONFIG_HAVE_LIBBSD 00:11:10.892 #undef SPDK_CONFIG_HAVE_LZ4 00:11:10.892 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:11:10.892 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:11:10.892 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:11:10.892 #define SPDK_CONFIG_IDXD 1 00:11:10.892 #define SPDK_CONFIG_IDXD_KERNEL 1 00:11:10.892 #undef SPDK_CONFIG_IPSEC_MB 00:11:10.892 #define SPDK_CONFIG_IPSEC_MB_DIR 00:11:10.892 #define SPDK_CONFIG_ISAL 1 00:11:10.892 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:11:10.892 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:11:10.892 #define SPDK_CONFIG_LIBDIR 00:11:10.892 #undef SPDK_CONFIG_LTO 00:11:10.892 #define SPDK_CONFIG_MAX_LCORES 128 00:11:10.892 #define SPDK_CONFIG_MAX_NUMA_NODES 1 00:11:10.892 #define SPDK_CONFIG_NVME_CUSE 1 00:11:10.892 #undef SPDK_CONFIG_OCF 00:11:10.892 #define SPDK_CONFIG_OCF_PATH 00:11:10.892 #define SPDK_CONFIG_OPENSSL_PATH 00:11:10.892 #undef SPDK_CONFIG_PGO_CAPTURE 00:11:10.892 #define SPDK_CONFIG_PGO_DIR 00:11:10.892 #undef SPDK_CONFIG_PGO_USE 00:11:10.892 #define SPDK_CONFIG_PREFIX /usr/local 00:11:10.892 #undef SPDK_CONFIG_RAID5F 00:11:10.892 #undef SPDK_CONFIG_RBD 00:11:10.892 #define SPDK_CONFIG_RDMA 1 00:11:10.892 #define SPDK_CONFIG_RDMA_PROV verbs 00:11:10.892 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:11:10.892 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:11:10.892 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:11:10.892 #define SPDK_CONFIG_SHARED 1 00:11:10.892 #undef SPDK_CONFIG_SMA 00:11:10.892 #define SPDK_CONFIG_TESTS 1 00:11:10.892 #undef SPDK_CONFIG_TSAN 00:11:10.892 #define SPDK_CONFIG_UBLK 1 00:11:10.892 #define SPDK_CONFIG_UBSAN 1 00:11:10.892 #undef SPDK_CONFIG_UNIT_TESTS 00:11:10.892 #undef SPDK_CONFIG_URING 00:11:10.892 #define SPDK_CONFIG_URING_PATH 00:11:10.892 #undef SPDK_CONFIG_URING_ZNS 00:11:10.892 #undef SPDK_CONFIG_USDT 00:11:10.892 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:11:10.892 
#undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:11:10.892 #define SPDK_CONFIG_VFIO_USER 1 00:11:10.892 #define SPDK_CONFIG_VFIO_USER_DIR 00:11:10.892 #define SPDK_CONFIG_VHOST 1 00:11:10.892 #define SPDK_CONFIG_VIRTIO 1 00:11:10.892 #undef SPDK_CONFIG_VTUNE 00:11:10.892 #define SPDK_CONFIG_VTUNE_DIR 00:11:10.892 #define SPDK_CONFIG_WERROR 1 00:11:10.892 #define SPDK_CONFIG_WPDK_DIR 00:11:10.892 #undef SPDK_CONFIG_XNVME 00:11:10.892 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:11:10.892 08:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:11:10.892 08:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:10.892 08:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:11:10.892 08:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:10.892 08:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:10.892 08:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:10.892 08:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:11:10.893 08:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:10.893 08:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:10.893 08:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:11:10.893 08:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:10.893 08:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:11:10.893 08:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:11:10.893 08:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:11:10.893 08:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:11:10.893 08:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:11:10.893 08:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:11:10.893 08:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:11:10.893 08:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:11:10.893 08:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@67 -- # 
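The `paths/export.sh` trace above shows the same `/opt/golangci`, `/opt/protoc`, and `/opt/go` directories prepended repeatedly, because the script is sourced once per test and prepends unconditionally. A guarded prepend avoids that growth; this is a generic sketch, not SPDK's script, and the name `path_prepend` is hypothetical:

```shell
# Idempotent PATH prepend: skip directories already present, so
# re-sourcing the same setup script does not duplicate entries.
path_prepend() {
    case ":$PATH:" in
        *":$1:"*) ;;              # already on PATH, do nothing
        *) PATH="$1:$PATH" ;;     # otherwise prepend
    esac
}
```

Wrapping `$PATH` in colons before matching makes the check exact: `:/opt/go/1.21.1/bin:` cannot falsely match a longer entry that merely contains it as a substring.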
PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:11:10.893 08:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # uname -s 00:11:10.893 08:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:11:10.893 08:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:11:10.893 08:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:11:10.893 08:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:11:10.893 08:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:11:10.893 08:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:11:10.893 08:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:11:10.893 08:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:11:10.893 08:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:11:10.893 08:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:11:10.893 08:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:11:10.893 08:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:11:10.893 08:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:11:10.893 08:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ! 
-e /.dockerenv ]] 00:11:10.893 08:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:11:10.893 08:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:11:10.893 08:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:11:10.893 08:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 0 00:11:10.893 08:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:11:10.893 08:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:11:10.893 08:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:11:10.893 08:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:11:10.893 08:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:11:10.893 08:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:11:10.893 08:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:11:10.893 08:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:11:10.893 08:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:11:10.893 08:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:11:10.893 08:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:11:10.893 08:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:11:10.893 08:09:52 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:11:10.893 08:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:11:10.893 08:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:11:10.893 08:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:11:10.893 08:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:11:10.893 08:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:11:10.893 08:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:11:10.893 08:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:11:10.893 08:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:11:10.893 08:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:11:10.893 08:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:11:10.893 08:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:11:10.893 08:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:11:10.893 08:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:11:10.893 08:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:11:10.893 08:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:11:10.893 08:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:11:10.893 
08:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:11:10.893 08:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:11:10.893 08:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:11:10.893 08:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:11:10.893 08:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 1 00:11:10.893 08:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:11:10.893 08:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:11:10.893 08:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:11:10.893 08:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:11:10.893 08:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:11:10.893 08:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:11:10.894 08:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:11:10.894 08:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:11:10.894 08:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:11:10.894 08:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:11:10.894 08:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:11:10.894 08:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:11:10.894 08:09:52 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:11:10.894 08:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:11:10.894 08:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:11:10.894 08:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:11:10.894 08:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:11:10.894 08:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:11:10.894 08:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:11:10.894 08:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:11:10.894 08:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:11:10.894 08:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:11:10.894 08:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:11:10.894 08:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:11:10.894 08:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:11:10.894 08:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:11:10.894 08:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:11:10.894 08:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 0 00:11:10.894 08:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:11:10.894 
08:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 1 00:11:10.894 08:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:11:10.894 08:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 00:11:10.894 08:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:11:10.894 08:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:11:10.894 08:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:11:10.894 08:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:11:10.894 08:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:11:10.894 08:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:11:10.894 08:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:11:10.894 08:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:11:10.894 08:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:11:10.894 08:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:11:10.894 08:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 00:11:10.894 08:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@138 -- # : 0 00:11:10.894 08:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:11:10.894 08:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@140 -- # : 00:11:10.894 08:09:52 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:11:10.894 08:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@142 -- # : true 00:11:10.894 08:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:11:10.894 08:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:11:10.894 08:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:11:10.894 08:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:11:10.894 08:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:11:10.894 08:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:11:10.894 08:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:11:10.894 08:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:11:10.894 08:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:11:10.894 08:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:11:10.894 08:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:11:10.894 08:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:11:10.894 08:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:11:10.894 08:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:11:10.894 08:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 
00:11:10.894 08:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:11:10.894 08:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:11:10.894 08:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:11:10.894 08:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:11:10.894 08:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:11:10.894 08:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:11:10.894 08:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:11:10.894 08:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:11:10.894 08:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@166 -- # : 0 00:11:10.894 08:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:11:10.894 08:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 00:11:10.894 08:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:11:10.894 08:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:11:10.894 08:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:11:10.894 08:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@173 -- # : 0 00:11:10.894 08:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:11:10.894 08:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@175 -- # : 0 
00:11:10.894 08:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:11:10.894 08:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@177 -- # : 0 00:11:10.894 08:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@178 -- # export SPDK_TEST_NVME_INTERRUPT 00:11:10.894 08:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:11:10.894 08:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:11:10.894 08:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:11:10.894 08:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:11:10.894 08:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:10.895 08:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:10.895 08:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@184 -- # export 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:10.895 08:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@184 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:10.895 08:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:11:10.895 08:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:11:10.895 08:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # export 
PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:11:10.895 08:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:11:10.895 08:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # export PYTHONDONTWRITEBYTECODE=1 00:11:10.895 08:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # PYTHONDONTWRITEBYTECODE=1 00:11:10.895 08:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:11:10.895 08:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:11:10.895 08:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- 
# export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:11:10.895 08:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:11:10.895 08:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@204 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:11:10.895 08:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@205 -- # rm -rf /var/tmp/asan_suppression_file 00:11:10.895 08:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@206 -- # cat 00:11:10.895 08:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # echo leak:libfuse3.so 00:11:10.895 08:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:11:10.895 08:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:11:10.895 08:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:11:10.895 08:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:11:10.895 08:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@248 -- # '[' -z /var/spdk/dependencies ']' 00:11:10.895 08:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@251 -- # export DEPENDENCY_DIR 00:11:10.895 08:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:11:10.895 08:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # 
SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:11:10.895 08:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:11:10.895 08:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:11:10.895 08:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:11:10.895 08:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:11:10.895 08:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:11:10.895 08:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:11:10.895 08:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:11:10.895 08:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:11:10.895 08:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:11:10.895 08:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:11:10.895 08:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@267 -- # _LCOV_MAIN=0 00:11:10.895 08:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@268 -- # _LCOV_LLVM=1 00:11:10.895 08:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@269 -- # _LCOV= 00:11:10.895 08:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # [[ '' == *clang* ]] 00:11:10.895 08:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # [[ 0 -eq 1 ]] 00:11:10.895 08:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@272 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:11:10.895 08:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@273 -- # _lcov_opt[_LCOV_MAIN]= 00:11:10.895 08:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@275 -- # lcov_opt= 00:11:10.895 08:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@278 -- # '[' 0 -eq 0 ']' 00:11:10.895 08:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # export valgrind= 00:11:10.895 08:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # valgrind= 00:11:10.895 08:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # uname -s 00:11:10.895 08:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # '[' Linux = Linux ']' 00:11:10.895 08:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@286 -- # HUGEMEM=4096 00:11:10.895 08:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # export CLEAR_HUGE=yes 00:11:10.896 08:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # CLEAR_HUGE=yes 00:11:10.896 08:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@289 -- # MAKE=make 00:11:10.896 08:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@290 -- # MAKEFLAGS=-j96 00:11:10.896 08:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # export HUGEMEM=4096 00:11:10.896 08:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # HUGEMEM=4096 00:11:10.896 08:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@308 -- # NO_HUGE=() 00:11:10.896 08:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@309 -- # TEST_MODE= 00:11:10.896 08:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@310 -- # for i in "$@" 00:11:10.896 08:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@311 -- # case "$i" in 00:11:10.896 08:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@316 -- # TEST_TRANSPORT=tcp 00:11:10.896 08:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # [[ -z 1263689 ]] 00:11:10.896 08:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # kill -0 1263689 00:11:10.896 08:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1678 -- # set_test_storage 2147483648 00:11:10.896 08:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@341 -- # [[ -v testdir ]] 00:11:10.896 08:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@343 -- # local requested_size=2147483648 00:11:10.896 08:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@344 -- # local mount target_dir 00:11:10.896 08:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@346 -- # local -A mounts fss sizes avails uses 00:11:10.896 08:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@347 -- # local source fs size avail mount use 00:11:10.896 08:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@349 -- # local storage_fallback storage_candidates 00:11:10.896 08:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@351 -- # mktemp -udt spdk.XXXXXX 00:11:10.896 08:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@351 -- # storage_fallback=/tmp/spdk.zVEUvL 00:11:10.896 08:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@356 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:11:10.896 08:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@358 -- # [[ -n '' ]] 00:11:10.896 08:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # [[ -n '' ]] 00:11:10.896 08:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@368 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.zVEUvL/tests/target /tmp/spdk.zVEUvL 00:11:10.896 08:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # requested_size=2214592512 00:11:10.896 08:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:10.896 08:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # df -T 00:11:10.896 08:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # grep -v Filesystem 00:11:10.896 08:09:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_devtmpfs 00:11:10.896 08:09:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=devtmpfs 00:11:10.896 08:09:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=67108864 00:11:10.896 08:09:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # 
sizes["$mount"]=67108864 00:11:10.896 08:09:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=0 00:11:10.896 08:09:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:10.896 08:09:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/pmem0 00:11:10.896 08:09:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=ext2 00:11:10.896 08:09:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=4096 00:11:10.896 08:09:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=5284429824 00:11:10.896 08:09:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=5284425728 00:11:10.896 08:09:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:10.896 08:09:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_root 00:11:10.896 08:09:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=overlay 00:11:10.896 08:09:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=189163974656 00:11:10.896 08:09:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=195963961344 00:11:10.896 08:09:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=6799986688 00:11:10.896 08:09:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:10.896 08:09:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 
00:11:10.896 08:09:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:11:10.896 08:09:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=97971949568 00:11:10.896 08:09:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=97981980672 00:11:10.896 08:09:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=10031104 00:11:10.896 08:09:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:10.896 08:09:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:11:10.896 08:09:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:11:10.896 08:09:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=39169748992 00:11:10.896 08:09:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=39192793088 00:11:10.896 08:09:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=23044096 00:11:10.896 08:09:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:10.896 08:09:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:11:10.896 08:09:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:11:10.896 08:09:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=97980542976 00:11:10.896 08:09:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=97981980672 00:11:10.896 08:09:53 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=1437696 00:11:10.896 08:09:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:10.896 08:09:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:11:10.896 08:09:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:11:10.896 08:09:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=19596382208 00:11:10.896 08:09:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=19596394496 00:11:10.896 08:09:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=12288 00:11:10.896 08:09:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:10.896 08:09:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@379 -- # printf '* Looking for test storage...\n' 00:11:10.896 * Looking for test storage... 
00:11:10.897 08:09:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@381 -- # local target_space new_size 00:11:10.897 08:09:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@382 -- # for target_dir in "${storage_candidates[@]}" 00:11:10.897 08:09:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:10.897 08:09:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # awk '$1 !~ /Filesystem/{print $6}' 00:11:10.897 08:09:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # mount=/ 00:11:10.897 08:09:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@387 -- # target_space=189163974656 00:11:10.897 08:09:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@388 -- # (( target_space == 0 || target_space < requested_size )) 00:11:10.897 08:09:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # (( target_space >= requested_size )) 00:11:10.897 08:09:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ overlay == tmpfs ]] 00:11:10.897 08:09:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ overlay == ramfs ]] 00:11:10.897 08:09:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ / == / ]] 00:11:10.897 08:09:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@394 -- # new_size=9014579200 00:11:10.897 08:09:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@395 -- # (( new_size * 100 / sizes[/] > 95 )) 00:11:10.897 08:09:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:10.897 08:09:53 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:10.897 08:09:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@401 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:10.897 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:10.897 08:09:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@402 -- # return 0 00:11:10.897 08:09:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1680 -- # set -o errtrace 00:11:10.897 08:09:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1681 -- # shopt -s extdebug 00:11:10.897 08:09:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1682 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:11:10.897 08:09:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1684 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:11:10.897 08:09:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1685 -- # true 00:11:10.897 08:09:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1687 -- # xtrace_fd 00:11:10.897 08:09:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 15 ]] 00:11:10.897 08:09:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/15 ]] 00:11:10.897 08:09:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:11:10.897 08:09:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:11:10.897 08:09:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:11:10.897 08:09:53 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:11:10.897 08:09:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:11:10.897 08:09:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:11:10.897 08:09:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:10.897 08:09:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lcov --version 00:11:10.897 08:09:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:10.897 08:09:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:10.897 08:09:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:10.897 08:09:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:10.897 08:09:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:10.897 08:09:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:11:10.897 08:09:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:11:10.897 08:09:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:11:10.897 08:09:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:11:10.897 08:09:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:11:10.897 08:09:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:11:10.897 08:09:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:11:10.897 08:09:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # 
local lt=0 gt=0 eq=0 v 00:11:10.897 08:09:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:11:10.897 08:09:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:11:10.897 08:09:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:10.897 08:09:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:10.897 08:09:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:11:10.897 08:09:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:11:10.897 08:09:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:10.897 08:09:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:11:10.897 08:09:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:11:10.897 08:09:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:11:10.897 08:09:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:11:10.897 08:09:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:10.897 08:09:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:11:10.897 08:09:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:11:10.897 08:09:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:10.897 08:09:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:10.897 08:09:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:11:10.897 08:09:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1694 -- # 
lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:10.897 08:09:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:10.897 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:10.897 --rc genhtml_branch_coverage=1 00:11:10.897 --rc genhtml_function_coverage=1 00:11:10.897 --rc genhtml_legend=1 00:11:10.897 --rc geninfo_all_blocks=1 00:11:10.897 --rc geninfo_unexecuted_blocks=1 00:11:10.897 00:11:10.897 ' 00:11:10.897 08:09:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:10.897 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:10.897 --rc genhtml_branch_coverage=1 00:11:10.897 --rc genhtml_function_coverage=1 00:11:10.898 --rc genhtml_legend=1 00:11:10.898 --rc geninfo_all_blocks=1 00:11:10.898 --rc geninfo_unexecuted_blocks=1 00:11:10.898 00:11:10.898 ' 00:11:10.898 08:09:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:10.898 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:10.898 --rc genhtml_branch_coverage=1 00:11:10.898 --rc genhtml_function_coverage=1 00:11:10.898 --rc genhtml_legend=1 00:11:10.898 --rc geninfo_all_blocks=1 00:11:10.898 --rc geninfo_unexecuted_blocks=1 00:11:10.898 00:11:10.898 ' 00:11:10.898 08:09:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:10.898 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:10.898 --rc genhtml_branch_coverage=1 00:11:10.898 --rc genhtml_function_coverage=1 00:11:10.898 --rc genhtml_legend=1 00:11:10.898 --rc geninfo_all_blocks=1 00:11:10.898 --rc geninfo_unexecuted_blocks=1 00:11:10.898 00:11:10.898 ' 00:11:10.898 08:09:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:10.898 08:09:53 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:11:10.898 08:09:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:10.898 08:09:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:10.898 08:09:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:10.898 08:09:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:10.898 08:09:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:10.898 08:09:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:10.898 08:09:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:10.898 08:09:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:10.898 08:09:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:10.898 08:09:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:10.898 08:09:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:11:10.898 08:09:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:11:10.898 08:09:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:10.898 08:09:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:10.898 08:09:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:10.898 08:09:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:10.898 08:09:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:10.898 08:09:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:11:10.898 08:09:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:10.898 08:09:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:10.898 08:09:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:10.898 08:09:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:10.898 08:09:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:10.898 08:09:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:10.898 08:09:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:11:10.898 08:09:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:10.898 08:09:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@51 -- # : 0 00:11:10.898 08:09:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:10.898 08:09:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:10.898 08:09:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:10.898 08:09:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:10.898 08:09:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:10.898 08:09:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:10.898 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:10.898 08:09:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:10.898 08:09:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:10.898 08:09:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:10.898 08:09:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@12 -- # 
MALLOC_BDEV_SIZE=512 00:11:10.898 08:09:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:11:10.899 08:09:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:11:10.899 08:09:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:10.899 08:09:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:10.899 08:09:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:10.899 08:09:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:10.899 08:09:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:10.899 08:09:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:10.899 08:09:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:10.899 08:09:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:11.158 08:09:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:11.158 08:09:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:11.158 08:09:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@309 -- # xtrace_disable 00:11:11.158 08:09:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:16.494 08:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:16.494 08:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # pci_devs=() 00:11:16.494 08:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # local -a 
pci_devs 00:11:16.494 08:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:16.494 08:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:16.494 08:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:16.494 08:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:16.494 08:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # net_devs=() 00:11:16.494 08:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:16.494 08:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # e810=() 00:11:16.494 08:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # local -ga e810 00:11:16.494 08:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # x722=() 00:11:16.494 08:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # local -ga x722 00:11:16.494 08:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # mlx=() 00:11:16.494 08:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # local -ga mlx 00:11:16.494 08:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:16.494 08:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:16.494 08:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:16.494 08:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:16.494 08:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:16.494 08:09:58 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:16.494 08:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:16.494 08:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:16.494 08:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:16.494 08:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:16.494 08:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:16.494 08:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:16.494 08:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:16.494 08:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:16.494 08:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:16.494 08:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:16.494 08:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:16.494 08:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:16.494 08:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:16.494 08:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:11:16.494 Found 0000:86:00.0 (0x8086 - 0x159b) 00:11:16.494 08:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # 
[[ ice == unknown ]] 00:11:16.494 08:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:16.494 08:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:16.494 08:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:16.494 08:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:16.494 08:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:16.494 08:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:11:16.494 Found 0000:86:00.1 (0x8086 - 0x159b) 00:11:16.494 08:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:16.494 08:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:16.494 08:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:16.494 08:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:16.494 08:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:16.494 08:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:16.494 08:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:16.494 08:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:16.494 08:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:16.494 08:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:16.494 08:09:58 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:16.494 08:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:16.494 08:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:16.494 08:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:16.494 08:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:16.494 08:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:11:16.494 Found net devices under 0000:86:00.0: cvl_0_0 00:11:16.494 08:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:16.494 08:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:16.494 08:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:16.494 08:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:16.494 08:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:16.494 08:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:16.494 08:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:16.494 08:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:16.494 08:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:11:16.494 Found net devices under 0000:86:00.1: cvl_0_1 00:11:16.494 08:09:58 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:16.494 08:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:16.494 08:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # is_hw=yes 00:11:16.494 08:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:16.494 08:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:16.494 08:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:16.494 08:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:16.494 08:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:16.494 08:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:16.494 08:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:16.494 08:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:16.494 08:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:16.494 08:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:16.494 08:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:16.494 08:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:16.494 08:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:16.494 08:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 
00:11:16.494 08:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:11:16.494 08:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:11:16.494 08:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:11:16.494 08:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:11:16.494 08:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:11:16.494 08:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:11:16.494 08:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:11:16.494 08:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:11:16.494 08:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:11:16.494 08:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:11:16.494 08:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:11:16.494 08:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:11:16.494 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:11:16.494 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.430 ms
00:11:16.494
00:11:16.495 --- 10.0.0.2 ping statistics ---
00:11:16.495 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:11:16.495 rtt min/avg/max/mdev = 0.430/0.430/0.430/0.000 ms
00:11:16.495 08:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:11:16.495 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:11:16.495 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.203 ms
00:11:16.495
00:11:16.495 --- 10.0.0.1 ping statistics ---
00:11:16.495 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:11:16.495 rtt min/avg/max/mdev = 0.203/0.203/0.203/0.000 ms
00:11:16.495 08:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:11:16.495 08:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@450 -- # return 0
00:11:16.495 08:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:11:16.495 08:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:11:16.495 08:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:11:16.495 08:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:11:16.495 08:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:11:16.495 08:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:11:16.495 08:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:11:16.495 08:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0
00:11:16.495 08:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:11:16.495 08:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1111 -- # xtrace_disable
00:11:16.495 08:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x
00:11:16.495 ************************************
00:11:16.495 START TEST nvmf_filesystem_no_in_capsule
00:11:16.495 ************************************
00:11:16.495 08:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1129 -- # nvmf_filesystem_part 0
00:11:16.495 08:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0
00:11:16.495 08:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF
00:11:16.495 08:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:11:16.495 08:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable
00:11:16.495 08:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:11:16.495 08:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=1266756
00:11:16.495 08:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 1266756
00:11:16.495 08:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:11:16.495 08:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@835 -- # '[' -z 1266756 ']'
00:11:16.495 08:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:11:16.495 08:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@840 -- # local max_retries=100
00:11:16.495 08:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:11:16.495 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:11:16.495 08:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@844 -- # xtrace_disable
00:11:16.495 08:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:11:16.754 [2024-11-28 08:09:58.765081] Starting SPDK v25.01-pre git sha1 27aaaa748 / DPDK 24.03.0 initialization...
00:11:16.754 [2024-11-28 08:09:58.765126] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:11:16.754 [2024-11-28 08:09:58.833362] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:11:16.754 [2024-11-28 08:09:58.874367] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. [2024-11-28 08:09:58.874409] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:11:16.754 [2024-11-28 08:09:58.874416] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:11:16.754 [2024-11-28 08:09:58.874423] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:11:16.754 [2024-11-28 08:09:58.874430] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:11:16.754 [2024-11-28 08:09:58.875839] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:11:16.754 [2024-11-28 08:09:58.875938] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:11:16.754 [2024-11-28 08:09:58.876039] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:11:16.754 [2024-11-28 08:09:58.876041] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:11:16.754 08:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:11:16.754 08:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@868 -- # return 0
00:11:16.754 08:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:11:16.754 08:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@732 -- # xtrace_disable
00:11:16.754 08:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:11:16.754 08:09:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:11:16.754 08:09:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1
00:11:16.754 08:09:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0
00:11:16.754 08:09:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:16.754 08:09:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:11:17.013 [2024-11-28 08:09:59.022881] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:11:17.013 08:09:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:17.013 08:09:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1
00:11:17.013 08:09:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:17.013 08:09:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:11:17.013 Malloc1
00:11:17.013 08:09:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:17.013 08:09:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
00:11:17.013 08:09:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:17.013 08:09:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:11:17.013 08:09:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:17.013 08:09:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
00:11:17.013 08:09:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:17.013 08:09:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:11:17.013 08:09:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:17.013 08:09:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:11:17.013 08:09:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:17.013 08:09:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:11:17.013 [2024-11-28 08:09:59.190241] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:11:17.013 08:09:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:17.013 08:09:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1
00:11:17.013 08:09:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # local bdev_name=Malloc1
00:11:17.013 08:09:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # local bdev_info
00:11:17.013 08:09:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # local bs
00:11:17.013 08:09:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1385 -- # local nb
00:11:17.013 08:09:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # rpc_cmd bdev_get_bdevs -b Malloc1
00:11:17.013 08:09:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:17.013 08:09:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:11:17.013 08:09:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:17.013 08:09:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # bdev_info='[
00:11:17.013 {
00:11:17.013 "name": "Malloc1",
00:11:17.013 "aliases": [
00:11:17.013 "d7f3daa4-89b2-4219-a266-e48f269be6f1"
00:11:17.013 ],
00:11:17.013 "product_name": "Malloc disk",
00:11:17.013 "block_size": 512,
00:11:17.013 "num_blocks": 1048576,
00:11:17.013 "uuid": "d7f3daa4-89b2-4219-a266-e48f269be6f1",
00:11:17.013 "assigned_rate_limits": {
00:11:17.013 "rw_ios_per_sec": 0,
00:11:17.013 "rw_mbytes_per_sec": 0,
00:11:17.013 "r_mbytes_per_sec": 0,
00:11:17.013 "w_mbytes_per_sec": 0
00:11:17.013 },
00:11:17.013 "claimed": true,
00:11:17.013 "claim_type": "exclusive_write",
00:11:17.013 "zoned": false,
00:11:17.013 "supported_io_types": {
00:11:17.013 "read": true,
00:11:17.014 "write": true,
00:11:17.014 "unmap": true,
00:11:17.014 "flush": true,
00:11:17.014 "reset": true,
00:11:17.014 "nvme_admin": false,
00:11:17.014 "nvme_io": false,
00:11:17.014 "nvme_io_md": false,
00:11:17.014 "write_zeroes": true,
00:11:17.014 "zcopy": true,
00:11:17.014 "get_zone_info": false,
00:11:17.014 "zone_management": false,
00:11:17.014 "zone_append": false,
00:11:17.014 "compare": false,
00:11:17.014 "compare_and_write": false,
00:11:17.014 "abort": true,
00:11:17.014 "seek_hole": false,
00:11:17.014 "seek_data": false,
00:11:17.014 "copy": true,
00:11:17.014 "nvme_iov_md": false
00:11:17.014 },
00:11:17.014 "memory_domains": [
00:11:17.014 {
00:11:17.014 "dma_device_id": "system",
00:11:17.014 "dma_device_type": 1
00:11:17.014 },
00:11:17.014 {
00:11:17.014 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:11:17.014 "dma_device_type": 2
00:11:17.014 }
00:11:17.014 ],
00:11:17.014 "driver_specific": {}
00:11:17.014 }
00:11:17.014 ]'
00:11:17.014 08:09:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # jq '.[] .block_size'
00:11:17.014 08:09:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bs=512
00:11:17.014 08:09:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks'
00:11:17.272 08:09:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # nb=1048576
00:11:17.272 08:09:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1391 -- # bdev_size=512
00:11:17.272 08:09:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1392 -- # echo 512
00:11:17.272 08:09:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912
00:11:17.272 08:09:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
00:11:18.643 08:10:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME
00:11:18.644 08:10:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1202 -- # local i=0
00:11:18.644 08:10:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0
00:11:18.644 08:10:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1204 -- # [[ -n '' ]]
00:11:18.644 08:10:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1209 -- # sleep 2
00:11:20.541 08:10:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1210 -- # (( i++ <= 15 ))
00:11:20.541 08:10:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL
00:11:20.541 08:10:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME
00:11:20.541 08:10:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # nvme_devices=1
00:11:20.541 08:10:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter ))
00:11:20.541 08:10:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1212 -- # return 0
00:11:20.541 08:10:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL
00:11:20.541 08:10:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)'
00:11:20.541 08:10:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1
00:11:20.541 08:10:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1
00:11:20.541 08:10:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1
00:11:20.541 08:10:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]]
00:11:20.541 08:10:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912
00:11:20.541 08:10:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912
00:11:20.541 08:10:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device
00:11:20.541 08:10:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size ))
00:11:20.541 08:10:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100%
00:11:20.800 08:10:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe
00:11:21.365 08:10:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1
00:11:22.736 08:10:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']'
00:11:22.736 08:10:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1
00:11:22.736 08:10:04
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:22.736 08:10:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:22.736 08:10:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:22.736 ************************************ 00:11:22.736 START TEST filesystem_ext4 00:11:22.736 ************************************ 00:11:22.736 08:10:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create ext4 nvme0n1 00:11:22.736 08:10:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:11:22.736 08:10:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:22.736 08:10:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:11:22.736 08:10:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@930 -- # local fstype=ext4 00:11:22.736 08:10:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:11:22.736 08:10:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@932 -- # local i=0 00:11:22.736 08:10:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@933 -- # local force 00:11:22.736 08:10:04 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@935 -- # '[' ext4 = ext4 ']' 00:11:22.736 08:10:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@936 -- # force=-F 00:11:22.736 08:10:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@941 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:11:22.736 mke2fs 1.47.0 (5-Feb-2023) 00:11:22.736 Discarding device blocks: 0/522240 done 00:11:22.736 Creating filesystem with 522240 1k blocks and 130560 inodes 00:11:22.736 Filesystem UUID: 7346ce81-e92d-461a-8dfb-dacaec91ab17 00:11:22.736 Superblock backups stored on blocks: 00:11:22.736 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:11:22.736 00:11:22.736 Allocating group tables: 0/64 done 00:11:22.736 Writing inode tables: 0/64 done 00:11:23.299 Creating journal (8192 blocks): done 00:11:25.495 Writing superblocks and filesystem accounting information: 0/64 4/64 done 00:11:25.495 00:11:25.495 08:10:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@949 -- # return 0 00:11:25.495 08:10:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:32.046 08:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:32.046 08:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:11:32.046 08:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:32.046 08:10:13 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:11:32.046 08:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:11:32.046 08:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:32.046 08:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 1266756 00:11:32.046 08:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:32.046 08:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:32.046 08:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:32.046 08:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:32.046 00:11:32.046 real 0m9.072s 00:11:32.046 user 0m0.026s 00:11:32.046 sys 0m0.079s 00:11:32.046 08:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:32.046 08:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:11:32.046 ************************************ 00:11:32.046 END TEST filesystem_ext4 00:11:32.046 ************************************ 00:11:32.046 08:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:11:32.046 
08:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:32.046 08:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:32.046 08:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:32.046 ************************************ 00:11:32.046 START TEST filesystem_btrfs 00:11:32.046 ************************************ 00:11:32.046 08:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create btrfs nvme0n1 00:11:32.046 08:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:11:32.046 08:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:32.046 08:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:11:32.046 08:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@930 -- # local fstype=btrfs 00:11:32.046 08:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:11:32.046 08:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@932 -- # local i=0 00:11:32.046 08:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@933 -- # local force 00:11:32.046 08:10:13 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@935 -- # '[' btrfs = ext4 ']' 00:11:32.046 08:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@938 -- # force=-f 00:11:32.046 08:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@941 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:11:32.046 btrfs-progs v6.8.1 00:11:32.046 See https://btrfs.readthedocs.io for more information. 00:11:32.046 00:11:32.046 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:11:32.046 NOTE: several default settings have changed in version 5.15, please make sure 00:11:32.046 this does not affect your deployments: 00:11:32.046 - DUP for metadata (-m dup) 00:11:32.046 - enabled no-holes (-O no-holes) 00:11:32.046 - enabled free-space-tree (-R free-space-tree) 00:11:32.046 00:11:32.046 Label: (null) 00:11:32.046 UUID: 27dd38e1-4ac1-4bee-b2a5-d2c671248bb6 00:11:32.046 Node size: 16384 00:11:32.046 Sector size: 4096 (CPU page size: 4096) 00:11:32.046 Filesystem size: 510.00MiB 00:11:32.046 Block group profiles: 00:11:32.046 Data: single 8.00MiB 00:11:32.046 Metadata: DUP 32.00MiB 00:11:32.047 System: DUP 8.00MiB 00:11:32.047 SSD detected: yes 00:11:32.047 Zoned device: no 00:11:32.047 Features: extref, skinny-metadata, no-holes, free-space-tree 00:11:32.047 Checksum: crc32c 00:11:32.047 Number of devices: 1 00:11:32.047 Devices: 00:11:32.047 ID SIZE PATH 00:11:32.047 1 510.00MiB /dev/nvme0n1p1 00:11:32.047 00:11:32.047 08:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@949 -- # return 0 00:11:32.047 08:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:32.613 08:10:14 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:32.613 08:10:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:11:32.613 08:10:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:32.613 08:10:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:11:32.613 08:10:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:11:32.613 08:10:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:32.613 08:10:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 1266756 00:11:32.613 08:10:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:32.613 08:10:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:32.613 08:10:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:32.613 08:10:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:32.613 00:11:32.613 real 0m1.115s 00:11:32.613 user 0m0.034s 00:11:32.613 sys 0m0.106s 00:11:32.613 08:10:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:32.613 
08:10:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:11:32.613 ************************************ 00:11:32.613 END TEST filesystem_btrfs 00:11:32.613 ************************************ 00:11:32.871 08:10:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:11:32.871 08:10:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:32.871 08:10:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:32.871 08:10:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:32.871 ************************************ 00:11:32.871 START TEST filesystem_xfs 00:11:32.871 ************************************ 00:11:32.871 08:10:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create xfs nvme0n1 00:11:32.871 08:10:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:11:32.871 08:10:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:32.871 08:10:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:11:32.871 08:10:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@930 -- # local fstype=xfs 00:11:32.871 08:10:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- 
common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:11:32.871 08:10:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@932 -- # local i=0 00:11:32.871 08:10:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@933 -- # local force 00:11:32.871 08:10:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@935 -- # '[' xfs = ext4 ']' 00:11:32.871 08:10:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@938 -- # force=-f 00:11:32.871 08:10:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@941 -- # mkfs.xfs -f /dev/nvme0n1p1 00:11:32.871 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:11:32.871 = sectsz=512 attr=2, projid32bit=1 00:11:32.871 = crc=1 finobt=1, sparse=1, rmapbt=0 00:11:32.871 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:11:32.871 data = bsize=4096 blocks=130560, imaxpct=25 00:11:32.871 = sunit=0 swidth=0 blks 00:11:32.871 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:11:32.871 log =internal log bsize=4096 blocks=16384, version=2 00:11:32.871 = sectsz=512 sunit=0 blks, lazy-count=1 00:11:32.871 realtime =none extsz=4096 blocks=0, rtextents=0 00:11:33.803 Discarding blocks...Done. 
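The `'[' xfs = ext4 ']'` check followed by `force=-f` in the trace is `make_filesystem` picking the right "force" spelling per mkfs tool: `mkfs.ext4` takes `-F`, while `mkfs.btrfs` and `mkfs.xfs` take `-f`. A sketch of that dispatch (the mkfs invocation itself is only printed here, since it needs root and a real partition):

```shell
#!/usr/bin/env bash
# Sketch of the force-flag selection in make_filesystem
# (common/autotest_common.sh): ext4 spells "force" as -F,
# btrfs and xfs spell it -f.
pick_force_flag() {
    local fstype=$1 flag=-f
    [ "$fstype" = ext4 ] && flag=-F
    printf '%s\n' "$flag"
}

for fs in ext4 btrfs xfs; do
    # mkfs is not actually run: it needs root and a real device.
    echo "$fs: mkfs.$fs $(pick_force_flag "$fs") /dev/nvme0n1p1"
done
```

The real helper also retries the mkfs a few times with the `local i=0` counter visible in the xtrace, since a just-created partition node can lag behind `partprobe`.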
00:11:33.803 08:10:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@949 -- # return 0 00:11:33.803 08:10:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:36.025 08:10:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:36.025 08:10:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:11:36.025 08:10:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:36.025 08:10:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:11:36.025 08:10:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:11:36.025 08:10:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:36.296 08:10:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 1266756 00:11:36.296 08:10:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:36.296 08:10:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:36.296 08:10:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:36.296 08:10:18 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:36.296 00:11:36.296 real 0m3.378s 00:11:36.296 user 0m0.025s 00:11:36.296 sys 0m0.075s 00:11:36.296 08:10:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:36.296 08:10:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:11:36.296 ************************************ 00:11:36.296 END TEST filesystem_xfs 00:11:36.296 ************************************ 00:11:36.296 08:10:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:11:36.563 08:10:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:11:36.563 08:10:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:36.563 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:36.563 08:10:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:36.563 08:10:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1223 -- # local i=0 00:11:36.563 08:10:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:11:36.563 08:10:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:36.563 08:10:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:11:36.563 08:10:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:36.563 08:10:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1235 -- # return 0 00:11:36.563 08:10:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:36.563 08:10:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.563 08:10:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:36.563 08:10:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.563 08:10:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:11:36.563 08:10:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 1266756 00:11:36.563 08:10:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # '[' -z 1266756 ']' 00:11:36.563 08:10:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # kill -0 1266756 00:11:36.563 08:10:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # uname 00:11:36.563 08:10:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:36.563 08:10:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1266756 00:11:36.833 08:10:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:36.833 08:10:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:36.833 08:10:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1266756' 00:11:36.833 killing process with pid 1266756 00:11:36.833 08:10:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@973 -- # kill 1266756 00:11:36.833 08:10:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@978 -- # wait 1266756 00:11:37.129 08:10:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:11:37.129 00:11:37.129 real 0m20.491s 00:11:37.129 user 1m20.713s 00:11:37.129 sys 0m1.510s 00:11:37.129 08:10:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:37.129 08:10:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:37.129 ************************************ 00:11:37.129 END TEST nvmf_filesystem_no_in_capsule 00:11:37.129 ************************************ 00:11:37.129 08:10:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:11:37.129 08:10:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:37.129 08:10:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:37.129 08:10:19 
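The `killprocess 1266756` trace above shows the shutdown helper's shape: confirm the pid is alive with `kill -0`, read its command name with `ps` (and refuse to kill `sudo`), then kill and reap it. A rough sketch under those assumptions, with a background `sleep` standing in for the nvmf_tgt reactor process:

```shell
#!/usr/bin/env bash
# Sketch of a killprocess-style helper: verify the pid exists,
# log its command name, send the default SIGTERM, then wait so
# no zombie is left behind.
killprocess_sketch() {
    local pid=$1
    kill -0 "$pid" 2>/dev/null || return 1        # still running?
    local name
    name=$(ps -o comm= -p "$pid")                 # command name for the log line
    echo "killing process with pid $pid ($name)"
    kill "$pid"
    wait "$pid" 2>/dev/null || true               # reap; wait fails for non-children
}

sleep 30 &                 # stand-in for the target process
killprocess_sketch $!
```

The real helper then `wait`s on the pid a second time (the `@978 -- wait` line) so the trap-installed cleanup sees a fully exited process before clearing `nvmfpid`.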
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:37.129 ************************************ 00:11:37.129 START TEST nvmf_filesystem_in_capsule 00:11:37.129 ************************************ 00:11:37.129 08:10:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1129 -- # nvmf_filesystem_part 4096 00:11:37.129 08:10:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:11:37.129 08:10:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:11:37.129 08:10:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:37.129 08:10:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:37.129 08:10:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:37.129 08:10:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:37.129 08:10:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=1270432 00:11:37.129 08:10:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 1270432 00:11:37.129 08:10:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@835 -- # '[' -z 1270432 ']' 00:11:37.129 08:10:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:37.129 08:10:19 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:37.129 08:10:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:37.129 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:37.129 08:10:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:37.129 08:10:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:37.129 [2024-11-28 08:10:19.316437] Starting SPDK v25.01-pre git sha1 27aaaa748 / DPDK 24.03.0 initialization... 00:11:37.130 [2024-11-28 08:10:19.316477] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:37.130 [2024-11-28 08:10:19.383773] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:37.422 [2024-11-28 08:10:19.427910] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:37.422 [2024-11-28 08:10:19.427945] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:37.422 [2024-11-28 08:10:19.427956] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:37.422 [2024-11-28 08:10:19.427962] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:37.422 [2024-11-28 08:10:19.427967] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
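`waitforlisten 1270432` above blocks until the freshly started `nvmf_tgt` is accepting RPCs on `/var/tmp/spdk.sock`. The core of such a helper is a bounded poll loop; the sketch below waits for a path to appear, using a delayed `touch` as a stand-in for the target creating its RPC socket (the real helper also checks each iteration that the pid is still alive and actually connects to the socket rather than just stat-ing it):

```shell
#!/usr/bin/env bash
# Sketch of a waitforlisten-style poll: spin until a path exists
# or the retry budget runs out.
wait_for_path() {
    local path=$1 max_retries=${2:-100}
    local i=0
    while [ ! -e "$path" ]; do
        i=$((i + 1))
        if [ "$i" -gt "$max_retries" ]; then
            return 1
        fi
        sleep 0.1
    done
}

tmp=$(mktemp -u)               # a path that does not exist yet
( sleep 0.3; touch "$tmp" ) &  # stand-in for nvmf_tgt creating spdk.sock
wait_for_path "$tmp" && echo "listening"
wait
rm -f "$tmp"
```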
00:11:37.422 [2024-11-28 08:10:19.429402] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:37.422 [2024-11-28 08:10:19.429496] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:37.422 [2024-11-28 08:10:19.429586] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:37.422 [2024-11-28 08:10:19.429588] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:37.422 08:10:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:37.422 08:10:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@868 -- # return 0 00:11:37.422 08:10:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:37.422 08:10:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:37.422 08:10:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:37.422 08:10:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:37.422 08:10:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:11:37.422 08:10:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:11:37.422 08:10:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.422 08:10:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:37.422 [2024-11-28 08:10:19.563397] tcp.c: 
738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:37.422 08:10:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.422 08:10:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:11:37.422 08:10:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.422 08:10:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:37.422 Malloc1 00:11:37.422 08:10:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.422 08:10:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:37.422 08:10:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.422 08:10:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:37.700 08:10:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.700 08:10:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:37.700 08:10:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.700 08:10:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:37.700 08:10:19 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.700 08:10:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:37.700 08:10:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.700 08:10:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:37.700 [2024-11-28 08:10:19.718143] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:37.700 08:10:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.700 08:10:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:11:37.700 08:10:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # local bdev_name=Malloc1 00:11:37.700 08:10:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # local bdev_info 00:11:37.700 08:10:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # local bs 00:11:37.700 08:10:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1385 -- # local nb 00:11:37.700 08:10:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:11:37.700 08:10:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.700 08:10:19 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:37.700 08:10:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.700 08:10:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:11:37.700 { 00:11:37.700 "name": "Malloc1", 00:11:37.700 "aliases": [ 00:11:37.700 "d7365060-2df6-4d29-8854-b2dae5d6fd41" 00:11:37.700 ], 00:11:37.700 "product_name": "Malloc disk", 00:11:37.700 "block_size": 512, 00:11:37.700 "num_blocks": 1048576, 00:11:37.700 "uuid": "d7365060-2df6-4d29-8854-b2dae5d6fd41", 00:11:37.700 "assigned_rate_limits": { 00:11:37.700 "rw_ios_per_sec": 0, 00:11:37.700 "rw_mbytes_per_sec": 0, 00:11:37.700 "r_mbytes_per_sec": 0, 00:11:37.700 "w_mbytes_per_sec": 0 00:11:37.700 }, 00:11:37.700 "claimed": true, 00:11:37.700 "claim_type": "exclusive_write", 00:11:37.700 "zoned": false, 00:11:37.700 "supported_io_types": { 00:11:37.700 "read": true, 00:11:37.700 "write": true, 00:11:37.700 "unmap": true, 00:11:37.700 "flush": true, 00:11:37.700 "reset": true, 00:11:37.700 "nvme_admin": false, 00:11:37.700 "nvme_io": false, 00:11:37.700 "nvme_io_md": false, 00:11:37.700 "write_zeroes": true, 00:11:37.700 "zcopy": true, 00:11:37.700 "get_zone_info": false, 00:11:37.700 "zone_management": false, 00:11:37.700 "zone_append": false, 00:11:37.700 "compare": false, 00:11:37.700 "compare_and_write": false, 00:11:37.700 "abort": true, 00:11:37.700 "seek_hole": false, 00:11:37.700 "seek_data": false, 00:11:37.700 "copy": true, 00:11:37.700 "nvme_iov_md": false 00:11:37.700 }, 00:11:37.700 "memory_domains": [ 00:11:37.700 { 00:11:37.700 "dma_device_id": "system", 00:11:37.700 "dma_device_type": 1 00:11:37.700 }, 00:11:37.700 { 00:11:37.700 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:37.700 "dma_device_type": 2 00:11:37.700 } 00:11:37.700 ], 00:11:37.700 
"driver_specific": {} 00:11:37.700 } 00:11:37.700 ]' 00:11:37.700 08:10:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:11:37.700 08:10:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bs=512 00:11:37.700 08:10:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:11:37.700 08:10:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # nb=1048576 00:11:37.700 08:10:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1391 -- # bdev_size=512 00:11:37.700 08:10:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1392 -- # echo 512 00:11:37.700 08:10:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:11:37.700 08:10:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:38.706 08:10:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:11:38.706 08:10:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1202 -- # local i=0 00:11:38.706 08:10:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:38.706 08:10:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1204 -- # [[ -n 
'' ]] 00:11:38.706 08:10:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1209 -- # sleep 2 00:11:40.740 08:10:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:40.740 08:10:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:40.740 08:10:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:11:40.740 08:10:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:11:40.740 08:10:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:11:40.740 08:10:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1212 -- # return 0 00:11:40.740 08:10:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:11:40.740 08:10:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:11:40.740 08:10:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:11:40.740 08:10:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:11:40.740 08:10:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:11:40.740 08:10:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:11:40.740 08:10:22 
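The `waitforserial` / `sec_size_to_bytes` plumbing traced above boils down to: find the kernel block device whose `lsblk` SERIAL column matches the subsystem serial (`SPDKISFASTANDAWESOME`), then compute its size from the 512-byte sector count in `/sys/block/<dev>/size`. A hedged sketch on canned `lsblk` output (the real helpers run `lsblk -l -o NAME,SERIAL` against live devices and extract the name with `grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)'`; plain `awk` is used here so the sketch does not depend on PCRE-enabled grep):

```shell
#!/usr/bin/env bash
# Sketch of serial-based device discovery plus size computation.
# lsblk_fake stands in for `lsblk -l -o NAME,SERIAL` on a live host.
lsblk_fake() {
    printf 'NAME      SERIAL\n'
    printf 'sda       WD1234\n'
    printf 'nvme0n1   SPDKISFASTANDAWESOME\n'
}

# Pick the device whose serial matches the subsystem serial.
nvme_name=$(lsblk_fake | awk '$2 == "SPDKISFASTANDAWESOME" {print $1}')
echo "$nvme_name"

# sec_size_to_bytes reads /sys/block/<dev>/size (512-byte sectors);
# the 512 MiB malloc bdev above is 1048576 sectors.
sectors=1048576
echo $((sectors * 512))
```

This matches the trace: the 512-block-size, 1048576-block Malloc1 bdev reported by `bdev_get_bdevs` shows up host-side as a 536870912-byte `nvme0n1`, which is exactly the `nvme_size == malloc_size` check that gates the partitioning step.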
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:11:40.740 08:10:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:11:40.740 08:10:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:11:40.740 08:10:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:11:40.740 08:10:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:11:41.023 08:10:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:11:41.647 08:10:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:11:42.714 08:10:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:11:42.714 08:10:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:11:42.714 08:10:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:42.714 08:10:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:42.714 08:10:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:42.714 ************************************ 00:11:42.714 START TEST filesystem_in_capsule_ext4 00:11:42.714 ************************************ 00:11:42.714 08:10:24 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create ext4 nvme0n1 00:11:42.714 08:10:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:11:42.714 08:10:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:42.714 08:10:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:11:42.714 08:10:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@930 -- # local fstype=ext4 00:11:42.715 08:10:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:11:42.715 08:10:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@932 -- # local i=0 00:11:42.715 08:10:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@933 -- # local force 00:11:42.715 08:10:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@935 -- # '[' ext4 = ext4 ']' 00:11:42.715 08:10:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@936 -- # force=-F 00:11:42.715 08:10:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@941 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:11:42.715 mke2fs 1.47.0 (5-Feb-2023) 00:11:42.715 Discarding device blocks: 
0/522240 done 00:11:42.715 Creating filesystem with 522240 1k blocks and 130560 inodes 00:11:42.715 Filesystem UUID: 3e379da4-8ac3-4a15-ac19-6aa15290ce72 00:11:42.715 Superblock backups stored on blocks: 00:11:42.715 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:11:42.715 00:11:42.715 Allocating group tables: 0/64 done 00:11:42.715 Writing inode tables: 0/64 done 00:11:42.715 Creating journal (8192 blocks): done 00:11:45.080 Writing superblocks and filesystem accounting information: 0/6428/64 done 00:11:45.080 00:11:45.080 08:10:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@949 -- # return 0 00:11:45.080 08:10:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:51.650 08:10:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:51.650 08:10:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:11:51.650 08:10:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:51.650 08:10:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:11:51.650 08:10:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:11:51.650 08:10:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:51.650 08:10:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- 
target/filesystem.sh@37 -- # kill -0 1270432 00:11:51.650 08:10:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:51.650 08:10:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:51.650 08:10:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:51.650 08:10:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:51.650 00:11:51.650 real 0m8.269s 00:11:51.650 user 0m0.031s 00:11:51.650 sys 0m0.071s 00:11:51.650 08:10:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:51.650 08:10:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:11:51.650 ************************************ 00:11:51.650 END TEST filesystem_in_capsule_ext4 00:11:51.651 ************************************ 00:11:51.651 08:10:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:11:51.651 08:10:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:51.651 08:10:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:51.651 08:10:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:51.651 ************************************ 00:11:51.651 START 
TEST filesystem_in_capsule_btrfs 00:11:51.651 ************************************ 00:11:51.651 08:10:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create btrfs nvme0n1 00:11:51.651 08:10:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:11:51.651 08:10:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:51.651 08:10:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:11:51.651 08:10:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@930 -- # local fstype=btrfs 00:11:51.651 08:10:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:11:51.651 08:10:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@932 -- # local i=0 00:11:51.651 08:10:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@933 -- # local force 00:11:51.651 08:10:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@935 -- # '[' btrfs = ext4 ']' 00:11:51.651 08:10:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@938 -- # force=-f 00:11:51.651 08:10:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- 
common/autotest_common.sh@941 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:11:51.651 btrfs-progs v6.8.1 00:11:51.651 See https://btrfs.readthedocs.io for more information. 00:11:51.651 00:11:51.651 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:11:51.651 NOTE: several default settings have changed in version 5.15, please make sure 00:11:51.651 this does not affect your deployments: 00:11:51.651 - DUP for metadata (-m dup) 00:11:51.651 - enabled no-holes (-O no-holes) 00:11:51.651 - enabled free-space-tree (-R free-space-tree) 00:11:51.651 00:11:51.651 Label: (null) 00:11:51.651 UUID: b1189d37-8d5d-4d72-93c7-a193f33dcc50 00:11:51.651 Node size: 16384 00:11:51.651 Sector size: 4096 (CPU page size: 4096) 00:11:51.651 Filesystem size: 510.00MiB 00:11:51.651 Block group profiles: 00:11:51.651 Data: single 8.00MiB 00:11:51.651 Metadata: DUP 32.00MiB 00:11:51.651 System: DUP 8.00MiB 00:11:51.651 SSD detected: yes 00:11:51.651 Zoned device: no 00:11:51.651 Features: extref, skinny-metadata, no-holes, free-space-tree 00:11:51.651 Checksum: crc32c 00:11:51.651 Number of devices: 1 00:11:51.651 Devices: 00:11:51.651 ID SIZE PATH 00:11:51.651 1 510.00MiB /dev/nvme0n1p1 00:11:51.651 00:11:51.651 08:10:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@949 -- # return 0 00:11:51.651 08:10:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:52.220 08:10:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:52.220 08:10:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:11:52.220 08:10:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs 
-- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:52.220 08:10:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:11:52.220 08:10:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:11:52.220 08:10:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:52.220 08:10:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 1270432 00:11:52.220 08:10:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:52.220 08:10:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:52.220 08:10:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:52.220 08:10:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:52.220 00:11:52.220 real 0m1.278s 00:11:52.220 user 0m0.025s 00:11:52.220 sys 0m0.116s 00:11:52.220 08:10:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:52.220 08:10:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x 00:11:52.220 ************************************ 00:11:52.220 END TEST filesystem_in_capsule_btrfs 00:11:52.220 ************************************ 00:11:52.220 08:10:34 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:11:52.220 08:10:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:52.220 08:10:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:52.220 08:10:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:52.220 ************************************ 00:11:52.220 START TEST filesystem_in_capsule_xfs 00:11:52.220 ************************************ 00:11:52.220 08:10:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create xfs nvme0n1 00:11:52.220 08:10:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:11:52.220 08:10:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:52.220 08:10:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:11:52.220 08:10:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@930 -- # local fstype=xfs 00:11:52.220 08:10:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:11:52.220 08:10:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@932 -- # local i=0 00:11:52.220 
08:10:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@933 -- # local force 00:11:52.220 08:10:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@935 -- # '[' xfs = ext4 ']' 00:11:52.220 08:10:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@938 -- # force=-f 00:11:52.220 08:10:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@941 -- # mkfs.xfs -f /dev/nvme0n1p1 00:11:52.220 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:11:52.220 = sectsz=512 attr=2, projid32bit=1 00:11:52.220 = crc=1 finobt=1, sparse=1, rmapbt=0 00:11:52.220 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:11:52.220 data = bsize=4096 blocks=130560, imaxpct=25 00:11:52.220 = sunit=0 swidth=0 blks 00:11:52.220 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:11:52.220 log =internal log bsize=4096 blocks=16384, version=2 00:11:52.220 = sectsz=512 sunit=0 blks, lazy-count=1 00:11:52.220 realtime =none extsz=4096 blocks=0, rtextents=0 00:11:53.599 Discarding blocks...Done. 
00:11:53.599 08:10:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@949 -- # return 0 00:11:53.599 08:10:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:56.137 08:10:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:56.137 08:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:11:56.137 08:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:56.137 08:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:11:56.137 08:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:11:56.137 08:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:56.137 08:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 1270432 00:11:56.137 08:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:56.137 08:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:56.137 08:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 
00:11:56.137 08:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:56.137 00:11:56.137 real 0m3.786s 00:11:56.137 user 0m0.030s 00:11:56.137 sys 0m0.067s 00:11:56.137 08:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:56.137 08:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:11:56.137 ************************************ 00:11:56.137 END TEST filesystem_in_capsule_xfs 00:11:56.137 ************************************ 00:11:56.137 08:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:11:56.397 08:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:11:56.397 08:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:56.397 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:56.397 08:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:56.397 08:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1223 -- # local i=0 00:11:56.397 08:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:11:56.397 08:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:56.397 08:10:38 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:11:56.397 08:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:56.397 08:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1235 -- # return 0 00:11:56.397 08:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:56.397 08:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.397 08:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:56.397 08:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.398 08:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:11:56.398 08:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 1270432 00:11:56.398 08:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # '[' -z 1270432 ']' 00:11:56.398 08:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # kill -0 1270432 00:11:56.398 08:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # uname 00:11:56.398 08:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:56.398 08:10:38 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1270432 00:11:56.398 08:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:56.398 08:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:56.398 08:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1270432' 00:11:56.398 killing process with pid 1270432 00:11:56.398 08:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@973 -- # kill 1270432 00:11:56.398 08:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@978 -- # wait 1270432 00:11:56.968 08:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:11:56.968 00:11:56.968 real 0m19.698s 00:11:56.968 user 1m17.681s 00:11:56.968 sys 0m1.412s 00:11:56.968 08:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:56.968 08:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:56.968 ************************************ 00:11:56.968 END TEST nvmf_filesystem_in_capsule 00:11:56.968 ************************************ 00:11:56.968 08:10:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:11:56.968 08:10:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:56.968 08:10:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@121 -- # sync 00:11:56.968 08:10:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:56.968 08:10:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@124 -- # set +e 00:11:56.968 08:10:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:56.968 08:10:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:56.968 rmmod nvme_tcp 00:11:56.968 rmmod nvme_fabrics 00:11:56.968 rmmod nvme_keyring 00:11:56.968 08:10:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:56.968 08:10:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@128 -- # set -e 00:11:56.968 08:10:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@129 -- # return 0 00:11:56.968 08:10:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:11:56.968 08:10:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:56.968 08:10:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:56.968 08:10:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:56.968 08:10:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@297 -- # iptr 00:11:56.968 08:10:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-save 00:11:56.968 08:10:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-restore 00:11:56.968 08:10:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:56.968 08:10:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:56.968 08:10:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:56.968 08:10:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:11:56.968 08:10:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:56.968 08:10:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:58.890 08:10:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:58.890 00:11:58.890 real 0m48.433s 00:11:58.890 user 2m40.327s 00:11:58.890 sys 0m7.285s 00:11:58.890 08:10:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:58.890 08:10:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:58.890 ************************************ 00:11:58.890 END TEST nvmf_filesystem 00:11:58.890 ************************************ 00:11:59.150 08:10:41 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@18 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:11:59.150 08:10:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:59.150 08:10:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:59.150 08:10:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:59.150 ************************************ 00:11:59.150 START TEST nvmf_target_discovery 00:11:59.150 ************************************ 00:11:59.150 08:10:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:11:59.150 * Looking for test storage... 
00:11:59.150 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:59.150 08:10:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:59.150 08:10:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:59.150 08:10:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1693 -- # lcov --version 00:11:59.150 08:10:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:59.150 08:10:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:59.150 08:10:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:59.150 08:10:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:59.150 08:10:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:11:59.150 08:10:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:11:59.150 08:10:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:11:59.150 08:10:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:11:59.150 08:10:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:11:59.150 08:10:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:11:59.150 08:10:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:11:59.150 08:10:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:59.150 08:10:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@344 -- # case "$op" in 00:11:59.150 
08:10:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@345 -- # : 1 00:11:59.150 08:10:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:59.150 08:10:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:59.150 08:10:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # decimal 1 00:11:59.150 08:10:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=1 00:11:59.150 08:10:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:59.150 08:10:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 1 00:11:59.150 08:10:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:11:59.150 08:10:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # decimal 2 00:11:59.150 08:10:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=2 00:11:59.150 08:10:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:59.150 08:10:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 2 00:11:59.150 08:10:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:11:59.150 08:10:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:59.150 08:10:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:59.150 08:10:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # return 0 00:11:59.150 08:10:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1' 00:11:59.150 08:10:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:59.150 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:59.150 --rc genhtml_branch_coverage=1 00:11:59.150 --rc genhtml_function_coverage=1 00:11:59.150 --rc genhtml_legend=1 00:11:59.150 --rc geninfo_all_blocks=1 00:11:59.150 --rc geninfo_unexecuted_blocks=1 00:11:59.150 00:11:59.150 ' 00:11:59.150 08:10:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:59.150 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:59.150 --rc genhtml_branch_coverage=1 00:11:59.150 --rc genhtml_function_coverage=1 00:11:59.150 --rc genhtml_legend=1 00:11:59.150 --rc geninfo_all_blocks=1 00:11:59.150 --rc geninfo_unexecuted_blocks=1 00:11:59.150 00:11:59.150 ' 00:11:59.150 08:10:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:59.150 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:59.150 --rc genhtml_branch_coverage=1 00:11:59.150 --rc genhtml_function_coverage=1 00:11:59.150 --rc genhtml_legend=1 00:11:59.150 --rc geninfo_all_blocks=1 00:11:59.150 --rc geninfo_unexecuted_blocks=1 00:11:59.150 00:11:59.150 ' 00:11:59.150 08:10:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:59.150 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:59.150 --rc genhtml_branch_coverage=1 00:11:59.150 --rc genhtml_function_coverage=1 00:11:59.150 --rc genhtml_legend=1 00:11:59.150 --rc geninfo_all_blocks=1 00:11:59.150 --rc geninfo_unexecuted_blocks=1 00:11:59.150 00:11:59.150 ' 00:11:59.150 08:10:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:59.150 08:10:41 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:11:59.150 08:10:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:59.150 08:10:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:59.150 08:10:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:59.150 08:10:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:59.150 08:10:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:59.150 08:10:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:59.150 08:10:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:59.150 08:10:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:59.150 08:10:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:59.150 08:10:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:59.150 08:10:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:11:59.150 08:10:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:11:59.150 08:10:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:59.151 08:10:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:59.151 08:10:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 
00:11:59.151 08:10:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:59.151 08:10:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:59.151 08:10:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:11:59.151 08:10:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:59.151 08:10:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:59.151 08:10:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:59.151 08:10:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:59.151 08:10:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:59.151 08:10:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:59.151 08:10:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:11:59.151 08:10:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:59.151 08:10:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@51 -- # : 0 00:11:59.151 08:10:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:59.151 08:10:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:59.151 08:10:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:59.151 08:10:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:59.151 08:10:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:59.151 08:10:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:59.151 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:59.151 08:10:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:59.151 08:10:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:59.151 08:10:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:59.151 08:10:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@11 -- # 
NULL_BDEV_SIZE=102400 00:11:59.151 08:10:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:11:59.151 08:10:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:11:59.151 08:10:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:11:59.151 08:10:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:11:59.151 08:10:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:59.151 08:10:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:59.151 08:10:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:59.151 08:10:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:59.151 08:10:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:59.151 08:10:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:59.151 08:10:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:59.151 08:10:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:59.410 08:10:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:59.410 08:10:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:59.410 08:10:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:11:59.410 08:10:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:04.684 08:10:46 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:04.684 08:10:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:12:04.684 08:10:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:04.684 08:10:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:04.684 08:10:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:04.684 08:10:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:04.684 08:10:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:04.684 08:10:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:12:04.684 08:10:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:04.685 08:10:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # e810=() 00:12:04.685 08:10:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:12:04.685 08:10:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # x722=() 00:12:04.685 08:10:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:12:04.685 08:10:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # mlx=() 00:12:04.685 08:10:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:12:04.685 08:10:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:04.685 08:10:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:04.685 08:10:46 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:04.685 08:10:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:04.685 08:10:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:04.685 08:10:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:04.685 08:10:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:04.685 08:10:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:04.685 08:10:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:04.685 08:10:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:04.685 08:10:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:04.685 08:10:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:04.685 08:10:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:04.685 08:10:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:04.685 08:10:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:04.685 08:10:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:04.685 08:10:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 
00:12:04.685 08:10:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:04.685 08:10:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:04.685 08:10:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:12:04.685 Found 0000:86:00.0 (0x8086 - 0x159b) 00:12:04.685 08:10:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:04.685 08:10:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:04.685 08:10:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:04.685 08:10:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:04.685 08:10:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:04.685 08:10:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:04.685 08:10:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:12:04.685 Found 0000:86:00.1 (0x8086 - 0x159b) 00:12:04.685 08:10:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:04.685 08:10:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:04.685 08:10:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:04.685 08:10:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:04.685 08:10:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:04.685 08:10:46 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:04.685 08:10:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:04.685 08:10:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:04.685 08:10:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:04.685 08:10:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:04.685 08:10:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:04.685 08:10:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:04.685 08:10:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:04.685 08:10:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:04.685 08:10:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:04.685 08:10:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:12:04.685 Found net devices under 0000:86:00.0: cvl_0_0 00:12:04.685 08:10:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:04.685 08:10:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:04.685 08:10:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:04.685 08:10:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:04.685 08:10:46 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:04.685 08:10:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:04.685 08:10:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:04.685 08:10:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:04.685 08:10:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:12:04.685 Found net devices under 0000:86:00.1: cvl_0_1 00:12:04.685 08:10:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:04.685 08:10:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:04.685 08:10:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # is_hw=yes 00:12:04.685 08:10:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:04.685 08:10:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:04.685 08:10:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:04.685 08:10:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:04.685 08:10:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:04.685 08:10:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:04.685 08:10:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:04.685 08:10:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@256 -- # (( 2 
> 1 )) 00:12:04.685 08:10:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:04.685 08:10:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:04.685 08:10:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:04.685 08:10:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:04.685 08:10:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:04.685 08:10:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:04.685 08:10:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:04.685 08:10:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:04.685 08:10:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:04.685 08:10:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:04.685 08:10:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:04.685 08:10:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:04.685 08:10:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:04.685 08:10:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:04.945 08:10:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@284 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set lo up 00:12:04.945 08:10:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:04.946 08:10:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:04.946 08:10:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:04.946 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:04.946 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.206 ms 00:12:04.946 00:12:04.946 --- 10.0.0.2 ping statistics --- 00:12:04.946 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:04.946 rtt min/avg/max/mdev = 0.206/0.206/0.206/0.000 ms 00:12:04.946 08:10:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:04.946 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:04.946 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.148 ms 00:12:04.946 00:12:04.946 --- 10.0.0.1 ping statistics --- 00:12:04.946 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:04.946 rtt min/avg/max/mdev = 0.148/0.148/0.148/0.000 ms 00:12:04.946 08:10:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:04.946 08:10:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@450 -- # return 0 00:12:04.946 08:10:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:04.946 08:10:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:04.946 08:10:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:04.946 08:10:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:04.946 08:10:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:04.946 08:10:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:04.946 08:10:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:04.946 08:10:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:12:04.946 08:10:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:04.946 08:10:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:04.946 08:10:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:04.946 08:10:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@509 -- # nvmfpid=1277425 00:12:04.946 08:10:47 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:04.946 08:10:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@510 -- # waitforlisten 1277425 00:12:04.946 08:10:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@835 -- # '[' -z 1277425 ']' 00:12:04.946 08:10:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:04.946 08:10:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:04.946 08:10:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:04.946 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:04.946 08:10:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:04.946 08:10:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:04.946 [2024-11-28 08:10:47.113541] Starting SPDK v25.01-pre git sha1 27aaaa748 / DPDK 24.03.0 initialization... 00:12:04.946 [2024-11-28 08:10:47.113584] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:04.946 [2024-11-28 08:10:47.180343] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:05.206 [2024-11-28 08:10:47.223271] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:12:05.206 [2024-11-28 08:10:47.223307] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:05.206 [2024-11-28 08:10:47.223315] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:05.206 [2024-11-28 08:10:47.223321] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:05.206 [2024-11-28 08:10:47.223326] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:05.206 [2024-11-28 08:10:47.224927] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:05.206 [2024-11-28 08:10:47.225044] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:05.206 [2024-11-28 08:10:47.225063] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:05.206 [2024-11-28 08:10:47.225064] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:05.206 08:10:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:05.206 08:10:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@868 -- # return 0 00:12:05.206 08:10:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:05.206 08:10:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:05.206 08:10:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:05.206 08:10:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:05.206 08:10:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:05.206 08:10:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.206 08:10:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:05.206 [2024-11-28 08:10:47.375797] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:05.206 08:10:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.206 08:10:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:12:05.206 08:10:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:05.206 08:10:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:12:05.206 08:10:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.206 08:10:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:05.206 Null1 00:12:05.206 08:10:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.206 08:10:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:12:05.206 08:10:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.206 08:10:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:05.206 08:10:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.206 08:10:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:12:05.206 08:10:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.206 
08:10:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:05.206 08:10:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.206 08:10:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:05.206 08:10:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.206 08:10:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:05.206 [2024-11-28 08:10:47.430116] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:05.206 08:10:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.206 08:10:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:05.206 08:10:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:12:05.206 08:10:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.206 08:10:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:05.206 Null2 00:12:05.206 08:10:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.206 08:10:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:12:05.206 08:10:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.206 08:10:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:05.206 
08:10:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.206 08:10:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:12:05.206 08:10:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.207 08:10:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:05.207 08:10:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.207 08:10:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:12:05.207 08:10:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.207 08:10:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:05.207 08:10:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.207 08:10:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:05.207 08:10:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:12:05.207 08:10:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.207 08:10:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:05.466 Null3 00:12:05.466 08:10:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.466 08:10:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s 
SPDK00000000000003 00:12:05.466 08:10:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.466 08:10:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:05.466 08:10:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.466 08:10:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:12:05.466 08:10:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.466 08:10:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:05.466 08:10:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.466 08:10:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:12:05.466 08:10:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.466 08:10:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:05.466 08:10:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.466 08:10:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:05.466 08:10:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:12:05.466 08:10:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.466 08:10:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:05.466 Null4 00:12:05.466 
08:10:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.466 08:10:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:12:05.466 08:10:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.466 08:10:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:05.466 08:10:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.466 08:10:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:12:05.466 08:10:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.466 08:10:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:05.466 08:10:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.466 08:10:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:12:05.466 08:10:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.466 08:10:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:05.466 08:10:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.466 08:10:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:12:05.466 08:10:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.466 08:10:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:05.466 08:10:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.466 08:10:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:12:05.466 08:10:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.466 08:10:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:05.466 08:10:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.466 08:10:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 4420 00:12:05.726 00:12:05.726 Discovery Log Number of Records 6, Generation counter 6 00:12:05.726 =====Discovery Log Entry 0====== 00:12:05.726 trtype: tcp 00:12:05.726 adrfam: ipv4 00:12:05.726 subtype: current discovery subsystem 00:12:05.726 treq: not required 00:12:05.726 portid: 0 00:12:05.726 trsvcid: 4420 00:12:05.726 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:12:05.726 traddr: 10.0.0.2 00:12:05.726 eflags: explicit discovery connections, duplicate discovery information 00:12:05.726 sectype: none 00:12:05.726 =====Discovery Log Entry 1====== 00:12:05.726 trtype: tcp 00:12:05.726 adrfam: ipv4 00:12:05.726 subtype: nvme subsystem 00:12:05.726 treq: not required 00:12:05.726 portid: 0 00:12:05.726 trsvcid: 4420 00:12:05.726 subnqn: nqn.2016-06.io.spdk:cnode1 00:12:05.726 traddr: 10.0.0.2 00:12:05.726 eflags: none 00:12:05.726 sectype: none 00:12:05.726 =====Discovery Log Entry 2====== 00:12:05.726 
trtype: tcp 00:12:05.726 adrfam: ipv4 00:12:05.726 subtype: nvme subsystem 00:12:05.726 treq: not required 00:12:05.726 portid: 0 00:12:05.726 trsvcid: 4420 00:12:05.726 subnqn: nqn.2016-06.io.spdk:cnode2 00:12:05.726 traddr: 10.0.0.2 00:12:05.726 eflags: none 00:12:05.726 sectype: none 00:12:05.726 =====Discovery Log Entry 3====== 00:12:05.726 trtype: tcp 00:12:05.726 adrfam: ipv4 00:12:05.726 subtype: nvme subsystem 00:12:05.726 treq: not required 00:12:05.726 portid: 0 00:12:05.726 trsvcid: 4420 00:12:05.726 subnqn: nqn.2016-06.io.spdk:cnode3 00:12:05.726 traddr: 10.0.0.2 00:12:05.726 eflags: none 00:12:05.726 sectype: none 00:12:05.726 =====Discovery Log Entry 4====== 00:12:05.726 trtype: tcp 00:12:05.726 adrfam: ipv4 00:12:05.726 subtype: nvme subsystem 00:12:05.726 treq: not required 00:12:05.726 portid: 0 00:12:05.726 trsvcid: 4420 00:12:05.726 subnqn: nqn.2016-06.io.spdk:cnode4 00:12:05.726 traddr: 10.0.0.2 00:12:05.726 eflags: none 00:12:05.726 sectype: none 00:12:05.726 =====Discovery Log Entry 5====== 00:12:05.726 trtype: tcp 00:12:05.726 adrfam: ipv4 00:12:05.726 subtype: discovery subsystem referral 00:12:05.726 treq: not required 00:12:05.726 portid: 0 00:12:05.726 trsvcid: 4430 00:12:05.726 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:12:05.726 traddr: 10.0.0.2 00:12:05.726 eflags: none 00:12:05.726 sectype: none 00:12:05.726 08:10:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:12:05.726 Perform nvmf subsystem discovery via RPC 00:12:05.726 08:10:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:12:05.726 08:10:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.726 08:10:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:05.726 [ 00:12:05.726 { 00:12:05.726 "nqn": 
"nqn.2014-08.org.nvmexpress.discovery", 00:12:05.726 "subtype": "Discovery", 00:12:05.726 "listen_addresses": [ 00:12:05.726 { 00:12:05.726 "trtype": "TCP", 00:12:05.726 "adrfam": "IPv4", 00:12:05.726 "traddr": "10.0.0.2", 00:12:05.726 "trsvcid": "4420" 00:12:05.726 } 00:12:05.726 ], 00:12:05.726 "allow_any_host": true, 00:12:05.726 "hosts": [] 00:12:05.726 }, 00:12:05.726 { 00:12:05.726 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:12:05.726 "subtype": "NVMe", 00:12:05.726 "listen_addresses": [ 00:12:05.726 { 00:12:05.726 "trtype": "TCP", 00:12:05.726 "adrfam": "IPv4", 00:12:05.726 "traddr": "10.0.0.2", 00:12:05.726 "trsvcid": "4420" 00:12:05.726 } 00:12:05.726 ], 00:12:05.726 "allow_any_host": true, 00:12:05.726 "hosts": [], 00:12:05.726 "serial_number": "SPDK00000000000001", 00:12:05.726 "model_number": "SPDK bdev Controller", 00:12:05.726 "max_namespaces": 32, 00:12:05.726 "min_cntlid": 1, 00:12:05.726 "max_cntlid": 65519, 00:12:05.726 "namespaces": [ 00:12:05.726 { 00:12:05.726 "nsid": 1, 00:12:05.726 "bdev_name": "Null1", 00:12:05.726 "name": "Null1", 00:12:05.726 "nguid": "A79F117528434B51BADA92B77D193361", 00:12:05.726 "uuid": "a79f1175-2843-4b51-bada-92b77d193361" 00:12:05.726 } 00:12:05.726 ] 00:12:05.726 }, 00:12:05.726 { 00:12:05.726 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:12:05.726 "subtype": "NVMe", 00:12:05.726 "listen_addresses": [ 00:12:05.726 { 00:12:05.726 "trtype": "TCP", 00:12:05.726 "adrfam": "IPv4", 00:12:05.726 "traddr": "10.0.0.2", 00:12:05.726 "trsvcid": "4420" 00:12:05.726 } 00:12:05.726 ], 00:12:05.726 "allow_any_host": true, 00:12:05.726 "hosts": [], 00:12:05.726 "serial_number": "SPDK00000000000002", 00:12:05.726 "model_number": "SPDK bdev Controller", 00:12:05.726 "max_namespaces": 32, 00:12:05.726 "min_cntlid": 1, 00:12:05.726 "max_cntlid": 65519, 00:12:05.726 "namespaces": [ 00:12:05.726 { 00:12:05.726 "nsid": 1, 00:12:05.726 "bdev_name": "Null2", 00:12:05.726 "name": "Null2", 00:12:05.726 "nguid": "B0182D18E6A0463183CC80B6FBF82F25", 
00:12:05.726 "uuid": "b0182d18-e6a0-4631-83cc-80b6fbf82f25" 00:12:05.726 } 00:12:05.726 ] 00:12:05.726 }, 00:12:05.726 { 00:12:05.726 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:12:05.726 "subtype": "NVMe", 00:12:05.726 "listen_addresses": [ 00:12:05.726 { 00:12:05.726 "trtype": "TCP", 00:12:05.726 "adrfam": "IPv4", 00:12:05.726 "traddr": "10.0.0.2", 00:12:05.726 "trsvcid": "4420" 00:12:05.726 } 00:12:05.726 ], 00:12:05.726 "allow_any_host": true, 00:12:05.726 "hosts": [], 00:12:05.726 "serial_number": "SPDK00000000000003", 00:12:05.726 "model_number": "SPDK bdev Controller", 00:12:05.726 "max_namespaces": 32, 00:12:05.726 "min_cntlid": 1, 00:12:05.726 "max_cntlid": 65519, 00:12:05.726 "namespaces": [ 00:12:05.726 { 00:12:05.726 "nsid": 1, 00:12:05.726 "bdev_name": "Null3", 00:12:05.726 "name": "Null3", 00:12:05.726 "nguid": "D4BB5384C9314C86991FA26C18664565", 00:12:05.726 "uuid": "d4bb5384-c931-4c86-991f-a26c18664565" 00:12:05.726 } 00:12:05.726 ] 00:12:05.726 }, 00:12:05.726 { 00:12:05.726 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:12:05.726 "subtype": "NVMe", 00:12:05.726 "listen_addresses": [ 00:12:05.726 { 00:12:05.726 "trtype": "TCP", 00:12:05.726 "adrfam": "IPv4", 00:12:05.726 "traddr": "10.0.0.2", 00:12:05.726 "trsvcid": "4420" 00:12:05.726 } 00:12:05.726 ], 00:12:05.726 "allow_any_host": true, 00:12:05.726 "hosts": [], 00:12:05.726 "serial_number": "SPDK00000000000004", 00:12:05.726 "model_number": "SPDK bdev Controller", 00:12:05.726 "max_namespaces": 32, 00:12:05.726 "min_cntlid": 1, 00:12:05.726 "max_cntlid": 65519, 00:12:05.726 "namespaces": [ 00:12:05.726 { 00:12:05.726 "nsid": 1, 00:12:05.726 "bdev_name": "Null4", 00:12:05.726 "name": "Null4", 00:12:05.726 "nguid": "B53AC4E00DF94C34A5479901D5421DAF", 00:12:05.726 "uuid": "b53ac4e0-0df9-4c34-a547-9901d5421daf" 00:12:05.726 } 00:12:05.726 ] 00:12:05.726 } 00:12:05.726 ] 00:12:05.726 08:10:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.726 
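The setup that the trace above exercises (four null bdevs, one subsystem per bdev, a TCP listener each, plus the discovery listener and a referral on port 4430) can be condensed into a standalone sketch. This is a reconstruction from the logged `rpc_cmd` calls, not the test script itself; `RPC` defaults to `echo` for a dry run, and would point at SPDK's `scripts/rpc.py` against a running `nvmf_tgt` in a real setup.

```shell
#!/bin/sh
# Dry-run sketch of the per-target setup loop seen in target/discovery.sh above.
# Set RPC to SPDK's scripts/rpc.py to drive an actual nvmf_tgt instead of echoing.
: "${RPC:=echo}"

setup_targets() {
  for i in 1 2 3 4; do
    # 102400 blocks of 512 B each, as in the logged bdev_null_create calls
    $RPC bdev_null_create "Null$i" 102400 512
    # -a: allow any host; -s: fixed serial matching the SPDK000...N pattern in the log
    $RPC nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" \
         -a -s "$(printf 'SPDK%014d' "$i")"
    $RPC nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Null$i"
    $RPC nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" \
         -t tcp -a 10.0.0.2 -s 4420
  done
  # Discovery service listener plus a referral on port 4430, as in the trace
  $RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  $RPC nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430
}

setup_targets
```

With the six referral/discovery and per-subsystem listeners in place, `nvme discover` against 10.0.0.2:4420 reports the six discovery-log records shown earlier (one discovery entry, four subsystems, one referral).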
08:10:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:12:05.726 08:10:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:12:05.726 08:10:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:05.726 08:10:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.726 08:10:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:05.726 08:10:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.726 08:10:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:12:05.726 08:10:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.726 08:10:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:05.726 08:10:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.726 08:10:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:12:05.726 08:10:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:12:05.726 08:10:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.726 08:10:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:05.726 08:10:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.726 08:10:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd 
bdev_null_delete Null2 00:12:05.726 08:10:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.726 08:10:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:05.726 08:10:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.726 08:10:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:12:05.726 08:10:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:12:05.726 08:10:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.726 08:10:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:05.726 08:10:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.726 08:10:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:12:05.726 08:10:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.726 08:10:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:05.726 08:10:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.726 08:10:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:12:05.726 08:10:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:12:05.726 08:10:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.726 08:10:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:12:05.726 08:10:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.726 08:10:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:12:05.726 08:10:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.726 08:10:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:05.726 08:10:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.726 08:10:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:12:05.726 08:10:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.726 08:10:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:05.726 08:10:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.726 08:10:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:12:05.726 08:10:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.726 08:10:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:05.726 08:10:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:12:05.726 08:10:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.726 08:10:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:12:05.726 08:10:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
target/discovery.sh@50 -- # '[' -n '' ']' 00:12:05.726 08:10:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:12:05.726 08:10:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:12:05.726 08:10:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:05.726 08:10:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@121 -- # sync 00:12:05.726 08:10:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:05.726 08:10:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@124 -- # set +e 00:12:05.726 08:10:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:05.726 08:10:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:05.726 rmmod nvme_tcp 00:12:05.726 rmmod nvme_fabrics 00:12:05.726 rmmod nvme_keyring 00:12:05.727 08:10:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:05.727 08:10:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@128 -- # set -e 00:12:05.727 08:10:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@129 -- # return 0 00:12:05.727 08:10:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@517 -- # '[' -n 1277425 ']' 00:12:05.727 08:10:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@518 -- # killprocess 1277425 00:12:05.727 08:10:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@954 -- # '[' -z 1277425 ']' 00:12:05.727 08:10:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@958 -- # kill -0 1277425 00:12:05.727 08:10:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@959 -- # uname 
00:12:05.727 08:10:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:05.727 08:10:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1277425 00:12:05.987 08:10:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:05.987 08:10:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:05.987 08:10:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1277425' 00:12:05.987 killing process with pid 1277425 00:12:05.987 08:10:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@973 -- # kill 1277425 00:12:05.987 08:10:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@978 -- # wait 1277425 00:12:05.987 08:10:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:05.987 08:10:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:05.987 08:10:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:05.987 08:10:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@297 -- # iptr 00:12:05.987 08:10:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:05.987 08:10:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-save 00:12:05.987 08:10:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:12:05.987 08:10:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:05.987 08:10:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
nvmf/common.sh@302 -- # remove_spdk_ns 00:12:05.987 08:10:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:05.987 08:10:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:05.987 08:10:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:08.527 08:10:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:08.527 00:12:08.527 real 0m9.050s 00:12:08.527 user 0m5.641s 00:12:08.527 sys 0m4.564s 00:12:08.527 08:10:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:08.527 08:10:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:08.527 ************************************ 00:12:08.527 END TEST nvmf_target_discovery 00:12:08.527 ************************************ 00:12:08.527 08:10:50 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@19 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:12:08.527 08:10:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:08.527 08:10:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:08.527 08:10:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:08.527 ************************************ 00:12:08.527 START TEST nvmf_referrals 00:12:08.527 ************************************ 00:12:08.527 08:10:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:12:08.527 * Looking for test storage... 
00:12:08.527 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:08.527 08:10:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:08.527 08:10:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:08.527 08:10:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1693 -- # lcov --version 00:12:08.527 08:10:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:08.527 08:10:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:08.527 08:10:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:08.527 08:10:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:08.527 08:10:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # IFS=.-: 00:12:08.527 08:10:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # read -ra ver1 00:12:08.527 08:10:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # IFS=.-: 00:12:08.527 08:10:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # read -ra ver2 00:12:08.527 08:10:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@338 -- # local 'op=<' 00:12:08.527 08:10:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@340 -- # ver1_l=2 00:12:08.527 08:10:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@341 -- # ver2_l=1 00:12:08.527 08:10:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:08.527 08:10:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@344 -- # case "$op" in 00:12:08.527 08:10:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@345 -- # : 1 00:12:08.527 08:10:50 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:08.527 08:10:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:08.527 08:10:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # decimal 1 00:12:08.527 08:10:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=1 00:12:08.527 08:10:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:08.527 08:10:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 1 00:12:08.527 08:10:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # ver1[v]=1 00:12:08.528 08:10:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # decimal 2 00:12:08.528 08:10:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=2 00:12:08.528 08:10:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:08.528 08:10:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 2 00:12:08.528 08:10:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # ver2[v]=2 00:12:08.528 08:10:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:08.528 08:10:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:08.528 08:10:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # return 0 00:12:08.528 08:10:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:08.528 08:10:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:08.528 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:08.528 
--rc genhtml_branch_coverage=1 00:12:08.528 --rc genhtml_function_coverage=1 00:12:08.528 --rc genhtml_legend=1 00:12:08.528 --rc geninfo_all_blocks=1 00:12:08.528 --rc geninfo_unexecuted_blocks=1 00:12:08.528 00:12:08.528 ' 00:12:08.528 08:10:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:08.528 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:08.528 --rc genhtml_branch_coverage=1 00:12:08.528 --rc genhtml_function_coverage=1 00:12:08.528 --rc genhtml_legend=1 00:12:08.528 --rc geninfo_all_blocks=1 00:12:08.528 --rc geninfo_unexecuted_blocks=1 00:12:08.528 00:12:08.528 ' 00:12:08.528 08:10:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:08.528 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:08.528 --rc genhtml_branch_coverage=1 00:12:08.528 --rc genhtml_function_coverage=1 00:12:08.528 --rc genhtml_legend=1 00:12:08.528 --rc geninfo_all_blocks=1 00:12:08.528 --rc geninfo_unexecuted_blocks=1 00:12:08.528 00:12:08.528 ' 00:12:08.528 08:10:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:08.528 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:08.528 --rc genhtml_branch_coverage=1 00:12:08.528 --rc genhtml_function_coverage=1 00:12:08.528 --rc genhtml_legend=1 00:12:08.528 --rc geninfo_all_blocks=1 00:12:08.528 --rc geninfo_unexecuted_blocks=1 00:12:08.528 00:12:08.528 ' 00:12:08.528 08:10:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:08.528 08:10:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 00:12:08.528 08:10:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:08.528 08:10:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:08.528 
08:10:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:08.528 08:10:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:08.528 08:10:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:08.528 08:10:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:08.528 08:10:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:08.528 08:10:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:08.528 08:10:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:08.528 08:10:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:08.528 08:10:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:12:08.528 08:10:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:12:08.528 08:10:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:08.528 08:10:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:08.528 08:10:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:08.528 08:10:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:08.528 08:10:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:08.528 08:10:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@15 -- # shopt -s extglob 
00:12:08.528 08:10:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:08.528 08:10:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:08.528 08:10:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:08.528 08:10:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:08.528 08:10:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:08.528 08:10:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:08.528 08:10:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:12:08.528 08:10:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:08.528 08:10:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@51 -- # : 0 00:12:08.528 08:10:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:08.528 08:10:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:08.528 08:10:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:08.528 08:10:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:08.528 08:10:50 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:08.528 08:10:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:08.528 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:08.528 08:10:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:08.528 08:10:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:08.528 08:10:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:08.528 08:10:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:12:08.528 08:10:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:12:08.528 08:10:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:12:08.528 08:10:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:12:08.528 08:10:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:12:08.528 08:10:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:12:08.528 08:10:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:12:08.528 08:10:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:08.528 08:10:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:08.528 08:10:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:08.528 08:10:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:08.528 08:10:50 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:08.528 08:10:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:08.529 08:10:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:08.529 08:10:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:08.529 08:10:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:08.529 08:10:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:08.529 08:10:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@309 -- # xtrace_disable 00:12:08.529 08:10:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:13.809 08:10:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:13.809 08:10:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # pci_devs=() 00:12:13.809 08:10:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:13.809 08:10:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:13.809 08:10:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:13.809 08:10:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:13.810 08:10:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:13.810 08:10:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # net_devs=() 00:12:13.810 08:10:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:13.810 08:10:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
nvmf/common.sh@320 -- # e810=() 00:12:13.810 08:10:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # local -ga e810 00:12:13.810 08:10:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # x722=() 00:12:13.810 08:10:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # local -ga x722 00:12:13.810 08:10:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # mlx=() 00:12:13.810 08:10:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # local -ga mlx 00:12:13.810 08:10:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:13.810 08:10:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:13.810 08:10:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:13.810 08:10:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:13.810 08:10:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:13.810 08:10:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:13.810 08:10:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:13.810 08:10:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:13.810 08:10:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:13.810 08:10:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:13.810 08:10:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@343 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:13.810 08:10:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:13.810 08:10:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:13.810 08:10:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:13.810 08:10:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:13.810 08:10:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:13.810 08:10:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:13.810 08:10:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:13.810 08:10:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:13.810 08:10:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:12:13.810 Found 0000:86:00.0 (0x8086 - 0x159b) 00:12:13.810 08:10:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:13.810 08:10:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:13.810 08:10:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:13.810 08:10:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:13.810 08:10:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:13.810 08:10:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:13.810 08:10:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:12:13.810 Found 
0000:86:00.1 (0x8086 - 0x159b) 00:12:13.810 08:10:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:13.810 08:10:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:13.810 08:10:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:13.810 08:10:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:13.810 08:10:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:13.810 08:10:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:13.810 08:10:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:13.810 08:10:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:13.810 08:10:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:13.810 08:10:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:13.810 08:10:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:13.810 08:10:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:13.810 08:10:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:13.810 08:10:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:13.810 08:10:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:13.810 08:10:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:12:13.810 Found net devices under 0000:86:00.0: cvl_0_0 00:12:13.810 08:10:55 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:13.810 08:10:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:13.810 08:10:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:13.810 08:10:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:13.810 08:10:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:13.810 08:10:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:13.810 08:10:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:13.810 08:10:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:13.810 08:10:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:12:13.810 Found net devices under 0000:86:00.1: cvl_0_1 00:12:13.810 08:10:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:13.810 08:10:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:13.810 08:10:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # is_hw=yes 00:12:13.810 08:10:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:13.810 08:10:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:13.810 08:10:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:13.810 08:10:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:13.810 08:10:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:13.810 08:10:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:13.810 08:10:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:13.810 08:10:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:13.810 08:10:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:13.810 08:10:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:13.810 08:10:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:13.810 08:10:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:13.810 08:10:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:13.810 08:10:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:13.810 08:10:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:13.810 08:10:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:13.810 08:10:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:13.810 08:10:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:13.810 08:10:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:13.810 08:10:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:13.810 08:10:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:13.810 08:10:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:13.810 08:10:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:13.810 08:10:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:13.810 08:10:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:13.810 08:10:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:13.810 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:13.810 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.447 ms 00:12:13.810 00:12:13.810 --- 10.0.0.2 ping statistics --- 00:12:13.810 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:13.810 rtt min/avg/max/mdev = 0.447/0.447/0.447/0.000 ms 00:12:13.810 08:10:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:13.810 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:13.810 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.224 ms 00:12:13.810 00:12:13.810 --- 10.0.0.1 ping statistics --- 00:12:13.810 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:13.810 rtt min/avg/max/mdev = 0.224/0.224/0.224/0.000 ms 00:12:13.810 08:10:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:13.810 08:10:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@450 -- # return 0 00:12:13.810 08:10:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:13.810 08:10:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:13.810 08:10:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:13.810 08:10:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:13.811 08:10:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:13.811 08:10:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:13.811 08:10:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:13.811 08:10:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:12:13.811 08:10:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:13.811 08:10:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:13.811 08:10:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:13.811 08:10:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@509 -- # nvmfpid=1280982 00:12:13.811 08:10:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@510 -- # waitforlisten 1280982 00:12:13.811 
08:10:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@835 -- # '[' -z 1280982 ']' 00:12:13.811 08:10:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:13.811 08:10:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:13.811 08:10:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:13.811 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:13.811 08:10:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:13.811 08:10:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:13.811 08:10:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:13.811 [2024-11-28 08:10:55.779372] Starting SPDK v25.01-pre git sha1 27aaaa748 / DPDK 24.03.0 initialization... 00:12:13.811 [2024-11-28 08:10:55.779418] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:13.811 [2024-11-28 08:10:55.844958] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:13.811 [2024-11-28 08:10:55.887972] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:13.811 [2024-11-28 08:10:55.888012] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:12:13.811 [2024-11-28 08:10:55.888019] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:13.811 [2024-11-28 08:10:55.888025] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:13.811 [2024-11-28 08:10:55.888031] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:13.811 [2024-11-28 08:10:55.889541] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:13.811 [2024-11-28 08:10:55.889637] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:13.811 [2024-11-28 08:10:55.889730] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:13.811 [2024-11-28 08:10:55.889732] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:13.811 08:10:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:13.811 08:10:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@868 -- # return 0 00:12:13.811 08:10:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:13.811 08:10:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:13.811 08:10:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:13.811 08:10:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:13.811 08:10:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:13.811 08:10:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.811 08:10:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:13.811 [2024-11-28 08:10:56.027693] tcp.c: 
738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:13.811 08:10:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.811 08:10:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:12:13.811 08:10:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.811 08:10:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:13.811 [2024-11-28 08:10:56.059110] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:12:13.811 08:10:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.811 08:10:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:12:13.811 08:10:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.811 08:10:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:14.071 08:10:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.071 08:10:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:12:14.071 08:10:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.071 08:10:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:14.071 08:10:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.071 08:10:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:12:14.071 08:10:56 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.071 08:10:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:14.071 08:10:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.071 08:10:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:14.071 08:10:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:12:14.071 08:10:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.071 08:10:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:14.071 08:10:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.071 08:10:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:12:14.071 08:10:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:12:14.071 08:10:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:12:14.071 08:10:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:14.071 08:10:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:12:14.071 08:10:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.071 08:10:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:12:14.071 08:10:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:14.071 08:10:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.071 08:10:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:12:14.071 08:10:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:12:14.071 08:10:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:12:14.071 08:10:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:14.071 08:10:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:14.071 08:10:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:14.071 08:10:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:14.071 08:10:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:14.071 08:10:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:12:14.071 08:10:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:12:14.071 08:10:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:12:14.071 08:10:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.071 08:10:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:14.071 08:10:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.071 08:10:56 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:12:14.071 08:10:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.071 08:10:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:14.071 08:10:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.071 08:10:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:12:14.071 08:10:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.071 08:10:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:14.330 08:10:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.330 08:10:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:14.330 08:10:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:12:14.330 08:10:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.330 08:10:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:14.330 08:10:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.330 08:10:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:12:14.330 08:10:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:12:14.330 08:10:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:14.330 08:10:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ 
nvme == \n\v\m\e ]] 00:12:14.330 08:10:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:14.330 08:10:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:14.330 08:10:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:14.590 08:10:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:12:14.590 08:10:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:12:14.590 08:10:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:12:14.590 08:10:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.590 08:10:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:14.590 08:10:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.590 08:10:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:12:14.590 08:10:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.590 08:10:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:14.590 08:10:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.590 08:10:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:12:14.590 08:10:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:12:14.590 08:10:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:14.590 08:10:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:12:14.590 08:10:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.590 08:10:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:12:14.590 08:10:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:14.590 08:10:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.590 08:10:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:12:14.590 08:10:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:12:14.590 08:10:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:12:14.590 08:10:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:14.590 08:10:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:14.590 08:10:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:14.590 08:10:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:14.590 08:10:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:14.849 08:10:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:12:14.849 08:10:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:12:14.849 08:10:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:12:14.849 08:10:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:12:14.849 08:10:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:12:14.849 08:10:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:14.849 08:10:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:12:14.849 08:10:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:12:14.849 08:10:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:12:14.849 08:10:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:12:14.849 08:10:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:12:14.849 08:10:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:14.849 08:10:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery 
subsystem referral")' 00:12:15.107 08:10:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:12:15.107 08:10:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:12:15.108 08:10:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.108 08:10:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:15.108 08:10:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.108 08:10:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:12:15.108 08:10:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:12:15.108 08:10:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:15.108 08:10:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:12:15.108 08:10:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:12:15.108 08:10:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.108 08:10:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:15.108 08:10:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.108 08:10:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:12:15.108 08:10:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:12:15.108 08:10:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals 
-- target/referrals.sh@74 -- # get_referral_ips nvme 00:12:15.108 08:10:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:15.108 08:10:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:15.108 08:10:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:15.108 08:10:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:15.108 08:10:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:15.366 08:10:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:12:15.366 08:10:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:12:15.366 08:10:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:12:15.366 08:10:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:12:15.366 08:10:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:12:15.366 08:10:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:15.366 08:10:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:12:15.366 08:10:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:12:15.366 08:10:57 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:12:15.366 08:10:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:12:15.366 08:10:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:12:15.366 08:10:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:15.366 08:10:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:12:15.625 08:10:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:12:15.625 08:10:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:12:15.625 08:10:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.625 08:10:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:15.625 08:10:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.625 08:10:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:15.625 08:10:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:12:15.625 08:10:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.625 08:10:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
common/autotest_common.sh@10 -- # set +x 00:12:15.625 08:10:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.625 08:10:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:12:15.625 08:10:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:12:15.625 08:10:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:15.625 08:10:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:15.626 08:10:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:15.626 08:10:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:15.626 08:10:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:15.885 08:10:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:12:15.885 08:10:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:12:15.885 08:10:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:12:15.885 08:10:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:12:15.885 08:10:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:15.885 08:10:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@121 -- # sync 00:12:15.885 08:10:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:15.885 08:10:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@124 -- # 
set +e 00:12:15.885 08:10:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:15.885 08:10:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:15.885 rmmod nvme_tcp 00:12:15.885 rmmod nvme_fabrics 00:12:15.885 rmmod nvme_keyring 00:12:15.885 08:10:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:15.885 08:10:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@128 -- # set -e 00:12:15.885 08:10:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@129 -- # return 0 00:12:15.885 08:10:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@517 -- # '[' -n 1280982 ']' 00:12:15.885 08:10:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@518 -- # killprocess 1280982 00:12:15.885 08:10:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@954 -- # '[' -z 1280982 ']' 00:12:15.885 08:10:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@958 -- # kill -0 1280982 00:12:15.885 08:10:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@959 -- # uname 00:12:15.885 08:10:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:15.885 08:10:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1280982 00:12:15.885 08:10:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:15.885 08:10:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:15.885 08:10:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1280982' 00:12:15.885 killing process with pid 1280982 00:12:15.885 08:10:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
common/autotest_common.sh@973 -- # kill 1280982 00:12:15.885 08:10:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@978 -- # wait 1280982 00:12:16.145 08:10:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:16.145 08:10:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:16.145 08:10:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:16.145 08:10:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@297 -- # iptr 00:12:16.145 08:10:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-save 00:12:16.145 08:10:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:16.145 08:10:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-restore 00:12:16.145 08:10:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:16.145 08:10:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:16.145 08:10:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:16.145 08:10:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:16.145 08:10:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:18.685 08:11:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:18.685 00:12:18.685 real 0m10.042s 00:12:18.685 user 0m11.929s 00:12:18.685 sys 0m4.628s 00:12:18.685 08:11:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:18.685 08:11:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:18.685 
************************************ 00:12:18.685 END TEST nvmf_referrals 00:12:18.685 ************************************ 00:12:18.685 08:11:00 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@20 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:12:18.685 08:11:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:18.685 08:11:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:18.685 08:11:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:18.685 ************************************ 00:12:18.685 START TEST nvmf_connect_disconnect 00:12:18.685 ************************************ 00:12:18.685 08:11:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:12:18.685 * Looking for test storage... 
00:12:18.685 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:18.685 08:11:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:18.685 08:11:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1693 -- # lcov --version 00:12:18.685 08:11:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:18.685 08:11:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:18.685 08:11:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:18.685 08:11:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:18.685 08:11:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:18.685 08:11:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:12:18.685 08:11:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:12:18.685 08:11:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:12:18.685 08:11:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:12:18.685 08:11:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:12:18.685 08:11:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:12:18.685 08:11:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:12:18.685 08:11:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:18.685 08:11:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@344 -- 
# case "$op" in 00:12:18.685 08:11:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@345 -- # : 1 00:12:18.685 08:11:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:18.685 08:11:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:18.685 08:11:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # decimal 1 00:12:18.685 08:11:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=1 00:12:18.685 08:11:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:18.685 08:11:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 1 00:12:18.685 08:11:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:12:18.685 08:11:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # decimal 2 00:12:18.685 08:11:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=2 00:12:18.685 08:11:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:18.685 08:11:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 2 00:12:18.685 08:11:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:12:18.685 08:11:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:18.685 08:11:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:18.685 08:11:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # return 0 00:12:18.685 08:11:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:18.685 08:11:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:18.685 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:18.685 --rc genhtml_branch_coverage=1 00:12:18.685 --rc genhtml_function_coverage=1 00:12:18.685 --rc genhtml_legend=1 00:12:18.685 --rc geninfo_all_blocks=1 00:12:18.685 --rc geninfo_unexecuted_blocks=1 00:12:18.685 00:12:18.685 ' 00:12:18.685 08:11:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:18.685 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:18.685 --rc genhtml_branch_coverage=1 00:12:18.685 --rc genhtml_function_coverage=1 00:12:18.685 --rc genhtml_legend=1 00:12:18.685 --rc geninfo_all_blocks=1 00:12:18.685 --rc geninfo_unexecuted_blocks=1 00:12:18.685 00:12:18.685 ' 00:12:18.685 08:11:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:18.685 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:18.685 --rc genhtml_branch_coverage=1 00:12:18.685 --rc genhtml_function_coverage=1 00:12:18.685 --rc genhtml_legend=1 00:12:18.685 --rc geninfo_all_blocks=1 00:12:18.685 --rc geninfo_unexecuted_blocks=1 00:12:18.685 00:12:18.685 ' 00:12:18.685 08:11:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:18.685 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:18.685 --rc genhtml_branch_coverage=1 00:12:18.685 --rc genhtml_function_coverage=1 00:12:18.685 --rc genhtml_legend=1 00:12:18.685 --rc geninfo_all_blocks=1 00:12:18.685 --rc geninfo_unexecuted_blocks=1 00:12:18.685 00:12:18.685 ' 00:12:18.685 08:11:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:18.685 08:11:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:12:18.685 08:11:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:18.685 08:11:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:18.685 08:11:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:18.685 08:11:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:18.685 08:11:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:18.685 08:11:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:18.685 08:11:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:18.685 08:11:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:18.685 08:11:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:18.685 08:11:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:18.685 08:11:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:12:18.685 08:11:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:12:18.685 08:11:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:18.685 08:11:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # 
NVME_CONNECT='nvme connect' 00:12:18.685 08:11:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:18.685 08:11:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:18.685 08:11:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:18.685 08:11:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:12:18.686 08:11:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:18.686 08:11:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:18.686 08:11:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:18.686 08:11:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:18.686 08:11:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:18.686 08:11:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:18.686 08:11:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:12:18.686 08:11:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:18.686 08:11:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # : 0 00:12:18.686 08:11:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:18.686 08:11:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:18.686 08:11:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:18.686 08:11:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:18.686 08:11:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:18.686 08:11:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:18.686 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:18.686 08:11:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:18.686 08:11:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:18.686 08:11:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:18.686 08:11:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:18.686 08:11:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:18.686 08:11:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:12:18.686 08:11:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:18.686 08:11:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:18.686 08:11:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:18.686 08:11:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:18.686 08:11:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:18.686 08:11:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:18.686 08:11:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:18.686 08:11:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:18.686 08:11:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:18.686 08:11:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:18.686 08:11:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:12:18.686 08:11:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:23.973 08:11:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:23.973 08:11:05 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:12:23.973 08:11:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:23.973 08:11:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:23.973 08:11:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:23.973 08:11:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:23.973 08:11:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:23.973 08:11:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:12:23.973 08:11:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:23.973 08:11:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # e810=() 00:12:23.973 08:11:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:12:23.973 08:11:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # x722=() 00:12:23.973 08:11:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:12:23.973 08:11:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:12:23.973 08:11:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:12:23.973 08:11:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:23.973 08:11:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:23.973 08:11:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:23.973 08:11:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:23.973 08:11:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:23.973 08:11:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:23.973 08:11:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:23.973 08:11:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:23.973 08:11:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:23.973 08:11:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:23.973 08:11:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:23.973 08:11:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:23.973 08:11:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:23.973 08:11:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:23.973 08:11:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:23.973 08:11:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:23.973 08:11:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:23.973 08:11:05 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:23.973 08:11:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:23.973 08:11:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:12:23.973 Found 0000:86:00.0 (0x8086 - 0x159b) 00:12:23.973 08:11:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:23.973 08:11:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:23.973 08:11:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:23.973 08:11:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:23.973 08:11:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:23.973 08:11:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:23.973 08:11:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:12:23.973 Found 0000:86:00.1 (0x8086 - 0x159b) 00:12:23.973 08:11:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:23.973 08:11:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:23.973 08:11:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:23.973 08:11:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:23.973 08:11:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:23.973 08:11:05 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:23.973 08:11:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:23.973 08:11:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:23.973 08:11:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:23.973 08:11:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:23.973 08:11:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:23.973 08:11:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:23.973 08:11:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:23.973 08:11:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:23.973 08:11:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:23.973 08:11:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:12:23.973 Found net devices under 0000:86:00.0: cvl_0_0 00:12:23.973 08:11:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:23.973 08:11:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:23.973 08:11:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:23.973 08:11:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:23.973 08:11:05 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:23.973 08:11:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:23.973 08:11:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:23.973 08:11:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:23.973 08:11:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:12:23.973 Found net devices under 0000:86:00.1: cvl_0_1 00:12:23.973 08:11:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:23.973 08:11:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:23.973 08:11:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # is_hw=yes 00:12:23.973 08:11:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:23.973 08:11:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:23.973 08:11:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:23.973 08:11:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:23.973 08:11:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:23.973 08:11:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:23.973 08:11:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:23.973 08:11:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect 
-- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:23.973 08:11:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:23.973 08:11:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:23.973 08:11:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:23.973 08:11:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:23.973 08:11:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:23.973 08:11:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:23.973 08:11:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:23.973 08:11:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:23.973 08:11:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:23.973 08:11:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:23.973 08:11:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:23.973 08:11:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:23.973 08:11:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:23.973 08:11:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:23.973 08:11:05 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:23.973 08:11:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:23.973 08:11:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:23.973 08:11:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:23.973 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:23.973 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.432 ms 00:12:23.973 00:12:23.973 --- 10.0.0.2 ping statistics --- 00:12:23.973 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:23.973 rtt min/avg/max/mdev = 0.432/0.432/0.432/0.000 ms 00:12:23.973 08:11:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:23.973 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:23.973 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.214 ms 00:12:23.973 00:12:23.973 --- 10.0.0.1 ping statistics --- 00:12:23.973 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:23.973 rtt min/avg/max/mdev = 0.214/0.214/0.214/0.000 ms 00:12:23.973 08:11:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:23.974 08:11:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # return 0 00:12:23.974 08:11:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:23.974 08:11:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:23.974 08:11:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:23.974 08:11:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:23.974 08:11:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:23.974 08:11:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:23.974 08:11:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:23.974 08:11:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:12:23.974 08:11:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:23.974 08:11:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:23.974 08:11:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:23.974 08:11:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@509 -- # 
nvmfpid=1284940 00:12:23.974 08:11:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@510 -- # waitforlisten 1284940 00:12:23.974 08:11:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:23.974 08:11:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@835 -- # '[' -z 1284940 ']' 00:12:23.974 08:11:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:23.974 08:11:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:23.974 08:11:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:23.974 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:23.974 08:11:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:23.974 08:11:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:23.974 [2024-11-28 08:11:05.704627] Starting SPDK v25.01-pre git sha1 27aaaa748 / DPDK 24.03.0 initialization... 00:12:23.974 [2024-11-28 08:11:05.704676] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:23.974 [2024-11-28 08:11:05.775671] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:23.974 [2024-11-28 08:11:05.821592] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:12:23.974 [2024-11-28 08:11:05.821632] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:23.974 [2024-11-28 08:11:05.821640] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:23.974 [2024-11-28 08:11:05.821646] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:23.974 [2024-11-28 08:11:05.821651] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:23.974 [2024-11-28 08:11:05.823213] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:23.974 [2024-11-28 08:11:05.823314] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:23.974 [2024-11-28 08:11:05.823388] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:23.974 [2024-11-28 08:11:05.823391] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:23.974 08:11:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:23.974 08:11:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@868 -- # return 0 00:12:23.974 08:11:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:23.974 08:11:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:23.974 08:11:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:23.974 08:11:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:23.974 08:11:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:12:23.974 08:11:05 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.974 08:11:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:23.974 [2024-11-28 08:11:05.970142] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:23.974 08:11:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.974 08:11:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:12:23.974 08:11:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.974 08:11:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:23.974 08:11:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.974 08:11:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:12:23.974 08:11:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:23.974 08:11:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.974 08:11:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:23.974 08:11:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.974 08:11:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:23.974 08:11:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.974 08:11:06 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:23.974 08:11:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.974 08:11:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:23.974 08:11:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.974 08:11:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:23.974 [2024-11-28 08:11:06.033762] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:23.974 08:11:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.974 08:11:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:12:23.974 08:11:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:12:23.974 08:11:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:12:27.257 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:30.545 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:33.830 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:37.232 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:40.521 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:40.521 08:11:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:12:40.521 08:11:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:12:40.521 08:11:22 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:40.521 08:11:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # sync 00:12:40.521 08:11:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:40.521 08:11:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set +e 00:12:40.521 08:11:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:40.521 08:11:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:40.521 rmmod nvme_tcp 00:12:40.521 rmmod nvme_fabrics 00:12:40.521 rmmod nvme_keyring 00:12:40.521 08:11:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:40.521 08:11:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@128 -- # set -e 00:12:40.521 08:11:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@129 -- # return 0 00:12:40.521 08:11:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@517 -- # '[' -n 1284940 ']' 00:12:40.521 08:11:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@518 -- # killprocess 1284940 00:12:40.521 08:11:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # '[' -z 1284940 ']' 00:12:40.521 08:11:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # kill -0 1284940 00:12:40.521 08:11:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # uname 00:12:40.521 08:11:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:40.522 08:11:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1284940 
00:12:40.522 08:11:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:40.522 08:11:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:40.522 08:11:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1284940' 00:12:40.522 killing process with pid 1284940 00:12:40.522 08:11:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@973 -- # kill 1284940 00:12:40.522 08:11:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@978 -- # wait 1284940 00:12:40.522 08:11:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:40.522 08:11:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:40.522 08:11:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:40.522 08:11:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # iptr 00:12:40.522 08:11:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-save 00:12:40.522 08:11:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:40.522 08:11:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-restore 00:12:40.522 08:11:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:40.522 08:11:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:40.522 08:11:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:40.522 08:11:22 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:40.522 08:11:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:42.430 08:11:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:42.430 00:12:42.430 real 0m24.160s 00:12:42.430 user 1m7.485s 00:12:42.430 sys 0m5.159s 00:12:42.430 08:11:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:42.430 08:11:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:42.430 ************************************ 00:12:42.430 END TEST nvmf_connect_disconnect 00:12:42.430 ************************************ 00:12:42.430 08:11:24 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@21 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:12:42.430 08:11:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:42.430 08:11:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:42.430 08:11:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:42.430 ************************************ 00:12:42.430 START TEST nvmf_multitarget 00:12:42.430 ************************************ 00:12:42.430 08:11:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:12:42.693 * Looking for test storage... 
00:12:42.693 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:42.693 08:11:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:42.693 08:11:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1693 -- # lcov --version 00:12:42.693 08:11:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:42.693 08:11:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:42.693 08:11:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:42.693 08:11:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:42.693 08:11:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:42.693 08:11:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # IFS=.-: 00:12:42.693 08:11:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # read -ra ver1 00:12:42.693 08:11:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # IFS=.-: 00:12:42.694 08:11:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # read -ra ver2 00:12:42.694 08:11:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@338 -- # local 'op=<' 00:12:42.694 08:11:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@340 -- # ver1_l=2 00:12:42.694 08:11:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@341 -- # ver2_l=1 00:12:42.694 08:11:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:42.694 08:11:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@344 -- # case "$op" in 00:12:42.694 08:11:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@345 -- # 
: 1 00:12:42.694 08:11:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:42.694 08:11:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:42.694 08:11:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # decimal 1 00:12:42.694 08:11:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=1 00:12:42.694 08:11:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:42.694 08:11:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 1 00:12:42.694 08:11:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # ver1[v]=1 00:12:42.694 08:11:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # decimal 2 00:12:42.694 08:11:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=2 00:12:42.694 08:11:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:42.694 08:11:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 2 00:12:42.694 08:11:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # ver2[v]=2 00:12:42.694 08:11:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:42.694 08:11:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:42.694 08:11:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # return 0 00:12:42.694 08:11:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:42.694 08:11:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:42.694 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:42.694 --rc genhtml_branch_coverage=1 00:12:42.694 --rc genhtml_function_coverage=1 00:12:42.694 --rc genhtml_legend=1 00:12:42.694 --rc geninfo_all_blocks=1 00:12:42.694 --rc geninfo_unexecuted_blocks=1 00:12:42.694 00:12:42.694 ' 00:12:42.694 08:11:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:42.694 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:42.694 --rc genhtml_branch_coverage=1 00:12:42.694 --rc genhtml_function_coverage=1 00:12:42.694 --rc genhtml_legend=1 00:12:42.694 --rc geninfo_all_blocks=1 00:12:42.694 --rc geninfo_unexecuted_blocks=1 00:12:42.694 00:12:42.694 ' 00:12:42.694 08:11:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:42.694 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:42.694 --rc genhtml_branch_coverage=1 00:12:42.694 --rc genhtml_function_coverage=1 00:12:42.694 --rc genhtml_legend=1 00:12:42.694 --rc geninfo_all_blocks=1 00:12:42.694 --rc geninfo_unexecuted_blocks=1 00:12:42.694 00:12:42.694 ' 00:12:42.694 08:11:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:42.694 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:42.694 --rc genhtml_branch_coverage=1 00:12:42.694 --rc genhtml_function_coverage=1 00:12:42.694 --rc genhtml_legend=1 00:12:42.694 --rc geninfo_all_blocks=1 00:12:42.694 --rc geninfo_unexecuted_blocks=1 00:12:42.694 00:12:42.694 ' 00:12:42.694 08:11:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:42.694 08:11:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:12:42.694 08:11:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:42.694 08:11:24 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:42.694 08:11:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:42.694 08:11:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:42.694 08:11:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:42.694 08:11:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:42.694 08:11:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:42.694 08:11:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:42.694 08:11:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:42.694 08:11:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:42.694 08:11:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:12:42.694 08:11:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:12:42.694 08:11:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:42.694 08:11:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:42.694 08:11:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:42.694 08:11:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:42.694 08:11:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:42.694 08:11:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@15 -- # shopt -s extglob 00:12:42.694 08:11:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:42.694 08:11:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:42.694 08:11:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:42.694 08:11:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:42.694 08:11:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:42.695 08:11:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:42.695 08:11:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:12:42.695 08:11:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:42.695 08:11:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@51 -- # : 0 00:12:42.695 08:11:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:42.695 08:11:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:42.695 08:11:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:42.695 08:11:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
00:12:42.695 08:11:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:42.695 08:11:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:42.695 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:42.695 08:11:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:42.695 08:11:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:42.695 08:11:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:42.695 08:11:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:12:42.695 08:11:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:12:42.695 08:11:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:42.695 08:11:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:42.695 08:11:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:42.695 08:11:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:42.695 08:11:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:42.695 08:11:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:42.695 08:11:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:42.695 08:11:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:42.695 08:11:24 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:42.695 08:11:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:42.695 08:11:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@309 -- # xtrace_disable 00:12:42.695 08:11:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:47.977 08:11:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:47.977 08:11:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # pci_devs=() 00:12:47.977 08:11:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:47.977 08:11:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:47.977 08:11:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:47.977 08:11:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:47.977 08:11:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:47.977 08:11:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # net_devs=() 00:12:47.977 08:11:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:47.977 08:11:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # e810=() 00:12:47.977 08:11:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # local -ga e810 00:12:47.977 08:11:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # x722=() 00:12:47.977 08:11:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # local -ga x722 00:12:47.977 08:11:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # mlx=() 00:12:47.977 08:11:30 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # local -ga mlx 00:12:47.977 08:11:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:47.977 08:11:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:47.977 08:11:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:47.977 08:11:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:47.977 08:11:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:47.977 08:11:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:47.977 08:11:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:47.977 08:11:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:47.977 08:11:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:47.977 08:11:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:47.977 08:11:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:47.978 08:11:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:47.978 08:11:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:47.978 08:11:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:47.978 08:11:30 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:47.978 08:11:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:47.978 08:11:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:47.978 08:11:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:47.978 08:11:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:47.978 08:11:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:12:47.978 Found 0000:86:00.0 (0x8086 - 0x159b) 00:12:47.978 08:11:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:47.978 08:11:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:47.978 08:11:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:47.978 08:11:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:47.978 08:11:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:47.978 08:11:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:47.978 08:11:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:12:47.978 Found 0000:86:00.1 (0x8086 - 0x159b) 00:12:47.978 08:11:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:47.978 08:11:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:47.978 08:11:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:47.978 08:11:30 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:47.978 08:11:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:47.978 08:11:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:47.978 08:11:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:47.978 08:11:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:47.978 08:11:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:47.978 08:11:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:47.978 08:11:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:47.978 08:11:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:47.978 08:11:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:47.978 08:11:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:47.978 08:11:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:47.978 08:11:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:12:47.978 Found net devices under 0000:86:00.0: cvl_0_0 00:12:47.978 08:11:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:47.978 08:11:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:47.978 08:11:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:47.978 
08:11:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:47.978 08:11:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:47.978 08:11:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:47.978 08:11:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:47.978 08:11:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:47.978 08:11:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:12:47.978 Found net devices under 0000:86:00.1: cvl_0_1 00:12:47.978 08:11:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:47.978 08:11:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:47.978 08:11:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # is_hw=yes 00:12:47.978 08:11:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:47.978 08:11:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:47.978 08:11:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:47.978 08:11:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:47.978 08:11:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:47.978 08:11:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:47.978 08:11:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:47.978 08:11:30 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:47.978 08:11:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:47.978 08:11:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:47.978 08:11:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:47.978 08:11:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:47.978 08:11:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:47.978 08:11:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:47.978 08:11:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:47.978 08:11:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:47.978 08:11:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:47.978 08:11:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:48.238 08:11:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:48.238 08:11:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:48.238 08:11:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:48.238 08:11:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:48.238 08:11:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@284 -- # ip netns 
exec cvl_0_0_ns_spdk ip link set lo up 00:12:48.238 08:11:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:48.238 08:11:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:48.238 08:11:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:48.238 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:48.238 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.423 ms 00:12:48.238 00:12:48.238 --- 10.0.0.2 ping statistics --- 00:12:48.238 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:48.238 rtt min/avg/max/mdev = 0.423/0.423/0.423/0.000 ms 00:12:48.238 08:11:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:48.238 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:48.238 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.199 ms 00:12:48.238 00:12:48.238 --- 10.0.0.1 ping statistics --- 00:12:48.238 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:48.238 rtt min/avg/max/mdev = 0.199/0.199/0.199/0.000 ms 00:12:48.238 08:11:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:48.238 08:11:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@450 -- # return 0 00:12:48.238 08:11:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:48.238 08:11:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:48.238 08:11:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:48.238 08:11:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:48.238 08:11:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:48.238 08:11:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:48.238 08:11:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:48.238 08:11:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:12:48.238 08:11:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:48.238 08:11:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:48.238 08:11:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:48.238 08:11:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@509 -- # nvmfpid=1291218 00:12:48.238 08:11:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@510 -- # 
waitforlisten 1291218 00:12:48.238 08:11:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@835 -- # '[' -z 1291218 ']' 00:12:48.238 08:11:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:48.238 08:11:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:48.238 08:11:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:48.238 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:48.238 08:11:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:48.238 08:11:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:48.238 08:11:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:48.238 [2024-11-28 08:11:30.471291] Starting SPDK v25.01-pre git sha1 27aaaa748 / DPDK 24.03.0 initialization... 00:12:48.238 [2024-11-28 08:11:30.471340] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:48.498 [2024-11-28 08:11:30.537473] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:48.498 [2024-11-28 08:11:30.580534] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:48.498 [2024-11-28 08:11:30.580571] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:12:48.498 [2024-11-28 08:11:30.580578] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:48.498 [2024-11-28 08:11:30.580585] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:48.498 [2024-11-28 08:11:30.580590] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:48.498 [2024-11-28 08:11:30.582175] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:48.498 [2024-11-28 08:11:30.582274] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:48.498 [2024-11-28 08:11:30.582359] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:48.498 [2024-11-28 08:11:30.582362] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:48.498 08:11:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:48.498 08:11:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@868 -- # return 0 00:12:48.498 08:11:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:48.498 08:11:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:48.498 08:11:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:48.498 08:11:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:48.498 08:11:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:12:48.498 08:11:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:48.498 08:11:30 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:12:48.757 08:11:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:12:48.757 08:11:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:12:48.757 "nvmf_tgt_1" 00:12:48.757 08:11:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:12:49.016 "nvmf_tgt_2" 00:12:49.016 08:11:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:49.016 08:11:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:12:49.016 08:11:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:12:49.016 08:11:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:12:49.016 true 00:12:49.016 08:11:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:12:49.276 true 00:12:49.276 08:11:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:49.276 08:11:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:12:49.276 08:11:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:12:49.276 08:11:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:12:49.276 08:11:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:12:49.276 08:11:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:49.276 08:11:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@121 -- # sync 00:12:49.276 08:11:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:49.276 08:11:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@124 -- # set +e 00:12:49.276 08:11:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:49.276 08:11:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:49.276 rmmod nvme_tcp 00:12:49.276 rmmod nvme_fabrics 00:12:49.276 rmmod nvme_keyring 00:12:49.535 08:11:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:49.535 08:11:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@128 -- # set -e 00:12:49.535 08:11:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@129 -- # return 0 00:12:49.535 08:11:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@517 -- # '[' -n 1291218 ']' 00:12:49.535 08:11:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@518 -- # killprocess 1291218 00:12:49.535 08:11:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@954 -- # '[' -z 1291218 ']' 00:12:49.535 08:11:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@958 -- # kill -0 1291218 00:12:49.535 08:11:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@959 -- # uname 00:12:49.535 08:11:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:49.535 08:11:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1291218 00:12:49.535 08:11:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:49.535 08:11:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:49.535 08:11:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1291218' 00:12:49.535 killing process with pid 1291218 00:12:49.535 08:11:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@973 -- # kill 1291218 00:12:49.535 08:11:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@978 -- # wait 1291218 00:12:49.535 08:11:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:49.535 08:11:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:49.535 08:11:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:49.535 08:11:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@297 -- # iptr 00:12:49.535 08:11:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-save 00:12:49.535 08:11:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-restore 00:12:49.535 08:11:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:49.535 08:11:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:49.535 08:11:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:49.535 08:11:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:12:49.535 08:11:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:49.535 08:11:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:52.076 08:11:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:52.076 00:12:52.076 real 0m9.205s 00:12:52.076 user 0m7.124s 00:12:52.076 sys 0m4.638s 00:12:52.076 08:11:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:52.076 08:11:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:52.076 ************************************ 00:12:52.076 END TEST nvmf_multitarget 00:12:52.076 ************************************ 00:12:52.076 08:11:33 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@22 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:12:52.076 08:11:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:52.076 08:11:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:52.076 08:11:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:52.076 ************************************ 00:12:52.076 START TEST nvmf_rpc 00:12:52.076 ************************************ 00:12:52.076 08:11:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:12:52.076 * Looking for test storage... 
00:12:52.076 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:52.076 08:11:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:52.076 08:11:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:52.076 08:11:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:12:52.076 08:11:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:52.076 08:11:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:52.076 08:11:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:52.076 08:11:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:52.076 08:11:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:12:52.076 08:11:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:12:52.076 08:11:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:12:52.076 08:11:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:12:52.076 08:11:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:12:52.076 08:11:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:12:52.076 08:11:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:12:52.076 08:11:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:52.076 08:11:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@344 -- # case "$op" in 00:12:52.076 08:11:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@345 -- # : 1 00:12:52.076 08:11:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:52.077 08:11:34 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:52.077 08:11:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # decimal 1 00:12:52.077 08:11:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=1 00:12:52.077 08:11:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:52.077 08:11:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 1 00:12:52.077 08:11:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:12:52.077 08:11:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # decimal 2 00:12:52.077 08:11:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=2 00:12:52.077 08:11:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:52.077 08:11:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 2 00:12:52.077 08:11:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:12:52.077 08:11:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:52.077 08:11:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:52.077 08:11:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # return 0 00:12:52.077 08:11:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:52.077 08:11:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:52.077 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:52.077 --rc genhtml_branch_coverage=1 00:12:52.077 --rc genhtml_function_coverage=1 00:12:52.077 --rc genhtml_legend=1 00:12:52.077 --rc geninfo_all_blocks=1 00:12:52.077 --rc geninfo_unexecuted_blocks=1 
00:12:52.077 00:12:52.077 ' 00:12:52.077 08:11:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:52.077 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:52.077 --rc genhtml_branch_coverage=1 00:12:52.077 --rc genhtml_function_coverage=1 00:12:52.077 --rc genhtml_legend=1 00:12:52.077 --rc geninfo_all_blocks=1 00:12:52.077 --rc geninfo_unexecuted_blocks=1 00:12:52.077 00:12:52.077 ' 00:12:52.077 08:11:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:52.077 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:52.077 --rc genhtml_branch_coverage=1 00:12:52.077 --rc genhtml_function_coverage=1 00:12:52.077 --rc genhtml_legend=1 00:12:52.077 --rc geninfo_all_blocks=1 00:12:52.077 --rc geninfo_unexecuted_blocks=1 00:12:52.077 00:12:52.077 ' 00:12:52.077 08:11:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:52.077 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:52.077 --rc genhtml_branch_coverage=1 00:12:52.077 --rc genhtml_function_coverage=1 00:12:52.077 --rc genhtml_legend=1 00:12:52.077 --rc geninfo_all_blocks=1 00:12:52.077 --rc geninfo_unexecuted_blocks=1 00:12:52.077 00:12:52.077 ' 00:12:52.077 08:11:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:52.077 08:11:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:12:52.077 08:11:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:52.077 08:11:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:52.077 08:11:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:52.077 08:11:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:52.077 08:11:34 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:52.077 08:11:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:52.077 08:11:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:52.077 08:11:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:52.077 08:11:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:52.077 08:11:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:52.077 08:11:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:12:52.077 08:11:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:12:52.077 08:11:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:52.077 08:11:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:52.077 08:11:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:52.077 08:11:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:52.077 08:11:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:52.077 08:11:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@15 -- # shopt -s extglob 00:12:52.077 08:11:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:52.077 08:11:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:52.077 08:11:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:52.077 08:11:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:52.077 08:11:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:52.077 08:11:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:52.077 08:11:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:12:52.077 08:11:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:52.077 08:11:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@51 -- # : 0 00:12:52.077 08:11:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:52.077 08:11:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:52.077 08:11:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:52.077 08:11:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:52.077 08:11:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:52.077 08:11:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:52.077 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:52.077 08:11:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:52.077 08:11:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:52.077 08:11:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:52.077 08:11:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:12:52.077 08:11:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:12:52.078 08:11:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:52.078 08:11:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:52.078 08:11:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:52.078 08:11:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:52.078 08:11:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:52.078 08:11:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:52.078 08:11:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:52.078 08:11:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:52.078 08:11:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:52.078 08:11:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:52.078 08:11:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@309 -- # xtrace_disable 00:12:52.078 08:11:34 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:57.357 08:11:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:57.357 08:11:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # pci_devs=() 00:12:57.357 08:11:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:57.357 08:11:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:57.357 08:11:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:57.357 08:11:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:57.357 08:11:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:57.357 08:11:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # net_devs=() 00:12:57.357 08:11:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:57.357 08:11:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # e810=() 00:12:57.357 08:11:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # local -ga e810 00:12:57.357 08:11:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # x722=() 00:12:57.357 08:11:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # local -ga x722 00:12:57.357 08:11:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # mlx=() 00:12:57.357 08:11:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # local -ga mlx 00:12:57.357 08:11:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:57.357 08:11:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:57.357 08:11:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:57.357 
08:11:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:57.357 08:11:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:57.357 08:11:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:57.357 08:11:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:57.357 08:11:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:57.357 08:11:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:57.357 08:11:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:57.357 08:11:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:57.357 08:11:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:57.357 08:11:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:57.357 08:11:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:57.357 08:11:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:57.357 08:11:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:57.357 08:11:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:57.357 08:11:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:57.357 08:11:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:57.357 08:11:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 
(0x8086 - 0x159b)' 00:12:57.357 Found 0000:86:00.0 (0x8086 - 0x159b) 00:12:57.357 08:11:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:57.357 08:11:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:57.357 08:11:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:57.357 08:11:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:57.357 08:11:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:57.357 08:11:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:57.357 08:11:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:12:57.357 Found 0000:86:00.1 (0x8086 - 0x159b) 00:12:57.357 08:11:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:57.357 08:11:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:57.357 08:11:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:57.357 08:11:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:57.357 08:11:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:57.357 08:11:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:57.357 08:11:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:57.357 08:11:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:57.357 08:11:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:57.357 08:11:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:12:57.357 08:11:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:57.357 08:11:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:57.357 08:11:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:57.357 08:11:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:57.357 08:11:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:57.357 08:11:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:12:57.357 Found net devices under 0000:86:00.0: cvl_0_0 00:12:57.357 08:11:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:57.357 08:11:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:57.357 08:11:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:57.357 08:11:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:57.357 08:11:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:57.357 08:11:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:57.357 08:11:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:57.357 08:11:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:57.357 08:11:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:12:57.357 Found net devices under 0000:86:00.1: cvl_0_1 00:12:57.358 08:11:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:57.358 08:11:39 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:57.358 08:11:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # is_hw=yes 00:12:57.358 08:11:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:57.358 08:11:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:57.358 08:11:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:57.358 08:11:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:57.358 08:11:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:57.358 08:11:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:57.358 08:11:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:57.358 08:11:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:57.358 08:11:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:57.358 08:11:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:57.358 08:11:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:57.358 08:11:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:57.358 08:11:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:57.358 08:11:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:57.358 08:11:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:57.358 08:11:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:57.358 
08:11:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:57.358 08:11:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:57.358 08:11:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:57.358 08:11:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:57.358 08:11:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:57.358 08:11:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:57.358 08:11:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:57.358 08:11:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:57.358 08:11:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:57.358 08:11:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:57.358 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:57.358 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.377 ms 00:12:57.358 00:12:57.358 --- 10.0.0.2 ping statistics --- 00:12:57.358 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:57.358 rtt min/avg/max/mdev = 0.377/0.377/0.377/0.000 ms 00:12:57.358 08:11:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:57.358 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:57.358 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.081 ms 00:12:57.358 00:12:57.358 --- 10.0.0.1 ping statistics --- 00:12:57.358 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:57.358 rtt min/avg/max/mdev = 0.081/0.081/0.081/0.000 ms 00:12:57.358 08:11:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:57.358 08:11:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@450 -- # return 0 00:12:57.358 08:11:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:57.358 08:11:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:57.358 08:11:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:57.358 08:11:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:57.358 08:11:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:57.358 08:11:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:57.358 08:11:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:57.358 08:11:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:12:57.358 08:11:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:57.358 08:11:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:57.358 08:11:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:57.358 08:11:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@509 -- # nvmfpid=1294998 00:12:57.358 08:11:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@510 -- # waitforlisten 1294998 00:12:57.358 08:11:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@835 -- # '[' -z 1294998 
']' 00:12:57.358 08:11:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:57.358 08:11:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:57.358 08:11:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:57.358 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:57.358 08:11:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:57.358 08:11:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:57.358 08:11:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:57.618 [2024-11-28 08:11:39.653442] Starting SPDK v25.01-pre git sha1 27aaaa748 / DPDK 24.03.0 initialization... 00:12:57.618 [2024-11-28 08:11:39.653488] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:57.618 [2024-11-28 08:11:39.720275] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:57.618 [2024-11-28 08:11:39.763245] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:57.618 [2024-11-28 08:11:39.763284] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:57.618 [2024-11-28 08:11:39.763294] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:57.618 [2024-11-28 08:11:39.763300] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:12:57.618 [2024-11-28 08:11:39.763305] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:57.618 [2024-11-28 08:11:39.764907] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:57.618 [2024-11-28 08:11:39.765005] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:57.618 [2024-11-28 08:11:39.765027] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:57.618 [2024-11-28 08:11:39.765029] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:57.618 08:11:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:57.618 08:11:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@868 -- # return 0 00:12:57.618 08:11:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:57.618 08:11:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:57.618 08:11:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:57.877 08:11:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:57.877 08:11:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:12:57.877 08:11:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:57.877 08:11:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:57.877 08:11:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:57.877 08:11:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:12:57.877 "tick_rate": 2300000000, 00:12:57.877 "poll_groups": [ 00:12:57.877 { 00:12:57.877 "name": "nvmf_tgt_poll_group_000", 00:12:57.877 "admin_qpairs": 0, 00:12:57.877 "io_qpairs": 0, 00:12:57.877 
"current_admin_qpairs": 0, 00:12:57.877 "current_io_qpairs": 0, 00:12:57.877 "pending_bdev_io": 0, 00:12:57.877 "completed_nvme_io": 0, 00:12:57.877 "transports": [] 00:12:57.877 }, 00:12:57.877 { 00:12:57.877 "name": "nvmf_tgt_poll_group_001", 00:12:57.877 "admin_qpairs": 0, 00:12:57.877 "io_qpairs": 0, 00:12:57.877 "current_admin_qpairs": 0, 00:12:57.877 "current_io_qpairs": 0, 00:12:57.877 "pending_bdev_io": 0, 00:12:57.877 "completed_nvme_io": 0, 00:12:57.877 "transports": [] 00:12:57.877 }, 00:12:57.877 { 00:12:57.877 "name": "nvmf_tgt_poll_group_002", 00:12:57.877 "admin_qpairs": 0, 00:12:57.877 "io_qpairs": 0, 00:12:57.877 "current_admin_qpairs": 0, 00:12:57.877 "current_io_qpairs": 0, 00:12:57.877 "pending_bdev_io": 0, 00:12:57.877 "completed_nvme_io": 0, 00:12:57.877 "transports": [] 00:12:57.877 }, 00:12:57.877 { 00:12:57.877 "name": "nvmf_tgt_poll_group_003", 00:12:57.877 "admin_qpairs": 0, 00:12:57.877 "io_qpairs": 0, 00:12:57.877 "current_admin_qpairs": 0, 00:12:57.877 "current_io_qpairs": 0, 00:12:57.877 "pending_bdev_io": 0, 00:12:57.877 "completed_nvme_io": 0, 00:12:57.877 "transports": [] 00:12:57.877 } 00:12:57.877 ] 00:12:57.877 }' 00:12:57.877 08:11:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:12:57.877 08:11:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:12:57.877 08:11:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:12:57.877 08:11:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:12:57.877 08:11:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:12:57.877 08:11:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:12:57.877 08:11:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:12:57.877 08:11:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@31 -- # 
rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:57.877 08:11:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:57.877 08:11:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:57.877 [2024-11-28 08:11:40.008500] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:57.877 08:11:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:57.877 08:11:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:12:57.878 08:11:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:57.878 08:11:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:57.878 08:11:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:57.878 08:11:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:12:57.878 "tick_rate": 2300000000, 00:12:57.878 "poll_groups": [ 00:12:57.878 { 00:12:57.878 "name": "nvmf_tgt_poll_group_000", 00:12:57.878 "admin_qpairs": 0, 00:12:57.878 "io_qpairs": 0, 00:12:57.878 "current_admin_qpairs": 0, 00:12:57.878 "current_io_qpairs": 0, 00:12:57.878 "pending_bdev_io": 0, 00:12:57.878 "completed_nvme_io": 0, 00:12:57.878 "transports": [ 00:12:57.878 { 00:12:57.878 "trtype": "TCP" 00:12:57.878 } 00:12:57.878 ] 00:12:57.878 }, 00:12:57.878 { 00:12:57.878 "name": "nvmf_tgt_poll_group_001", 00:12:57.878 "admin_qpairs": 0, 00:12:57.878 "io_qpairs": 0, 00:12:57.878 "current_admin_qpairs": 0, 00:12:57.878 "current_io_qpairs": 0, 00:12:57.878 "pending_bdev_io": 0, 00:12:57.878 "completed_nvme_io": 0, 00:12:57.878 "transports": [ 00:12:57.878 { 00:12:57.878 "trtype": "TCP" 00:12:57.878 } 00:12:57.878 ] 00:12:57.878 }, 00:12:57.878 { 00:12:57.878 "name": "nvmf_tgt_poll_group_002", 00:12:57.878 "admin_qpairs": 0, 00:12:57.878 "io_qpairs": 0, 00:12:57.878 
"current_admin_qpairs": 0, 00:12:57.878 "current_io_qpairs": 0, 00:12:57.878 "pending_bdev_io": 0, 00:12:57.878 "completed_nvme_io": 0, 00:12:57.878 "transports": [ 00:12:57.878 { 00:12:57.878 "trtype": "TCP" 00:12:57.878 } 00:12:57.878 ] 00:12:57.878 }, 00:12:57.878 { 00:12:57.878 "name": "nvmf_tgt_poll_group_003", 00:12:57.878 "admin_qpairs": 0, 00:12:57.878 "io_qpairs": 0, 00:12:57.878 "current_admin_qpairs": 0, 00:12:57.878 "current_io_qpairs": 0, 00:12:57.878 "pending_bdev_io": 0, 00:12:57.878 "completed_nvme_io": 0, 00:12:57.878 "transports": [ 00:12:57.878 { 00:12:57.878 "trtype": "TCP" 00:12:57.878 } 00:12:57.878 ] 00:12:57.878 } 00:12:57.878 ] 00:12:57.878 }' 00:12:57.878 08:11:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:12:57.878 08:11:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:12:57.878 08:11:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:12:57.878 08:11:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:57.878 08:11:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:12:57.878 08:11:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:12:57.878 08:11:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:12:57.878 08:11:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:12:57.878 08:11:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:57.878 08:11:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:12:57.878 08:11:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:12:57.878 08:11:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@46 -- # 
MALLOC_BDEV_SIZE=64 00:12:57.878 08:11:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:12:57.878 08:11:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:12:57.878 08:11:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:57.878 08:11:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:58.137 Malloc1 00:12:58.137 08:11:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.137 08:11:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:58.137 08:11:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.137 08:11:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:58.137 08:11:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.138 08:11:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:58.138 08:11:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.138 08:11:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:58.138 08:11:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.138 08:11:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:12:58.138 08:11:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.138 08:11:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:58.138 08:11:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.138 08:11:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:58.138 08:11:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.138 08:11:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:58.138 [2024-11-28 08:11:40.176811] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:58.138 08:11:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.138 08:11:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:12:58.138 08:11:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # local es=0 00:12:58.138 08:11:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:12:58.138 08:11:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # local arg=nvme 00:12:58.138 08:11:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:58.138 08:11:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -t nvme 00:12:58.138 08:11:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:58.138 
08:11:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # type -P nvme 00:12:58.138 08:11:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:58.138 08:11:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # arg=/usr/sbin/nvme 00:12:58.138 08:11:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # [[ -x /usr/sbin/nvme ]] 00:12:58.138 08:11:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:12:58.138 [2024-11-28 08:11:40.209482] ctrlr.c: 825:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562' 00:12:58.138 Failed to write to /dev/nvme-fabrics: Input/output error 00:12:58.138 could not add new controller: failed to write to nvme-fabrics device 00:12:58.138 08:11:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # es=1 00:12:58.138 08:11:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:58.138 08:11:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:58.138 08:11:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:58.138 08:11:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:12:58.138 08:11:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.138 08:11:40 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:58.138 08:11:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.138 08:11:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:59.516 08:11:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:12:59.516 08:11:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:12:59.516 08:11:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:12:59.516 08:11:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:12:59.516 08:11:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:13:01.422 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:13:01.422 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:13:01.422 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:13:01.422 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:13:01.422 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:13:01.422 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:13:01.422 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:01.422 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:01.422 08:11:43 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:01.422 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:13:01.422 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:13:01.422 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:01.422 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:13:01.422 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:01.422 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:13:01.422 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:13:01.422 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:01.422 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:01.422 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:01.422 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:01.422 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # local es=0 00:13:01.422 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp 
-n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:01.422 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # local arg=nvme 00:13:01.422 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:01.423 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -t nvme 00:13:01.423 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:01.423 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # type -P nvme 00:13:01.423 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:01.423 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # arg=/usr/sbin/nvme 00:13:01.423 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # [[ -x /usr/sbin/nvme ]] 00:13:01.423 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:01.423 [2024-11-28 08:11:43.536282] ctrlr.c: 825:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562' 00:13:01.423 Failed to write to /dev/nvme-fabrics: Input/output error 00:13:01.423 could not add new controller: failed to write to nvme-fabrics device 00:13:01.423 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # es=1 00:13:01.423 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:01.423 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:01.423 08:11:43 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:01.423 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:13:01.423 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:01.423 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:01.423 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:01.423 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:02.799 08:11:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:13:02.799 08:11:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:13:02.799 08:11:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:13:02.799 08:11:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:13:02.799 08:11:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:13:04.701 08:11:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:13:04.701 08:11:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:13:04.701 08:11:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:13:04.701 08:11:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:13:04.702 08:11:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( 
nvme_devices == nvme_device_counter )) 00:13:04.702 08:11:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:13:04.702 08:11:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:04.702 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:04.702 08:11:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:04.702 08:11:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:13:04.702 08:11:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:13:04.702 08:11:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:04.702 08:11:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:13:04.702 08:11:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:04.702 08:11:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:13:04.702 08:11:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:04.702 08:11:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.702 08:11:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:04.702 08:11:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.702 08:11:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:13:04.702 08:11:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:04.702 08:11:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s 
SPDKISFASTANDAWESOME 00:13:04.702 08:11:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.702 08:11:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:04.702 08:11:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.702 08:11:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:04.702 08:11:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.702 08:11:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:04.702 [2024-11-28 08:11:46.858606] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:04.702 08:11:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.702 08:11:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:04.702 08:11:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.702 08:11:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:04.702 08:11:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.702 08:11:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:04.702 08:11:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.702 08:11:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:04.702 08:11:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.702 08:11:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- 
# nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:06.078 08:11:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:06.078 08:11:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:13:06.078 08:11:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:13:06.078 08:11:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:13:06.078 08:11:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:13:07.984 08:11:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:13:07.984 08:11:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:13:07.984 08:11:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:13:07.984 08:11:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:13:07.984 08:11:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:13:07.984 08:11:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:13:07.984 08:11:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:07.984 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:07.984 08:11:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:07.984 08:11:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:13:07.984 08:11:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:13:07.984 08:11:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:07.984 08:11:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:13:07.984 08:11:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:07.984 08:11:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:13:07.984 08:11:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:07.984 08:11:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.984 08:11:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:07.984 08:11:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.984 08:11:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:07.984 08:11:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.984 08:11:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:07.984 08:11:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.984 08:11:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:07.984 08:11:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:07.984 08:11:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.984 08:11:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:07.984 08:11:50 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.984 08:11:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:07.984 08:11:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.984 08:11:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:07.984 [2024-11-28 08:11:50.212070] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:07.984 08:11:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.984 08:11:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:07.984 08:11:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.984 08:11:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:07.984 08:11:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.984 08:11:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:07.984 08:11:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.984 08:11:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:07.984 08:11:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.984 08:11:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:09.360 08:11:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc 
-- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:09.360 08:11:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:13:09.360 08:11:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:13:09.360 08:11:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:13:09.360 08:11:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:13:11.263 08:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:13:11.263 08:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:13:11.263 08:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:13:11.263 08:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:13:11.263 08:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:13:11.263 08:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:13:11.263 08:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:11.263 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:11.263 08:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:11.263 08:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:13:11.263 08:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:13:11.263 08:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:11.263 08:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:13:11.263 08:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:11.263 08:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:13:11.263 08:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:11.263 08:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.263 08:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:11.263 08:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.263 08:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:11.263 08:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.263 08:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:11.263 08:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.263 08:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:11.263 08:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:11.263 08:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.263 08:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:11.263 08:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.263 08:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 
00:13:11.263 08:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.263 08:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:11.263 [2024-11-28 08:11:53.469325] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:11.263 08:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.263 08:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:11.263 08:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.263 08:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:11.263 08:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.263 08:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:11.263 08:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.263 08:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:11.263 08:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.263 08:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:12.641 08:11:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:12.641 08:11:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:13:12.641 08:11:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 
-- # local nvme_device_counter=1 nvme_devices=0 00:13:12.641 08:11:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:13:12.641 08:11:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:13:14.555 08:11:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:13:14.555 08:11:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:13:14.555 08:11:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:13:14.555 08:11:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:13:14.555 08:11:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:13:14.555 08:11:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:13:14.555 08:11:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:14.555 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:14.555 08:11:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:14.555 08:11:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:13:14.555 08:11:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:13:14.555 08:11:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:14.555 08:11:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:13:14.555 08:11:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:14.555 08:11:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@1235 -- # return 0 00:13:14.555 08:11:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:14.555 08:11:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:14.555 08:11:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:14.555 08:11:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:14.555 08:11:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:14.555 08:11:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:14.555 08:11:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:14.555 08:11:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:14.555 08:11:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:14.555 08:11:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:14.555 08:11:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:14.555 08:11:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:14.555 08:11:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:14.555 08:11:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:14.555 08:11:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:14.555 08:11:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:14.555 [2024-11-28 08:11:56.723405] 
tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:14.555 08:11:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:14.555 08:11:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:14.555 08:11:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:14.555 08:11:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:14.555 08:11:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:14.555 08:11:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:14.555 08:11:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:14.555 08:11:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:14.555 08:11:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:14.555 08:11:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:15.931 08:11:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:15.931 08:11:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:13:15.931 08:11:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:13:15.931 08:11:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:13:15.931 08:11:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # 
sleep 2 00:13:17.833 08:11:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:13:17.833 08:11:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:13:17.833 08:11:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:13:17.833 08:11:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:13:17.834 08:11:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:13:17.834 08:11:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:13:17.834 08:11:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:17.834 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:17.834 08:11:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:17.834 08:11:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:13:17.834 08:11:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:13:17.834 08:11:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:17.834 08:11:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:13:17.834 08:11:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:17.834 08:11:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:13:17.834 08:11:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:17.834 08:11:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.834 08:11:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:17.834 08:12:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.834 08:12:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:17.834 08:12:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.834 08:12:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:17.834 08:12:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.834 08:12:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:17.834 08:12:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:17.834 08:12:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.834 08:12:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:17.834 08:12:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.834 08:12:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:17.834 08:12:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.834 08:12:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:17.834 [2024-11-28 08:12:00.032674] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:17.834 08:12:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.834 08:12:00 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:17.834 08:12:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.834 08:12:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:17.834 08:12:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.834 08:12:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:17.834 08:12:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.834 08:12:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:17.834 08:12:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.834 08:12:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:19.210 08:12:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:19.210 08:12:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:13:19.210 08:12:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:13:19.210 08:12:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:13:19.210 08:12:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:13:21.112 08:12:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:13:21.112 08:12:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l 
-o NAME,SERIAL 00:13:21.112 08:12:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:13:21.112 08:12:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:13:21.112 08:12:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:13:21.112 08:12:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:13:21.112 08:12:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:21.112 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:21.112 08:12:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:21.112 08:12:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:13:21.112 08:12:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:13:21.112 08:12:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:21.112 08:12:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:13:21.112 08:12:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:21.371 08:12:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:13:21.371 08:12:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:21.371 08:12:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:21.371 08:12:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:21.371 08:12:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:13:21.371 08:12:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:21.371 08:12:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:21.371 08:12:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:21.371 08:12:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:21.371 08:12:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:13:21.371 08:12:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:21.371 08:12:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:21.371 08:12:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:21.371 08:12:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:21.371 08:12:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:21.371 08:12:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:21.371 08:12:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:21.371 08:12:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:21.371 [2024-11-28 08:12:03.433821] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:21.371 08:12:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:21.371 08:12:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:21.371 08:12:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:21.371 08:12:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:21.371 08:12:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:21.371 08:12:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:21.371 08:12:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:21.371 08:12:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:21.371 08:12:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:21.371 08:12:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:21.371 08:12:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:21.371 08:12:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:21.371 08:12:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:21.371 08:12:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:21.371 08:12:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:21.371 08:12:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:21.371 08:12:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:21.371 08:12:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:21.371 08:12:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:21.371 08:12:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:21.371 08:12:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:21.371 08:12:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:21.371 08:12:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:21.371 08:12:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:21.371 08:12:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:21.371 [2024-11-28 08:12:03.481931] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:21.371 08:12:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:21.371 08:12:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:21.371 08:12:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:21.371 08:12:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:21.371 08:12:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:21.371 08:12:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:21.371 08:12:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:21.371 08:12:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:21.371 08:12:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:21.371 08:12:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:21.371 
08:12:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:21.371 08:12:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:21.371 08:12:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:21.371 08:12:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:21.371 08:12:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:21.371 08:12:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:21.371 08:12:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:21.371 08:12:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:21.371 08:12:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:21.371 08:12:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:21.371 08:12:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:21.371 08:12:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:21.371 08:12:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:21.371 08:12:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:21.371 08:12:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:21.371 [2024-11-28 08:12:03.530074] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:21.371 08:12:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:13:21.371 08:12:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:21.371 08:12:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:21.371 08:12:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:21.371 08:12:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:21.371 08:12:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:21.371 08:12:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:21.371 08:12:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:21.371 08:12:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:21.371 08:12:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:21.371 08:12:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:21.372 08:12:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:21.372 08:12:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:21.372 08:12:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:21.372 08:12:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:21.372 08:12:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:21.372 08:12:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:21.372 08:12:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:21.372 
08:12:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:21.372 08:12:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:21.372 08:12:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:21.372 08:12:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:21.372 08:12:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:21.372 08:12:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:21.372 08:12:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:21.372 [2024-11-28 08:12:03.578237] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:21.372 08:12:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:21.372 08:12:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:21.372 08:12:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:21.372 08:12:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:21.372 08:12:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:21.372 08:12:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:21.372 08:12:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:21.372 08:12:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:21.372 08:12:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:21.372 08:12:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:21.372 08:12:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:21.372 08:12:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:21.372 08:12:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:21.372 08:12:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:21.372 08:12:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:21.372 08:12:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:21.372 08:12:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:21.372 08:12:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:21.372 08:12:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:21.372 08:12:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:21.372 08:12:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:21.372 08:12:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:21.372 08:12:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:21.372 08:12:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:21.372 08:12:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:21.372 [2024-11-28 
08:12:03.626402] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:21.372 08:12:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:21.372 08:12:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:21.372 08:12:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:21.372 08:12:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:21.630 08:12:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:21.630 08:12:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:21.630 08:12:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:21.630 08:12:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:21.630 08:12:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:21.630 08:12:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:21.630 08:12:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:21.630 08:12:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:21.630 08:12:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:21.630 08:12:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:21.630 08:12:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:21.630 08:12:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:21.630 
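The trace above repeats one iteration of the second test loop (target/rpc.sh@99 through @107): create a subsystem, add a TCP listener, attach the Malloc1 namespace, allow any host, then tear it down again, five times. A paraphrased sketch of that loop, reconstructed from the traced commands only (the `rpc_cmd` helper wraps SPDK's `scripts/rpc.py` against a running target, so this is illustrative, not runnable standalone; flag values are copied from the log):

```shell
# Reconstructed from the xtrace output; assumes a running SPDK nvmf target
# and the test suite's rpc_cmd wrapper around scripts/rpc.py.
for i in $(seq 1 "$loops"); do
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
    rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
    rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
done
```

Each successful RPC shows up in the log as `[[ 0 == 0 ]]`, and each `nvmf_subsystem_add_listener` call produces the `*** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***` notice seen between iterations.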
08:12:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:21.630 08:12:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:13:21.630 08:12:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:21.630 08:12:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:21.630 08:12:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:21.630 08:12:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:13:21.630 "tick_rate": 2300000000, 00:13:21.630 "poll_groups": [ 00:13:21.630 { 00:13:21.630 "name": "nvmf_tgt_poll_group_000", 00:13:21.630 "admin_qpairs": 2, 00:13:21.630 "io_qpairs": 168, 00:13:21.630 "current_admin_qpairs": 0, 00:13:21.630 "current_io_qpairs": 0, 00:13:21.630 "pending_bdev_io": 0, 00:13:21.630 "completed_nvme_io": 219, 00:13:21.630 "transports": [ 00:13:21.630 { 00:13:21.630 "trtype": "TCP" 00:13:21.630 } 00:13:21.630 ] 00:13:21.630 }, 00:13:21.630 { 00:13:21.630 "name": "nvmf_tgt_poll_group_001", 00:13:21.630 "admin_qpairs": 2, 00:13:21.630 "io_qpairs": 168, 00:13:21.630 "current_admin_qpairs": 0, 00:13:21.630 "current_io_qpairs": 0, 00:13:21.630 "pending_bdev_io": 0, 00:13:21.630 "completed_nvme_io": 317, 00:13:21.630 "transports": [ 00:13:21.630 { 00:13:21.630 "trtype": "TCP" 00:13:21.630 } 00:13:21.630 ] 00:13:21.630 }, 00:13:21.630 { 00:13:21.630 "name": "nvmf_tgt_poll_group_002", 00:13:21.630 "admin_qpairs": 1, 00:13:21.630 "io_qpairs": 168, 00:13:21.630 "current_admin_qpairs": 0, 00:13:21.630 "current_io_qpairs": 0, 00:13:21.630 "pending_bdev_io": 0, 00:13:21.630 "completed_nvme_io": 220, 00:13:21.630 "transports": [ 00:13:21.630 { 00:13:21.630 "trtype": "TCP" 00:13:21.630 } 00:13:21.630 ] 00:13:21.630 }, 00:13:21.630 { 00:13:21.630 "name": "nvmf_tgt_poll_group_003", 00:13:21.630 "admin_qpairs": 2, 00:13:21.630 "io_qpairs": 168, 
00:13:21.630 "current_admin_qpairs": 0, 00:13:21.630 "current_io_qpairs": 0, 00:13:21.630 "pending_bdev_io": 0, 00:13:21.630 "completed_nvme_io": 266, 00:13:21.630 "transports": [ 00:13:21.630 { 00:13:21.630 "trtype": "TCP" 00:13:21.630 } 00:13:21.630 ] 00:13:21.630 } 00:13:21.630 ] 00:13:21.630 }' 00:13:21.630 08:12:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:13:21.630 08:12:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:13:21.630 08:12:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:13:21.630 08:12:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:13:21.630 08:12:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:13:21.630 08:12:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:13:21.630 08:12:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:13:21.630 08:12:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:13:21.630 08:12:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:13:21.630 08:12:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # (( 672 > 0 )) 00:13:21.630 08:12:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:13:21.630 08:12:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:13:21.630 08:12:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:13:21.630 08:12:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:21.630 08:12:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@121 -- # sync 00:13:21.630 08:12:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:21.630 08:12:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@124 -- # set +e 00:13:21.630 08:12:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:21.630 08:12:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:21.630 rmmod nvme_tcp 00:13:21.630 rmmod nvme_fabrics 00:13:21.630 rmmod nvme_keyring 00:13:21.630 08:12:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:21.630 08:12:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@128 -- # set -e 00:13:21.630 08:12:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@129 -- # return 0 00:13:21.630 08:12:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@517 -- # '[' -n 1294998 ']' 00:13:21.630 08:12:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@518 -- # killprocess 1294998 00:13:21.630 08:12:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@954 -- # '[' -z 1294998 ']' 00:13:21.630 08:12:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@958 -- # kill -0 1294998 00:13:21.630 08:12:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@959 -- # uname 00:13:21.630 08:12:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:21.630 08:12:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1294998 00:13:21.889 08:12:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:21.889 08:12:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:21.889 08:12:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1294998' 00:13:21.889 killing process with pid 1294998 00:13:21.889 08:12:03 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@973 -- # kill 1294998 00:13:21.889 08:12:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@978 -- # wait 1294998 00:13:21.889 08:12:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:21.889 08:12:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:21.889 08:12:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:21.889 08:12:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@297 -- # iptr 00:13:21.889 08:12:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-save 00:13:21.889 08:12:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:21.889 08:12:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-restore 00:13:21.889 08:12:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:21.889 08:12:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:21.889 08:12:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:21.889 08:12:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:21.889 08:12:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:24.464 08:12:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:24.464 00:13:24.464 real 0m32.255s 00:13:24.464 user 1m38.451s 00:13:24.464 sys 0m6.130s 00:13:24.464 08:12:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:24.464 08:12:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:24.464 ************************************ 00:13:24.464 END TEST 
nvmf_rpc 00:13:24.464 ************************************ 00:13:24.464 08:12:06 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@23 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:13:24.464 08:12:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:24.464 08:12:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:24.464 08:12:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:24.464 ************************************ 00:13:24.464 START TEST nvmf_invalid 00:13:24.464 ************************************ 00:13:24.464 08:12:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:13:24.464 * Looking for test storage... 00:13:24.464 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:24.464 08:12:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:13:24.464 08:12:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1693 -- # lcov --version 00:13:24.464 08:12:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:13:24.464 08:12:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:13:24.464 08:12:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:24.464 08:12:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:24.464 08:12:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:24.464 08:12:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # IFS=.-: 00:13:24.464 08:12:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid 
-- scripts/common.sh@336 -- # read -ra ver1 00:13:24.464 08:12:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # IFS=.-: 00:13:24.464 08:12:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # read -ra ver2 00:13:24.464 08:12:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@338 -- # local 'op=<' 00:13:24.464 08:12:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@340 -- # ver1_l=2 00:13:24.464 08:12:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@341 -- # ver2_l=1 00:13:24.464 08:12:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:24.464 08:12:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@344 -- # case "$op" in 00:13:24.464 08:12:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@345 -- # : 1 00:13:24.464 08:12:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:24.464 08:12:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:24.464 08:12:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # decimal 1 00:13:24.464 08:12:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=1 00:13:24.464 08:12:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:24.464 08:12:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 1 00:13:24.464 08:12:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # ver1[v]=1 00:13:24.464 08:12:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # decimal 2 00:13:24.464 08:12:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=2 00:13:24.464 08:12:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:24.464 08:12:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 2 00:13:24.464 08:12:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # ver2[v]=2 00:13:24.464 08:12:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:24.464 08:12:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:24.464 08:12:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # return 0 00:13:24.464 08:12:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:24.464 08:12:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:13:24.464 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:24.464 --rc genhtml_branch_coverage=1 00:13:24.464 --rc genhtml_function_coverage=1 00:13:24.464 --rc genhtml_legend=1 00:13:24.464 --rc geninfo_all_blocks=1 00:13:24.464 --rc geninfo_unexecuted_blocks=1 00:13:24.464 00:13:24.464 ' 
00:13:24.464 08:12:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:13:24.464 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:24.464 --rc genhtml_branch_coverage=1 00:13:24.464 --rc genhtml_function_coverage=1 00:13:24.464 --rc genhtml_legend=1 00:13:24.464 --rc geninfo_all_blocks=1 00:13:24.464 --rc geninfo_unexecuted_blocks=1 00:13:24.464 00:13:24.464 ' 00:13:24.464 08:12:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:13:24.464 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:24.464 --rc genhtml_branch_coverage=1 00:13:24.464 --rc genhtml_function_coverage=1 00:13:24.464 --rc genhtml_legend=1 00:13:24.464 --rc geninfo_all_blocks=1 00:13:24.464 --rc geninfo_unexecuted_blocks=1 00:13:24.464 00:13:24.464 ' 00:13:24.464 08:12:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:13:24.464 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:24.464 --rc genhtml_branch_coverage=1 00:13:24.464 --rc genhtml_function_coverage=1 00:13:24.464 --rc genhtml_legend=1 00:13:24.464 --rc geninfo_all_blocks=1 00:13:24.464 --rc geninfo_unexecuted_blocks=1 00:13:24.464 00:13:24.464 ' 00:13:24.464 08:12:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:24.464 08:12:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:13:24.464 08:12:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:24.464 08:12:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:24.464 08:12:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:24.464 08:12:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:24.464 08:12:06 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:24.464 08:12:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:24.464 08:12:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:24.465 08:12:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:24.465 08:12:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:24.465 08:12:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:24.465 08:12:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:13:24.465 08:12:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:13:24.465 08:12:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:24.465 08:12:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:24.465 08:12:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:24.465 08:12:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:24.465 08:12:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:24.465 08:12:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@15 -- # shopt -s extglob 00:13:24.465 08:12:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:24.465 08:12:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:24.465 
08:12:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:24.465 08:12:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:24.465 08:12:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:24.465 08:12:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:24.465 08:12:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:13:24.465 08:12:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:24.465 08:12:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@51 -- # : 0 00:13:24.465 08:12:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:24.465 08:12:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:24.465 08:12:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:24.465 08:12:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:24.465 08:12:06 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:24.465 08:12:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:24.465 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:24.465 08:12:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:24.465 08:12:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:24.465 08:12:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:24.465 08:12:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:13:24.465 08:12:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:24.465 08:12:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:13:24.465 08:12:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:13:24.465 08:12:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:13:24.465 08:12:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:13:24.465 08:12:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:24.465 08:12:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:24.465 08:12:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:24.465 08:12:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:24.465 08:12:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:24.465 08:12:06 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:24.465 08:12:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:24.465 08:12:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:24.465 08:12:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:13:24.465 08:12:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:13:24.465 08:12:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@309 -- # xtrace_disable 00:13:24.465 08:12:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:29.741 08:12:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:29.741 08:12:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # pci_devs=() 00:13:29.741 08:12:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:29.741 08:12:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:29.741 08:12:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:29.741 08:12:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:29.741 08:12:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:29.741 08:12:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # net_devs=() 00:13:29.741 08:12:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:29.741 08:12:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # e810=() 00:13:29.741 08:12:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # local -ga e810 00:13:29.741 08:12:11 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # x722=() 00:13:29.741 08:12:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # local -ga x722 00:13:29.741 08:12:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # mlx=() 00:13:29.741 08:12:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # local -ga mlx 00:13:29.741 08:12:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:29.741 08:12:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:29.741 08:12:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:29.741 08:12:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:29.741 08:12:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:29.741 08:12:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:29.741 08:12:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:29.741 08:12:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:29.741 08:12:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:29.741 08:12:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:29.741 08:12:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:29.741 08:12:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:29.741 08:12:11 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:29.741 08:12:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:29.741 08:12:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:13:29.741 08:12:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:29.741 08:12:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:29.741 08:12:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:29.741 08:12:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:29.741 08:12:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:13:29.741 Found 0000:86:00.0 (0x8086 - 0x159b) 00:13:29.741 08:12:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:29.741 08:12:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:29.741 08:12:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:29.741 08:12:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:29.741 08:12:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:29.741 08:12:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:29.741 08:12:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:13:29.741 Found 0000:86:00.1 (0x8086 - 0x159b) 00:13:29.741 08:12:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:29.741 08:12:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice 
== unbound ]] 00:13:29.741 08:12:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:29.741 08:12:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:29.741 08:12:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:29.741 08:12:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:29.741 08:12:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:29.741 08:12:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:29.741 08:12:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:29.741 08:12:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:29.741 08:12:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:29.741 08:12:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:29.741 08:12:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:29.741 08:12:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:29.741 08:12:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:29.741 08:12:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:13:29.741 Found net devices under 0000:86:00.0: cvl_0_0 00:13:29.741 08:12:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:29.741 08:12:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:29.741 08:12:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:29.741 08:12:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:29.741 08:12:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:29.741 08:12:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:29.741 08:12:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:29.741 08:12:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:29.741 08:12:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:13:29.741 Found net devices under 0000:86:00.1: cvl_0_1 00:13:29.741 08:12:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:29.741 08:12:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:13:29.741 08:12:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # is_hw=yes 00:13:29.741 08:12:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:13:29.741 08:12:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:13:29.741 08:12:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:13:29.741 08:12:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:29.741 08:12:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:29.741 08:12:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:29.741 08:12:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:29.741 08:12:11 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:29.741 08:12:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:29.741 08:12:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:29.741 08:12:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:29.741 08:12:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:29.741 08:12:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:29.741 08:12:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:29.741 08:12:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:29.741 08:12:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:29.741 08:12:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:29.741 08:12:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:29.741 08:12:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:29.742 08:12:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:29.742 08:12:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:29.742 08:12:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:30.001 08:12:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:30.001 08:12:12 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:30.001 08:12:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:30.001 08:12:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:30.001 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:30.001 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.440 ms 00:13:30.001 00:13:30.001 --- 10.0.0.2 ping statistics --- 00:13:30.001 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:30.001 rtt min/avg/max/mdev = 0.440/0.440/0.440/0.000 ms 00:13:30.001 08:12:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:30.001 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:30.001 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.219 ms 00:13:30.001 00:13:30.001 --- 10.0.0.1 ping statistics --- 00:13:30.001 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:30.001 rtt min/avg/max/mdev = 0.219/0.219/0.219/0.000 ms 00:13:30.001 08:12:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:30.001 08:12:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@450 -- # return 0 00:13:30.001 08:12:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:30.001 08:12:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:30.001 08:12:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:30.001 08:12:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:30.001 08:12:12 
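The firewall step traced above (common.sh@287 expanding into the @790 iptables call) goes through an `ipts` wrapper that re-issues the rule with an `SPDK_NVMF:`-prefixed comment, so the harness can later find and remove exactly the rules it added. A minimal dry-run sketch of that wrapper, assuming the real helper simply appends `-m comment --comment "SPDK_NVMF:$*"`; this version echoes the command instead of executing it, so it needs no root:

```shell
# Dry-run sketch of the ipts wrapper seen at nvmf/common.sh@287/@790.
# Assumption: the real helper executes
#   iptables "$@" -m comment --comment "SPDK_NVMF:$*"
# Here we only print the command so the sketch runs unprivileged.
ipts() {
    echo iptables "$@" -m comment --comment "SPDK_NVMF:$*"
}

# Reproduces the rule from the log above.
ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
```

Tagging every rule with a recognizable comment means teardown can, for example, list rules with `iptables -S`, grep for `SPDK_NVMF`, and delete only harness-owned entries without touching the host firewall.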
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:30.001 08:12:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:30.001 08:12:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:30.001 08:12:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:13:30.001 08:12:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:30.001 08:12:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:30.001 08:12:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:30.001 08:12:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:30.001 08:12:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@509 -- # nvmfpid=1303127 00:13:30.001 08:12:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@510 -- # waitforlisten 1303127 00:13:30.001 08:12:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@835 -- # '[' -z 1303127 ']' 00:13:30.001 08:12:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:30.001 08:12:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:30.001 08:12:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:30.001 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:13:30.001 08:12:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:30.001 08:12:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:30.001 [2024-11-28 08:12:12.127466] Starting SPDK v25.01-pre git sha1 27aaaa748 / DPDK 24.03.0 initialization... 00:13:30.001 [2024-11-28 08:12:12.127509] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:30.001 [2024-11-28 08:12:12.193557] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:30.001 [2024-11-28 08:12:12.237203] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:30.001 [2024-11-28 08:12:12.237239] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:30.001 [2024-11-28 08:12:12.237246] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:30.001 [2024-11-28 08:12:12.237252] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:30.001 [2024-11-28 08:12:12.237257] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
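Above, `nvmfappstart` launches `nvmf_tgt` inside the namespace (common.sh@508), records its pid (@509), and then `waitforlisten` (@510, autotest_common.sh@839-842) blocks until the target brings up its JSON-RPC UNIX socket, printing the "Waiting for process to start up and listen on /var/tmp/spdk.sock..." message seen in the log. A hedged sketch of that polling loop — the internals (retry count, liveness check) are assumptions based on the `rpc_addr`/`max_retries` locals visible in the trace, and it is exercised here against a plain temp file rather than a real SPDK socket:

```shell
# Sketch of a waitforlisten-style poll loop. Assumptions: the real
# helper also verifies the socket accepts an RPC; this version only
# checks that the path appears while the target pid stays alive.
waitforlisten() {
    local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=100 i
    for (( i = 0; i < max_retries; i++ )); do
        # Check the socket path first, then that the process is alive,
        # so a target that creates the socket and idles still passes.
        [ -e "$rpc_addr" ] && return 0
        kill -0 "$pid" 2>/dev/null || return 1
        sleep 0.1
    done
    return 1
}

# Demo: a background job stands in for nvmf_tgt and creates the
# "socket" after a short delay.
sock=$(mktemp -u)
( sleep 0.3; touch "$sock" ) &
waitforlisten $! "$sock" && echo "listening"
rm -f "$sock"
```

Checking the path before the pid avoids a race where the stand-in process exits immediately after creating the socket and the liveness test would otherwise report failure.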
00:13:30.001 [2024-11-28 08:12:12.238689] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:30.001 [2024-11-28 08:12:12.238787] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:30.001 [2024-11-28 08:12:12.238874] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:30.001 [2024-11-28 08:12:12.238876] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:30.261 08:12:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:30.261 08:12:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@868 -- # return 0 00:13:30.261 08:12:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:30.261 08:12:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:30.261 08:12:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:30.261 08:12:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:30.261 08:12:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:13:30.261 08:12:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode16018 00:13:30.520 [2024-11-28 08:12:12.545251] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:13:30.520 08:12:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:13:30.520 { 00:13:30.520 "nqn": "nqn.2016-06.io.spdk:cnode16018", 00:13:30.520 "tgt_name": "foobar", 00:13:30.520 "method": "nvmf_create_subsystem", 00:13:30.520 "req_id": 1 00:13:30.520 } 00:13:30.520 Got JSON-RPC error 
response 00:13:30.520 response: 00:13:30.520 { 00:13:30.520 "code": -32603, 00:13:30.520 "message": "Unable to find target foobar" 00:13:30.520 }' 00:13:30.520 08:12:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:13:30.520 { 00:13:30.520 "nqn": "nqn.2016-06.io.spdk:cnode16018", 00:13:30.520 "tgt_name": "foobar", 00:13:30.520 "method": "nvmf_create_subsystem", 00:13:30.520 "req_id": 1 00:13:30.520 } 00:13:30.520 Got JSON-RPC error response 00:13:30.520 response: 00:13:30.520 { 00:13:30.520 "code": -32603, 00:13:30.520 "message": "Unable to find target foobar" 00:13:30.520 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:13:30.520 08:12:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:13:30.520 08:12:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode19102 00:13:30.520 [2024-11-28 08:12:12.753976] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode19102: invalid serial number 'SPDKISFASTANDAWESOME' 00:13:30.520 08:12:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:13:30.520 { 00:13:30.520 "nqn": "nqn.2016-06.io.spdk:cnode19102", 00:13:30.520 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:13:30.520 "method": "nvmf_create_subsystem", 00:13:30.520 "req_id": 1 00:13:30.520 } 00:13:30.520 Got JSON-RPC error response 00:13:30.520 response: 00:13:30.520 { 00:13:30.520 "code": -32602, 00:13:30.520 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:13:30.520 }' 00:13:30.520 08:12:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:13:30.520 { 00:13:30.520 "nqn": "nqn.2016-06.io.spdk:cnode19102", 00:13:30.520 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:13:30.520 "method": "nvmf_create_subsystem", 
00:13:30.520 "req_id": 1 00:13:30.520 } 00:13:30.520 Got JSON-RPC error response 00:13:30.520 response: 00:13:30.520 { 00:13:30.520 "code": -32602, 00:13:30.520 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:13:30.520 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:13:30.779 08:12:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:13:30.779 08:12:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode17386 00:13:30.779 [2024-11-28 08:12:12.962669] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode17386: invalid model number 'SPDK_Controller' 00:13:30.779 08:12:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:13:30.779 { 00:13:30.779 "nqn": "nqn.2016-06.io.spdk:cnode17386", 00:13:30.779 "model_number": "SPDK_Controller\u001f", 00:13:30.779 "method": "nvmf_create_subsystem", 00:13:30.779 "req_id": 1 00:13:30.779 } 00:13:30.779 Got JSON-RPC error response 00:13:30.779 response: 00:13:30.779 { 00:13:30.779 "code": -32602, 00:13:30.779 "message": "Invalid MN SPDK_Controller\u001f" 00:13:30.779 }' 00:13:30.779 08:12:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:13:30.779 { 00:13:30.779 "nqn": "nqn.2016-06.io.spdk:cnode17386", 00:13:30.779 "model_number": "SPDK_Controller\u001f", 00:13:30.779 "method": "nvmf_create_subsystem", 00:13:30.779 "req_id": 1 00:13:30.779 } 00:13:30.779 Got JSON-RPC error response 00:13:30.779 response: 00:13:30.779 { 00:13:30.779 "code": -32602, 00:13:30.779 "message": "Invalid MN SPDK_Controller\u001f" 00:13:30.779 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:13:30.779 08:12:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:13:30.779 08:12:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local 
length=21 ll 00:13:30.779 08:12:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:13:30.779 08:12:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:13:30.779 08:12:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:13:30.779 08:12:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:13:30.779 08:12:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:30.779 08:12:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 72 00:13:30.779 08:12:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x48' 00:13:30.779 08:12:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=H 00:13:30.779 08:12:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:30.779 08:12:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:30.779 08:12:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 54 00:13:30.779 08:12:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x36' 00:13:30.779 08:12:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=6 00:13:30.779 08:12:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:30.779 08:12:13 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:30.779 08:12:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 48 00:13:30.779 08:12:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x30' 00:13:30.779 08:12:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=0 00:13:30.779 08:12:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:30.779 08:12:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:30.779 08:12:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 84 00:13:30.779 08:12:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x54' 00:13:30.779 08:12:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=T 00:13:30.779 08:12:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:30.779 08:12:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:30.779 08:12:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 119 00:13:30.779 08:12:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x77' 00:13:30.779 08:12:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=w 00:13:30.779 08:12:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:30.779 08:12:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:30.779 08:12:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 87 00:13:30.779 08:12:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x57' 00:13:30.779 08:12:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=W 00:13:30.779 08:12:13 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:30.779 08:12:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:30.779 08:12:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 55 00:13:30.779 08:12:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x37' 00:13:30.779 08:12:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=7 00:13:30.779 08:12:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:30.779 08:12:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:31.038 08:12:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 58 00:13:31.038 08:12:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3a' 00:13:31.038 08:12:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=: 00:13:31.038 08:12:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:31.038 08:12:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:31.038 08:12:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 87 00:13:31.038 08:12:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x57' 00:13:31.038 08:12:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=W 00:13:31.038 08:12:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:31.038 08:12:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:31.038 08:12:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 41 00:13:31.038 08:12:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x29' 00:13:31.038 08:12:13 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=')' 00:13:31.038 08:12:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:31.038 08:12:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:31.038 08:12:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 124 00:13:31.038 08:12:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7c' 00:13:31.038 08:12:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='|' 00:13:31.038 08:12:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:31.039 08:12:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:31.039 08:12:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 62 00:13:31.039 08:12:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3e' 00:13:31.039 08:12:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='>' 00:13:31.039 08:12:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:31.039 08:12:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:31.039 08:12:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 109 00:13:31.039 08:12:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6d' 00:13:31.039 08:12:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=m 00:13:31.039 08:12:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:31.039 08:12:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:31.039 08:12:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 113 00:13:31.039 08:12:13 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x71' 00:13:31.039 08:12:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=q 00:13:31.039 08:12:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:31.039 08:12:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:31.039 08:12:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 66 00:13:31.039 08:12:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x42' 00:13:31.039 08:12:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=B 00:13:31.039 08:12:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:31.039 08:12:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:31.039 08:12:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 78 00:13:31.039 08:12:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4e' 00:13:31.039 08:12:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=N 00:13:31.039 08:12:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:31.039 08:12:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:31.039 08:12:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 90 00:13:31.039 08:12:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5a' 00:13:31.039 08:12:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Z 00:13:31.039 08:12:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:31.039 08:12:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:31.039 08:12:13 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 103 00:13:31.039 08:12:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x67' 00:13:31.039 08:12:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=g 00:13:31.039 08:12:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:31.039 08:12:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:31.039 08:12:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 54 00:13:31.039 08:12:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x36' 00:13:31.039 08:12:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=6 00:13:31.039 08:12:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:31.039 08:12:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:31.039 08:12:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 122 00:13:31.039 08:12:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7a' 00:13:31.039 08:12:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=z 00:13:31.039 08:12:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:31.039 08:12:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:31.039 08:12:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 57 00:13:31.039 08:12:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x39' 00:13:31.039 08:12:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=9 00:13:31.039 08:12:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:31.039 08:12:13 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:31.039 08:12:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ H == \- ]] 00:13:31.039 08:12:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo 'H60TwW7:W)|>mqBNZg6z9' 00:13:31.039 08:12:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s 'H60TwW7:W)|>mqBNZg6z9' nqn.2016-06.io.spdk:cnode16417 00:13:31.298 [2024-11-28 08:12:13.315885] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode16417: invalid serial number 'H60TwW7:W)|>mqBNZg6z9' 00:13:31.298 08:12:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:13:31.298 { 00:13:31.298 "nqn": "nqn.2016-06.io.spdk:cnode16417", 00:13:31.298 "serial_number": "H60TwW7:W)|>mqBNZg6z9", 00:13:31.298 "method": "nvmf_create_subsystem", 00:13:31.298 "req_id": 1 00:13:31.298 } 00:13:31.298 Got JSON-RPC error response 00:13:31.298 response: 00:13:31.298 { 00:13:31.298 "code": -32602, 00:13:31.298 "message": "Invalid SN H60TwW7:W)|>mqBNZg6z9" 00:13:31.298 }' 00:13:31.298 08:12:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:13:31.298 { 00:13:31.298 "nqn": "nqn.2016-06.io.spdk:cnode16417", 00:13:31.298 "serial_number": "H60TwW7:W)|>mqBNZg6z9", 00:13:31.298 "method": "nvmf_create_subsystem", 00:13:31.298 "req_id": 1 00:13:31.298 } 00:13:31.298 Got JSON-RPC error response 00:13:31.298 response: 00:13:31.298 { 00:13:31.298 "code": -32602, 00:13:31.298 "message": "Invalid SN H60TwW7:W)|>mqBNZg6z9" 00:13:31.298 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:13:31.298 08:12:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:13:31.298 08:12:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:13:31.298 08:12:13 
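The long per-character trace above is `gen_random_s` (target/invalid.sh@19-31) assembling an invalid serial number: for each position it picks an entry from a `chars` array of decimal codes 32-127, renders it with `printf %x` plus `echo -e '\xNN'`, and appends it with `string+=`, yielding strings like `H60TwW7:W)|>mqBNZg6z9`. A compact re-implementation of that loop — hedged: the real array spans 32-127, while this sketch restricts codes to 33-126 so `$(echo -e ...)` never drops a trailing space or DEL and the output length stays deterministic:

```shell
# Compact sketch of gen_random_s, whose per-character xtrace fills the
# log above. Assumption: codes limited to 33..126 (the real helper's
# chars array covers 32..127) to keep the length deterministic.
gen_random_s() {
    local length=$1 ll string= x
    for (( ll = 0; ll < length; ll++ )); do
        x=$(printf %x $(( 33 + RANDOM % 94 )))   # random code in 33..126
        string+=$(echo -e "\x$x")                # code -> character
    done
    printf '%s\n' "$string"                      # printf: safe for leading '-'
}

gen_random_s 21   # a 21-character string, as used for the invalid SN test
```

The test then feeds the generated string to `nvmf_create_subsystem -s`, expecting the target to reject it with `Invalid SN ...`, exactly as the `== *\I\n\v\a\l\i\d\ \S\N*` glob match in the log verifies.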
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:13:31.298 08:12:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:13:31.298 08:12:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:13:31.298 08:12:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:13:31.298 08:12:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:31.298 08:12:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 40 00:13:31.298 08:12:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x28' 00:13:31.298 08:12:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='(' 00:13:31.298 08:12:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:31.298 08:12:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:31.298 08:12:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 36 00:13:31.298 08:12:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x24' 00:13:31.298 08:12:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='$' 00:13:31.298 08:12:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:31.298 08:12:13 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:31.298 08:12:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 41 00:13:31.298 08:12:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x29' 00:13:31.298 08:12:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=')' 00:13:31.298 08:12:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:31.298 08:12:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:31.298 08:12:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 33 00:13:31.298 08:12:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x21' 00:13:31.298 08:12:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='!' 00:13:31.298 08:12:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:31.298 08:12:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:31.298 08:12:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 43 00:13:31.298 08:12:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2b' 00:13:31.298 08:12:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=+ 00:13:31.298 08:12:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:31.298 08:12:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:31.298 08:12:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 80 00:13:31.298 08:12:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x50' 00:13:31.298 08:12:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=P 00:13:31.298 08:12:13 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:31.298 08:12:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:31.298 08:12:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 37 00:13:31.298 08:12:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x25' 00:13:31.298 08:12:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=%
[trace condensed: the invalid.sh@24/@25 sequence above repeats once per character, 00:13:31.298 through 00:13:31.558, while the random model number is assembled one character at a time]
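The repeated invalid.sh@24/@25 entries in the trace are a loop that assembles a random string one printable character at a time: printf %x produces the hex code, echo -e decodes it back into a character, and string+= appends it. A standalone sketch of that technique (the helper name gen_random_s and the exact character-selection logic are assumptions, not copied from the script):

```shell
# Build a string of random printable ASCII characters one at a time,
# mirroring the printf %x / echo -e / string+= pattern in the trace.
gen_random_s() {
	local length=$1 ll string=""
	for ((ll = 0; ll < length; ll++)); do
		# Pick a code in the printable range 32 (space) .. 126 (~).
		local code=$((RANDOM % 95 + 32))
		# printf %x renders the code as hex (e.g. 37 -> "25"); echo -en
		# decodes "\x25" back into "%", which is appended to the result.
		string+=$(echo -en "\x$(printf '%x' "$code")")
	done
	# printf rather than echo, in case the string starts with a dash.
	printf '%s\n' "$string"
}

gen_random_s 41
```

In the log, a 41-character string built this way is then passed as the model number (-d) to rpc.py nvmf_create_subsystem, which rejects it with the expected -32602 "Invalid MN" error.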
00:13:31.558 08:12:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='`' 00:13:31.558 08:12:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:31.558 08:12:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:31.558 08:12:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 81 00:13:31.558 08:12:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x51' 00:13:31.558 08:12:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Q 00:13:31.558 08:12:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:31.558 08:12:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:31.558 08:12:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ ( == \- ]] 00:13:31.558 08:12:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo '($)!+P%.y+pA]*G9j!rPsp.6'\''Sc 0SW+qbW!B j`Q' 00:13:31.558 08:12:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d '($)!+P%.y+pA]*G9j!rPsp.6'\''Sc 0SW+qbW!B j`Q' nqn.2016-06.io.spdk:cnode4333 00:13:31.558 [2024-11-28 08:12:13.797450] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode4333: invalid model number '($)!+P%.y+pA]*G9j!rPsp.6'Sc 0SW+qbW!B j`Q' 00:13:31.817 08:12:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # out='request: 00:13:31.817 { 00:13:31.817 "nqn": "nqn.2016-06.io.spdk:cnode4333", 00:13:31.817 "model_number": "($)!+P%.y+pA]*G9j!rPsp.6'\''Sc 0SW+qbW!B j`Q", 00:13:31.817 "method": "nvmf_create_subsystem", 00:13:31.817 "req_id": 1 00:13:31.817 } 00:13:31.817 Got JSON-RPC error response 00:13:31.817 response: 00:13:31.817 { 00:13:31.817 "code": -32602, 
00:13:31.817 "message": "Invalid MN ($)!+P%.y+pA]*G9j!rPsp.6'\''Sc 0SW+qbW!B j`Q" 00:13:31.817 }' 00:13:31.817 08:12:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@59 -- # [[ request: 00:13:31.817 { 00:13:31.817 "nqn": "nqn.2016-06.io.spdk:cnode4333", 00:13:31.817 "model_number": "($)!+P%.y+pA]*G9j!rPsp.6'Sc 0SW+qbW!B j`Q", 00:13:31.817 "method": "nvmf_create_subsystem", 00:13:31.817 "req_id": 1 00:13:31.817 } 00:13:31.817 Got JSON-RPC error response 00:13:31.817 response: 00:13:31.817 { 00:13:31.817 "code": -32602, 00:13:31.817 "message": "Invalid MN ($)!+P%.y+pA]*G9j!rPsp.6'Sc 0SW+qbW!B j`Q" 00:13:31.817 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:13:31.817 08:12:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:13:31.817 [2024-11-28 08:12:13.998190] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:31.817 08:12:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:13:32.076 08:12:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:13:32.076 08:12:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # echo '' 00:13:32.076 08:12:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1 00:13:32.076 08:12:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # IP= 00:13:32.076 08:12:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:13:32.335 [2024-11-28 08:12:14.411562] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:13:32.335 08:12:14 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # out='request: 00:13:32.335 { 00:13:32.335 "nqn": "nqn.2016-06.io.spdk:cnode", 00:13:32.335 "listen_address": { 00:13:32.335 "trtype": "tcp", 00:13:32.335 "traddr": "", 00:13:32.335 "trsvcid": "4421" 00:13:32.335 }, 00:13:32.335 "method": "nvmf_subsystem_remove_listener", 00:13:32.335 "req_id": 1 00:13:32.335 } 00:13:32.335 Got JSON-RPC error response 00:13:32.335 response: 00:13:32.335 { 00:13:32.335 "code": -32602, 00:13:32.335 "message": "Invalid parameters" 00:13:32.335 }' 00:13:32.335 08:12:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@70 -- # [[ request: 00:13:32.335 { 00:13:32.335 "nqn": "nqn.2016-06.io.spdk:cnode", 00:13:32.335 "listen_address": { 00:13:32.335 "trtype": "tcp", 00:13:32.335 "traddr": "", 00:13:32.335 "trsvcid": "4421" 00:13:32.335 }, 00:13:32.335 "method": "nvmf_subsystem_remove_listener", 00:13:32.335 "req_id": 1 00:13:32.335 } 00:13:32.335 Got JSON-RPC error response 00:13:32.335 response: 00:13:32.335 { 00:13:32.335 "code": -32602, 00:13:32.335 "message": "Invalid parameters" 00:13:32.335 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:13:32.335 08:12:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8559 -i 0 00:13:32.594 [2024-11-28 08:12:14.624242] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode8559: invalid cntlid range [0-65519] 00:13:32.594 08:12:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # out='request: 00:13:32.594 { 00:13:32.594 "nqn": "nqn.2016-06.io.spdk:cnode8559", 00:13:32.594 "min_cntlid": 0, 00:13:32.594 "method": "nvmf_create_subsystem", 00:13:32.594 "req_id": 1 00:13:32.594 } 00:13:32.594 Got JSON-RPC error response 00:13:32.594 response: 00:13:32.594 { 00:13:32.594 "code": -32602, 00:13:32.594 "message": "Invalid cntlid range 
[0-65519]" 00:13:32.594 }' 00:13:32.594 08:12:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@74 -- # [[ request: 00:13:32.594 { 00:13:32.594 "nqn": "nqn.2016-06.io.spdk:cnode8559", 00:13:32.594 "min_cntlid": 0, 00:13:32.594 "method": "nvmf_create_subsystem", 00:13:32.594 "req_id": 1 00:13:32.594 } 00:13:32.594 Got JSON-RPC error response 00:13:32.594 response: 00:13:32.594 { 00:13:32.594 "code": -32602, 00:13:32.594 "message": "Invalid cntlid range [0-65519]" 00:13:32.594 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:32.594 08:12:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode16020 -i 65520 00:13:32.594 [2024-11-28 08:12:14.836970] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode16020: invalid cntlid range [65520-65519] 00:13:32.853 08:12:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # out='request: 00:13:32.853 { 00:13:32.853 "nqn": "nqn.2016-06.io.spdk:cnode16020", 00:13:32.853 "min_cntlid": 65520, 00:13:32.853 "method": "nvmf_create_subsystem", 00:13:32.853 "req_id": 1 00:13:32.853 } 00:13:32.853 Got JSON-RPC error response 00:13:32.853 response: 00:13:32.853 { 00:13:32.853 "code": -32602, 00:13:32.853 "message": "Invalid cntlid range [65520-65519]" 00:13:32.853 }' 00:13:32.853 08:12:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@76 -- # [[ request: 00:13:32.853 { 00:13:32.853 "nqn": "nqn.2016-06.io.spdk:cnode16020", 00:13:32.853 "min_cntlid": 65520, 00:13:32.853 "method": "nvmf_create_subsystem", 00:13:32.853 "req_id": 1 00:13:32.853 } 00:13:32.854 Got JSON-RPC error response 00:13:32.854 response: 00:13:32.854 { 00:13:32.854 "code": -32602, 00:13:32.854 "message": "Invalid cntlid range [65520-65519]" 00:13:32.854 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:32.854 08:12:14 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8660 -I 0 00:13:32.854 [2024-11-28 08:12:15.041664] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode8660: invalid cntlid range [1-0] 00:13:32.854 08:12:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # out='request: 00:13:32.854 { 00:13:32.854 "nqn": "nqn.2016-06.io.spdk:cnode8660", 00:13:32.854 "max_cntlid": 0, 00:13:32.854 "method": "nvmf_create_subsystem", 00:13:32.854 "req_id": 1 00:13:32.854 } 00:13:32.854 Got JSON-RPC error response 00:13:32.854 response: 00:13:32.854 { 00:13:32.854 "code": -32602, 00:13:32.854 "message": "Invalid cntlid range [1-0]" 00:13:32.854 }' 00:13:32.854 08:12:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@78 -- # [[ request: 00:13:32.854 { 00:13:32.854 "nqn": "nqn.2016-06.io.spdk:cnode8660", 00:13:32.854 "max_cntlid": 0, 00:13:32.854 "method": "nvmf_create_subsystem", 00:13:32.854 "req_id": 1 00:13:32.854 } 00:13:32.854 Got JSON-RPC error response 00:13:32.854 response: 00:13:32.854 { 00:13:32.854 "code": -32602, 00:13:32.854 "message": "Invalid cntlid range [1-0]" 00:13:32.854 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:32.854 08:12:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode14990 -I 65520 00:13:33.112 [2024-11-28 08:12:15.234317] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode14990: invalid cntlid range [1-65520] 00:13:33.112 08:12:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # out='request: 00:13:33.112 { 00:13:33.112 "nqn": "nqn.2016-06.io.spdk:cnode14990", 00:13:33.112 "max_cntlid": 65520, 00:13:33.112 "method": "nvmf_create_subsystem", 00:13:33.112 "req_id": 
1 00:13:33.112 } 00:13:33.112 Got JSON-RPC error response 00:13:33.112 response: 00:13:33.112 { 00:13:33.112 "code": -32602, 00:13:33.112 "message": "Invalid cntlid range [1-65520]" 00:13:33.112 }' 00:13:33.112 08:12:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@80 -- # [[ request: 00:13:33.112 { 00:13:33.112 "nqn": "nqn.2016-06.io.spdk:cnode14990", 00:13:33.112 "max_cntlid": 65520, 00:13:33.112 "method": "nvmf_create_subsystem", 00:13:33.112 "req_id": 1 00:13:33.112 } 00:13:33.112 Got JSON-RPC error response 00:13:33.112 response: 00:13:33.112 { 00:13:33.113 "code": -32602, 00:13:33.113 "message": "Invalid cntlid range [1-65520]" 00:13:33.113 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:33.113 08:12:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11317 -i 6 -I 5 00:13:33.372 [2024-11-28 08:12:15.431019] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode11317: invalid cntlid range [6-5] 00:13:33.372 08:12:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # out='request: 00:13:33.372 { 00:13:33.372 "nqn": "nqn.2016-06.io.spdk:cnode11317", 00:13:33.372 "min_cntlid": 6, 00:13:33.372 "max_cntlid": 5, 00:13:33.372 "method": "nvmf_create_subsystem", 00:13:33.372 "req_id": 1 00:13:33.372 } 00:13:33.372 Got JSON-RPC error response 00:13:33.372 response: 00:13:33.372 { 00:13:33.372 "code": -32602, 00:13:33.372 "message": "Invalid cntlid range [6-5]" 00:13:33.372 }' 00:13:33.372 08:12:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@84 -- # [[ request: 00:13:33.372 { 00:13:33.372 "nqn": "nqn.2016-06.io.spdk:cnode11317", 00:13:33.372 "min_cntlid": 6, 00:13:33.372 "max_cntlid": 5, 00:13:33.372 "method": "nvmf_create_subsystem", 00:13:33.372 "req_id": 1 00:13:33.372 } 00:13:33.372 Got JSON-RPC error response 00:13:33.372 response: 00:13:33.372 
{ 00:13:33.372 "code": -32602, 00:13:33.372 "message": "Invalid cntlid range [6-5]" 00:13:33.372 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:33.372 08:12:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:13:33.372 08:12:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:13:33.372 { 00:13:33.372 "name": "foobar", 00:13:33.372 "method": "nvmf_delete_target", 00:13:33.372 "req_id": 1 00:13:33.372 } 00:13:33.372 Got JSON-RPC error response 00:13:33.372 response: 00:13:33.372 { 00:13:33.372 "code": -32602, 00:13:33.372 "message": "The specified target doesn'\''t exist, cannot delete it." 00:13:33.372 }' 00:13:33.372 08:12:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:13:33.372 { 00:13:33.372 "name": "foobar", 00:13:33.372 "method": "nvmf_delete_target", 00:13:33.372 "req_id": 1 00:13:33.372 } 00:13:33.372 Got JSON-RPC error response 00:13:33.372 response: 00:13:33.372 { 00:13:33.372 "code": -32602, 00:13:33.372 "message": "The specified target doesn't exist, cannot delete it." 
00:13:33.372 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:13:33.372 08:12:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:13:33.372 08:12:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:13:33.372 08:12:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:33.372 08:12:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@121 -- # sync 00:13:33.372 08:12:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:33.372 08:12:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@124 -- # set +e 00:13:33.372 08:12:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:33.372 08:12:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:33.372 rmmod nvme_tcp 00:13:33.372 rmmod nvme_fabrics 00:13:33.372 rmmod nvme_keyring 00:13:33.372 08:12:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:33.372 08:12:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@128 -- # set -e 00:13:33.372 08:12:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@129 -- # return 0 00:13:33.372 08:12:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@517 -- # '[' -n 1303127 ']' 00:13:33.372 08:12:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@518 -- # killprocess 1303127 00:13:33.372 08:12:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@954 -- # '[' -z 1303127 ']' 00:13:33.372 08:12:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@958 -- # kill -0 1303127 00:13:33.372 08:12:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@959 -- # uname 00:13:33.372 08:12:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:33.372 08:12:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1303127 00:13:33.631 08:12:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:33.631 08:12:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:33.631 08:12:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1303127' 00:13:33.631 killing process with pid 1303127 00:13:33.631 08:12:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@973 -- # kill 1303127 00:13:33.631 08:12:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@978 -- # wait 1303127 00:13:33.631 08:12:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:33.631 08:12:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:33.631 08:12:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:33.631 08:12:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@297 -- # iptr 00:13:33.631 08:12:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # iptables-save 00:13:33.631 08:12:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:33.631 08:12:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # iptables-restore 00:13:33.631 08:12:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:33.631 08:12:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:33.631 08:12:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:33.631 08:12:15 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:33.631 08:12:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:36.170 08:12:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:36.170 00:13:36.170 real 0m11.692s 00:13:36.170 user 0m18.543s 00:13:36.170 sys 0m5.212s 00:13:36.170 08:12:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:36.170 08:12:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:36.170 ************************************ 00:13:36.170 END TEST nvmf_invalid 00:13:36.170 ************************************ 00:13:36.170 08:12:17 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@24 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:13:36.170 08:12:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:36.170 08:12:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:36.170 08:12:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:36.170 ************************************ 00:13:36.170 START TEST nvmf_connect_stress 00:13:36.170 ************************************ 00:13:36.170 08:12:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:13:36.170 * Looking for test storage... 
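The teardown entries above (kill -0, ps --no-headers -o comm=, kill, wait from common/autotest_common.sh) follow the usual killprocess pattern. A minimal sketch under that assumption (the real helper has more branches, e.g. for sudo-wrapped targets):

```shell
# Minimal killprocess sketch: confirm the pid still exists, refuse to
# kill a bare sudo wrapper, then terminate and reap the process.
killprocess() {
	local pid=$1
	kill -0 "$pid" 2>/dev/null || return 1     # process must still exist
	local name
	name=$(ps --no-headers -o comm= "$pid")
	if [ "$name" = sudo ]; then
		return 1                               # don't kill the sudo wrapper itself
	fi
	echo "killing process with pid $pid"
	kill "$pid"
	wait "$pid" 2>/dev/null || true            # reap it when it is our child
}
```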
00:13:36.170 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:36.170 08:12:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:13:36.170 08:12:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1693 -- # lcov --version 00:13:36.170 08:12:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:13:36.170 08:12:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:13:36.170 08:12:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:36.170 08:12:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:36.170 08:12:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:36.170 08:12:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # IFS=.-: 00:13:36.170 08:12:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # read -ra ver1 00:13:36.170 08:12:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # IFS=.-: 00:13:36.170 08:12:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # read -ra ver2 00:13:36.170 08:12:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@338 -- # local 'op=<' 00:13:36.170 08:12:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@340 -- # ver1_l=2 00:13:36.170 08:12:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@341 -- # ver2_l=1 00:13:36.170 08:12:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:36.170 08:12:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@344 -- # case "$op" in 00:13:36.170 08:12:18 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@345 -- # : 1 00:13:36.170 08:12:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:36.170 08:12:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:36.170 08:12:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # decimal 1 00:13:36.170 08:12:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=1 00:13:36.170 08:12:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:36.170 08:12:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 1 00:13:36.170 08:12:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:13:36.170 08:12:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # decimal 2 00:13:36.170 08:12:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=2 00:13:36.170 08:12:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:36.170 08:12:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 2 00:13:36.170 08:12:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:13:36.170 08:12:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:36.170 08:12:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:36.170 08:12:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # return 0 00:13:36.170 08:12:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:36.170 08:12:18 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:13:36.170 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:36.170 --rc genhtml_branch_coverage=1 00:13:36.170 --rc genhtml_function_coverage=1 00:13:36.170 --rc genhtml_legend=1 00:13:36.170 --rc geninfo_all_blocks=1 00:13:36.170 --rc geninfo_unexecuted_blocks=1 00:13:36.170 00:13:36.170 ' 00:13:36.170 08:12:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:13:36.170 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:36.170 --rc genhtml_branch_coverage=1 00:13:36.170 --rc genhtml_function_coverage=1 00:13:36.170 --rc genhtml_legend=1 00:13:36.170 --rc geninfo_all_blocks=1 00:13:36.171 --rc geninfo_unexecuted_blocks=1 00:13:36.171 00:13:36.171 ' 00:13:36.171 08:12:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:13:36.171 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:36.171 --rc genhtml_branch_coverage=1 00:13:36.171 --rc genhtml_function_coverage=1 00:13:36.171 --rc genhtml_legend=1 00:13:36.171 --rc geninfo_all_blocks=1 00:13:36.171 --rc geninfo_unexecuted_blocks=1 00:13:36.171 00:13:36.171 ' 00:13:36.171 08:12:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:13:36.171 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:36.171 --rc genhtml_branch_coverage=1 00:13:36.171 --rc genhtml_function_coverage=1 00:13:36.171 --rc genhtml_legend=1 00:13:36.171 --rc geninfo_all_blocks=1 00:13:36.171 --rc geninfo_unexecuted_blocks=1 00:13:36.171 00:13:36.171 ' 00:13:36.171 08:12:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:36.171 08:12:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 
00:13:36.171 08:12:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:36.171 08:12:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:36.171 08:12:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:36.171 08:12:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:36.171 08:12:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:36.171 08:12:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:36.171 08:12:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:36.171 08:12:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:36.171 08:12:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:36.171 08:12:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:36.171 08:12:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:13:36.171 08:12:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:13:36.171 08:12:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:36.171 08:12:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:36.171 08:12:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:36.171 08:12:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:36.171 08:12:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:36.171 08:12:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:13:36.171 08:12:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:36.171 08:12:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:36.171 08:12:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:36.171 08:12:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:36.171 08:12:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:36.171 08:12:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:36.171 08:12:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:13:36.171 08:12:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:36.171 08:12:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@51 -- # : 0 00:13:36.171 08:12:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:36.171 08:12:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:36.171 08:12:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:36.171 08:12:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:36.171 08:12:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:36.171 08:12:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:36.171 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:36.171 08:12:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:36.171 08:12:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:36.171 08:12:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:36.171 08:12:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 
00:13:36.171 08:12:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:36.171 08:12:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:36.171 08:12:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:36.171 08:12:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:36.171 08:12:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:36.171 08:12:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:36.171 08:12:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:36.171 08:12:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:36.171 08:12:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:13:36.171 08:12:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:13:36.171 08:12:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:13:36.171 08:12:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:41.449 08:12:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:41.449 08:12:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:13:41.449 08:12:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:41.449 08:12:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:41.449 08:12:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:41.449 08:12:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:41.449 08:12:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:41.449 08:12:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # net_devs=() 00:13:41.449 08:12:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:41.449 08:12:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # e810=() 00:13:41.449 08:12:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # local -ga e810 00:13:41.449 08:12:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # x722=() 00:13:41.449 08:12:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # local -ga x722 00:13:41.449 08:12:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # mlx=() 00:13:41.449 08:12:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:13:41.449 08:12:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:41.449 08:12:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:41.449 08:12:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:41.449 08:12:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:41.449 08:12:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:41.449 08:12:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:41.449 08:12:23 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:41.449 08:12:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:41.449 08:12:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:41.449 08:12:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:41.449 08:12:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:41.449 08:12:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:41.449 08:12:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:41.449 08:12:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:41.449 08:12:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:13:41.449 08:12:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:41.450 08:12:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:41.450 08:12:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:41.450 08:12:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:41.450 08:12:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:13:41.450 Found 0000:86:00.0 (0x8086 - 0x159b) 00:13:41.450 08:12:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:41.450 08:12:23 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:41.450 08:12:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:41.450 08:12:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:41.450 08:12:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:41.450 08:12:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:41.450 08:12:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:13:41.450 Found 0000:86:00.1 (0x8086 - 0x159b) 00:13:41.450 08:12:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:41.450 08:12:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:41.450 08:12:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:41.450 08:12:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:41.450 08:12:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:41.450 08:12:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:41.450 08:12:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:41.450 08:12:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:41.450 08:12:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:41.450 08:12:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:41.450 08:12:23 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:41.450 08:12:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:41.450 08:12:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:41.450 08:12:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:41.450 08:12:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:41.450 08:12:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:13:41.450 Found net devices under 0000:86:00.0: cvl_0_0 00:13:41.450 08:12:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:41.450 08:12:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:41.450 08:12:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:41.450 08:12:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:41.450 08:12:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:41.450 08:12:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:41.450 08:12:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:41.450 08:12:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:41.450 08:12:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:13:41.450 Found net devices under 0000:86:00.1: cvl_0_1 
00:13:41.450 08:12:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:41.450 08:12:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:13:41.450 08:12:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:13:41.450 08:12:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:13:41.450 08:12:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:13:41.450 08:12:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:13:41.450 08:12:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:41.450 08:12:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:41.450 08:12:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:41.450 08:12:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:41.450 08:12:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:41.450 08:12:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:41.450 08:12:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:41.450 08:12:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:41.450 08:12:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:41.450 08:12:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:41.450 08:12:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress 
-- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:41.450 08:12:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:41.450 08:12:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:41.450 08:12:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:41.450 08:12:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:41.710 08:12:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:41.710 08:12:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:41.710 08:12:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:41.710 08:12:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:41.710 08:12:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:41.710 08:12:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:41.710 08:12:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:41.710 08:12:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:41.710 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:13:41.710 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.415 ms 00:13:41.710 00:13:41.710 --- 10.0.0.2 ping statistics --- 00:13:41.710 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:41.710 rtt min/avg/max/mdev = 0.415/0.415/0.415/0.000 ms 00:13:41.710 08:12:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:41.710 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:41.710 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.187 ms 00:13:41.710 00:13:41.710 --- 10.0.0.1 ping statistics --- 00:13:41.710 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:41.711 rtt min/avg/max/mdev = 0.187/0.187/0.187/0.000 ms 00:13:41.711 08:12:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:41.711 08:12:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@450 -- # return 0 00:13:41.711 08:12:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:41.711 08:12:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:41.711 08:12:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:41.711 08:12:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:41.711 08:12:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:41.711 08:12:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:41.711 08:12:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:41.711 08:12:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:13:41.711 08:12:23 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:41.711 08:12:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:41.711 08:12:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:41.711 08:12:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@509 -- # nvmfpid=1307295 00:13:41.711 08:12:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:13:41.711 08:12:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@510 -- # waitforlisten 1307295 00:13:41.711 08:12:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@835 -- # '[' -z 1307295 ']' 00:13:41.711 08:12:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:41.711 08:12:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:41.711 08:12:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:41.711 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:41.711 08:12:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:41.711 08:12:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:41.970 [2024-11-28 08:12:24.023760] Starting SPDK v25.01-pre git sha1 27aaaa748 / DPDK 24.03.0 initialization... 
00:13:41.970 [2024-11-28 08:12:24.023808] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:41.970 [2024-11-28 08:12:24.089751] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:41.970 [2024-11-28 08:12:24.129669] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:41.970 [2024-11-28 08:12:24.129708] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:41.970 [2024-11-28 08:12:24.129718] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:41.970 [2024-11-28 08:12:24.129726] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:41.970 [2024-11-28 08:12:24.129733] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:13:41.970 [2024-11-28 08:12:24.131151] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:41.970 [2024-11-28 08:12:24.131219] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:41.970 [2024-11-28 08:12:24.131221] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:41.970 08:12:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:41.970 08:12:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@868 -- # return 0 00:13:41.970 08:12:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:41.970 08:12:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:41.970 08:12:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:42.230 08:12:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:42.230 08:12:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:42.230 08:12:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.230 08:12:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:42.230 [2024-11-28 08:12:24.276829] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:42.230 08:12:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.230 08:12:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:13:42.230 08:12:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 
-- # xtrace_disable 00:13:42.230 08:12:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:42.230 08:12:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.230 08:12:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:42.230 08:12:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.230 08:12:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:42.230 [2024-11-28 08:12:24.297071] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:42.230 08:12:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.230 08:12:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:13:42.230 08:12:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.230 08:12:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:42.230 NULL1 00:13:42.230 08:12:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.230 08:12:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=1307376 00:13:42.230 08:12:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:13:42.230 08:12:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@23 -- # 
rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:13:42.230 08:12:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:13:42.230 08:12:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:13:42.230 08:12:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:42.230 08:12:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:42.230 08:12:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:42.230 08:12:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:42.230 08:12:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:42.230 08:12:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:42.230 08:12:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:42.230 08:12:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:42.230 08:12:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:42.230 08:12:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:42.231 08:12:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:42.231 08:12:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:42.231 08:12:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:42.231 08:12:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
target/connect_stress.sh@28 -- # cat 00:13:42.231 08:12:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:42.231 08:12:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:42.231 08:12:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:42.231 08:12:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:42.231 08:12:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:42.231 08:12:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:42.231 08:12:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:42.231 08:12:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:42.231 08:12:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:42.231 08:12:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:42.231 08:12:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:42.231 08:12:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:42.231 08:12:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:42.231 08:12:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:42.231 08:12:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:42.231 08:12:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:42.231 08:12:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:42.231 08:12:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:42.231 08:12:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:42.231 08:12:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:42.231 08:12:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:42.231 08:12:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:42.231 08:12:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:42.231 08:12:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:42.231 08:12:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:42.231 08:12:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:42.231 08:12:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1307376 00:13:42.231 08:12:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:42.231 08:12:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.231 08:12:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:42.490 08:12:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.490 08:12:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1307376 00:13:42.490 08:12:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:42.490 08:12:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.490 08:12:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:43.058 08:12:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.058 08:12:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1307376 00:13:43.058 08:12:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:43.058 08:12:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.058 08:12:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:43.317 08:12:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.317 08:12:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1307376 00:13:43.317 08:12:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:43.317 08:12:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.317 08:12:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:43.576 08:12:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.576 08:12:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1307376 00:13:43.576 08:12:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:43.576 08:12:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.576 08:12:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:43.835 08:12:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.835 08:12:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1307376 00:13:43.835 08:12:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:43.835 08:12:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.835 08:12:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:44.095 08:12:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:44.095 08:12:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1307376 00:13:44.095 08:12:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:44.095 08:12:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:44.095 08:12:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:44.663 08:12:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:44.663 08:12:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1307376 00:13:44.663 08:12:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:44.663 08:12:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:44.663 08:12:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:44.922 08:12:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:44.922 08:12:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1307376 00:13:44.922 08:12:27 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:44.922 08:12:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:44.922 08:12:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:45.181 08:12:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:45.181 08:12:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1307376 00:13:45.181 08:12:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:45.181 08:12:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:45.181 08:12:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:45.441 08:12:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:45.441 08:12:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1307376 00:13:45.441 08:12:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:45.441 08:12:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:45.441 08:12:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:46.008 08:12:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:46.008 08:12:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1307376 00:13:46.008 08:12:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:46.008 08:12:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:46.008 
08:12:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:46.268 08:12:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:46.268 08:12:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1307376 00:13:46.268 08:12:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:46.268 08:12:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:46.268 08:12:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:46.527 08:12:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:46.527 08:12:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1307376 00:13:46.527 08:12:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:46.527 08:12:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:46.527 08:12:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:46.786 08:12:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:46.786 08:12:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1307376 00:13:46.786 08:12:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:46.786 08:12:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:46.786 08:12:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:47.046 08:12:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.046 
08:12:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1307376 00:13:47.046 08:12:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:47.046 08:12:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.046 08:12:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:47.616 08:12:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.616 08:12:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1307376 00:13:47.616 08:12:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:47.616 08:12:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.616 08:12:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:47.876 08:12:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.876 08:12:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1307376 00:13:47.876 08:12:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:47.876 08:12:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.876 08:12:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:48.135 08:12:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:48.135 08:12:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1307376 00:13:48.135 08:12:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 
00:13:48.135 08:12:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.135 08:12:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:48.394 08:12:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:48.394 08:12:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1307376 00:13:48.394 08:12:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:48.395 08:12:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.395 08:12:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:48.654 08:12:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:48.654 08:12:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1307376 00:13:48.654 08:12:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:48.654 08:12:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.654 08:12:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:49.224 08:12:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.224 08:12:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1307376 00:13:49.224 08:12:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:49.224 08:12:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.224 08:12:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set 
+x 00:13:49.483 08:12:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.483 08:12:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1307376 00:13:49.483 08:12:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:49.483 08:12:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.483 08:12:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:49.743 08:12:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.743 08:12:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1307376 00:13:49.743 08:12:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:49.743 08:12:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.743 08:12:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:50.002 08:12:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.002 08:12:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1307376 00:13:50.002 08:12:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:50.002 08:12:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.002 08:12:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:50.570 08:12:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.570 08:12:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill 
-0 1307376 00:13:50.570 08:12:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:50.570 08:12:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.570 08:12:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:50.829 08:12:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.829 08:12:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1307376 00:13:50.829 08:12:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:50.829 08:12:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.829 08:12:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:51.088 08:12:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.088 08:12:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1307376 00:13:51.088 08:12:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:51.088 08:12:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:51.088 08:12:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:51.347 08:12:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.347 08:12:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1307376 00:13:51.347 08:12:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:51.347 08:12:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:13:51.347 08:12:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:51.606 08:12:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.606 08:12:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1307376 00:13:51.606 08:12:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:51.606 08:12:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:51.606 08:12:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:52.174 08:12:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.174 08:12:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1307376 00:13:52.174 08:12:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:52.174 08:12:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.174 08:12:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:52.174 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:13:52.433 08:12:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.433 08:12:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1307376 00:13:52.433 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (1307376) - No such process 00:13:52.433 08:12:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 1307376 00:13:52.434 08:12:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:13:52.434 08:12:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:13:52.434 08:12:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:13:52.434 08:12:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:52.434 08:12:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@121 -- # sync 00:13:52.434 08:12:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:52.434 08:12:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@124 -- # set +e 00:13:52.434 08:12:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:52.434 08:12:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:52.434 rmmod nvme_tcp 00:13:52.434 rmmod nvme_fabrics 00:13:52.434 rmmod nvme_keyring 00:13:52.434 08:12:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:52.434 08:12:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@128 -- # set -e 00:13:52.434 08:12:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@129 -- # return 0 00:13:52.434 08:12:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@517 -- # '[' -n 1307295 ']' 00:13:52.434 08:12:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@518 -- # killprocess 1307295 00:13:52.434 08:12:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@954 -- # '[' -z 1307295 ']' 00:13:52.434 08:12:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@958 -- # kill -0 1307295 00:13:52.434 08:12:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
common/autotest_common.sh@959 -- # uname 00:13:52.434 08:12:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:52.434 08:12:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1307295 00:13:52.434 08:12:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:13:52.434 08:12:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:13:52.434 08:12:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1307295' 00:13:52.434 killing process with pid 1307295 00:13:52.434 08:12:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@973 -- # kill 1307295 00:13:52.434 08:12:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@978 -- # wait 1307295 00:13:52.693 08:12:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:52.693 08:12:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:52.694 08:12:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:52.694 08:12:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@297 -- # iptr 00:13:52.694 08:12:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-save 00:13:52.694 08:12:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-restore 00:13:52.694 08:12:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:52.694 08:12:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:52.694 08:12:34 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:52.694 08:12:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:52.694 08:12:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:52.694 08:12:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:55.229 08:12:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:55.229 00:13:55.229 real 0m18.872s 00:13:55.229 user 0m39.437s 00:13:55.229 sys 0m8.377s 00:13:55.229 08:12:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:55.229 08:12:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:55.229 ************************************ 00:13:55.229 END TEST nvmf_connect_stress 00:13:55.229 ************************************ 00:13:55.229 08:12:36 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@25 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:13:55.229 08:12:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:55.229 08:12:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:55.229 08:12:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:55.229 ************************************ 00:13:55.229 START TEST nvmf_fused_ordering 00:13:55.229 ************************************ 00:13:55.229 08:12:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:13:55.229 * Looking for test storage... 
00:13:55.229 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:55.229 08:12:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:13:55.229 08:12:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1693 -- # lcov --version 00:13:55.229 08:12:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:13:55.229 08:12:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:13:55.229 08:12:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:55.229 08:12:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:55.229 08:12:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:55.229 08:12:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # IFS=.-: 00:13:55.229 08:12:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # read -ra ver1 00:13:55.229 08:12:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # IFS=.-: 00:13:55.229 08:12:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # read -ra ver2 00:13:55.229 08:12:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@338 -- # local 'op=<' 00:13:55.229 08:12:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@340 -- # ver1_l=2 00:13:55.229 08:12:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@341 -- # ver2_l=1 00:13:55.229 08:12:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:55.229 08:12:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@344 -- # case "$op" in 00:13:55.229 08:12:37 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@345 -- # : 1 00:13:55.229 08:12:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:55.229 08:12:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:55.229 08:12:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # decimal 1 00:13:55.229 08:12:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=1 00:13:55.229 08:12:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:55.229 08:12:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 1 00:13:55.229 08:12:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # ver1[v]=1 00:13:55.229 08:12:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # decimal 2 00:13:55.229 08:12:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=2 00:13:55.229 08:12:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:55.229 08:12:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 2 00:13:55.229 08:12:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # ver2[v]=2 00:13:55.229 08:12:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:55.229 08:12:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:55.229 08:12:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # return 0 00:13:55.229 08:12:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:55.229 08:12:37 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:13:55.229 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:55.229 --rc genhtml_branch_coverage=1 00:13:55.229 --rc genhtml_function_coverage=1 00:13:55.229 --rc genhtml_legend=1 00:13:55.229 --rc geninfo_all_blocks=1 00:13:55.229 --rc geninfo_unexecuted_blocks=1 00:13:55.229 00:13:55.229 ' 00:13:55.229 08:12:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:13:55.229 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:55.229 --rc genhtml_branch_coverage=1 00:13:55.229 --rc genhtml_function_coverage=1 00:13:55.229 --rc genhtml_legend=1 00:13:55.229 --rc geninfo_all_blocks=1 00:13:55.229 --rc geninfo_unexecuted_blocks=1 00:13:55.229 00:13:55.229 ' 00:13:55.229 08:12:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:13:55.229 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:55.229 --rc genhtml_branch_coverage=1 00:13:55.229 --rc genhtml_function_coverage=1 00:13:55.229 --rc genhtml_legend=1 00:13:55.229 --rc geninfo_all_blocks=1 00:13:55.229 --rc geninfo_unexecuted_blocks=1 00:13:55.229 00:13:55.229 ' 00:13:55.229 08:12:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:13:55.229 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:55.229 --rc genhtml_branch_coverage=1 00:13:55.229 --rc genhtml_function_coverage=1 00:13:55.229 --rc genhtml_legend=1 00:13:55.229 --rc geninfo_all_blocks=1 00:13:55.229 --rc geninfo_unexecuted_blocks=1 00:13:55.229 00:13:55.229 ' 00:13:55.229 08:12:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:55.230 08:12:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 
00:13:55.230 08:12:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:55.230 08:12:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:55.230 08:12:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:55.230 08:12:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:55.230 08:12:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:55.230 08:12:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:55.230 08:12:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:55.230 08:12:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:55.230 08:12:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:55.230 08:12:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:55.230 08:12:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:13:55.230 08:12:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:13:55.230 08:12:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:55.230 08:12:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:55.230 08:12:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:55.230 08:12:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:55.230 08:12:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:55.230 08:12:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@15 -- # shopt -s extglob 00:13:55.230 08:12:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:55.230 08:12:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:55.230 08:12:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:55.230 08:12:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:55.230 08:12:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:55.230 08:12:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:55.230 08:12:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:13:55.230 08:12:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:55.230 08:12:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@51 -- # : 0 00:13:55.230 08:12:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:55.230 08:12:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:55.230 08:12:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:55.230 08:12:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:55.230 08:12:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:55.230 08:12:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:55.230 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:55.230 08:12:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:55.230 08:12:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:55.230 08:12:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:55.230 08:12:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 
00:13:55.230 08:12:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:55.230 08:12:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:55.230 08:12:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:55.230 08:12:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:55.230 08:12:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:55.230 08:12:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:55.230 08:12:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:55.230 08:12:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:55.230 08:12:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:13:55.230 08:12:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:13:55.230 08:12:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@309 -- # xtrace_disable 00:13:55.230 08:12:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:00.509 08:12:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:00.509 08:12:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # pci_devs=() 00:14:00.509 08:12:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # local -a pci_devs 00:14:00.509 08:12:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # pci_net_devs=() 00:14:00.509 08:12:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
nvmf/common.sh@316 -- # local -a pci_net_devs 00:14:00.509 08:12:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # pci_drivers=() 00:14:00.509 08:12:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # local -A pci_drivers 00:14:00.509 08:12:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # net_devs=() 00:14:00.509 08:12:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # local -ga net_devs 00:14:00.509 08:12:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # e810=() 00:14:00.509 08:12:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # local -ga e810 00:14:00.509 08:12:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # x722=() 00:14:00.509 08:12:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # local -ga x722 00:14:00.509 08:12:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # mlx=() 00:14:00.509 08:12:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # local -ga mlx 00:14:00.509 08:12:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:00.509 08:12:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:00.509 08:12:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:00.509 08:12:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:00.509 08:12:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:00.509 08:12:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:00.509 08:12:42 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:00.509 08:12:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:14:00.509 08:12:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:00.509 08:12:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:00.509 08:12:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:00.509 08:12:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:00.509 08:12:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:14:00.509 08:12:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:14:00.509 08:12:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:14:00.509 08:12:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:14:00.509 08:12:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:14:00.509 08:12:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:14:00.509 08:12:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:00.509 08:12:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:14:00.509 Found 0000:86:00.0 (0x8086 - 0x159b) 00:14:00.509 08:12:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:00.509 08:12:42 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:00.509 08:12:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:00.509 08:12:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:00.509 08:12:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:00.509 08:12:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:00.509 08:12:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:14:00.509 Found 0000:86:00.1 (0x8086 - 0x159b) 00:14:00.509 08:12:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:00.509 08:12:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:00.509 08:12:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:00.509 08:12:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:00.509 08:12:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:00.509 08:12:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:14:00.509 08:12:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:14:00.509 08:12:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:14:00.509 08:12:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:00.509 08:12:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:00.509 08:12:42 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:00.509 08:12:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:00.509 08:12:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:00.509 08:12:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:00.509 08:12:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:00.509 08:12:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:14:00.509 Found net devices under 0000:86:00.0: cvl_0_0 00:14:00.509 08:12:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:00.509 08:12:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:00.509 08:12:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:00.509 08:12:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:00.510 08:12:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:00.510 08:12:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:00.510 08:12:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:00.510 08:12:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:00.510 08:12:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:14:00.510 Found net devices under 0000:86:00.1: cvl_0_1 
00:14:00.510 08:12:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:00.510 08:12:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:14:00.510 08:12:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # is_hw=yes 00:14:00.510 08:12:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:14:00.510 08:12:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:14:00.510 08:12:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:14:00.510 08:12:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:00.510 08:12:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:00.510 08:12:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:00.510 08:12:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:00.510 08:12:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:14:00.510 08:12:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:00.510 08:12:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:00.510 08:12:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:14:00.510 08:12:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:14:00.510 08:12:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:00.510 08:12:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering 
-- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:00.510 08:12:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:14:00.510 08:12:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:14:00.510 08:12:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:14:00.510 08:12:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:00.510 08:12:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:00.510 08:12:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:00.510 08:12:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:14:00.510 08:12:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:00.510 08:12:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:00.510 08:12:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:00.510 08:12:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:14:00.510 08:12:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:14:00.510 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:14:00.510 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.404 ms 00:14:00.510 00:14:00.510 --- 10.0.0.2 ping statistics --- 00:14:00.510 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:00.510 rtt min/avg/max/mdev = 0.404/0.404/0.404/0.000 ms 00:14:00.510 08:12:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:00.510 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:00.510 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.239 ms 00:14:00.510 00:14:00.510 --- 10.0.0.1 ping statistics --- 00:14:00.510 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:00.510 rtt min/avg/max/mdev = 0.239/0.239/0.239/0.000 ms 00:14:00.510 08:12:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:00.510 08:12:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@450 -- # return 0 00:14:00.510 08:12:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:00.510 08:12:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:00.510 08:12:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:14:00.510 08:12:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:14:00.510 08:12:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:00.510 08:12:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:14:00.510 08:12:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:14:00.510 08:12:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:14:00.510 08:12:42 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:00.510 08:12:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:00.510 08:12:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:00.510 08:12:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@509 -- # nvmfpid=1312645 00:14:00.510 08:12:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:14:00.510 08:12:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@510 -- # waitforlisten 1312645 00:14:00.510 08:12:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@835 -- # '[' -z 1312645 ']' 00:14:00.510 08:12:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:00.510 08:12:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:00.510 08:12:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:00.510 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:00.510 08:12:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:00.510 08:12:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:00.770 [2024-11-28 08:12:42.822722] Starting SPDK v25.01-pre git sha1 27aaaa748 / DPDK 24.03.0 initialization... 
00:14:00.770 [2024-11-28 08:12:42.822771] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:00.770 [2024-11-28 08:12:42.888791] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:00.770 [2024-11-28 08:12:42.930094] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:00.770 [2024-11-28 08:12:42.930142] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:00.770 [2024-11-28 08:12:42.930151] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:00.770 [2024-11-28 08:12:42.930159] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:00.770 [2024-11-28 08:12:42.930166] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:14:00.770 [2024-11-28 08:12:42.930787] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:00.770 08:12:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:00.770 08:12:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@868 -- # return 0 00:14:00.770 08:12:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:00.770 08:12:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:00.770 08:12:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:01.029 08:12:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:01.029 08:12:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:01.030 08:12:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:01.030 08:12:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:01.030 [2024-11-28 08:12:43.068226] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:01.030 08:12:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:01.030 08:12:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:14:01.030 08:12:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:01.030 08:12:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:01.030 08:12:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:01.030 08:12:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:01.030 08:12:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:01.030 08:12:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:01.030 [2024-11-28 08:12:43.084420] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:01.030 08:12:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:01.030 08:12:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:14:01.030 08:12:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:01.030 08:12:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:01.030 NULL1 00:14:01.030 08:12:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:01.030 08:12:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:14:01.030 08:12:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:01.030 08:12:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:01.030 08:12:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:01.030 08:12:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:14:01.030 08:12:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:01.030 08:12:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:01.030 08:12:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:01.030 08:12:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:14:01.030 [2024-11-28 08:12:43.140452] Starting SPDK v25.01-pre git sha1 27aaaa748 / DPDK 24.03.0 initialization... 00:14:01.030 [2024-11-28 08:12:43.140498] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1312713 ] 00:14:01.599 Attached to nqn.2016-06.io.spdk:cnode1 00:14:01.599 Namespace ID: 1 size: 1GB 00:14:01.599 fused_ordering(0) 00:14:01.599 fused_ordering(1) 00:14:01.599 fused_ordering(2) 00:14:01.599 fused_ordering(3) 00:14:01.599 fused_ordering(4) 00:14:01.599 fused_ordering(5) 00:14:01.599 fused_ordering(6) 00:14:01.599 fused_ordering(7) 00:14:01.599 fused_ordering(8) 00:14:01.599 fused_ordering(9) 00:14:01.599 fused_ordering(10) 00:14:01.599 fused_ordering(11) 00:14:01.599 fused_ordering(12) 00:14:01.599 fused_ordering(13) 00:14:01.599 fused_ordering(14) 00:14:01.599 fused_ordering(15) 00:14:01.599 fused_ordering(16) 00:14:01.599 fused_ordering(17) 00:14:01.599 fused_ordering(18) 00:14:01.599 fused_ordering(19) 00:14:01.599 fused_ordering(20) 00:14:01.599 fused_ordering(21) 00:14:01.599 fused_ordering(22) 00:14:01.599 fused_ordering(23) 00:14:01.599 fused_ordering(24) 00:14:01.599 fused_ordering(25) 00:14:01.599 fused_ordering(26) 00:14:01.599 fused_ordering(27) 00:14:01.599 
fused_ordering(28) 00:14:01.600 fused_ordering(29) 00:14:01.600 fused_ordering(30) 00:14:01.600 fused_ordering(31) 00:14:01.600 fused_ordering(32) 00:14:01.600 fused_ordering(33) 00:14:01.600 fused_ordering(34) 00:14:01.600 fused_ordering(35) 00:14:01.600 fused_ordering(36) 00:14:01.600 fused_ordering(37) 00:14:01.600 fused_ordering(38) 00:14:01.600 fused_ordering(39) 00:14:01.600 fused_ordering(40) 00:14:01.600 fused_ordering(41) 00:14:01.600 fused_ordering(42) 00:14:01.600 fused_ordering(43) 00:14:01.600 fused_ordering(44) 00:14:01.600 fused_ordering(45) 00:14:01.600 fused_ordering(46) 00:14:01.600 fused_ordering(47) 00:14:01.600 fused_ordering(48) 00:14:01.600 fused_ordering(49) 00:14:01.600 fused_ordering(50) 00:14:01.600 fused_ordering(51) 00:14:01.600 fused_ordering(52) 00:14:01.600 fused_ordering(53) 00:14:01.600 fused_ordering(54) 00:14:01.600 fused_ordering(55) 00:14:01.600 fused_ordering(56) 00:14:01.600 fused_ordering(57) 00:14:01.600 fused_ordering(58) 00:14:01.600 fused_ordering(59) 00:14:01.600 fused_ordering(60) 00:14:01.600 fused_ordering(61) 00:14:01.600 fused_ordering(62) 00:14:01.600 fused_ordering(63) 00:14:01.600 fused_ordering(64) 00:14:01.600 fused_ordering(65) 00:14:01.600 fused_ordering(66) 00:14:01.600 fused_ordering(67) 00:14:01.600 fused_ordering(68) 00:14:01.600 fused_ordering(69) 00:14:01.600 fused_ordering(70) 00:14:01.600 fused_ordering(71) 00:14:01.600 fused_ordering(72) 00:14:01.600 fused_ordering(73) 00:14:01.600 fused_ordering(74) 00:14:01.600 fused_ordering(75) 00:14:01.600 fused_ordering(76) 00:14:01.600 fused_ordering(77) 00:14:01.600 fused_ordering(78) 00:14:01.600 fused_ordering(79) 00:14:01.600 fused_ordering(80) 00:14:01.600 fused_ordering(81) 00:14:01.600 fused_ordering(82) 00:14:01.600 fused_ordering(83) 00:14:01.600 fused_ordering(84) 00:14:01.600 fused_ordering(85) 00:14:01.600 fused_ordering(86) 00:14:01.600 fused_ordering(87) 00:14:01.600 fused_ordering(88) 00:14:01.600 fused_ordering(89) 00:14:01.600 
fused_ordering(90) 00:14:01.600 fused_ordering(91) 00:14:01.600 fused_ordering(92) 00:14:01.600 fused_ordering(93) 00:14:01.600 fused_ordering(94) 00:14:01.600 fused_ordering(95) 00:14:01.600 fused_ordering(96) 00:14:01.600 fused_ordering(97) 00:14:01.600 fused_ordering(98) 00:14:01.600 fused_ordering(99) 00:14:01.600 fused_ordering(100) 00:14:01.600 fused_ordering(101) 00:14:01.600 fused_ordering(102) 00:14:01.600 fused_ordering(103) 00:14:01.600 fused_ordering(104) 00:14:01.600 fused_ordering(105) 00:14:01.600 fused_ordering(106) 00:14:01.600 fused_ordering(107) 00:14:01.600 fused_ordering(108) 00:14:01.600 fused_ordering(109) 00:14:01.600 fused_ordering(110) 00:14:01.600 fused_ordering(111) 00:14:01.600 fused_ordering(112) 00:14:01.600 fused_ordering(113) 00:14:01.600 fused_ordering(114) 00:14:01.600 fused_ordering(115) 00:14:01.600 fused_ordering(116) 00:14:01.600 fused_ordering(117) 00:14:01.600 fused_ordering(118) 00:14:01.600 fused_ordering(119) 00:14:01.600 fused_ordering(120) 00:14:01.600 fused_ordering(121) 00:14:01.600 fused_ordering(122) 00:14:01.600 fused_ordering(123) 00:14:01.600 fused_ordering(124) 00:14:01.600 fused_ordering(125) 00:14:01.600 fused_ordering(126) 00:14:01.600 fused_ordering(127) 00:14:01.600 fused_ordering(128) 00:14:01.600 fused_ordering(129) 00:14:01.600 fused_ordering(130) 00:14:01.600 fused_ordering(131) 00:14:01.600 fused_ordering(132) 00:14:01.600 fused_ordering(133) 00:14:01.600 fused_ordering(134) 00:14:01.600 fused_ordering(135) 00:14:01.600 fused_ordering(136) 00:14:01.600 fused_ordering(137) 00:14:01.600 fused_ordering(138) 00:14:01.600 fused_ordering(139) 00:14:01.600 fused_ordering(140) 00:14:01.600 fused_ordering(141) 00:14:01.600 fused_ordering(142) 00:14:01.600 fused_ordering(143) 00:14:01.600 fused_ordering(144) 00:14:01.600 fused_ordering(145) 00:14:01.600 fused_ordering(146) 00:14:01.600 fused_ordering(147) 00:14:01.600 fused_ordering(148) 00:14:01.600 fused_ordering(149) 00:14:01.600 fused_ordering(150) 
00:14:01.600 fused_ordering(151) 00:14:01.600 fused_ordering(152) 00:14:01.600 fused_ordering(153) 00:14:01.600 fused_ordering(154) 00:14:01.600 fused_ordering(155) 00:14:01.600 fused_ordering(156) 00:14:01.600 fused_ordering(157) 00:14:01.600 fused_ordering(158) 00:14:01.600 fused_ordering(159) 00:14:01.600 fused_ordering(160) 00:14:01.600 fused_ordering(161) 00:14:01.600 fused_ordering(162) 00:14:01.600 fused_ordering(163) 00:14:01.600 fused_ordering(164) 00:14:01.600 fused_ordering(165) 00:14:01.600 fused_ordering(166) 00:14:01.600 fused_ordering(167) 00:14:01.600 fused_ordering(168) 00:14:01.600 fused_ordering(169) 00:14:01.600 fused_ordering(170) 00:14:01.600 fused_ordering(171) 00:14:01.600 fused_ordering(172) 00:14:01.600 fused_ordering(173) 00:14:01.600 fused_ordering(174) 00:14:01.600 fused_ordering(175) 00:14:01.600 fused_ordering(176) 00:14:01.600 fused_ordering(177) 00:14:01.600 fused_ordering(178) 00:14:01.600 fused_ordering(179) 00:14:01.600 fused_ordering(180) 00:14:01.600 fused_ordering(181) 00:14:01.600 fused_ordering(182) 00:14:01.600 fused_ordering(183) 00:14:01.600 fused_ordering(184) 00:14:01.600 fused_ordering(185) 00:14:01.600 fused_ordering(186) 00:14:01.600 fused_ordering(187) 00:14:01.600 fused_ordering(188) 00:14:01.600 fused_ordering(189) 00:14:01.600 fused_ordering(190) 00:14:01.600 fused_ordering(191) 00:14:01.600 fused_ordering(192) 00:14:01.600 fused_ordering(193) 00:14:01.600 fused_ordering(194) 00:14:01.600 fused_ordering(195) 00:14:01.600 fused_ordering(196) 00:14:01.600 fused_ordering(197) 00:14:01.600 fused_ordering(198) 00:14:01.600 fused_ordering(199) 00:14:01.600 fused_ordering(200) 00:14:01.600 fused_ordering(201) 00:14:01.600 fused_ordering(202) 00:14:01.600 fused_ordering(203) 00:14:01.600 fused_ordering(204) 00:14:01.600 fused_ordering(205) 00:14:01.860 fused_ordering(206) 00:14:01.860 fused_ordering(207) 00:14:01.860 fused_ordering(208) 00:14:01.860 fused_ordering(209) 00:14:01.860 fused_ordering(210) 00:14:01.860 
fused_ordering(211) 00:14:01.860 fused_ordering(212) 00:14:01.860 fused_ordering(213) 00:14:01.860 fused_ordering(214) 00:14:01.860 fused_ordering(215) 00:14:01.860 fused_ordering(216) 00:14:01.860 fused_ordering(217) 00:14:01.860 fused_ordering(218) 00:14:01.860 fused_ordering(219) 00:14:01.860 fused_ordering(220) 00:14:01.860 fused_ordering(221) 00:14:01.860 fused_ordering(222) 00:14:01.860 fused_ordering(223) 00:14:01.860 fused_ordering(224) 00:14:01.860 fused_ordering(225) 00:14:01.860 fused_ordering(226) 00:14:01.860 fused_ordering(227) 00:14:01.860 fused_ordering(228) 00:14:01.860 fused_ordering(229) 00:14:01.860 fused_ordering(230) 00:14:01.860 fused_ordering(231) 00:14:01.860 fused_ordering(232) 00:14:01.860 fused_ordering(233) 00:14:01.860 fused_ordering(234) 00:14:01.860 fused_ordering(235) 00:14:01.860 fused_ordering(236) 00:14:01.860 fused_ordering(237) 00:14:01.860 fused_ordering(238) 00:14:01.860 fused_ordering(239) 00:14:01.860 fused_ordering(240) 00:14:01.860 fused_ordering(241) 00:14:01.860 fused_ordering(242) 00:14:01.860 fused_ordering(243) 00:14:01.860 fused_ordering(244) 00:14:01.860 fused_ordering(245) 00:14:01.860 fused_ordering(246) 00:14:01.860 fused_ordering(247) 00:14:01.860 fused_ordering(248) 00:14:01.860 fused_ordering(249) 00:14:01.860 fused_ordering(250) 00:14:01.860 fused_ordering(251) 00:14:01.860 fused_ordering(252) 00:14:01.860 fused_ordering(253) 00:14:01.860 fused_ordering(254) 00:14:01.860 fused_ordering(255) 00:14:01.860 fused_ordering(256) 00:14:01.860 fused_ordering(257) 00:14:01.860 fused_ordering(258) 00:14:01.860 fused_ordering(259) 00:14:01.860 fused_ordering(260) 00:14:01.860 fused_ordering(261) 00:14:01.860 fused_ordering(262) 00:14:01.860 fused_ordering(263) 00:14:01.860 fused_ordering(264) 00:14:01.860 fused_ordering(265) 00:14:01.860 fused_ordering(266) 00:14:01.860 fused_ordering(267) 00:14:01.860 fused_ordering(268) 00:14:01.860 fused_ordering(269) 00:14:01.860 fused_ordering(270) 00:14:01.860 fused_ordering(271) 
00:14:01.860 fused_ordering(272) 00:14:01.860 fused_ordering(273) 00:14:01.860 fused_ordering(274) 00:14:01.860 fused_ordering(275) 00:14:01.860 fused_ordering(276) 00:14:01.860 fused_ordering(277) 00:14:01.860 fused_ordering(278) 00:14:01.860 fused_ordering(279) 00:14:01.860 fused_ordering(280) 00:14:01.860 fused_ordering(281) 00:14:01.860 fused_ordering(282) 00:14:01.860 fused_ordering(283) 00:14:01.860 fused_ordering(284) 00:14:01.860 fused_ordering(285) 00:14:01.860 fused_ordering(286) 00:14:01.860 fused_ordering(287) 00:14:01.860 fused_ordering(288) 00:14:01.860 fused_ordering(289) 00:14:01.860 fused_ordering(290) 00:14:01.860 fused_ordering(291) 00:14:01.860 fused_ordering(292) 00:14:01.860 fused_ordering(293) 00:14:01.860 fused_ordering(294) 00:14:01.860 fused_ordering(295) 00:14:01.860 fused_ordering(296) 00:14:01.860 fused_ordering(297) 00:14:01.860 fused_ordering(298) 00:14:01.860 fused_ordering(299) 00:14:01.860 fused_ordering(300) 00:14:01.860 fused_ordering(301) 00:14:01.860 fused_ordering(302) 00:14:01.860 fused_ordering(303) 00:14:01.860 fused_ordering(304) 00:14:01.860 fused_ordering(305) 00:14:01.860 fused_ordering(306) 00:14:01.860 fused_ordering(307) 00:14:01.860 fused_ordering(308) 00:14:01.860 fused_ordering(309) 00:14:01.860 fused_ordering(310) 00:14:01.860 fused_ordering(311) 00:14:01.860 fused_ordering(312) 00:14:01.860 fused_ordering(313) 00:14:01.860 fused_ordering(314) 00:14:01.860 fused_ordering(315) 00:14:01.860 fused_ordering(316) 00:14:01.860 fused_ordering(317) 00:14:01.860 fused_ordering(318) 00:14:01.860 fused_ordering(319) 00:14:01.860 fused_ordering(320) 00:14:01.860 fused_ordering(321) 00:14:01.860 fused_ordering(322) 00:14:01.860 fused_ordering(323) 00:14:01.860 fused_ordering(324) 00:14:01.860 fused_ordering(325) 00:14:01.860 fused_ordering(326) 00:14:01.860 fused_ordering(327) 00:14:01.860 fused_ordering(328) 00:14:01.860 fused_ordering(329) 00:14:01.860 fused_ordering(330) 00:14:01.860 fused_ordering(331) 00:14:01.860 
fused_ordering(332) 00:14:01.860 fused_ordering(333) 00:14:01.860 fused_ordering(334) 00:14:01.860 fused_ordering(335) 00:14:01.860 fused_ordering(336) 00:14:01.860 fused_ordering(337) 00:14:01.860 fused_ordering(338) 00:14:01.860 fused_ordering(339) 00:14:01.860 fused_ordering(340) 00:14:01.860 fused_ordering(341) 00:14:01.860 fused_ordering(342) 00:14:01.860 fused_ordering(343) 00:14:01.860 fused_ordering(344) 00:14:01.860 fused_ordering(345) 00:14:01.860 fused_ordering(346) 00:14:01.861 fused_ordering(347) 00:14:01.861 fused_ordering(348) 00:14:01.861 fused_ordering(349) 00:14:01.861 fused_ordering(350) 00:14:01.861 fused_ordering(351) 00:14:01.861 fused_ordering(352) 00:14:01.861 fused_ordering(353) 00:14:01.861 fused_ordering(354) 00:14:01.861 fused_ordering(355) 00:14:01.861 fused_ordering(356) 00:14:01.861 fused_ordering(357) 00:14:01.861 fused_ordering(358) 00:14:01.861 fused_ordering(359) 00:14:01.861 fused_ordering(360) 00:14:01.861 fused_ordering(361) 00:14:01.861 fused_ordering(362) 00:14:01.861 fused_ordering(363) 00:14:01.861 fused_ordering(364) 00:14:01.861 fused_ordering(365) 00:14:01.861 fused_ordering(366) 00:14:01.861 fused_ordering(367) 00:14:01.861 fused_ordering(368) 00:14:01.861 fused_ordering(369) 00:14:01.861 fused_ordering(370) 00:14:01.861 fused_ordering(371) 00:14:01.861 fused_ordering(372) 00:14:01.861 fused_ordering(373) 00:14:01.861 fused_ordering(374) 00:14:01.861 fused_ordering(375) 00:14:01.861 fused_ordering(376) 00:14:01.861 fused_ordering(377) 00:14:01.861 fused_ordering(378) 00:14:01.861 fused_ordering(379) 00:14:01.861 fused_ordering(380) 00:14:01.861 fused_ordering(381) 00:14:01.861 fused_ordering(382) 00:14:01.861 fused_ordering(383) 00:14:01.861 fused_ordering(384) 00:14:01.861 fused_ordering(385) 00:14:01.861 fused_ordering(386) 00:14:01.861 fused_ordering(387) 00:14:01.861 fused_ordering(388) 00:14:01.861 fused_ordering(389) 00:14:01.861 fused_ordering(390) 00:14:01.861 fused_ordering(391) 00:14:01.861 fused_ordering(392) 
00:14:01.861 fused_ordering(393) 00:14:01.861 fused_ordering(394) 00:14:01.861 fused_ordering(395) 00:14:01.861 fused_ordering(396) 00:14:01.861 fused_ordering(397) 00:14:01.861 fused_ordering(398) 00:14:01.861 fused_ordering(399) 00:14:01.861 fused_ordering(400) 00:14:01.861 fused_ordering(401) 00:14:01.861 fused_ordering(402) 00:14:01.861 fused_ordering(403) 00:14:01.861 fused_ordering(404) 00:14:01.861 fused_ordering(405) 00:14:01.861 fused_ordering(406) 00:14:01.861 fused_ordering(407) 00:14:01.861 fused_ordering(408) 00:14:01.861 fused_ordering(409) 00:14:01.861 fused_ordering(410) 00:14:02.120 fused_ordering(411) 00:14:02.120 fused_ordering(412) 00:14:02.120 fused_ordering(413) 00:14:02.120 fused_ordering(414) 00:14:02.120 fused_ordering(415) 00:14:02.121 fused_ordering(416) 00:14:02.121 fused_ordering(417) 00:14:02.121 fused_ordering(418) 00:14:02.121 fused_ordering(419) 00:14:02.121 fused_ordering(420) 00:14:02.121 fused_ordering(421) 00:14:02.121 fused_ordering(422) 00:14:02.121 fused_ordering(423) 00:14:02.121 fused_ordering(424) 00:14:02.121 fused_ordering(425) 00:14:02.121 fused_ordering(426) 00:14:02.121 fused_ordering(427) 00:14:02.121 fused_ordering(428) 00:14:02.121 fused_ordering(429) 00:14:02.121 fused_ordering(430) 00:14:02.121 fused_ordering(431) 00:14:02.121 fused_ordering(432) 00:14:02.121 fused_ordering(433) 00:14:02.121 fused_ordering(434) 00:14:02.121 fused_ordering(435) 00:14:02.121 fused_ordering(436) 00:14:02.121 fused_ordering(437) 00:14:02.121 fused_ordering(438) 00:14:02.121 fused_ordering(439) 00:14:02.121 fused_ordering(440) 00:14:02.121 fused_ordering(441) 00:14:02.121 fused_ordering(442) 00:14:02.121 fused_ordering(443) 00:14:02.121 fused_ordering(444) 00:14:02.121 fused_ordering(445) 00:14:02.121 fused_ordering(446) 00:14:02.121 fused_ordering(447) 00:14:02.121 fused_ordering(448) 00:14:02.121 fused_ordering(449) 00:14:02.121 fused_ordering(450) 00:14:02.121 fused_ordering(451) 00:14:02.121 fused_ordering(452) 00:14:02.121 
fused_ordering(453) 00:14:02.121 fused_ordering(454) 00:14:02.121 fused_ordering(455) 00:14:02.121 fused_ordering(456) 00:14:02.121 fused_ordering(457) 00:14:02.121 fused_ordering(458) 00:14:02.121 fused_ordering(459) 00:14:02.121 fused_ordering(460) 00:14:02.121 fused_ordering(461) 00:14:02.121 fused_ordering(462) 00:14:02.121 fused_ordering(463) 00:14:02.121 fused_ordering(464) 00:14:02.121 fused_ordering(465) 00:14:02.121 fused_ordering(466) 00:14:02.121 fused_ordering(467) 00:14:02.121 fused_ordering(468) 00:14:02.121 fused_ordering(469) 00:14:02.121 fused_ordering(470) 00:14:02.121 fused_ordering(471) 00:14:02.121 fused_ordering(472) 00:14:02.121 fused_ordering(473) 00:14:02.121 fused_ordering(474) 00:14:02.121 fused_ordering(475) 00:14:02.121 fused_ordering(476) 00:14:02.121 fused_ordering(477) 00:14:02.121 fused_ordering(478) 00:14:02.121 fused_ordering(479) 00:14:02.121 fused_ordering(480) 00:14:02.121 fused_ordering(481) 00:14:02.121 fused_ordering(482) 00:14:02.121 fused_ordering(483) 00:14:02.121 fused_ordering(484) 00:14:02.121 fused_ordering(485) 00:14:02.121 fused_ordering(486) 00:14:02.121 fused_ordering(487) 00:14:02.121 fused_ordering(488) 00:14:02.121 fused_ordering(489) 00:14:02.121 fused_ordering(490) 00:14:02.121 fused_ordering(491) 00:14:02.121 fused_ordering(492) 00:14:02.121 fused_ordering(493) 00:14:02.121 fused_ordering(494) 00:14:02.121 fused_ordering(495) 00:14:02.121 fused_ordering(496) 00:14:02.121 fused_ordering(497) 00:14:02.121 fused_ordering(498) 00:14:02.121 fused_ordering(499) 00:14:02.121 fused_ordering(500) 00:14:02.121 fused_ordering(501) 00:14:02.121 fused_ordering(502) 00:14:02.121 fused_ordering(503) 00:14:02.121 fused_ordering(504) 00:14:02.121 fused_ordering(505) 00:14:02.121 fused_ordering(506) 00:14:02.121 fused_ordering(507) 00:14:02.121 fused_ordering(508) 00:14:02.121 fused_ordering(509) 00:14:02.121 fused_ordering(510) 00:14:02.121 fused_ordering(511) 00:14:02.121 fused_ordering(512) 00:14:02.121 fused_ordering(513) 
00:14:02.121 fused_ordering(514) 00:14:02.121 fused_ordering(515) 00:14:02.121 fused_ordering(516) 00:14:02.121 fused_ordering(517) 00:14:02.121 fused_ordering(518) 00:14:02.121 fused_ordering(519) 00:14:02.121 fused_ordering(520) 00:14:02.121 fused_ordering(521) 00:14:02.121 fused_ordering(522) 00:14:02.121 fused_ordering(523) 00:14:02.121 fused_ordering(524) 00:14:02.121 fused_ordering(525) 00:14:02.121 fused_ordering(526) 00:14:02.121 fused_ordering(527) 00:14:02.121 fused_ordering(528) 00:14:02.121 fused_ordering(529) 00:14:02.121 fused_ordering(530) 00:14:02.121 fused_ordering(531) 00:14:02.121 fused_ordering(532) 00:14:02.121 fused_ordering(533) 00:14:02.121 fused_ordering(534) 00:14:02.121 fused_ordering(535) 00:14:02.121 fused_ordering(536) 00:14:02.121 fused_ordering(537) 00:14:02.121 fused_ordering(538) 00:14:02.121 fused_ordering(539) 00:14:02.121 fused_ordering(540) 00:14:02.121 fused_ordering(541) 00:14:02.121 fused_ordering(542) 00:14:02.121 fused_ordering(543) 00:14:02.121 fused_ordering(544) 00:14:02.121 fused_ordering(545) 00:14:02.121 fused_ordering(546) 00:14:02.121 fused_ordering(547) 00:14:02.121 fused_ordering(548) 00:14:02.121 fused_ordering(549) 00:14:02.121 fused_ordering(550) 00:14:02.121 fused_ordering(551) 00:14:02.121 fused_ordering(552) 00:14:02.121 fused_ordering(553) 00:14:02.121 fused_ordering(554) 00:14:02.121 fused_ordering(555) 00:14:02.121 fused_ordering(556) 00:14:02.121 fused_ordering(557) 00:14:02.121 fused_ordering(558) 00:14:02.121 fused_ordering(559) 00:14:02.121 fused_ordering(560) 00:14:02.121 fused_ordering(561) 00:14:02.121 fused_ordering(562) 00:14:02.121 fused_ordering(563) 00:14:02.121 fused_ordering(564) 00:14:02.121 fused_ordering(565) 00:14:02.121 fused_ordering(566) 00:14:02.121 fused_ordering(567) 00:14:02.121 fused_ordering(568) 00:14:02.121 fused_ordering(569) 00:14:02.121 fused_ordering(570) 00:14:02.121 fused_ordering(571) 00:14:02.121 fused_ordering(572) 00:14:02.121 fused_ordering(573) 00:14:02.121 
fused_ordering(574) 00:14:02.121 fused_ordering(575) 00:14:02.121 fused_ordering(576) 00:14:02.121 fused_ordering(577) 00:14:02.121 fused_ordering(578) 00:14:02.121 fused_ordering(579) 00:14:02.121 fused_ordering(580) 00:14:02.121 fused_ordering(581) 00:14:02.121 fused_ordering(582) 00:14:02.121 fused_ordering(583) 00:14:02.121 fused_ordering(584) 00:14:02.121 fused_ordering(585) 00:14:02.121 fused_ordering(586) 00:14:02.121 fused_ordering(587) 00:14:02.121 fused_ordering(588) 00:14:02.121 fused_ordering(589) 00:14:02.121 fused_ordering(590) 00:14:02.121 fused_ordering(591) 00:14:02.121 fused_ordering(592) 00:14:02.121 fused_ordering(593) 00:14:02.121 fused_ordering(594) 00:14:02.121 fused_ordering(595) 00:14:02.121 fused_ordering(596) 00:14:02.121 fused_ordering(597) 00:14:02.121 fused_ordering(598) 00:14:02.121 fused_ordering(599) 00:14:02.121 fused_ordering(600) 00:14:02.121 fused_ordering(601) 00:14:02.122 fused_ordering(602) 00:14:02.122 fused_ordering(603) 00:14:02.122 fused_ordering(604) 00:14:02.122 fused_ordering(605) 00:14:02.122 fused_ordering(606) 00:14:02.122 fused_ordering(607) 00:14:02.122 fused_ordering(608) 00:14:02.122 fused_ordering(609) 00:14:02.122 fused_ordering(610) 00:14:02.122 fused_ordering(611) 00:14:02.122 fused_ordering(612) 00:14:02.122 fused_ordering(613) 00:14:02.122 fused_ordering(614) 00:14:02.122 fused_ordering(615) 00:14:02.381 fused_ordering(616) 00:14:02.381 fused_ordering(617) 00:14:02.381 fused_ordering(618) 00:14:02.381 fused_ordering(619) 00:14:02.381 fused_ordering(620) 00:14:02.381 fused_ordering(621) 00:14:02.381 fused_ordering(622) 00:14:02.381 fused_ordering(623) 00:14:02.381 fused_ordering(624) 00:14:02.381 fused_ordering(625) 00:14:02.381 fused_ordering(626) 00:14:02.381 fused_ordering(627) 00:14:02.381 fused_ordering(628) 00:14:02.381 fused_ordering(629) 00:14:02.381 fused_ordering(630) 00:14:02.382 fused_ordering(631) 00:14:02.382 fused_ordering(632) 00:14:02.382 fused_ordering(633) 00:14:02.382 fused_ordering(634) 
00:14:02.382 fused_ordering(635) 00:14:02.382 fused_ordering(636) 00:14:02.382 fused_ordering(637) 00:14:02.382 fused_ordering(638) 00:14:02.382 fused_ordering(639) 00:14:02.382 fused_ordering(640) 00:14:02.382 fused_ordering(641) 00:14:02.382 fused_ordering(642) 00:14:02.382 fused_ordering(643) 00:14:02.382 fused_ordering(644) 00:14:02.382 fused_ordering(645) 00:14:02.382 fused_ordering(646) 00:14:02.382 fused_ordering(647) 00:14:02.382 fused_ordering(648) 00:14:02.382 fused_ordering(649) 00:14:02.382 fused_ordering(650) 00:14:02.382 fused_ordering(651) 00:14:02.382 fused_ordering(652) 00:14:02.382 fused_ordering(653) 00:14:02.382 fused_ordering(654) 00:14:02.382 fused_ordering(655) 00:14:02.382 fused_ordering(656) 00:14:02.382 fused_ordering(657) 00:14:02.382 fused_ordering(658) 00:14:02.382 fused_ordering(659) 00:14:02.382 fused_ordering(660) 00:14:02.382 fused_ordering(661) 00:14:02.382 fused_ordering(662) 00:14:02.382 fused_ordering(663) 00:14:02.382 fused_ordering(664) 00:14:02.382 fused_ordering(665) 00:14:02.382 fused_ordering(666) 00:14:02.382 fused_ordering(667) 00:14:02.382 fused_ordering(668) 00:14:02.382 fused_ordering(669) 00:14:02.382 fused_ordering(670) 00:14:02.382 fused_ordering(671) 00:14:02.382 fused_ordering(672) 00:14:02.382 fused_ordering(673) 00:14:02.382 fused_ordering(674) 00:14:02.382 fused_ordering(675) 00:14:02.382 fused_ordering(676) 00:14:02.382 fused_ordering(677) 00:14:02.382 fused_ordering(678) 00:14:02.382 fused_ordering(679) 00:14:02.382 fused_ordering(680) 00:14:02.382 fused_ordering(681) 00:14:02.382 fused_ordering(682) 00:14:02.382 fused_ordering(683) 00:14:02.382 fused_ordering(684) 00:14:02.382 fused_ordering(685) 00:14:02.382 fused_ordering(686) 00:14:02.382 fused_ordering(687) 00:14:02.382 fused_ordering(688) 00:14:02.382 fused_ordering(689) 00:14:02.382 fused_ordering(690) 00:14:02.382 fused_ordering(691) 00:14:02.382 fused_ordering(692) 00:14:02.382 fused_ordering(693) 00:14:02.382 fused_ordering(694) 00:14:02.382 
fused_ordering(695) 00:14:02.382 fused_ordering(696) 00:14:02.382 fused_ordering(697) 00:14:02.382 fused_ordering(698) 00:14:02.382 fused_ordering(699) 00:14:02.382 fused_ordering(700) 00:14:02.382 fused_ordering(701) 00:14:02.382 fused_ordering(702) 00:14:02.382 fused_ordering(703) 00:14:02.382 fused_ordering(704) 00:14:02.382 fused_ordering(705) 00:14:02.382 fused_ordering(706) 00:14:02.382 fused_ordering(707) 00:14:02.382 fused_ordering(708) 00:14:02.382 fused_ordering(709) 00:14:02.382 fused_ordering(710) 00:14:02.382 fused_ordering(711) 00:14:02.382 fused_ordering(712) 00:14:02.382 fused_ordering(713) 00:14:02.382 fused_ordering(714) 00:14:02.382 fused_ordering(715) 00:14:02.382 fused_ordering(716) 00:14:02.382 fused_ordering(717) 00:14:02.382 fused_ordering(718) 00:14:02.382 fused_ordering(719) 00:14:02.382 fused_ordering(720) 00:14:02.382 fused_ordering(721) 00:14:02.382 fused_ordering(722) 00:14:02.382 fused_ordering(723) 00:14:02.382 fused_ordering(724) 00:14:02.382 fused_ordering(725) 00:14:02.382 fused_ordering(726) 00:14:02.382 fused_ordering(727) 00:14:02.382 fused_ordering(728) 00:14:02.382 fused_ordering(729) 00:14:02.382 fused_ordering(730) 00:14:02.382 fused_ordering(731) 00:14:02.382 fused_ordering(732) 00:14:02.382 fused_ordering(733) 00:14:02.382 fused_ordering(734) 00:14:02.382 fused_ordering(735) 00:14:02.382 fused_ordering(736) 00:14:02.382 fused_ordering(737) 00:14:02.382 fused_ordering(738) 00:14:02.382 fused_ordering(739) 00:14:02.382 fused_ordering(740) 00:14:02.382 fused_ordering(741) 00:14:02.382 fused_ordering(742) 00:14:02.382 fused_ordering(743) 00:14:02.382 fused_ordering(744) 00:14:02.382 fused_ordering(745) 00:14:02.382 fused_ordering(746) 00:14:02.382 fused_ordering(747) 00:14:02.382 fused_ordering(748) 00:14:02.382 fused_ordering(749) 00:14:02.382 fused_ordering(750) 00:14:02.382 fused_ordering(751) 00:14:02.382 fused_ordering(752) 00:14:02.382 fused_ordering(753) 00:14:02.382 fused_ordering(754) 00:14:02.382 fused_ordering(755) 
00:14:02.382 fused_ordering(756) 00:14:02.382 fused_ordering(757) 00:14:02.382 fused_ordering(758) 00:14:02.382 fused_ordering(759) 00:14:02.382 fused_ordering(760) 00:14:02.382 fused_ordering(761) 00:14:02.382 fused_ordering(762) 00:14:02.382 fused_ordering(763) 00:14:02.382 fused_ordering(764) 00:14:02.382 fused_ordering(765) 00:14:02.382 fused_ordering(766) 00:14:02.382 fused_ordering(767) 00:14:02.382 fused_ordering(768) 00:14:02.382 fused_ordering(769) 00:14:02.382 fused_ordering(770) 00:14:02.382 fused_ordering(771) 00:14:02.382 fused_ordering(772) 00:14:02.382 fused_ordering(773) 00:14:02.382 fused_ordering(774) 00:14:02.382 fused_ordering(775) 00:14:02.382 fused_ordering(776) 00:14:02.382 fused_ordering(777) 00:14:02.382 fused_ordering(778) 00:14:02.382 fused_ordering(779) 00:14:02.382 fused_ordering(780) 00:14:02.382 fused_ordering(781) 00:14:02.382 fused_ordering(782) 00:14:02.382 fused_ordering(783) 00:14:02.382 fused_ordering(784) 00:14:02.382 fused_ordering(785) 00:14:02.382 fused_ordering(786) 00:14:02.382 fused_ordering(787) 00:14:02.382 fused_ordering(788) 00:14:02.382 fused_ordering(789) 00:14:02.382 fused_ordering(790) 00:14:02.382 fused_ordering(791) 00:14:02.382 fused_ordering(792) 00:14:02.382 fused_ordering(793) 00:14:02.382 fused_ordering(794) 00:14:02.382 fused_ordering(795) 00:14:02.382 fused_ordering(796) 00:14:02.382 fused_ordering(797) 00:14:02.382 fused_ordering(798) 00:14:02.382 fused_ordering(799) 00:14:02.382 fused_ordering(800) 00:14:02.382 fused_ordering(801) 00:14:02.382 fused_ordering(802) 00:14:02.382 fused_ordering(803) 00:14:02.382 fused_ordering(804) 00:14:02.382 fused_ordering(805) 00:14:02.382 fused_ordering(806) 00:14:02.382 fused_ordering(807) 00:14:02.382 fused_ordering(808) 00:14:02.382 fused_ordering(809) 00:14:02.382 fused_ordering(810) 00:14:02.382 fused_ordering(811) 00:14:02.382 fused_ordering(812) 00:14:02.382 fused_ordering(813) 00:14:02.382 fused_ordering(814) 00:14:02.382 fused_ordering(815) 00:14:02.383 
fused_ordering(816) 00:14:02.383 fused_ordering(817) 00:14:02.383 fused_ordering(818) 00:14:02.383 fused_ordering(819) 00:14:02.383 fused_ordering(820) 00:14:02.952 fused_ordering(821) 00:14:02.952 fused_ordering(822) 00:14:02.952 fused_ordering(823) 00:14:02.952 fused_ordering(824) 00:14:02.952 fused_ordering(825) 00:14:02.952 fused_ordering(826) 00:14:02.952 fused_ordering(827) 00:14:02.952 fused_ordering(828) 00:14:02.952 fused_ordering(829) 00:14:02.952 fused_ordering(830) 00:14:02.952 fused_ordering(831) 00:14:02.952 fused_ordering(832) 00:14:02.952 fused_ordering(833) 00:14:02.952 fused_ordering(834) 00:14:02.952 fused_ordering(835) 00:14:02.952 fused_ordering(836) 00:14:02.952 fused_ordering(837) 00:14:02.952 fused_ordering(838) 00:14:02.952 fused_ordering(839) 00:14:02.952 fused_ordering(840) 00:14:02.952 fused_ordering(841) 00:14:02.952 fused_ordering(842) 00:14:02.952 fused_ordering(843) 00:14:02.952 fused_ordering(844) 00:14:02.952 fused_ordering(845) 00:14:02.952 fused_ordering(846) 00:14:02.952 fused_ordering(847) 00:14:02.952 fused_ordering(848) 00:14:02.952 fused_ordering(849) 00:14:02.952 fused_ordering(850) 00:14:02.952 fused_ordering(851) 00:14:02.952 fused_ordering(852) 00:14:02.952 fused_ordering(853) 00:14:02.952 fused_ordering(854) 00:14:02.952 fused_ordering(855) 00:14:02.952 fused_ordering(856) 00:14:02.952 fused_ordering(857) 00:14:02.952 fused_ordering(858) 00:14:02.952 fused_ordering(859) 00:14:02.952 fused_ordering(860) 00:14:02.952 fused_ordering(861) 00:14:02.952 fused_ordering(862) 00:14:02.952 fused_ordering(863) 00:14:02.952 fused_ordering(864) 00:14:02.952 fused_ordering(865) 00:14:02.952 fused_ordering(866) 00:14:02.952 fused_ordering(867) 00:14:02.952 fused_ordering(868) 00:14:02.952 fused_ordering(869) 00:14:02.952 fused_ordering(870) 00:14:02.952 fused_ordering(871) 00:14:02.952 fused_ordering(872) 00:14:02.952 fused_ordering(873) 00:14:02.952 fused_ordering(874) 00:14:02.952 fused_ordering(875) 00:14:02.952 fused_ordering(876) 
00:14:02.952 fused_ordering(877) 00:14:02.952 fused_ordering(878) 00:14:02.952 fused_ordering(879) 00:14:02.952 fused_ordering(880) 00:14:02.952 fused_ordering(881) 00:14:02.952 fused_ordering(882) 00:14:02.952 fused_ordering(883) 00:14:02.952 fused_ordering(884) 00:14:02.952 fused_ordering(885) 00:14:02.952 fused_ordering(886) 00:14:02.952 fused_ordering(887) 00:14:02.952 fused_ordering(888) 00:14:02.952 fused_ordering(889) 00:14:02.952 fused_ordering(890) 00:14:02.952 fused_ordering(891) 00:14:02.952 fused_ordering(892) 00:14:02.952 fused_ordering(893) 00:14:02.952 fused_ordering(894) 00:14:02.952 fused_ordering(895) 00:14:02.952 fused_ordering(896) 00:14:02.952 fused_ordering(897) 00:14:02.952 fused_ordering(898) 00:14:02.952 fused_ordering(899) 00:14:02.952 fused_ordering(900) 00:14:02.952 fused_ordering(901) 00:14:02.952 fused_ordering(902) 00:14:02.952 fused_ordering(903) 00:14:02.952 fused_ordering(904) 00:14:02.952 fused_ordering(905) 00:14:02.952 fused_ordering(906) 00:14:02.952 fused_ordering(907) 00:14:02.952 fused_ordering(908) 00:14:02.952 fused_ordering(909) 00:14:02.952 fused_ordering(910) 00:14:02.952 fused_ordering(911) 00:14:02.952 fused_ordering(912) 00:14:02.952 fused_ordering(913) 00:14:02.952 fused_ordering(914) 00:14:02.953 fused_ordering(915) 00:14:02.953 fused_ordering(916) 00:14:02.953 fused_ordering(917) 00:14:02.953 fused_ordering(918) 00:14:02.953 fused_ordering(919) 00:14:02.953 fused_ordering(920) 00:14:02.953 fused_ordering(921) 00:14:02.953 fused_ordering(922) 00:14:02.953 fused_ordering(923) 00:14:02.953 fused_ordering(924) 00:14:02.953 fused_ordering(925) 00:14:02.953 fused_ordering(926) 00:14:02.953 fused_ordering(927) 00:14:02.953 fused_ordering(928) 00:14:02.953 fused_ordering(929) 00:14:02.953 fused_ordering(930) 00:14:02.953 fused_ordering(931) 00:14:02.953 fused_ordering(932) 00:14:02.953 fused_ordering(933) 00:14:02.953 fused_ordering(934) 00:14:02.953 fused_ordering(935) 00:14:02.953 fused_ordering(936) 00:14:02.953 
fused_ordering(937) 00:14:02.953 fused_ordering(938) 00:14:02.953 fused_ordering(939) 00:14:02.953 fused_ordering(940) 00:14:02.953 fused_ordering(941) 00:14:02.953 fused_ordering(942) 00:14:02.953 fused_ordering(943) 00:14:02.953 fused_ordering(944) 00:14:02.953 fused_ordering(945) 00:14:02.953 fused_ordering(946) 00:14:02.953 fused_ordering(947) 00:14:02.953 fused_ordering(948) 00:14:02.953 fused_ordering(949) 00:14:02.953 fused_ordering(950) 00:14:02.953 fused_ordering(951) 00:14:02.953 fused_ordering(952) 00:14:02.953 fused_ordering(953) 00:14:02.953 fused_ordering(954) 00:14:02.953 fused_ordering(955) 00:14:02.953 fused_ordering(956) 00:14:02.953 fused_ordering(957) 00:14:02.953 fused_ordering(958) 00:14:02.953 fused_ordering(959) 00:14:02.953 fused_ordering(960) 00:14:02.953 fused_ordering(961) 00:14:02.953 fused_ordering(962) 00:14:02.953 fused_ordering(963) 00:14:02.953 fused_ordering(964) 00:14:02.953 fused_ordering(965) 00:14:02.953 fused_ordering(966) 00:14:02.953 fused_ordering(967) 00:14:02.953 fused_ordering(968) 00:14:02.953 fused_ordering(969) 00:14:02.953 fused_ordering(970) 00:14:02.953 fused_ordering(971) 00:14:02.953 fused_ordering(972) 00:14:02.953 fused_ordering(973) 00:14:02.953 fused_ordering(974) 00:14:02.953 fused_ordering(975) 00:14:02.953 fused_ordering(976) 00:14:02.953 fused_ordering(977) 00:14:02.953 fused_ordering(978) 00:14:02.953 fused_ordering(979) 00:14:02.953 fused_ordering(980) 00:14:02.953 fused_ordering(981) 00:14:02.953 fused_ordering(982) 00:14:02.953 fused_ordering(983) 00:14:02.953 fused_ordering(984) 00:14:02.953 fused_ordering(985) 00:14:02.953 fused_ordering(986) 00:14:02.953 fused_ordering(987) 00:14:02.953 fused_ordering(988) 00:14:02.953 fused_ordering(989) 00:14:02.953 fused_ordering(990) 00:14:02.953 fused_ordering(991) 00:14:02.953 fused_ordering(992) 00:14:02.953 fused_ordering(993) 00:14:02.953 fused_ordering(994) 00:14:02.953 fused_ordering(995) 00:14:02.953 fused_ordering(996) 00:14:02.953 fused_ordering(997) 
00:14:02.953 fused_ordering(998) 00:14:02.953 fused_ordering(999) 00:14:02.953 fused_ordering(1000) 00:14:02.953 fused_ordering(1001) 00:14:02.953 fused_ordering(1002) 00:14:02.953 fused_ordering(1003) 00:14:02.953 fused_ordering(1004) 00:14:02.953 fused_ordering(1005) 00:14:02.953 fused_ordering(1006) 00:14:02.953 fused_ordering(1007) 00:14:02.953 fused_ordering(1008) 00:14:02.953 fused_ordering(1009) 00:14:02.953 fused_ordering(1010) 00:14:02.953 fused_ordering(1011) 00:14:02.953 fused_ordering(1012) 00:14:02.953 fused_ordering(1013) 00:14:02.953 fused_ordering(1014) 00:14:02.953 fused_ordering(1015) 00:14:02.953 fused_ordering(1016) 00:14:02.953 fused_ordering(1017) 00:14:02.953 fused_ordering(1018) 00:14:02.953 fused_ordering(1019) 00:14:02.953 fused_ordering(1020) 00:14:02.953 fused_ordering(1021) 00:14:02.953 fused_ordering(1022) 00:14:02.953 fused_ordering(1023) 00:14:02.953 08:12:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:14:02.953 08:12:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:14:02.953 08:12:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:02.953 08:12:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@121 -- # sync 00:14:02.953 08:12:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:02.953 08:12:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set +e 00:14:02.953 08:12:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:02.953 08:12:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:02.953 rmmod nvme_tcp 00:14:02.953 rmmod nvme_fabrics 00:14:02.953 rmmod nvme_keyring 00:14:02.953 08:12:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@127 -- # modprobe -v -r 
nvme-fabrics 00:14:02.953 08:12:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@128 -- # set -e 00:14:02.953 08:12:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@129 -- # return 0 00:14:02.953 08:12:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@517 -- # '[' -n 1312645 ']' 00:14:02.953 08:12:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@518 -- # killprocess 1312645 00:14:02.953 08:12:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # '[' -z 1312645 ']' 00:14:02.953 08:12:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # kill -0 1312645 00:14:02.953 08:12:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # uname 00:14:02.953 08:12:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:02.953 08:12:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1312645 00:14:02.953 08:12:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:14:02.953 08:12:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:14:02.953 08:12:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1312645' 00:14:02.953 killing process with pid 1312645 00:14:02.953 08:12:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@973 -- # kill 1312645 00:14:02.953 08:12:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@978 -- # wait 1312645 00:14:03.213 08:12:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:03.213 08:12:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@523 -- # [[ tcp == 
\t\c\p ]] 00:14:03.213 08:12:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:14:03.213 08:12:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@297 -- # iptr 00:14:03.213 08:12:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-save 00:14:03.213 08:12:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:14:03.213 08:12:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-restore 00:14:03.213 08:12:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:03.213 08:12:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@302 -- # remove_spdk_ns 00:14:03.213 08:12:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:03.213 08:12:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:03.213 08:12:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:05.752 08:12:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:14:05.752 00:14:05.752 real 0m10.485s 00:14:05.752 user 0m5.160s 00:14:05.752 sys 0m5.644s 00:14:05.752 08:12:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:05.752 08:12:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:05.752 ************************************ 00:14:05.752 END TEST nvmf_fused_ordering 00:14:05.752 ************************************ 00:14:05.752 08:12:47 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@26 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:14:05.752 08:12:47 
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:05.752 08:12:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:05.752 08:12:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:05.752 ************************************ 00:14:05.752 START TEST nvmf_ns_masking 00:14:05.752 ************************************ 00:14:05.752 08:12:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1129 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:14:05.752 * Looking for test storage... 00:14:05.752 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:05.752 08:12:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:14:05.753 08:12:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1693 -- # lcov --version 00:14:05.753 08:12:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:14:05.753 08:12:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:14:05.753 08:12:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:05.753 08:12:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:05.753 08:12:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:05.753 08:12:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # IFS=.-: 00:14:05.753 08:12:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # read -ra ver1 00:14:05.753 08:12:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # IFS=.-: 00:14:05.753 08:12:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # read -ra ver2 00:14:05.753 08:12:47 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@338 -- # local 'op=<' 00:14:05.753 08:12:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@340 -- # ver1_l=2 00:14:05.753 08:12:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@341 -- # ver2_l=1 00:14:05.753 08:12:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:05.753 08:12:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@344 -- # case "$op" in 00:14:05.753 08:12:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@345 -- # : 1 00:14:05.753 08:12:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:05.753 08:12:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:05.753 08:12:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # decimal 1 00:14:05.753 08:12:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=1 00:14:05.753 08:12:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:05.753 08:12:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 1 00:14:05.753 08:12:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # ver1[v]=1 00:14:05.753 08:12:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # decimal 2 00:14:05.753 08:12:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=2 00:14:05.753 08:12:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:05.753 08:12:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 2 00:14:05.753 08:12:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # ver2[v]=2 00:14:05.753 08:12:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:05.753 08:12:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:05.753 08:12:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # return 0 00:14:05.753 08:12:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:05.753 08:12:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:14:05.753 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:05.753 --rc genhtml_branch_coverage=1 00:14:05.753 --rc genhtml_function_coverage=1 00:14:05.753 --rc genhtml_legend=1 00:14:05.753 --rc geninfo_all_blocks=1 00:14:05.753 --rc geninfo_unexecuted_blocks=1 00:14:05.753 00:14:05.753 ' 00:14:05.753 08:12:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:14:05.753 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:05.753 --rc genhtml_branch_coverage=1 00:14:05.753 --rc genhtml_function_coverage=1 00:14:05.753 --rc genhtml_legend=1 00:14:05.753 --rc geninfo_all_blocks=1 00:14:05.753 --rc geninfo_unexecuted_blocks=1 00:14:05.753 00:14:05.753 ' 00:14:05.753 08:12:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:14:05.753 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:05.753 --rc genhtml_branch_coverage=1 00:14:05.753 --rc genhtml_function_coverage=1 00:14:05.753 --rc genhtml_legend=1 00:14:05.753 --rc geninfo_all_blocks=1 00:14:05.753 --rc geninfo_unexecuted_blocks=1 00:14:05.753 00:14:05.753 ' 00:14:05.753 08:12:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:14:05.753 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:05.753 --rc genhtml_branch_coverage=1 00:14:05.753 --rc 
genhtml_function_coverage=1 00:14:05.753 --rc genhtml_legend=1 00:14:05.753 --rc geninfo_all_blocks=1 00:14:05.753 --rc geninfo_unexecuted_blocks=1 00:14:05.753 00:14:05.753 ' 00:14:05.753 08:12:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:05.753 08:12:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 00:14:05.753 08:12:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:05.753 08:12:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:05.753 08:12:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:05.753 08:12:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:05.753 08:12:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:05.753 08:12:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:05.753 08:12:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:05.753 08:12:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:05.753 08:12:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:05.753 08:12:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:05.753 08:12:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:14:05.753 08:12:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:14:05.753 08:12:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:05.753 08:12:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:05.753 08:12:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:05.753 08:12:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:05.753 08:12:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:05.753 08:12:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@15 -- # shopt -s extglob 00:14:05.753 08:12:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:05.753 08:12:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:05.753 08:12:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:05.753 08:12:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:05.754 08:12:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:05.754 08:12:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:05.754 08:12:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:14:05.754 08:12:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:05.754 08:12:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@51 -- # : 0 00:14:05.754 08:12:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:05.754 08:12:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:05.754 08:12:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:05.754 08:12:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:05.754 08:12:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:05.754 08:12:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:05.754 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:05.754 08:12:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:05.754 08:12:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:05.754 08:12:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:05.754 08:12:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@10 -- # 
rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:05.754 08:12:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:14:05.754 08:12:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:14:05.754 08:12:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:14:05.754 08:12:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=c43985da-9875-426e-8c1f-7431decb1072 00:14:05.754 08:12:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:14:05.754 08:12:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=3b2b8b20-0ac6-42bf-a31e-07a48821091d 00:14:05.754 08:12:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:14:05.754 08:12:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:14:05.754 08:12:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:14:05.754 08:12:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:14:05.754 08:12:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=cb03df64-7308-4f0c-9f05-3a8ea07c9793 00:14:05.754 08:12:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:14:05.754 08:12:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:14:05.754 08:12:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:05.754 08:12:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@476 -- # prepare_net_devs 00:14:05.754 08:12:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@438 -- # local -g 
is_hw=no 00:14:05.754 08:12:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@440 -- # remove_spdk_ns 00:14:05.754 08:12:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:05.754 08:12:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:05.754 08:12:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:05.754 08:12:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:14:05.754 08:12:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:14:05.754 08:12:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@309 -- # xtrace_disable 00:14:05.754 08:12:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:11.032 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:11.032 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # pci_devs=() 00:14:11.032 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # local -a pci_devs 00:14:11.032 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # pci_net_devs=() 00:14:11.032 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:14:11.032 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # pci_drivers=() 00:14:11.032 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # local -A pci_drivers 00:14:11.032 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # net_devs=() 00:14:11.032 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # local -ga net_devs 00:14:11.032 08:12:53 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # e810=() 00:14:11.032 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # local -ga e810 00:14:11.032 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # x722=() 00:14:11.032 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # local -ga x722 00:14:11.032 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # mlx=() 00:14:11.032 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # local -ga mlx 00:14:11.032 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:11.032 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:11.032 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:11.032 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:11.032 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:11.032 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:11.032 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:11.032 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:14:11.032 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:11.032 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:11.032 08:12:53 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:11.032 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:11.032 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:14:11.032 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:14:11.033 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:14:11.033 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:14:11.033 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:14:11.033 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:14:11.033 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:11.033 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:14:11.033 Found 0000:86:00.0 (0x8086 - 0x159b) 00:14:11.033 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:11.033 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:11.033 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:11.033 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:11.033 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:11.033 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:11.033 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:14:11.033 Found 0000:86:00.1 (0x8086 - 0x159b) 00:14:11.033 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:11.033 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:11.033 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:11.033 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:11.033 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:11.033 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:14:11.033 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:14:11.033 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:14:11.033 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:11.033 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:11.033 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:11.033 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:11.033 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:11.033 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:11.033 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:11.033 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: 
cvl_0_0' 00:14:11.033 Found net devices under 0000:86:00.0: cvl_0_0 00:14:11.033 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:11.033 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:11.033 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:11.033 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:11.033 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:11.033 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:11.033 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:11.033 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:11.033 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:14:11.033 Found net devices under 0000:86:00.1: cvl_0_1 00:14:11.033 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:11.033 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:14:11.033 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # is_hw=yes 00:14:11.033 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:14:11.033 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:14:11.033 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:14:11.033 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@250 -- # 
NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:11.033 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:11.033 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:11.033 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:11.033 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:14:11.033 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:11.033 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:11.033 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:14:11.033 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:14:11.033 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:11.033 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:11.033 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:14:11.033 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:14:11.033 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:14:11.033 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:11.033 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:11.033 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@278 -- # ip netns exec 
cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:11.033 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:14:11.033 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:11.033 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:11.033 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:11.033 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:14:11.033 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:14:11.033 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:11.033 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.472 ms 00:14:11.033 00:14:11.033 --- 10.0.0.2 ping statistics --- 00:14:11.033 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:11.033 rtt min/avg/max/mdev = 0.472/0.472/0.472/0.000 ms 00:14:11.033 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:11.033 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:11.033 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.199 ms 00:14:11.033 00:14:11.033 --- 10.0.0.1 ping statistics --- 00:14:11.033 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:11.033 rtt min/avg/max/mdev = 0.199/0.199/0.199/0.000 ms 00:14:11.033 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:11.033 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@450 -- # return 0 00:14:11.033 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:11.033 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:11.033 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:14:11.033 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:14:11.033 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:11.033 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:14:11.033 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:14:11.292 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:14:11.292 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:11.292 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:11.292 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:11.292 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@509 -- # nvmfpid=1316475 00:14:11.292 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@508 -- # ip netns exec 
cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:14:11.292 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@510 -- # waitforlisten 1316475 00:14:11.292 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # '[' -z 1316475 ']' 00:14:11.292 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:11.292 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:11.292 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:11.292 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:11.292 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:11.292 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:11.292 [2024-11-28 08:12:53.385809] Starting SPDK v25.01-pre git sha1 27aaaa748 / DPDK 24.03.0 initialization... 00:14:11.292 [2024-11-28 08:12:53.385856] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:11.292 [2024-11-28 08:12:53.452200] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:11.292 [2024-11-28 08:12:53.493508] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:11.292 [2024-11-28 08:12:53.493543] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:14:11.292 [2024-11-28 08:12:53.493550] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:11.292 [2024-11-28 08:12:53.493557] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:11.292 [2024-11-28 08:12:53.493563] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:11.292 [2024-11-28 08:12:53.494135] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:11.550 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:11.550 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@868 -- # return 0 00:14:11.550 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:11.550 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:11.550 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:11.550 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:11.550 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:14:11.550 [2024-11-28 08:12:53.808423] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:11.809 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:14:11.809 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:14:11.809 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 
00:14:11.809 Malloc1 00:14:11.809 08:12:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:14:12.067 Malloc2 00:14:12.067 08:12:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:14:12.325 08:12:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:14:12.584 08:12:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:12.584 [2024-11-28 08:12:54.796701] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:12.584 08:12:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:14:12.584 08:12:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I cb03df64-7308-4f0c-9f05-3a8ea07c9793 -a 10.0.0.2 -s 4420 -i 4 00:14:12.842 08:12:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:14:12.842 08:12:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:14:12.842 08:12:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:14:12.842 08:12:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:14:12.842 08:12:54 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:14:14.745 08:12:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:14:14.745 08:12:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:14:14.745 08:12:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:14:14.745 08:12:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:14:14.745 08:12:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:14:14.745 08:12:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:14:14.745 08:12:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:14:14.745 08:12:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:14:14.745 08:12:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:14:14.745 08:12:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:14:14.745 08:12:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:14:15.003 08:12:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:15.003 08:12:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:15.003 [ 0]:0x1 00:14:15.003 08:12:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:15.003 08:12:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:15.003 
08:12:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=a5109f30d9f6479fbfa2b7861e286f47 00:14:15.003 08:12:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ a5109f30d9f6479fbfa2b7861e286f47 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:15.003 08:12:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:14:15.262 08:12:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:14:15.262 08:12:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:15.262 08:12:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:15.262 [ 0]:0x1 00:14:15.262 08:12:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:15.262 08:12:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:15.262 08:12:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=a5109f30d9f6479fbfa2b7861e286f47 00:14:15.262 08:12:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ a5109f30d9f6479fbfa2b7861e286f47 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:15.262 08:12:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:14:15.262 08:12:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:15.262 08:12:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:15.262 [ 1]:0x2 00:14:15.262 08:12:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 
00:14:15.262 08:12:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:15.262 08:12:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=c00d1bdfaaf34d789c554b9005c3b80e 00:14:15.262 08:12:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ c00d1bdfaaf34d789c554b9005c3b80e != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:15.262 08:12:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:14:15.262 08:12:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:15.525 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:15.525 08:12:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:15.784 08:12:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:14:15.784 08:12:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:14:15.784 08:12:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I cb03df64-7308-4f0c-9f05-3a8ea07c9793 -a 10.0.0.2 -s 4420 -i 4 00:14:16.043 08:12:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:14:16.043 08:12:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:14:16.043 08:12:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:14:16.043 08:12:58 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n 1 ]] 00:14:16.043 08:12:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # nvme_device_counter=1 00:14:16.043 08:12:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:14:17.950 08:13:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:14:17.950 08:13:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:14:17.950 08:13:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:14:17.950 08:13:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:14:17.950 08:13:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:14:17.950 08:13:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:14:17.950 08:13:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:14:17.950 08:13:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:14:17.950 08:13:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:14:17.950 08:13:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:14:17.950 08:13:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:14:17.950 08:13:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:14:17.950 08:13:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 
00:14:17.950 08:13:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:14:17.950 08:13:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:17.950 08:13:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:14:17.950 08:13:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:17.950 08:13:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:14:17.950 08:13:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:17.950 08:13:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:17.950 08:13:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:17.950 08:13:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:18.209 08:13:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:14:18.209 08:13:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:18.209 08:13:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:14:18.209 08:13:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:18.209 08:13:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:18.209 08:13:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:18.209 08:13:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@85 -- 
# ns_is_visible 0x2 00:14:18.209 08:13:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:18.209 08:13:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:18.209 [ 0]:0x2 00:14:18.209 08:13:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:18.209 08:13:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:18.209 08:13:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=c00d1bdfaaf34d789c554b9005c3b80e 00:14:18.209 08:13:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ c00d1bdfaaf34d789c554b9005c3b80e != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:18.209 08:13:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:18.469 08:13:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:14:18.469 08:13:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:18.469 08:13:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:18.469 [ 0]:0x1 00:14:18.469 08:13:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:18.469 08:13:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:18.469 08:13:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=a5109f30d9f6479fbfa2b7861e286f47 00:14:18.469 08:13:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ a5109f30d9f6479fbfa2b7861e286f47 != 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:18.469 08:13:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:14:18.469 08:13:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:18.469 08:13:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:18.469 [ 1]:0x2 00:14:18.469 08:13:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:18.469 08:13:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:18.469 08:13:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=c00d1bdfaaf34d789c554b9005c3b80e 00:14:18.469 08:13:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ c00d1bdfaaf34d789c554b9005c3b80e != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:18.469 08:13:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:18.729 08:13:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:14:18.729 08:13:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:14:18.729 08:13:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:14:18.729 08:13:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:14:18.729 08:13:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:18.729 08:13:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t 
ns_is_visible 00:14:18.729 08:13:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:18.729 08:13:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:14:18.729 08:13:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:18.729 08:13:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:18.729 08:13:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:18.729 08:13:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:18.729 08:13:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:14:18.729 08:13:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:18.729 08:13:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:14:18.729 08:13:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:18.729 08:13:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:18.729 08:13:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:18.729 08:13:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:14:18.729 08:13:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:18.729 08:13:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:18.729 [ 0]:0x2 00:14:18.730 08:13:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # 
nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:18.730 08:13:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:18.730 08:13:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=c00d1bdfaaf34d789c554b9005c3b80e 00:14:18.730 08:13:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ c00d1bdfaaf34d789c554b9005c3b80e != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:18.730 08:13:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:14:18.730 08:13:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:18.730 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:18.730 08:13:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:18.989 08:13:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:14:18.989 08:13:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I cb03df64-7308-4f0c-9f05-3a8ea07c9793 -a 10.0.0.2 -s 4420 -i 4 00:14:19.249 08:13:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:14:19.249 08:13:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:14:19.249 08:13:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:14:19.249 08:13:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n 2 ]] 00:14:19.249 08:13:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@1205 -- # nvme_device_counter=2 00:14:19.249 08:13:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:14:21.154 08:13:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:14:21.154 08:13:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:14:21.154 08:13:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:14:21.154 08:13:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=2 00:14:21.154 08:13:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:14:21.154 08:13:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:14:21.154 08:13:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:14:21.154 08:13:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:14:21.154 08:13:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:14:21.154 08:13:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:14:21.154 08:13:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:14:21.154 08:13:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:21.154 08:13:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:21.154 [ 0]:0x1 00:14:21.154 08:13:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:21.155 08:13:03 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:21.155 08:13:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=a5109f30d9f6479fbfa2b7861e286f47 00:14:21.155 08:13:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ a5109f30d9f6479fbfa2b7861e286f47 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:21.155 08:13:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:14:21.155 08:13:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:21.155 08:13:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:21.414 [ 1]:0x2 00:14:21.414 08:13:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:21.414 08:13:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:21.414 08:13:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=c00d1bdfaaf34d789c554b9005c3b80e 00:14:21.414 08:13:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ c00d1bdfaaf34d789c554b9005c3b80e != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:21.414 08:13:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:21.414 08:13:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:14:21.414 08:13:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:14:21.414 08:13:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:14:21.414 
08:13:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:14:21.414 08:13:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:21.414 08:13:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:14:21.414 08:13:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:21.414 08:13:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:14:21.415 08:13:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:21.415 08:13:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:21.415 08:13:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:21.415 08:13:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:21.674 08:13:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:14:21.674 08:13:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:21.674 08:13:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:14:21.674 08:13:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:21.674 08:13:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:21.674 08:13:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:21.674 08:13:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@108 -- # 
ns_is_visible 0x2 00:14:21.674 08:13:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:21.674 08:13:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:21.674 [ 0]:0x2 00:14:21.674 08:13:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:21.674 08:13:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:21.674 08:13:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=c00d1bdfaaf34d789c554b9005c3b80e 00:14:21.674 08:13:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ c00d1bdfaaf34d789c554b9005c3b80e != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:21.674 08:13:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:14:21.674 08:13:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:14:21.674 08:13:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:14:21.674 08:13:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:21.674 08:13:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:21.674 08:13:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:21.674 08:13:03 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:21.674 08:13:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:21.674 08:13:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:21.674 08:13:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:21.674 08:13:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:14:21.675 08:13:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:14:21.934 [2024-11-28 08:13:03.950839] nvmf_rpc.c:1873:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:14:21.934 request: 00:14:21.934 { 00:14:21.934 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:21.934 "nsid": 2, 00:14:21.934 "host": "nqn.2016-06.io.spdk:host1", 00:14:21.934 "method": "nvmf_ns_remove_host", 00:14:21.934 "req_id": 1 00:14:21.934 } 00:14:21.934 Got JSON-RPC error response 00:14:21.934 response: 00:14:21.934 { 00:14:21.934 "code": -32602, 00:14:21.934 "message": "Invalid parameters" 00:14:21.934 } 00:14:21.934 08:13:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:14:21.934 08:13:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:21.934 08:13:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:21.934 08:13:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:21.934 08:13:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:14:21.934 08:13:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:14:21.934 08:13:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:14:21.934 08:13:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:14:21.934 08:13:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:21.934 08:13:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:14:21.934 08:13:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:21.934 08:13:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:14:21.934 08:13:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:21.934 08:13:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:21.934 08:13:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:21.934 08:13:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:21.934 08:13:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:14:21.934 08:13:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:21.934 08:13:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:14:21.934 08:13:04 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:21.934 08:13:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:21.934 08:13:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:21.934 08:13:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:14:21.934 08:13:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:21.934 08:13:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:21.934 [ 0]:0x2 00:14:21.934 08:13:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:21.934 08:13:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:21.934 08:13:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=c00d1bdfaaf34d789c554b9005c3b80e 00:14:21.934 08:13:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ c00d1bdfaaf34d789c554b9005c3b80e != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:21.934 08:13:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:14:21.934 08:13:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:21.934 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:21.934 08:13:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=1318471 00:14:21.934 08:13:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:14:21.934 08:13:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 1318471 
/var/tmp/host.sock 00:14:21.934 08:13:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:14:21.934 08:13:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # '[' -z 1318471 ']' 00:14:21.934 08:13:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:14:21.934 08:13:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:21.934 08:13:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:14:21.934 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:14:21.934 08:13:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:21.934 08:13:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:21.934 [2024-11-28 08:13:04.172039] Starting SPDK v25.01-pre git sha1 27aaaa748 / DPDK 24.03.0 initialization... 
00:14:21.934 [2024-11-28 08:13:04.172085] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1318471 ] 00:14:22.194 [2024-11-28 08:13:04.234407] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:22.194 [2024-11-28 08:13:04.275146] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:22.453 08:13:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:22.453 08:13:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@868 -- # return 0 00:14:22.453 08:13:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:22.453 08:13:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:22.714 08:13:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid c43985da-9875-426e-8c1f-7431decb1072 00:14:22.714 08:13:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:14:22.714 08:13:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g C43985DA9875426E8C1F7431DECB1072 -i 00:14:22.973 08:13:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid 3b2b8b20-0ac6-42bf-a31e-07a48821091d 00:14:22.973 08:13:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:14:22.973 08:13:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g 3B2B8B200AC642BFA31E07A48821091D -i 00:14:23.233 08:13:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:23.233 08:13:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:14:23.493 08:13:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:14:23.493 08:13:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:14:24.062 nvme0n1 00:14:24.062 08:13:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:14:24.062 08:13:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:14:24.062 nvme1n2 00:14:24.321 08:13:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:14:24.321 08:13:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking 
-- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:14:24.321 08:13:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:14:24.321 08:13:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:14:24.321 08:13:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:14:24.321 08:13:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:14:24.321 08:13:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:14:24.321 08:13:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:14:24.321 08:13:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:14:24.580 08:13:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ c43985da-9875-426e-8c1f-7431decb1072 == \c\4\3\9\8\5\d\a\-\9\8\7\5\-\4\2\6\e\-\8\c\1\f\-\7\4\3\1\d\e\c\b\1\0\7\2 ]] 00:14:24.580 08:13:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:14:24.580 08:13:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:14:24.580 08:13:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:14:24.840 08:13:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ 3b2b8b20-0ac6-42bf-a31e-07a48821091d == \3\b\2\b\8\b\2\0\-\0\a\c\6\-\4\2\b\f\-\a\3\1\e\-\0\7\a\4\8\8\2\1\0\9\1\d ]] 00:14:24.840 08:13:06 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@137 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:25.099 08:13:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@138 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:25.099 08:13:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # uuid2nguid c43985da-9875-426e-8c1f-7431decb1072 00:14:25.099 08:13:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:14:25.100 08:13:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g C43985DA9875426E8C1F7431DECB1072 00:14:25.100 08:13:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:14:25.100 08:13:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g C43985DA9875426E8C1F7431DECB1072 00:14:25.100 08:13:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:25.100 08:13:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:25.100 08:13:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:25.100 08:13:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:25.100 08:13:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking 
-- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:25.100 08:13:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:25.100 08:13:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:25.100 08:13:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:14:25.100 08:13:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g C43985DA9875426E8C1F7431DECB1072 00:14:25.359 [2024-11-28 08:13:07.532773] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: invalid 00:14:25.359 [2024-11-28 08:13:07.532806] subsystem.c:2156:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode1: bdev invalid cannot be opened, error=-19 00:14:25.359 [2024-11-28 08:13:07.532814] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:25.359 request: 00:14:25.359 { 00:14:25.359 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:25.359 "namespace": { 00:14:25.359 "bdev_name": "invalid", 00:14:25.359 "nsid": 1, 00:14:25.359 "nguid": "C43985DA9875426E8C1F7431DECB1072", 00:14:25.359 "no_auto_visible": false, 00:14:25.360 "hide_metadata": false 00:14:25.360 }, 00:14:25.360 "method": "nvmf_subsystem_add_ns", 00:14:25.360 "req_id": 1 00:14:25.360 } 00:14:25.360 Got JSON-RPC error response 00:14:25.360 response: 00:14:25.360 { 00:14:25.360 "code": -32602, 00:14:25.360 "message": "Invalid parameters" 00:14:25.360 } 00:14:25.360 08:13:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:14:25.360 08:13:07 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:25.360 08:13:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:25.360 08:13:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:25.360 08:13:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # uuid2nguid c43985da-9875-426e-8c1f-7431decb1072 00:14:25.360 08:13:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:14:25.360 08:13:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g C43985DA9875426E8C1F7431DECB1072 -i 00:14:25.619 08:13:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@143 -- # sleep 2s 00:14:27.528 08:13:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # hostrpc bdev_get_bdevs 00:14:27.528 08:13:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # jq length 00:14:27.528 08:13:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:14:27.792 08:13:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # (( 0 == 0 )) 00:14:27.792 08:13:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@146 -- # killprocess 1318471 00:14:27.792 08:13:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' -z 1318471 ']' 00:14:27.792 08:13:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # kill -0 1318471 00:14:27.792 08:13:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # uname 00:14:27.792 08:13:09 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:27.792 08:13:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1318471 00:14:27.792 08:13:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:14:27.792 08:13:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:14:27.792 08:13:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1318471' 00:14:27.792 killing process with pid 1318471 00:14:27.792 08:13:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@973 -- # kill 1318471 00:14:27.792 08:13:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@978 -- # wait 1318471 00:14:28.074 08:13:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:28.361 08:13:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:14:28.361 08:13:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@150 -- # nvmftestfini 00:14:28.361 08:13:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:28.361 08:13:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@121 -- # sync 00:14:28.361 08:13:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:28.361 08:13:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@124 -- # set +e 00:14:28.361 08:13:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:28.361 08:13:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 
00:14:28.361 rmmod nvme_tcp 00:14:28.361 rmmod nvme_fabrics 00:14:28.361 rmmod nvme_keyring 00:14:28.361 08:13:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:28.361 08:13:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@128 -- # set -e 00:14:28.361 08:13:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@129 -- # return 0 00:14:28.361 08:13:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@517 -- # '[' -n 1316475 ']' 00:14:28.361 08:13:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@518 -- # killprocess 1316475 00:14:28.361 08:13:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' -z 1316475 ']' 00:14:28.361 08:13:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # kill -0 1316475 00:14:28.361 08:13:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # uname 00:14:28.361 08:13:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:28.361 08:13:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1316475 00:14:28.361 08:13:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:28.361 08:13:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:28.361 08:13:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1316475' 00:14:28.361 killing process with pid 1316475 00:14:28.361 08:13:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@973 -- # kill 1316475 00:14:28.361 08:13:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@978 -- # wait 1316475 00:14:28.666 08:13:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:28.666 08:13:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:14:28.666 08:13:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:14:28.666 08:13:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@297 -- # iptr 00:14:28.666 08:13:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-save 00:14:28.666 08:13:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:14:28.666 08:13:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-restore 00:14:28.666 08:13:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:28.666 08:13:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@302 -- # remove_spdk_ns 00:14:28.666 08:13:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:28.666 08:13:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:28.666 08:13:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:30.789 08:13:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:14:30.789 00:14:30.789 real 0m25.398s 00:14:30.789 user 0m30.554s 00:14:30.789 sys 0m6.824s 00:14:30.789 08:13:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:30.789 08:13:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:30.789 ************************************ 00:14:30.789 END TEST nvmf_ns_masking 00:14:30.789 ************************************ 00:14:30.789 08:13:12 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@27 -- # [[ 1 -eq 
1 ]] 00:14:30.789 08:13:12 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@28 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:14:30.789 08:13:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:30.789 08:13:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:30.789 08:13:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:30.789 ************************************ 00:14:30.789 START TEST nvmf_nvme_cli 00:14:30.789 ************************************ 00:14:30.789 08:13:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:14:31.106 * Looking for test storage... 00:14:31.106 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:31.106 08:13:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:14:31.106 08:13:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1693 -- # lcov --version 00:14:31.106 08:13:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:14:31.106 08:13:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:14:31.106 08:13:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:31.106 08:13:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:31.106 08:13:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:31.106 08:13:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # IFS=.-: 00:14:31.106 08:13:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # read -ra 
ver1 00:14:31.106 08:13:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # IFS=.-: 00:14:31.106 08:13:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # read -ra ver2 00:14:31.106 08:13:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@338 -- # local 'op=<' 00:14:31.106 08:13:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@340 -- # ver1_l=2 00:14:31.106 08:13:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@341 -- # ver2_l=1 00:14:31.106 08:13:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:31.106 08:13:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@344 -- # case "$op" in 00:14:31.106 08:13:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@345 -- # : 1 00:14:31.106 08:13:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:31.106 08:13:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:31.106 08:13:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # decimal 1 00:14:31.106 08:13:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=1 00:14:31.106 08:13:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:31.106 08:13:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 1 00:14:31.106 08:13:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # ver1[v]=1 00:14:31.106 08:13:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # decimal 2 00:14:31.106 08:13:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=2 00:14:31.106 08:13:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:31.106 08:13:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 2 00:14:31.106 08:13:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # ver2[v]=2 00:14:31.106 08:13:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:31.106 08:13:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:31.106 08:13:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # return 0 00:14:31.106 08:13:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:31.106 08:13:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:14:31.106 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:31.106 --rc genhtml_branch_coverage=1 00:14:31.106 --rc genhtml_function_coverage=1 00:14:31.106 --rc genhtml_legend=1 00:14:31.106 --rc geninfo_all_blocks=1 00:14:31.106 --rc geninfo_unexecuted_blocks=1 00:14:31.106 
00:14:31.106 ' 00:14:31.106 08:13:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:14:31.106 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:31.106 --rc genhtml_branch_coverage=1 00:14:31.106 --rc genhtml_function_coverage=1 00:14:31.106 --rc genhtml_legend=1 00:14:31.106 --rc geninfo_all_blocks=1 00:14:31.106 --rc geninfo_unexecuted_blocks=1 00:14:31.106 00:14:31.106 ' 00:14:31.106 08:13:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:14:31.106 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:31.106 --rc genhtml_branch_coverage=1 00:14:31.106 --rc genhtml_function_coverage=1 00:14:31.106 --rc genhtml_legend=1 00:14:31.106 --rc geninfo_all_blocks=1 00:14:31.106 --rc geninfo_unexecuted_blocks=1 00:14:31.106 00:14:31.106 ' 00:14:31.106 08:13:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:14:31.106 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:31.106 --rc genhtml_branch_coverage=1 00:14:31.106 --rc genhtml_function_coverage=1 00:14:31.106 --rc genhtml_legend=1 00:14:31.106 --rc geninfo_all_blocks=1 00:14:31.106 --rc geninfo_unexecuted_blocks=1 00:14:31.106 00:14:31.106 ' 00:14:31.106 08:13:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:31.106 08:13:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 00:14:31.106 08:13:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:31.106 08:13:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:31.106 08:13:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:31.106 08:13:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 
00:14:31.106 08:13:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:31.106 08:13:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:31.106 08:13:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:31.106 08:13:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:31.106 08:13:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:31.106 08:13:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:31.106 08:13:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:14:31.106 08:13:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:14:31.106 08:13:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:31.106 08:13:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:31.106 08:13:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:31.106 08:13:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:31.106 08:13:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:31.106 08:13:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@15 -- # shopt -s extglob 00:14:31.106 08:13:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:31.106 08:13:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@552 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:31.106 08:13:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:31.106 08:13:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:31.106 08:13:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:31.106 08:13:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:31.106 08:13:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:14:31.107 08:13:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:31.107 08:13:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@51 -- # : 0 00:14:31.107 08:13:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:31.107 08:13:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:31.107 08:13:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:31.107 08:13:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:31.107 08:13:13 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:31.107 08:13:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:31.107 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:31.107 08:13:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:31.107 08:13:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:31.107 08:13:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:31.107 08:13:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:31.107 08:13:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:31.107 08:13:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:14:31.107 08:13:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # nvmftestinit 00:14:31.107 08:13:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:14:31.107 08:13:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:31.107 08:13:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@476 -- # prepare_net_devs 00:14:31.107 08:13:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@438 -- # local -g is_hw=no 00:14:31.107 08:13:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@440 -- # remove_spdk_ns 00:14:31.107 08:13:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:31.107 08:13:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:31.107 08:13:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # 
_remove_spdk_ns 00:14:31.107 08:13:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:14:31.107 08:13:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:14:31.107 08:13:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@309 -- # xtrace_disable 00:14:31.107 08:13:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:36.394 08:13:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:36.394 08:13:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # pci_devs=() 00:14:36.394 08:13:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # local -a pci_devs 00:14:36.394 08:13:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # pci_net_devs=() 00:14:36.394 08:13:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:14:36.394 08:13:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # pci_drivers=() 00:14:36.394 08:13:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # local -A pci_drivers 00:14:36.394 08:13:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # net_devs=() 00:14:36.394 08:13:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # local -ga net_devs 00:14:36.394 08:13:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # e810=() 00:14:36.394 08:13:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # local -ga e810 00:14:36.394 08:13:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # x722=() 00:14:36.394 08:13:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # local -ga x722 00:14:36.394 08:13:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # mlx=() 00:14:36.394 08:13:18 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # local -ga mlx 00:14:36.394 08:13:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:36.394 08:13:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:36.394 08:13:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:36.394 08:13:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:36.394 08:13:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:36.394 08:13:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:36.394 08:13:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:36.394 08:13:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:14:36.394 08:13:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:36.394 08:13:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:36.394 08:13:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:36.394 08:13:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:36.395 08:13:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:14:36.395 08:13:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:14:36.395 08:13:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@353 -- # [[ 
e810 == mlx5 ]] 00:14:36.395 08:13:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:14:36.395 08:13:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:14:36.395 08:13:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:14:36.395 08:13:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:36.395 08:13:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:14:36.395 Found 0000:86:00.0 (0x8086 - 0x159b) 00:14:36.395 08:13:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:36.395 08:13:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:36.395 08:13:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:36.395 08:13:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:36.395 08:13:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:36.395 08:13:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:36.395 08:13:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:14:36.395 Found 0000:86:00.1 (0x8086 - 0x159b) 00:14:36.395 08:13:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:36.395 08:13:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:36.395 08:13:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:36.395 08:13:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:36.395 08:13:18 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:36.395 08:13:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:14:36.395 08:13:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:14:36.395 08:13:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:14:36.395 08:13:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:36.395 08:13:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:36.395 08:13:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:36.395 08:13:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:36.395 08:13:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:36.395 08:13:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:36.395 08:13:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:36.395 08:13:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:14:36.395 Found net devices under 0000:86:00.0: cvl_0_0 00:14:36.395 08:13:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:36.395 08:13:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:36.395 08:13:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:36.395 08:13:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:36.395 08:13:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:36.395 08:13:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:36.395 08:13:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:36.395 08:13:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:36.395 08:13:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:14:36.395 Found net devices under 0000:86:00.1: cvl_0_1 00:14:36.395 08:13:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:36.395 08:13:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:14:36.395 08:13:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # is_hw=yes 00:14:36.395 08:13:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:14:36.395 08:13:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:14:36.395 08:13:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:14:36.395 08:13:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:36.395 08:13:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:36.395 08:13:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:36.395 08:13:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:36.395 08:13:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:14:36.395 08:13:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:36.395 08:13:18 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:36.395 08:13:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:14:36.395 08:13:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:14:36.395 08:13:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:36.395 08:13:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:36.395 08:13:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:14:36.395 08:13:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:14:36.395 08:13:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:14:36.395 08:13:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:36.395 08:13:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:36.395 08:13:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:36.395 08:13:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:14:36.395 08:13:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:36.395 08:13:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:36.395 08:13:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:36.395 08:13:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@790 -- 
# iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:14:36.395 08:13:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:14:36.395 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:36.395 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.362 ms 00:14:36.395 00:14:36.395 --- 10.0.0.2 ping statistics --- 00:14:36.395 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:36.395 rtt min/avg/max/mdev = 0.362/0.362/0.362/0.000 ms 00:14:36.395 08:13:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:36.395 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:36.395 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.201 ms 00:14:36.395 00:14:36.395 --- 10.0.0.1 ping statistics --- 00:14:36.395 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:36.395 rtt min/avg/max/mdev = 0.201/0.201/0.201/0.000 ms 00:14:36.395 08:13:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:36.395 08:13:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@450 -- # return 0 00:14:36.395 08:13:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:36.395 08:13:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:36.395 08:13:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:14:36.395 08:13:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:14:36.395 08:13:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:36.395 08:13:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:14:36.395 08:13:18 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:14:36.395 08:13:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:14:36.395 08:13:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:36.395 08:13:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:36.395 08:13:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:36.395 08:13:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@509 -- # nvmfpid=1322978 00:14:36.395 08:13:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:36.395 08:13:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@510 -- # waitforlisten 1322978 00:14:36.395 08:13:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@835 -- # '[' -z 1322978 ']' 00:14:36.395 08:13:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:36.395 08:13:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:36.395 08:13:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:36.395 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:36.395 08:13:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:36.395 08:13:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:36.654 [2024-11-28 08:13:18.707353] Starting SPDK v25.01-pre git sha1 27aaaa748 / DPDK 24.03.0 initialization... 
00:14:36.654 [2024-11-28 08:13:18.707402] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:36.654 [2024-11-28 08:13:18.773754] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:36.654 [2024-11-28 08:13:18.817824] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:36.654 [2024-11-28 08:13:18.817862] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:36.654 [2024-11-28 08:13:18.817870] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:36.654 [2024-11-28 08:13:18.817876] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:36.654 [2024-11-28 08:13:18.817881] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:14:36.654 [2024-11-28 08:13:18.819500] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:36.654 [2024-11-28 08:13:18.819579] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:36.654 [2024-11-28 08:13:18.819666] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:14:36.654 [2024-11-28 08:13:18.819668] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:36.655 08:13:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:36.655 08:13:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@868 -- # return 0 00:14:36.655 08:13:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:36.655 08:13:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:36.655 08:13:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:36.914 08:13:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:36.914 08:13:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:36.914 08:13:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.914 08:13:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:36.914 [2024-11-28 08:13:18.962900] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:36.914 08:13:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.914 08:13:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:14:36.914 08:13:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 
00:14:36.914 08:13:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:36.914 Malloc0 00:14:36.914 08:13:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.914 08:13:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:14:36.914 08:13:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.914 08:13:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:36.914 Malloc1 00:14:36.914 08:13:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.914 08:13:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:14:36.914 08:13:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.914 08:13:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:36.914 08:13:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.914 08:13:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:36.914 08:13:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.914 08:13:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:36.914 08:13:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.914 08:13:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:36.914 08:13:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.914 08:13:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:36.914 08:13:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.914 08:13:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:36.914 08:13:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.914 08:13:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:36.914 [2024-11-28 08:13:19.061609] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:36.914 08:13:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.914 08:13:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:36.914 08:13:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.914 08:13:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:36.914 08:13:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.914 08:13:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 4420 00:14:37.174 00:14:37.174 Discovery Log Number of Records 2, Generation counter 2 00:14:37.174 =====Discovery Log Entry 0====== 00:14:37.174 trtype: tcp 00:14:37.174 adrfam: ipv4 00:14:37.174 subtype: current discovery subsystem 00:14:37.174 treq: not required 00:14:37.174 portid: 0 00:14:37.174 trsvcid: 4420 
00:14:37.174 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:14:37.174 traddr: 10.0.0.2 00:14:37.174 eflags: explicit discovery connections, duplicate discovery information 00:14:37.174 sectype: none 00:14:37.174 =====Discovery Log Entry 1====== 00:14:37.174 trtype: tcp 00:14:37.174 adrfam: ipv4 00:14:37.174 subtype: nvme subsystem 00:14:37.174 treq: not required 00:14:37.174 portid: 0 00:14:37.174 trsvcid: 4420 00:14:37.174 subnqn: nqn.2016-06.io.spdk:cnode1 00:14:37.174 traddr: 10.0.0.2 00:14:37.174 eflags: none 00:14:37.174 sectype: none 00:14:37.174 08:13:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:14:37.174 08:13:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:14:37.174 08:13:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:14:37.174 08:13:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:37.174 08:13:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:14:37.174 08:13:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:14:37.174 08:13:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:37.174 08:13:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:14:37.174 08:13:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:37.174 08:13:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:14:37.174 08:13:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:38.552 08:13:20 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:14:38.552 08:13:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1202 -- # local i=0 00:14:38.552 08:13:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:14:38.552 08:13:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1204 -- # [[ -n 2 ]] 00:14:38.552 08:13:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # nvme_device_counter=2 00:14:38.552 08:13:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1209 -- # sleep 2 00:14:40.458 08:13:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:14:40.459 08:13:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:14:40.459 08:13:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:14:40.459 08:13:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # nvme_devices=2 00:14:40.459 08:13:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:14:40.459 08:13:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1212 -- # return 0 00:14:40.459 08:13:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:14:40.459 08:13:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:14:40.459 08:13:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:40.459 08:13:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:14:40.459 08:13:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:14:40.459 
08:13:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:40.459 08:13:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:14:40.459 08:13:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:40.459 08:13:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:14:40.459 08:13:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1 00:14:40.459 08:13:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:40.459 08:13:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:14:40.459 08:13:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2 00:14:40.459 08:13:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:40.459 08:13:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n1 00:14:40.459 /dev/nvme0n2 ]] 00:14:40.459 08:13:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:14:40.459 08:13:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:14:40.459 08:13:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:14:40.459 08:13:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:40.459 08:13:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:14:40.459 08:13:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:14:40.459 08:13:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:40.459 08:13:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ 
--------------------- == /dev/nvme* ]] 00:14:40.459 08:13:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:40.459 08:13:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:14:40.459 08:13:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1 00:14:40.459 08:13:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:40.459 08:13:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:14:40.459 08:13:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2 00:14:40.459 08:13:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:40.459 08:13:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:14:40.459 08:13:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:40.459 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:40.459 08:13:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:40.459 08:13:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1223 -- # local i=0 00:14:40.459 08:13:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:14:40.459 08:13:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:40.459 08:13:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:14:40.459 08:13:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:40.459 08:13:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1235 -- # 
return 0 00:14:40.459 08:13:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:14:40.459 08:13:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:40.459 08:13:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.459 08:13:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:40.459 08:13:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.459 08:13:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:14:40.459 08:13:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:14:40.459 08:13:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:40.459 08:13:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@121 -- # sync 00:14:40.459 08:13:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:40.459 08:13:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set +e 00:14:40.459 08:13:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:40.459 08:13:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:40.459 rmmod nvme_tcp 00:14:40.459 rmmod nvme_fabrics 00:14:40.459 rmmod nvme_keyring 00:14:40.459 08:13:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:40.459 08:13:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@128 -- # set -e 00:14:40.459 08:13:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@129 -- # return 0 00:14:40.459 08:13:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@517 -- # '[' -n 1322978 ']' 
00:14:40.459 08:13:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@518 -- # killprocess 1322978 00:14:40.459 08:13:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # '[' -z 1322978 ']' 00:14:40.459 08:13:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@958 -- # kill -0 1322978 00:14:40.459 08:13:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@959 -- # uname 00:14:40.459 08:13:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:40.459 08:13:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1322978 00:14:40.459 08:13:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:40.459 08:13:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:40.459 08:13:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1322978' 00:14:40.459 killing process with pid 1322978 00:14:40.459 08:13:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@973 -- # kill 1322978 00:14:40.459 08:13:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@978 -- # wait 1322978 00:14:40.719 08:13:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:40.719 08:13:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:14:40.719 08:13:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:14:40.719 08:13:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@297 -- # iptr 00:14:40.719 08:13:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # iptables-save 00:14:40.719 08:13:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # grep -v 
SPDK_NVMF 00:14:40.719 08:13:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # iptables-restore 00:14:40.719 08:13:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:40.719 08:13:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@302 -- # remove_spdk_ns 00:14:40.719 08:13:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:40.719 08:13:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:40.719 08:13:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:43.255 08:13:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:14:43.255 00:14:43.255 real 0m12.021s 00:14:43.255 user 0m17.864s 00:14:43.255 sys 0m4.756s 00:14:43.255 08:13:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:43.255 08:13:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:43.255 ************************************ 00:14:43.255 END TEST nvmf_nvme_cli 00:14:43.255 ************************************ 00:14:43.255 08:13:25 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@30 -- # [[ 1 -eq 1 ]] 00:14:43.255 08:13:25 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@31 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:14:43.255 08:13:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:43.255 08:13:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:43.255 08:13:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:43.255 ************************************ 00:14:43.255 
START TEST nvmf_vfio_user 00:14:43.255 ************************************ 00:14:43.255 08:13:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:14:43.255 * Looking for test storage... 00:14:43.255 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:43.255 08:13:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:14:43.255 08:13:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1693 -- # lcov --version 00:14:43.255 08:13:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:14:43.256 08:13:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:14:43.256 08:13:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:43.256 08:13:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:43.256 08:13:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:43.256 08:13:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # IFS=.-: 00:14:43.256 08:13:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # read -ra ver1 00:14:43.256 08:13:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # IFS=.-: 00:14:43.256 08:13:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # read -ra ver2 00:14:43.256 08:13:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@338 -- # local 'op=<' 00:14:43.256 08:13:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@340 -- # ver1_l=2 00:14:43.256 08:13:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@341 -- # ver2_l=1 00:14:43.256 08:13:25 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:43.256 08:13:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@344 -- # case "$op" in 00:14:43.256 08:13:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@345 -- # : 1 00:14:43.256 08:13:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:43.256 08:13:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:43.256 08:13:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # decimal 1 00:14:43.256 08:13:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=1 00:14:43.256 08:13:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:43.256 08:13:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 1 00:14:43.256 08:13:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # ver1[v]=1 00:14:43.256 08:13:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # decimal 2 00:14:43.256 08:13:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=2 00:14:43.256 08:13:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:43.256 08:13:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 2 00:14:43.256 08:13:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # ver2[v]=2 00:14:43.256 08:13:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:43.256 08:13:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:43.256 08:13:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # return 0 00:14:43.256 08:13:25 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:43.256 08:13:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:14:43.256 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:43.256 --rc genhtml_branch_coverage=1 00:14:43.256 --rc genhtml_function_coverage=1 00:14:43.256 --rc genhtml_legend=1 00:14:43.256 --rc geninfo_all_blocks=1 00:14:43.256 --rc geninfo_unexecuted_blocks=1 00:14:43.256 00:14:43.256 ' 00:14:43.256 08:13:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:14:43.256 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:43.256 --rc genhtml_branch_coverage=1 00:14:43.256 --rc genhtml_function_coverage=1 00:14:43.256 --rc genhtml_legend=1 00:14:43.256 --rc geninfo_all_blocks=1 00:14:43.256 --rc geninfo_unexecuted_blocks=1 00:14:43.256 00:14:43.256 ' 00:14:43.256 08:13:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:14:43.256 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:43.256 --rc genhtml_branch_coverage=1 00:14:43.256 --rc genhtml_function_coverage=1 00:14:43.256 --rc genhtml_legend=1 00:14:43.256 --rc geninfo_all_blocks=1 00:14:43.256 --rc geninfo_unexecuted_blocks=1 00:14:43.256 00:14:43.256 ' 00:14:43.256 08:13:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:14:43.256 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:43.256 --rc genhtml_branch_coverage=1 00:14:43.256 --rc genhtml_function_coverage=1 00:14:43.256 --rc genhtml_legend=1 00:14:43.256 --rc geninfo_all_blocks=1 00:14:43.256 --rc geninfo_unexecuted_blocks=1 00:14:43.256 00:14:43.256 ' 00:14:43.256 08:13:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:43.256 08:13:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # uname -s 00:14:43.256 08:13:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:43.256 08:13:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:43.256 08:13:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:43.256 08:13:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:43.256 08:13:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:43.256 08:13:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:43.256 08:13:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:43.256 08:13:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:43.256 08:13:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:43.256 08:13:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:43.256 08:13:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:14:43.256 08:13:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:14:43.256 08:13:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:43.256 08:13:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:43.256 08:13:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:43.256 
08:13:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:43.256 08:13:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:43.256 08:13:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@15 -- # shopt -s extglob 00:14:43.256 08:13:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:43.256 08:13:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:43.256 08:13:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:43.256 08:13:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:43.256 08:13:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:43.257 08:13:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:43.257 08:13:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:14:43.257 08:13:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:43.257 08:13:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@51 -- # : 0 00:14:43.257 08:13:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:43.257 08:13:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:43.257 08:13:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:43.257 08:13:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:43.257 08:13:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:43.257 08:13:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:43.257 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:43.257 08:13:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:43.257 08:13:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:43.257 08:13:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:43.257 08:13:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:14:43.257 08:13:25 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:14:43.257 08:13:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:14:43.257 08:13:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:43.257 08:13:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:14:43.257 08:13:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:14:43.257 08:13:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:14:43.257 08:13:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:14:43.257 08:13:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:14:43.257 08:13:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:14:43.257 08:13:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=1324262 00:14:43.257 08:13:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 1324262' 00:14:43.257 Process pid: 1324262 00:14:43.257 08:13:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:14:43.257 08:13:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 1324262 00:14:43.257 08:13:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:14:43.257 08:13:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # '[' 
-z 1324262 ']' 00:14:43.257 08:13:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:43.257 08:13:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:43.257 08:13:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:43.257 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:43.257 08:13:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:43.257 08:13:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:14:43.257 [2024-11-28 08:13:25.317242] Starting SPDK v25.01-pre git sha1 27aaaa748 / DPDK 24.03.0 initialization... 00:14:43.257 [2024-11-28 08:13:25.317289] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:43.257 [2024-11-28 08:13:25.379105] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:43.257 [2024-11-28 08:13:25.422095] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:43.257 [2024-11-28 08:13:25.422133] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:43.257 [2024-11-28 08:13:25.422141] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:43.257 [2024-11-28 08:13:25.422148] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:43.257 [2024-11-28 08:13:25.422153] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:14:43.257 [2024-11-28 08:13:25.423647] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:43.257 [2024-11-28 08:13:25.423749] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:43.257 [2024-11-28 08:13:25.423850] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:14:43.257 [2024-11-28 08:13:25.423851] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:43.257 08:13:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:43.257 08:13:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@868 -- # return 0 00:14:43.257 08:13:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:14:44.634 08:13:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:14:44.634 08:13:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:14:44.634 08:13:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:14:44.634 08:13:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:44.634 08:13:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:14:44.634 08:13:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:14:44.893 Malloc1 00:14:44.893 08:13:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:14:44.893 08:13:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:14:45.151 08:13:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:14:45.410 08:13:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:45.410 08:13:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:14:45.410 08:13:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:14:45.670 Malloc2 00:14:45.670 08:13:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:14:45.929 08:13:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:14:45.929 08:13:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:14:46.188 08:13:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:14:46.188 08:13:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:14:46.188 08:13:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in 
$(seq 1 $NUM_DEVICES) 00:14:46.188 08:13:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:14:46.188 08:13:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:14:46.188 08:13:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:14:46.188 [2024-11-28 08:13:28.411470] Starting SPDK v25.01-pre git sha1 27aaaa748 / DPDK 24.03.0 initialization... 00:14:46.188 [2024-11-28 08:13:28.411505] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1324747 ] 00:14:46.188 [2024-11-28 08:13:28.452865] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:14:46.450 [2024-11-28 08:13:28.457240] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:14:46.450 [2024-11-28 08:13:28.457262] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7fd8f2cfa000 00:14:46.450 [2024-11-28 08:13:28.458240] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:46.450 [2024-11-28 08:13:28.459240] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:46.450 [2024-11-28 08:13:28.460245] vfio_user_pci.c: 
304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:46.450 [2024-11-28 08:13:28.461245] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:46.450 [2024-11-28 08:13:28.462255] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:46.450 [2024-11-28 08:13:28.463262] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:46.450 [2024-11-28 08:13:28.464274] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:46.450 [2024-11-28 08:13:28.465281] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:46.450 [2024-11-28 08:13:28.466286] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:14:46.450 [2024-11-28 08:13:28.466295] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7fd8f2cef000 00:14:46.450 [2024-11-28 08:13:28.467237] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:14:46.451 [2024-11-28 08:13:28.476846] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:14:46.451 [2024-11-28 08:13:28.476872] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to connect adminq (no timeout) 00:14:46.451 [2024-11-28 08:13:28.481372] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 
00:14:46.451 [2024-11-28 08:13:28.481408] nvme_pcie_common.c: 159:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:14:46.451 [2024-11-28 08:13:28.481476] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for connect adminq (no timeout) 00:14:46.451 [2024-11-28 08:13:28.481491] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read vs (no timeout) 00:14:46.451 [2024-11-28 08:13:28.481496] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read vs wait for vs (no timeout) 00:14:46.451 [2024-11-28 08:13:28.482371] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:14:46.451 [2024-11-28 08:13:28.482381] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read cap (no timeout) 00:14:46.451 [2024-11-28 08:13:28.482388] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read cap wait for cap (no timeout) 00:14:46.451 [2024-11-28 08:13:28.483380] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:14:46.451 [2024-11-28 08:13:28.483387] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to check en (no timeout) 00:14:46.451 [2024-11-28 08:13:28.483394] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to check en wait for cc (timeout 15000 ms) 00:14:46.451 [2024-11-28 08:13:28.484387] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:14:46.451 [2024-11-28 08:13:28.484395] 
nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:14:46.451 [2024-11-28 08:13:28.485396] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 00:14:46.451 [2024-11-28 08:13:28.485407] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CC.EN = 0 && CSTS.RDY = 0 00:14:46.451 [2024-11-28 08:13:28.485412] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to controller is disabled (timeout 15000 ms) 00:14:46.451 [2024-11-28 08:13:28.485418] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:14:46.451 [2024-11-28 08:13:28.485525] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Setting CC.EN = 1 00:14:46.451 [2024-11-28 08:13:28.485530] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:14:46.451 [2024-11-28 08:13:28.485535] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:14:46.451 [2024-11-28 08:13:28.486398] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:14:46.451 [2024-11-28 08:13:28.487402] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:14:46.451 [2024-11-28 08:13:28.488410] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 
00:14:46.451 [2024-11-28 08:13:28.489409] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:46.451 [2024-11-28 08:13:28.489482] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:14:46.451 [2024-11-28 08:13:28.490427] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:14:46.451 [2024-11-28 08:13:28.490435] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:14:46.451 [2024-11-28 08:13:28.490439] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to reset admin queue (timeout 30000 ms) 00:14:46.451 [2024-11-28 08:13:28.490456] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify controller (no timeout) 00:14:46.451 [2024-11-28 08:13:28.490467] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify controller (timeout 30000 ms) 00:14:46.451 [2024-11-28 08:13:28.490484] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:46.451 [2024-11-28 08:13:28.490489] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:46.451 [2024-11-28 08:13:28.490492] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:46.451 [2024-11-28 08:13:28.490505] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:46.451 [2024-11-28 08:13:28.490545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:14:46.451 [2024-11-28 08:13:28.490553] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] transport max_xfer_size 131072 00:14:46.451 [2024-11-28 08:13:28.490558] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] MDTS max_xfer_size 131072 00:14:46.451 [2024-11-28 08:13:28.490562] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CNTLID 0x0001 00:14:46.451 [2024-11-28 08:13:28.490566] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:14:46.451 [2024-11-28 08:13:28.490573] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] transport max_sges 1 00:14:46.451 [2024-11-28 08:13:28.490577] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] fuses compare and write: 1 00:14:46.451 [2024-11-28 08:13:28.490581] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to configure AER (timeout 30000 ms) 00:14:46.451 [2024-11-28 08:13:28.490588] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for configure aer (timeout 30000 ms) 00:14:46.451 [2024-11-28 08:13:28.490597] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:14:46.451 [2024-11-28 08:13:28.490612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:14:46.451 [2024-11-28 08:13:28.490622] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:14:46.451 [2024-11-28 
08:13:28.490630] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:14:46.451 [2024-11-28 08:13:28.490637] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:14:46.451 [2024-11-28 08:13:28.490644] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:14:46.451 [2024-11-28 08:13:28.490648] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set keep alive timeout (timeout 30000 ms) 00:14:46.451 [2024-11-28 08:13:28.490656] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:14:46.451 [2024-11-28 08:13:28.490665] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:14:46.451 [2024-11-28 08:13:28.490674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:14:46.451 [2024-11-28 08:13:28.490679] nvme_ctrlr.c:3047:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Controller adjusted keep alive timeout to 0 ms 00:14:46.451 [2024-11-28 08:13:28.490684] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify controller iocs specific (timeout 30000 ms) 00:14:46.451 [2024-11-28 08:13:28.490692] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set number of queues (timeout 30000 ms) 00:14:46.451 [2024-11-28 08:13:28.490697] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait 
for set number of queues (timeout 30000 ms) 00:14:46.451 [2024-11-28 08:13:28.490705] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:14:46.451 [2024-11-28 08:13:28.490720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:14:46.451 [2024-11-28 08:13:28.490770] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify active ns (timeout 30000 ms) 00:14:46.451 [2024-11-28 08:13:28.490778] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify active ns (timeout 30000 ms) 00:14:46.451 [2024-11-28 08:13:28.490784] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:14:46.451 [2024-11-28 08:13:28.490788] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:14:46.451 [2024-11-28 08:13:28.490791] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:46.451 [2024-11-28 08:13:28.490799] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:14:46.451 [2024-11-28 08:13:28.490811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:14:46.451 [2024-11-28 08:13:28.490821] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Namespace 1 was added 00:14:46.451 [2024-11-28 08:13:28.490829] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify ns (timeout 30000 ms) 00:14:46.451 [2024-11-28 08:13:28.490836] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify ns (timeout 30000 ms) 00:14:46.451 [2024-11-28 08:13:28.490842] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:46.451 [2024-11-28 08:13:28.490846] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:46.451 [2024-11-28 08:13:28.490850] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:46.451 [2024-11-28 08:13:28.490855] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:46.452 [2024-11-28 08:13:28.490875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:14:46.452 [2024-11-28 08:13:28.490885] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify namespace id descriptors (timeout 30000 ms) 00:14:46.452 [2024-11-28 08:13:28.490891] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:14:46.452 [2024-11-28 08:13:28.490898] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:46.452 [2024-11-28 08:13:28.490901] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:46.452 [2024-11-28 08:13:28.490905] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:46.452 [2024-11-28 08:13:28.490910] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:46.452 [2024-11-28 08:13:28.490920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS 
(00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:14:46.452 [2024-11-28 08:13:28.490928] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify ns iocs specific (timeout 30000 ms) 00:14:46.452 [2024-11-28 08:13:28.490935] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set supported log pages (timeout 30000 ms) 00:14:46.452 [2024-11-28 08:13:28.490941] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set supported features (timeout 30000 ms) 00:14:46.452 [2024-11-28 08:13:28.490950] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set host behavior support feature (timeout 30000 ms) 00:14:46.452 [2024-11-28 08:13:28.490955] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set doorbell buffer config (timeout 30000 ms) 00:14:46.452 [2024-11-28 08:13:28.490960] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set host ID (timeout 30000 ms) 00:14:46.452 [2024-11-28 08:13:28.490964] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] NVMe-oF transport - not sending Set Features - Host ID 00:14:46.452 [2024-11-28 08:13:28.490968] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to transport ready (timeout 30000 ms) 00:14:46.452 [2024-11-28 08:13:28.490974] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to ready (no timeout) 00:14:46.452 [2024-11-28 08:13:28.490990] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:14:46.452 [2024-11-28 08:13:28.490999] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:14:46.452 [2024-11-28 08:13:28.491009] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:14:46.452 [2024-11-28 08:13:28.491017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:14:46.452 [2024-11-28 08:13:28.491027] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:14:46.452 [2024-11-28 08:13:28.491037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:14:46.452 [2024-11-28 08:13:28.491047] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:14:46.452 [2024-11-28 08:13:28.491056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:14:46.452 [2024-11-28 08:13:28.491067] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:14:46.452 [2024-11-28 08:13:28.491071] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:14:46.452 [2024-11-28 08:13:28.491075] nvme_pcie_common.c:1275:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:14:46.452 [2024-11-28 08:13:28.491078] nvme_pcie_common.c:1291:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:14:46.452 [2024-11-28 08:13:28.491081] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:14:46.452 [2024-11-28 08:13:28.491087] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 
0x2000002f7000 00:14:46.452 [2024-11-28 08:13:28.491093] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:14:46.452 [2024-11-28 08:13:28.491097] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:14:46.452 [2024-11-28 08:13:28.491100] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:46.452 [2024-11-28 08:13:28.491106] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:14:46.452 [2024-11-28 08:13:28.491111] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:14:46.452 [2024-11-28 08:13:28.491116] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:46.452 [2024-11-28 08:13:28.491119] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:46.452 [2024-11-28 08:13:28.491124] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:46.452 [2024-11-28 08:13:28.491130] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:14:46.452 [2024-11-28 08:13:28.491134] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:14:46.452 [2024-11-28 08:13:28.491137] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:46.452 [2024-11-28 08:13:28.491143] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:14:46.452 [2024-11-28 08:13:28.491149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 
sqhd:0010 p:1 m:0 dnr:0 00:14:46.452 [2024-11-28 08:13:28.491162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:14:46.452 [2024-11-28 08:13:28.491172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:14:46.452 [2024-11-28 08:13:28.491179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:14:46.452 ===================================================== 00:14:46.452 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:14:46.452 ===================================================== 00:14:46.452 Controller Capabilities/Features 00:14:46.452 ================================ 00:14:46.452 Vendor ID: 4e58 00:14:46.452 Subsystem Vendor ID: 4e58 00:14:46.452 Serial Number: SPDK1 00:14:46.452 Model Number: SPDK bdev Controller 00:14:46.452 Firmware Version: 25.01 00:14:46.452 Recommended Arb Burst: 6 00:14:46.452 IEEE OUI Identifier: 8d 6b 50 00:14:46.452 Multi-path I/O 00:14:46.452 May have multiple subsystem ports: Yes 00:14:46.452 May have multiple controllers: Yes 00:14:46.452 Associated with SR-IOV VF: No 00:14:46.452 Max Data Transfer Size: 131072 00:14:46.452 Max Number of Namespaces: 32 00:14:46.452 Max Number of I/O Queues: 127 00:14:46.452 NVMe Specification Version (VS): 1.3 00:14:46.452 NVMe Specification Version (Identify): 1.3 00:14:46.452 Maximum Queue Entries: 256 00:14:46.452 Contiguous Queues Required: Yes 00:14:46.452 Arbitration Mechanisms Supported 00:14:46.452 Weighted Round Robin: Not Supported 00:14:46.452 Vendor Specific: Not Supported 00:14:46.452 Reset Timeout: 15000 ms 00:14:46.452 Doorbell Stride: 4 bytes 00:14:46.452 NVM Subsystem Reset: Not Supported 00:14:46.452 Command Sets Supported 00:14:46.452 NVM Command Set: Supported 00:14:46.452 Boot Partition: Not Supported 00:14:46.452 Memory 
Page Size Minimum: 4096 bytes 00:14:46.452 Memory Page Size Maximum: 4096 bytes 00:14:46.452 Persistent Memory Region: Not Supported 00:14:46.452 Optional Asynchronous Events Supported 00:14:46.452 Namespace Attribute Notices: Supported 00:14:46.452 Firmware Activation Notices: Not Supported 00:14:46.452 ANA Change Notices: Not Supported 00:14:46.452 PLE Aggregate Log Change Notices: Not Supported 00:14:46.452 LBA Status Info Alert Notices: Not Supported 00:14:46.452 EGE Aggregate Log Change Notices: Not Supported 00:14:46.452 Normal NVM Subsystem Shutdown event: Not Supported 00:14:46.452 Zone Descriptor Change Notices: Not Supported 00:14:46.452 Discovery Log Change Notices: Not Supported 00:14:46.452 Controller Attributes 00:14:46.452 128-bit Host Identifier: Supported 00:14:46.452 Non-Operational Permissive Mode: Not Supported 00:14:46.452 NVM Sets: Not Supported 00:14:46.452 Read Recovery Levels: Not Supported 00:14:46.452 Endurance Groups: Not Supported 00:14:46.452 Predictable Latency Mode: Not Supported 00:14:46.452 Traffic Based Keep ALive: Not Supported 00:14:46.452 Namespace Granularity: Not Supported 00:14:46.452 SQ Associations: Not Supported 00:14:46.452 UUID List: Not Supported 00:14:46.452 Multi-Domain Subsystem: Not Supported 00:14:46.452 Fixed Capacity Management: Not Supported 00:14:46.452 Variable Capacity Management: Not Supported 00:14:46.452 Delete Endurance Group: Not Supported 00:14:46.452 Delete NVM Set: Not Supported 00:14:46.452 Extended LBA Formats Supported: Not Supported 00:14:46.452 Flexible Data Placement Supported: Not Supported 00:14:46.452 00:14:46.452 Controller Memory Buffer Support 00:14:46.452 ================================ 00:14:46.452 Supported: No 00:14:46.452 00:14:46.452 Persistent Memory Region Support 00:14:46.452 ================================ 00:14:46.452 Supported: No 00:14:46.452 00:14:46.452 Admin Command Set Attributes 00:14:46.452 ============================ 00:14:46.452 Security Send/Receive: Not Supported 
00:14:46.452 Format NVM: Not Supported 00:14:46.452 Firmware Activate/Download: Not Supported 00:14:46.453 Namespace Management: Not Supported 00:14:46.453 Device Self-Test: Not Supported 00:14:46.453 Directives: Not Supported 00:14:46.453 NVMe-MI: Not Supported 00:14:46.453 Virtualization Management: Not Supported 00:14:46.453 Doorbell Buffer Config: Not Supported 00:14:46.453 Get LBA Status Capability: Not Supported 00:14:46.453 Command & Feature Lockdown Capability: Not Supported 00:14:46.453 Abort Command Limit: 4 00:14:46.453 Async Event Request Limit: 4 00:14:46.453 Number of Firmware Slots: N/A 00:14:46.453 Firmware Slot 1 Read-Only: N/A 00:14:46.453 Firmware Activation Without Reset: N/A 00:14:46.453 Multiple Update Detection Support: N/A 00:14:46.453 Firmware Update Granularity: No Information Provided 00:14:46.453 Per-Namespace SMART Log: No 00:14:46.453 Asymmetric Namespace Access Log Page: Not Supported 00:14:46.453 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:14:46.453 Command Effects Log Page: Supported 00:14:46.453 Get Log Page Extended Data: Supported 00:14:46.453 Telemetry Log Pages: Not Supported 00:14:46.453 Persistent Event Log Pages: Not Supported 00:14:46.453 Supported Log Pages Log Page: May Support 00:14:46.453 Commands Supported & Effects Log Page: Not Supported 00:14:46.453 Feature Identifiers & Effects Log Page:May Support 00:14:46.453 NVMe-MI Commands & Effects Log Page: May Support 00:14:46.453 Data Area 4 for Telemetry Log: Not Supported 00:14:46.453 Error Log Page Entries Supported: 128 00:14:46.453 Keep Alive: Supported 00:14:46.453 Keep Alive Granularity: 10000 ms 00:14:46.453 00:14:46.453 NVM Command Set Attributes 00:14:46.453 ========================== 00:14:46.453 Submission Queue Entry Size 00:14:46.453 Max: 64 00:14:46.453 Min: 64 00:14:46.453 Completion Queue Entry Size 00:14:46.453 Max: 16 00:14:46.453 Min: 16 00:14:46.453 Number of Namespaces: 32 00:14:46.453 Compare Command: Supported 00:14:46.453 Write Uncorrectable 
Command: Not Supported 00:14:46.453 Dataset Management Command: Supported 00:14:46.453 Write Zeroes Command: Supported 00:14:46.453 Set Features Save Field: Not Supported 00:14:46.453 Reservations: Not Supported 00:14:46.453 Timestamp: Not Supported 00:14:46.453 Copy: Supported 00:14:46.453 Volatile Write Cache: Present 00:14:46.453 Atomic Write Unit (Normal): 1 00:14:46.453 Atomic Write Unit (PFail): 1 00:14:46.453 Atomic Compare & Write Unit: 1 00:14:46.453 Fused Compare & Write: Supported 00:14:46.453 Scatter-Gather List 00:14:46.453 SGL Command Set: Supported (Dword aligned) 00:14:46.453 SGL Keyed: Not Supported 00:14:46.453 SGL Bit Bucket Descriptor: Not Supported 00:14:46.453 SGL Metadata Pointer: Not Supported 00:14:46.453 Oversized SGL: Not Supported 00:14:46.453 SGL Metadata Address: Not Supported 00:14:46.453 SGL Offset: Not Supported 00:14:46.453 Transport SGL Data Block: Not Supported 00:14:46.453 Replay Protected Memory Block: Not Supported 00:14:46.453 00:14:46.453 Firmware Slot Information 00:14:46.453 ========================= 00:14:46.453 Active slot: 1 00:14:46.453 Slot 1 Firmware Revision: 25.01 00:14:46.453 00:14:46.453 00:14:46.453 Commands Supported and Effects 00:14:46.453 ============================== 00:14:46.453 Admin Commands 00:14:46.453 -------------- 00:14:46.453 Get Log Page (02h): Supported 00:14:46.453 Identify (06h): Supported 00:14:46.453 Abort (08h): Supported 00:14:46.453 Set Features (09h): Supported 00:14:46.453 Get Features (0Ah): Supported 00:14:46.453 Asynchronous Event Request (0Ch): Supported 00:14:46.453 Keep Alive (18h): Supported 00:14:46.453 I/O Commands 00:14:46.453 ------------ 00:14:46.453 Flush (00h): Supported LBA-Change 00:14:46.453 Write (01h): Supported LBA-Change 00:14:46.453 Read (02h): Supported 00:14:46.453 Compare (05h): Supported 00:14:46.453 Write Zeroes (08h): Supported LBA-Change 00:14:46.453 Dataset Management (09h): Supported LBA-Change 00:14:46.453 Copy (19h): Supported LBA-Change 00:14:46.453 
00:14:46.453 Error Log 00:14:46.453 ========= 00:14:46.453 00:14:46.453 Arbitration 00:14:46.453 =========== 00:14:46.453 Arbitration Burst: 1 00:14:46.453 00:14:46.453 Power Management 00:14:46.453 ================ 00:14:46.453 Number of Power States: 1 00:14:46.453 Current Power State: Power State #0 00:14:46.453 Power State #0: 00:14:46.453 Max Power: 0.00 W 00:14:46.453 Non-Operational State: Operational 00:14:46.453 Entry Latency: Not Reported 00:14:46.453 Exit Latency: Not Reported 00:14:46.453 Relative Read Throughput: 0 00:14:46.453 Relative Read Latency: 0 00:14:46.453 Relative Write Throughput: 0 00:14:46.453 Relative Write Latency: 0 00:14:46.453 Idle Power: Not Reported 00:14:46.453 Active Power: Not Reported 00:14:46.453 Non-Operational Permissive Mode: Not Supported 00:14:46.453 00:14:46.453 Health Information 00:14:46.453 ================== 00:14:46.453 Critical Warnings: 00:14:46.453 Available Spare Space: OK 00:14:46.453 Temperature: OK 00:14:46.453 Device Reliability: OK 00:14:46.453 Read Only: No 00:14:46.453 Volatile Memory Backup: OK 00:14:46.453 Current Temperature: 0 Kelvin (-273 Celsius) 00:14:46.453 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:14:46.453 Available Spare: 0% 00:14:46.453 Available Spare Threshold: 0% 00:14:46.453 Life Percentage Used: 0% 
[2024-11-28 08:13:28.491264] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:14:46.453 [2024-11-28 08:13:28.491274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:14:46.453 [2024-11-28 08:13:28.491299] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Prepare to destruct SSD 00:14:46.453 [2024-11-28 08:13:28.491308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:46.453 [2024-11-28 08:13:28.491313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:46.453 [2024-11-28 08:13:28.491319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:46.453 [2024-11-28 08:13:28.491325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:46.453 [2024-11-28 08:13:28.494954] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:14:46.453 [2024-11-28 08:13:28.494966] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:14:46.453 [2024-11-28 08:13:28.495449] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:46.453 [2024-11-28 08:13:28.495505] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] RTD3E = 0 us 00:14:46.453 [2024-11-28 08:13:28.495511] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] shutdown timeout = 10000 ms 00:14:46.453 [2024-11-28 08:13:28.496457] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:14:46.453 [2024-11-28 08:13:28.496468] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] shutdown complete in 0 milliseconds 00:14:46.453 [2024-11-28 08:13:28.496524] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:14:46.453 [2024-11-28 08:13:28.498490] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:14:46.453 
00:14:46.453 Data Units Read: 0 00:14:46.453 Data Units Written: 0 00:14:46.453 Host Read Commands: 0 00:14:46.453 Host Write Commands: 0 00:14:46.453 Controller Busy Time: 0 minutes 00:14:46.453 Power Cycles: 0 00:14:46.453 Power On Hours: 0 hours 00:14:46.453 Unsafe Shutdowns: 0 00:14:46.453 Unrecoverable Media Errors: 0 00:14:46.453 Lifetime Error Log Entries: 0 00:14:46.453 Warning Temperature Time: 0 minutes 00:14:46.453 Critical Temperature Time: 0 minutes 00:14:46.453 00:14:46.453 Number of Queues 00:14:46.453 ================ 00:14:46.453 Number of I/O Submission Queues: 127 00:14:46.453 Number of I/O Completion Queues: 127 00:14:46.453 00:14:46.453 Active Namespaces 00:14:46.453 ================= 00:14:46.453 Namespace ID:1 00:14:46.453 Error Recovery Timeout: Unlimited 00:14:46.453 Command Set Identifier: NVM (00h) 00:14:46.453 Deallocate: Supported 00:14:46.453 Deallocated/Unwritten Error: Not Supported 00:14:46.453 Deallocated Read Value: Unknown 00:14:46.453 Deallocate in Write Zeroes: Not Supported 00:14:46.453 Deallocated Guard Field: 0xFFFF 00:14:46.453 Flush: Supported 00:14:46.453 Reservation: Supported 00:14:46.453 Namespace Sharing Capabilities: Multiple Controllers 00:14:46.454 Size (in LBAs): 131072 (0GiB) 00:14:46.454 Capacity (in LBAs): 131072 (0GiB) 00:14:46.454 Utilization (in LBAs): 131072 (0GiB) 00:14:46.454 NGUID: 0145181954CB4F079C76E02943BB9D3C 00:14:46.454 UUID: 01451819-54cb-4f07-9c76-e02943bb9d3c 00:14:46.454 Thin Provisioning: Not Supported 00:14:46.454 Per-NS Atomic Units: Yes 00:14:46.454 Atomic Boundary Size (Normal): 0 00:14:46.454 Atomic Boundary Size (PFail): 0 00:14:46.454 Atomic Boundary Offset: 0 00:14:46.454 Maximum Single Source Range Length: 65535 00:14:46.454 Maximum Copy Length: 65535 00:14:46.454 Maximum Source Range Count: 1 00:14:46.454 NGUID/EUI64 Never Reused: No 00:14:46.454 Namespace Write Protected: No 00:14:46.454 Number of LBA Formats: 1 00:14:46.454 Current LBA Format: LBA Format #00 00:14:46.454 LBA 
Format #00: Data Size: 512 Metadata Size: 0 00:14:46.454 00:14:46.454 08:13:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:14:46.713 [2024-11-28 08:13:28.734785] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:51.989 Initializing NVMe Controllers 00:14:51.989 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:14:51.989 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:14:51.989 Initialization complete. Launching workers. 00:14:51.989 ======================================================== 00:14:51.989 Latency(us) 00:14:51.989 Device Information : IOPS MiB/s Average min max 00:14:51.989 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 39818.55 155.54 3214.42 1005.50 10570.15 00:14:51.989 ======================================================== 00:14:51.989 Total : 39818.55 155.54 3214.42 1005.50 10570.15 00:14:51.989 00:14:51.989 [2024-11-28 08:13:33.759904] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:51.989 08:13:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:14:51.989 [2024-11-28 08:13:33.998989] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:57.263 Initializing NVMe Controllers 00:14:57.263 Attached to NVMe over Fabrics controller at 
/var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:14:57.263 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:14:57.263 Initialization complete. Launching workers. 00:14:57.263 ======================================================== 00:14:57.263 Latency(us) 00:14:57.263 Device Information : IOPS MiB/s Average min max 00:14:57.263 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 16038.40 62.65 7987.72 6017.35 15521.13 00:14:57.263 ======================================================== 00:14:57.263 Total : 16038.40 62.65 7987.72 6017.35 15521.13 00:14:57.263 00:14:57.263 [2024-11-28 08:13:39.036149] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:57.263 08:13:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:14:57.263 [2024-11-28 08:13:39.241110] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:02.535 [2024-11-28 08:13:44.320298] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:02.535 Initializing NVMe Controllers 00:15:02.535 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:02.535 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:02.535 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:15:02.535 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:15:02.535 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:15:02.535 Initialization complete. 
Launching workers. 00:15:02.535 Starting thread on core 2 00:15:02.535 Starting thread on core 3 00:15:02.535 Starting thread on core 1 00:15:02.535 08:13:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:15:02.535 [2024-11-28 08:13:44.618416] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:05.825 [2024-11-28 08:13:47.670970] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:05.825 Initializing NVMe Controllers 00:15:05.825 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:15:05.825 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:15:05.825 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:15:05.825 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:15:05.825 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:15:05.825 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:15:05.825 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:15:05.825 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:15:05.825 Initialization complete. Launching workers. 
00:15:05.825 Starting thread on core 1 with urgent priority queue 00:15:05.825 Starting thread on core 2 with urgent priority queue 00:15:05.825 Starting thread on core 3 with urgent priority queue 00:15:05.825 Starting thread on core 0 with urgent priority queue 00:15:05.825 SPDK bdev Controller (SPDK1 ) core 0: 8654.33 IO/s 11.55 secs/100000 ios 00:15:05.825 SPDK bdev Controller (SPDK1 ) core 1: 7988.33 IO/s 12.52 secs/100000 ios 00:15:05.825 SPDK bdev Controller (SPDK1 ) core 2: 8020.33 IO/s 12.47 secs/100000 ios 00:15:05.825 SPDK bdev Controller (SPDK1 ) core 3: 8385.67 IO/s 11.93 secs/100000 ios 00:15:05.825 ======================================================== 00:15:05.825 00:15:05.825 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:15:05.825 [2024-11-28 08:13:47.954362] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:05.825 Initializing NVMe Controllers 00:15:05.825 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:15:05.825 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:15:05.825 Namespace ID: 1 size: 0GB 00:15:05.825 Initialization complete. 00:15:05.825 INFO: using host memory buffer for IO 00:15:05.825 Hello world! 
00:15:05.825 [2024-11-28 08:13:47.988623] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:05.825 08:13:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:15:06.084 [2024-11-28 08:13:48.271370] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:07.461 Initializing NVMe Controllers 00:15:07.461 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:15:07.461 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:15:07.461 Initialization complete. Launching workers. 00:15:07.461 submit (in ns) avg, min, max = 6202.8, 3206.1, 5996845.2 00:15:07.461 complete (in ns) avg, min, max = 21314.9, 1781.7, 6990045.2 00:15:07.461 00:15:07.461 Submit histogram 00:15:07.461 ================ 00:15:07.461 Range in us Cumulative Count 00:15:07.461 3.200 - 3.214: 0.0062% ( 1) 00:15:07.461 3.214 - 3.228: 0.0371% ( 5) 00:15:07.461 3.228 - 3.242: 0.0928% ( 9) 00:15:07.461 3.242 - 3.256: 0.1855% ( 15) 00:15:07.461 3.256 - 3.270: 0.5565% ( 60) 00:15:07.461 3.270 - 3.283: 1.1625% ( 98) 00:15:07.461 3.283 - 3.297: 1.7376% ( 93) 00:15:07.461 3.297 - 3.311: 2.4610% ( 117) 00:15:07.461 3.311 - 3.325: 3.5617% ( 178) 00:15:07.461 3.325 - 3.339: 5.2127% ( 267) 00:15:07.461 3.339 - 3.353: 8.9537% ( 605) 00:15:07.461 3.353 - 3.367: 14.7848% ( 943) 00:15:07.461 3.367 - 3.381: 20.6592% ( 950) 00:15:07.461 3.381 - 3.395: 26.6448% ( 968) 00:15:07.461 3.395 - 3.409: 32.9829% ( 1025) 00:15:07.461 3.409 - 3.423: 38.7893% ( 939) 00:15:07.461 3.423 - 3.437: 44.3606% ( 901) 00:15:07.461 3.437 - 3.450: 49.7341% ( 869) 00:15:07.461 3.450 - 3.464: 53.6977% ( 641) 00:15:07.461 3.464 - 3.478: 57.8778% ( 676) 00:15:07.461 3.478 - 3.492: 63.7089% ( 943) 00:15:07.461 
3.492 - 3.506: 69.7069% ( 970) 00:15:07.461 3.506 - 3.520: 73.5778% ( 626) 00:15:07.461 3.520 - 3.534: 77.6094% ( 652) 00:15:07.461 3.534 - 3.548: 81.9998% ( 710) 00:15:07.461 3.548 - 3.562: 84.8318% ( 458) 00:15:07.461 3.562 - 3.590: 87.6948% ( 463) 00:15:07.461 3.590 - 3.617: 88.4368% ( 120) 00:15:07.461 3.617 - 3.645: 89.3767% ( 152) 00:15:07.461 3.645 - 3.673: 91.0030% ( 263) 00:15:07.461 3.673 - 3.701: 92.6911% ( 273) 00:15:07.461 3.701 - 3.729: 94.2617% ( 254) 00:15:07.461 3.729 - 3.757: 95.9622% ( 275) 00:15:07.461 3.757 - 3.784: 97.4029% ( 233) 00:15:07.461 3.784 - 3.812: 98.2872% ( 143) 00:15:07.461 3.812 - 3.840: 99.0045% ( 116) 00:15:07.461 3.840 - 3.868: 99.3569% ( 57) 00:15:07.461 3.868 - 3.896: 99.5486% ( 31) 00:15:07.461 3.896 - 3.923: 99.6104% ( 10) 00:15:07.461 3.923 - 3.951: 99.6352% ( 4) 00:15:07.461 3.951 - 3.979: 99.6475% ( 2) 00:15:07.461 4.007 - 4.035: 99.6537% ( 1) 00:15:07.461 4.090 - 4.118: 99.6599% ( 1) 00:15:07.461 5.426 - 5.454: 99.6723% ( 2) 00:15:07.461 5.482 - 5.510: 99.6846% ( 2) 00:15:07.461 5.565 - 5.593: 99.6908% ( 1) 00:15:07.461 5.593 - 5.621: 99.6970% ( 1) 00:15:07.461 5.621 - 5.649: 99.7032% ( 1) 00:15:07.461 5.649 - 5.677: 99.7156% ( 2) 00:15:07.461 5.760 - 5.788: 99.7403% ( 4) 00:15:07.461 5.871 - 5.899: 99.7465% ( 1) 00:15:07.461 5.899 - 5.927: 99.7588% ( 2) 00:15:07.461 6.150 - 6.177: 99.7650% ( 1) 00:15:07.461 6.205 - 6.233: 99.7712% ( 1) 00:15:07.461 6.233 - 6.261: 99.7774% ( 1) 00:15:07.461 6.317 - 6.344: 99.7836% ( 1) 00:15:07.461 6.428 - 6.456: 99.7898% ( 1) 00:15:07.461 6.511 - 6.539: 99.8021% ( 2) 00:15:07.461 6.567 - 6.595: 99.8083% ( 1) 00:15:07.461 6.845 - 6.873: 99.8145% ( 1) 00:15:07.461 6.873 - 6.901: 99.8207% ( 1) 00:15:07.461 6.901 - 6.929: 99.8269% ( 1) 00:15:07.461 7.012 - 7.040: 99.8330% ( 1) 00:15:07.461 7.068 - 7.096: 99.8392% ( 1) 00:15:07.461 7.123 - 7.179: 99.8516% ( 2) 00:15:07.461 7.179 - 7.235: 99.8578% ( 1) 00:15:07.461 7.235 - 7.290: 99.8640% ( 1) 00:15:07.461 7.513 - 7.569: 99.8701% ( 1) 
00:15:07.461 7.624 - 7.680: 99.8763% ( 1) 00:15:07.461 7.736 - 7.791: 99.8825% ( 1) 00:15:07.461 7.791 - 7.847: 99.8887% ( 1) 00:15:07.461 7.903 - 7.958: 99.8949% ( 1) 00:15:07.461 8.237 - 8.292: 99.9011% ( 1) 00:15:07.461 12.689 - 12.744: 99.9072% ( 1) 00:15:07.461 13.802 - 13.857: 99.9134% ( 1) 00:15:07.461 14.136 - 14.191: 99.9196% ( 1) 00:15:07.461 18.922 - 19.033: 99.9258% ( 1) 00:15:07.461 19.033 - 19.144: 99.9320% ( 1) 00:15:07.461 2023.068 - 2037.315: 99.9382% ( 1) 00:15:07.461 [2024-11-28 08:13:49.296394] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:07.461 3989.148 - 4017.642: 99.9938% ( 9) 00:15:07.461 5983.722 - 6012.216: 100.0000% ( 1) 00:15:07.461 00:15:07.461 Complete histogram 00:15:07.461 ================== 00:15:07.461 Range in us Cumulative Count 00:15:07.461 1.781 - 1.795: 0.0866% ( 14) 00:15:07.461 1.795 - 1.809: 0.1917% ( 17) 00:15:07.461 1.809 - 1.823: 1.2491% ( 171) 00:15:07.462 1.823 - 1.837: 6.0599% ( 778) 00:15:07.462 1.837 - 1.850: 8.9228% ( 463) 00:15:07.462 1.850 - 1.864: 10.7346% ( 293) 00:15:07.462 1.864 - 1.878: 11.8662% ( 183) 00:15:07.462 1.878 - 1.892: 39.1355% ( 4410) 00:15:07.462 1.892 - 1.906: 81.3690% ( 6830) 00:15:07.462 1.906 - 1.920: 91.7326% ( 1676) 00:15:07.462 1.920 - 1.934: 94.7316% ( 485) 00:15:07.462 1.934 - 1.948: 95.4118% ( 110) 00:15:07.462 1.948 - 1.962: 96.5805% ( 189) 00:15:07.462 1.962 - 1.976: 98.2933% ( 277) 00:15:07.462 1.976 - 1.990: 98.9488% ( 106) 00:15:07.462 1.990 - 2.003: 99.1838% ( 38) 00:15:07.462 2.003 - 2.017: 99.2765% ( 15) 00:15:07.462 2.017 - 2.031: 99.2951% ( 3) 00:15:07.462 2.031 - 2.045: 99.3074% ( 2) 00:15:07.462 2.059 - 2.073: 99.3136% ( 1) 00:15:07.462 2.101 - 2.115: 99.3198% ( 1) 00:15:07.462 2.268 - 2.282: 99.3260% ( 1) 00:15:07.462 2.323 - 2.337: 99.3322% ( 1) 00:15:07.462 2.337 - 2.351: 99.3445% ( 2) 00:15:07.462 2.365 - 2.379: 99.3569% ( 2) 00:15:07.462 2.421 - 2.435: 99.3631% ( 1) 00:15:07.462 3.492 - 3.506: 99.3693% ( 
1) 00:15:07.462 3.548 - 3.562: 99.3755% ( 1) 00:15:07.462 3.562 - 3.590: 99.3816% ( 1) 00:15:07.462 3.757 - 3.784: 99.3878% ( 1) 00:15:07.462 3.840 - 3.868: 99.3940% ( 1) 00:15:07.462 4.090 - 4.118: 99.4002% ( 1) 00:15:07.462 4.174 - 4.202: 99.4064% ( 1) 00:15:07.462 4.202 - 4.230: 99.4126% ( 1) 00:15:07.462 4.285 - 4.313: 99.4187% ( 1) 00:15:07.462 4.369 - 4.397: 99.4311% ( 2) 00:15:07.462 4.452 - 4.480: 99.4435% ( 2) 00:15:07.462 4.675 - 4.703: 99.4497% ( 1) 00:15:07.462 4.703 - 4.730: 99.4558% ( 1) 00:15:07.462 4.925 - 4.953: 99.4620% ( 1) 00:15:07.462 5.148 - 5.176: 99.4682% ( 1) 00:15:07.462 5.649 - 5.677: 99.4744% ( 1) 00:15:07.462 5.677 - 5.704: 99.4806% ( 1) 00:15:07.462 5.704 - 5.732: 99.4868% ( 1) 00:15:07.462 5.816 - 5.843: 99.4930% ( 1) 00:15:07.462 5.983 - 6.010: 99.4991% ( 1) 00:15:07.462 7.235 - 7.290: 99.5053% ( 1) 00:15:07.462 12.243 - 12.299: 99.5115% ( 1) 00:15:07.462 12.299 - 12.355: 99.5177% ( 1) 00:15:07.462 1018.657 - 1025.781: 99.5239% ( 1) 00:15:07.462 2421.983 - 2436.230: 99.5301% ( 1) 00:15:07.462 3989.148 - 4017.642: 99.9814% ( 73) 00:15:07.462 4986.435 - 5014.929: 99.9876% ( 1) 00:15:07.462 6981.009 - 7009.503: 100.0000% ( 2) 00:15:07.462 00:15:07.462 08:13:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:15:07.462 08:13:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:15:07.462 08:13:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:15:07.462 08:13:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:15:07.462 08:13:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:15:07.462 [ 00:15:07.462 { 
00:15:07.462 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:07.462 "subtype": "Discovery", 00:15:07.462 "listen_addresses": [], 00:15:07.462 "allow_any_host": true, 00:15:07.462 "hosts": [] 00:15:07.462 }, 00:15:07.462 { 00:15:07.462 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:15:07.462 "subtype": "NVMe", 00:15:07.462 "listen_addresses": [ 00:15:07.462 { 00:15:07.462 "trtype": "VFIOUSER", 00:15:07.462 "adrfam": "IPv4", 00:15:07.462 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:15:07.462 "trsvcid": "0" 00:15:07.462 } 00:15:07.462 ], 00:15:07.462 "allow_any_host": true, 00:15:07.462 "hosts": [], 00:15:07.462 "serial_number": "SPDK1", 00:15:07.462 "model_number": "SPDK bdev Controller", 00:15:07.462 "max_namespaces": 32, 00:15:07.462 "min_cntlid": 1, 00:15:07.462 "max_cntlid": 65519, 00:15:07.462 "namespaces": [ 00:15:07.462 { 00:15:07.462 "nsid": 1, 00:15:07.462 "bdev_name": "Malloc1", 00:15:07.462 "name": "Malloc1", 00:15:07.462 "nguid": "0145181954CB4F079C76E02943BB9D3C", 00:15:07.462 "uuid": "01451819-54cb-4f07-9c76-e02943bb9d3c" 00:15:07.462 } 00:15:07.462 ] 00:15:07.462 }, 00:15:07.462 { 00:15:07.462 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:15:07.462 "subtype": "NVMe", 00:15:07.462 "listen_addresses": [ 00:15:07.462 { 00:15:07.462 "trtype": "VFIOUSER", 00:15:07.462 "adrfam": "IPv4", 00:15:07.462 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:15:07.462 "trsvcid": "0" 00:15:07.462 } 00:15:07.462 ], 00:15:07.462 "allow_any_host": true, 00:15:07.462 "hosts": [], 00:15:07.462 "serial_number": "SPDK2", 00:15:07.462 "model_number": "SPDK bdev Controller", 00:15:07.462 "max_namespaces": 32, 00:15:07.462 "min_cntlid": 1, 00:15:07.462 "max_cntlid": 65519, 00:15:07.462 "namespaces": [ 00:15:07.462 { 00:15:07.462 "nsid": 1, 00:15:07.462 "bdev_name": "Malloc2", 00:15:07.462 "name": "Malloc2", 00:15:07.462 "nguid": "DC683C294F4D49E19FDF3B45E0172C26", 00:15:07.462 "uuid": "dc683c29-4f4d-49e1-9fdf-3b45e0172c26" 00:15:07.462 } 00:15:07.462 ] 00:15:07.462 } 
00:15:07.462 ] 00:15:07.462 08:13:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:15:07.462 08:13:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:15:07.462 08:13:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=1328204 00:15:07.462 08:13:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:15:07.462 08:13:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1269 -- # local i=0 00:15:07.462 08:13:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:15:07.462 08:13:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:15:07.462 08:13:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1280 -- # return 0 00:15:07.462 08:13:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:15:07.462 08:13:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:15:07.462 [2024-11-28 08:13:49.691007] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:07.722 Malloc3 00:15:07.722 08:13:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:15:07.722 [2024-11-28 08:13:49.950042] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:07.722 08:13:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:15:07.982 Asynchronous Event Request test 00:15:07.982 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:15:07.982 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:15:07.982 Registering asynchronous event callbacks... 00:15:07.982 Starting namespace attribute notice tests for all controllers... 00:15:07.982 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:15:07.982 aer_cb - Changed Namespace 00:15:07.982 Cleaning up... 
00:15:07.982 [ 00:15:07.982 { 00:15:07.982 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:07.982 "subtype": "Discovery", 00:15:07.982 "listen_addresses": [], 00:15:07.982 "allow_any_host": true, 00:15:07.982 "hosts": [] 00:15:07.982 }, 00:15:07.982 { 00:15:07.982 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:15:07.982 "subtype": "NVMe", 00:15:07.982 "listen_addresses": [ 00:15:07.982 { 00:15:07.982 "trtype": "VFIOUSER", 00:15:07.982 "adrfam": "IPv4", 00:15:07.982 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:15:07.982 "trsvcid": "0" 00:15:07.982 } 00:15:07.982 ], 00:15:07.982 "allow_any_host": true, 00:15:07.982 "hosts": [], 00:15:07.982 "serial_number": "SPDK1", 00:15:07.982 "model_number": "SPDK bdev Controller", 00:15:07.982 "max_namespaces": 32, 00:15:07.982 "min_cntlid": 1, 00:15:07.982 "max_cntlid": 65519, 00:15:07.982 "namespaces": [ 00:15:07.982 { 00:15:07.982 "nsid": 1, 00:15:07.982 "bdev_name": "Malloc1", 00:15:07.982 "name": "Malloc1", 00:15:07.982 "nguid": "0145181954CB4F079C76E02943BB9D3C", 00:15:07.982 "uuid": "01451819-54cb-4f07-9c76-e02943bb9d3c" 00:15:07.982 }, 00:15:07.982 { 00:15:07.982 "nsid": 2, 00:15:07.982 "bdev_name": "Malloc3", 00:15:07.982 "name": "Malloc3", 00:15:07.982 "nguid": "6D456BCBBABC47DEB2478A64A3273629", 00:15:07.982 "uuid": "6d456bcb-babc-47de-b247-8a64a3273629" 00:15:07.982 } 00:15:07.982 ] 00:15:07.982 }, 00:15:07.982 { 00:15:07.982 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:15:07.982 "subtype": "NVMe", 00:15:07.982 "listen_addresses": [ 00:15:07.982 { 00:15:07.982 "trtype": "VFIOUSER", 00:15:07.982 "adrfam": "IPv4", 00:15:07.982 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:15:07.982 "trsvcid": "0" 00:15:07.982 } 00:15:07.982 ], 00:15:07.982 "allow_any_host": true, 00:15:07.982 "hosts": [], 00:15:07.982 "serial_number": "SPDK2", 00:15:07.982 "model_number": "SPDK bdev Controller", 00:15:07.982 "max_namespaces": 32, 00:15:07.982 "min_cntlid": 1, 00:15:07.982 "max_cntlid": 65519, 00:15:07.982 "namespaces": [ 
00:15:07.982 { 00:15:07.982 "nsid": 1, 00:15:07.982 "bdev_name": "Malloc2", 00:15:07.982 "name": "Malloc2", 00:15:07.982 "nguid": "DC683C294F4D49E19FDF3B45E0172C26", 00:15:07.982 "uuid": "dc683c29-4f4d-49e1-9fdf-3b45e0172c26" 00:15:07.982 } 00:15:07.982 ] 00:15:07.982 } 00:15:07.982 ] 00:15:07.982 08:13:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 1328204 00:15:07.982 08:13:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:07.982 08:13:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:15:07.982 08:13:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:15:07.982 08:13:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:15:07.982 [2024-11-28 08:13:50.204639] Starting SPDK v25.01-pre git sha1 27aaaa748 / DPDK 24.03.0 initialization... 
00:15:07.982 [2024-11-28 08:13:50.204671] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1328423 ] 00:15:07.982 [2024-11-28 08:13:50.246540] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:15:07.982 [2024-11-28 08:13:50.248785] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:15:07.982 [2024-11-28 08:13:50.248811] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f16988d7000 00:15:08.244 [2024-11-28 08:13:50.252952] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:08.244 [2024-11-28 08:13:50.253806] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:08.244 [2024-11-28 08:13:50.254810] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:08.244 [2024-11-28 08:13:50.255821] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:08.244 [2024-11-28 08:13:50.256825] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:08.244 [2024-11-28 08:13:50.257829] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:08.244 [2024-11-28 08:13:50.258836] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:08.244 
[2024-11-28 08:13:50.259850] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:08.244 [2024-11-28 08:13:50.260857] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:15:08.244 [2024-11-28 08:13:50.260867] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f16988cc000 00:15:08.244 [2024-11-28 08:13:50.261882] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:15:08.244 [2024-11-28 08:13:50.271875] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:15:08.244 [2024-11-28 08:13:50.271901] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to connect adminq (no timeout) 00:15:08.244 [2024-11-28 08:13:50.276970] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:15:08.244 [2024-11-28 08:13:50.277013] nvme_pcie_common.c: 159:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:15:08.244 [2024-11-28 08:13:50.277088] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for connect adminq (no timeout) 00:15:08.244 [2024-11-28 08:13:50.277103] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read vs (no timeout) 00:15:08.244 [2024-11-28 08:13:50.277108] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read vs wait for vs (no timeout) 00:15:08.244 [2024-11-28 08:13:50.277975] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: 
ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:15:08.244 [2024-11-28 08:13:50.277987] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read cap (no timeout) 00:15:08.244 [2024-11-28 08:13:50.277994] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read cap wait for cap (no timeout) 00:15:08.244 [2024-11-28 08:13:50.278984] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:15:08.244 [2024-11-28 08:13:50.278993] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to check en (no timeout) 00:15:08.244 [2024-11-28 08:13:50.279000] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to check en wait for cc (timeout 15000 ms) 00:15:08.244 [2024-11-28 08:13:50.280005] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:15:08.244 [2024-11-28 08:13:50.280013] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:15:08.244 [2024-11-28 08:13:50.281010] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:15:08.244 [2024-11-28 08:13:50.281019] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CC.EN = 0 && CSTS.RDY = 0 00:15:08.244 [2024-11-28 08:13:50.281024] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to controller is disabled (timeout 15000 ms) 00:15:08.244 [2024-11-28 08:13:50.281030] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:15:08.244 [2024-11-28 08:13:50.281138] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Setting CC.EN = 1 00:15:08.244 [2024-11-28 08:13:50.281142] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:15:08.244 [2024-11-28 08:13:50.281147] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:15:08.244 [2024-11-28 08:13:50.282016] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:15:08.244 [2024-11-28 08:13:50.283028] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:15:08.244 [2024-11-28 08:13:50.284040] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:15:08.244 [2024-11-28 08:13:50.285049] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:08.244 [2024-11-28 08:13:50.285091] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:15:08.244 [2024-11-28 08:13:50.286055] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:15:08.244 [2024-11-28 08:13:50.286064] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:15:08.244 [2024-11-28 08:13:50.286068] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: 
*DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to reset admin queue (timeout 30000 ms) 00:15:08.244 [2024-11-28 08:13:50.286085] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify controller (no timeout) 00:15:08.244 [2024-11-28 08:13:50.286093] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify controller (timeout 30000 ms) 00:15:08.244 [2024-11-28 08:13:50.286106] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:08.244 [2024-11-28 08:13:50.286111] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:08.244 [2024-11-28 08:13:50.286115] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:08.244 [2024-11-28 08:13:50.286126] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:08.244 [2024-11-28 08:13:50.293957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:15:08.244 [2024-11-28 08:13:50.293969] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] transport max_xfer_size 131072 00:15:08.244 [2024-11-28 08:13:50.293974] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] MDTS max_xfer_size 131072 00:15:08.244 [2024-11-28 08:13:50.293978] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CNTLID 0x0001 00:15:08.244 [2024-11-28 08:13:50.293982] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:15:08.244 [2024-11-28 08:13:50.293987] 
nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] transport max_sges 1 00:15:08.244 [2024-11-28 08:13:50.293991] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] fuses compare and write: 1 00:15:08.244 [2024-11-28 08:13:50.293995] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to configure AER (timeout 30000 ms) 00:15:08.244 [2024-11-28 08:13:50.294001] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for configure aer (timeout 30000 ms) 00:15:08.244 [2024-11-28 08:13:50.294011] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:15:08.244 [2024-11-28 08:13:50.301953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:15:08.245 [2024-11-28 08:13:50.301965] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:15:08.245 [2024-11-28 08:13:50.301973] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:15:08.245 [2024-11-28 08:13:50.301983] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:15:08.245 [2024-11-28 08:13:50.301991] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:15:08.245 [2024-11-28 08:13:50.301995] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set keep alive timeout (timeout 30000 ms) 00:15:08.245 [2024-11-28 08:13:50.302004] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: 
*DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:15:08.245 [2024-11-28 08:13:50.302014] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:15:08.245 [2024-11-28 08:13:50.309954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:15:08.245 [2024-11-28 08:13:50.309961] nvme_ctrlr.c:3047:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Controller adjusted keep alive timeout to 0 ms 00:15:08.245 [2024-11-28 08:13:50.309966] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify controller iocs specific (timeout 30000 ms) 00:15:08.245 [2024-11-28 08:13:50.309976] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set number of queues (timeout 30000 ms) 00:15:08.245 [2024-11-28 08:13:50.309982] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for set number of queues (timeout 30000 ms) 00:15:08.245 [2024-11-28 08:13:50.309990] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:15:08.245 [2024-11-28 08:13:50.317953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:15:08.245 [2024-11-28 08:13:50.318011] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify active ns (timeout 30000 ms) 00:15:08.245 [2024-11-28 08:13:50.318019] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify active ns (timeout 30000 ms) 00:15:08.245 
[2024-11-28 08:13:50.318026] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:15:08.245 [2024-11-28 08:13:50.318031] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:15:08.245 [2024-11-28 08:13:50.318034] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:08.245 [2024-11-28 08:13:50.318040] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:15:08.245 [2024-11-28 08:13:50.325951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:15:08.245 [2024-11-28 08:13:50.325966] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Namespace 1 was added 00:15:08.245 [2024-11-28 08:13:50.325977] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify ns (timeout 30000 ms) 00:15:08.245 [2024-11-28 08:13:50.325984] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify ns (timeout 30000 ms) 00:15:08.245 [2024-11-28 08:13:50.325990] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:08.245 [2024-11-28 08:13:50.325994] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:08.245 [2024-11-28 08:13:50.325997] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:08.245 [2024-11-28 08:13:50.326003] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:08.245 [2024-11-28 08:13:50.333954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS 
(00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:15:08.245 [2024-11-28 08:13:50.333966] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify namespace id descriptors (timeout 30000 ms) 00:15:08.245 [2024-11-28 08:13:50.333972] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:15:08.245 [2024-11-28 08:13:50.333979] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:08.245 [2024-11-28 08:13:50.333983] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:08.245 [2024-11-28 08:13:50.333986] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:08.245 [2024-11-28 08:13:50.333992] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:08.245 [2024-11-28 08:13:50.341953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:15:08.245 [2024-11-28 08:13:50.341964] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify ns iocs specific (timeout 30000 ms) 00:15:08.245 [2024-11-28 08:13:50.341971] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set supported log pages (timeout 30000 ms) 00:15:08.245 [2024-11-28 08:13:50.341978] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set supported features (timeout 30000 ms) 00:15:08.245 [2024-11-28 08:13:50.341983] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set host behavior 
support feature (timeout 30000 ms) 00:15:08.245 [2024-11-28 08:13:50.341988] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set doorbell buffer config (timeout 30000 ms) 00:15:08.245 [2024-11-28 08:13:50.341993] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set host ID (timeout 30000 ms) 00:15:08.245 [2024-11-28 08:13:50.341997] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] NVMe-oF transport - not sending Set Features - Host ID 00:15:08.245 [2024-11-28 08:13:50.342001] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to transport ready (timeout 30000 ms) 00:15:08.245 [2024-11-28 08:13:50.342006] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to ready (no timeout) 00:15:08.245 [2024-11-28 08:13:50.342023] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:15:08.245 [2024-11-28 08:13:50.349953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:15:08.245 [2024-11-28 08:13:50.349965] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:15:08.245 [2024-11-28 08:13:50.357952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:15:08.245 [2024-11-28 08:13:50.357965] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:15:08.245 [2024-11-28 08:13:50.365953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:15:08.245 [2024-11-28 
08:13:50.365965] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:15:08.245 [2024-11-28 08:13:50.373973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:15:08.245 [2024-11-28 08:13:50.373993] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:15:08.245 [2024-11-28 08:13:50.373998] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:15:08.245 [2024-11-28 08:13:50.374001] nvme_pcie_common.c:1275:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:15:08.245 [2024-11-28 08:13:50.374004] nvme_pcie_common.c:1291:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:15:08.245 [2024-11-28 08:13:50.374007] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:15:08.245 [2024-11-28 08:13:50.374014] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:15:08.245 [2024-11-28 08:13:50.374020] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:15:08.245 [2024-11-28 08:13:50.374024] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:15:08.245 [2024-11-28 08:13:50.374027] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:08.245 [2024-11-28 08:13:50.374033] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:15:08.245 [2024-11-28 08:13:50.374039] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:15:08.245 [2024-11-28 08:13:50.374043] 
nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:08.245 [2024-11-28 08:13:50.374046] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:08.245 [2024-11-28 08:13:50.374051] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:08.245 [2024-11-28 08:13:50.374058] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:15:08.245 [2024-11-28 08:13:50.374062] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:15:08.245 [2024-11-28 08:13:50.374065] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:08.245 [2024-11-28 08:13:50.374070] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:15:08.245 [2024-11-28 08:13:50.381953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:15:08.245 [2024-11-28 08:13:50.381968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:15:08.245 [2024-11-28 08:13:50.381978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:15:08.245 [2024-11-28 08:13:50.381985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:15:08.245 ===================================================== 00:15:08.245 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:08.245 ===================================================== 00:15:08.245 Controller Capabilities/Features 00:15:08.245 
================================ 00:15:08.245 Vendor ID: 4e58 00:15:08.245 Subsystem Vendor ID: 4e58 00:15:08.245 Serial Number: SPDK2 00:15:08.245 Model Number: SPDK bdev Controller 00:15:08.245 Firmware Version: 25.01 00:15:08.245 Recommended Arb Burst: 6 00:15:08.245 IEEE OUI Identifier: 8d 6b 50 00:15:08.245 Multi-path I/O 00:15:08.245 May have multiple subsystem ports: Yes 00:15:08.245 May have multiple controllers: Yes 00:15:08.245 Associated with SR-IOV VF: No 00:15:08.245 Max Data Transfer Size: 131072 00:15:08.246 Max Number of Namespaces: 32 00:15:08.246 Max Number of I/O Queues: 127 00:15:08.246 NVMe Specification Version (VS): 1.3 00:15:08.246 NVMe Specification Version (Identify): 1.3 00:15:08.246 Maximum Queue Entries: 256 00:15:08.246 Contiguous Queues Required: Yes 00:15:08.246 Arbitration Mechanisms Supported 00:15:08.246 Weighted Round Robin: Not Supported 00:15:08.246 Vendor Specific: Not Supported 00:15:08.246 Reset Timeout: 15000 ms 00:15:08.246 Doorbell Stride: 4 bytes 00:15:08.246 NVM Subsystem Reset: Not Supported 00:15:08.246 Command Sets Supported 00:15:08.246 NVM Command Set: Supported 00:15:08.246 Boot Partition: Not Supported 00:15:08.246 Memory Page Size Minimum: 4096 bytes 00:15:08.246 Memory Page Size Maximum: 4096 bytes 00:15:08.246 Persistent Memory Region: Not Supported 00:15:08.246 Optional Asynchronous Events Supported 00:15:08.246 Namespace Attribute Notices: Supported 00:15:08.246 Firmware Activation Notices: Not Supported 00:15:08.246 ANA Change Notices: Not Supported 00:15:08.246 PLE Aggregate Log Change Notices: Not Supported 00:15:08.246 LBA Status Info Alert Notices: Not Supported 00:15:08.246 EGE Aggregate Log Change Notices: Not Supported 00:15:08.246 Normal NVM Subsystem Shutdown event: Not Supported 00:15:08.246 Zone Descriptor Change Notices: Not Supported 00:15:08.246 Discovery Log Change Notices: Not Supported 00:15:08.246 Controller Attributes 00:15:08.246 128-bit Host Identifier: Supported 00:15:08.246 
Non-Operational Permissive Mode: Not Supported 00:15:08.246 NVM Sets: Not Supported 00:15:08.246 Read Recovery Levels: Not Supported 00:15:08.246 Endurance Groups: Not Supported 00:15:08.246 Predictable Latency Mode: Not Supported 00:15:08.246 Traffic Based Keep ALive: Not Supported 00:15:08.246 Namespace Granularity: Not Supported 00:15:08.246 SQ Associations: Not Supported 00:15:08.246 UUID List: Not Supported 00:15:08.246 Multi-Domain Subsystem: Not Supported 00:15:08.246 Fixed Capacity Management: Not Supported 00:15:08.246 Variable Capacity Management: Not Supported 00:15:08.246 Delete Endurance Group: Not Supported 00:15:08.246 Delete NVM Set: Not Supported 00:15:08.246 Extended LBA Formats Supported: Not Supported 00:15:08.246 Flexible Data Placement Supported: Not Supported 00:15:08.246 00:15:08.246 Controller Memory Buffer Support 00:15:08.246 ================================ 00:15:08.246 Supported: No 00:15:08.246 00:15:08.246 Persistent Memory Region Support 00:15:08.246 ================================ 00:15:08.246 Supported: No 00:15:08.246 00:15:08.246 Admin Command Set Attributes 00:15:08.246 ============================ 00:15:08.246 Security Send/Receive: Not Supported 00:15:08.246 Format NVM: Not Supported 00:15:08.246 Firmware Activate/Download: Not Supported 00:15:08.246 Namespace Management: Not Supported 00:15:08.246 Device Self-Test: Not Supported 00:15:08.246 Directives: Not Supported 00:15:08.246 NVMe-MI: Not Supported 00:15:08.246 Virtualization Management: Not Supported 00:15:08.246 Doorbell Buffer Config: Not Supported 00:15:08.246 Get LBA Status Capability: Not Supported 00:15:08.246 Command & Feature Lockdown Capability: Not Supported 00:15:08.246 Abort Command Limit: 4 00:15:08.246 Async Event Request Limit: 4 00:15:08.246 Number of Firmware Slots: N/A 00:15:08.246 Firmware Slot 1 Read-Only: N/A 00:15:08.246 Firmware Activation Without Reset: N/A 00:15:08.246 Multiple Update Detection Support: N/A 00:15:08.246 Firmware Update 
Granularity: No Information Provided 00:15:08.246 Per-Namespace SMART Log: No 00:15:08.246 Asymmetric Namespace Access Log Page: Not Supported 00:15:08.246 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:15:08.246 Command Effects Log Page: Supported 00:15:08.246 Get Log Page Extended Data: Supported 00:15:08.246 Telemetry Log Pages: Not Supported 00:15:08.246 Persistent Event Log Pages: Not Supported 00:15:08.246 Supported Log Pages Log Page: May Support 00:15:08.246 Commands Supported & Effects Log Page: Not Supported 00:15:08.246 Feature Identifiers & Effects Log Page:May Support 00:15:08.246 NVMe-MI Commands & Effects Log Page: May Support 00:15:08.246 Data Area 4 for Telemetry Log: Not Supported 00:15:08.246 Error Log Page Entries Supported: 128 00:15:08.246 Keep Alive: Supported 00:15:08.246 Keep Alive Granularity: 10000 ms 00:15:08.246 00:15:08.246 NVM Command Set Attributes 00:15:08.246 ========================== 00:15:08.246 Submission Queue Entry Size 00:15:08.246 Max: 64 00:15:08.246 Min: 64 00:15:08.246 Completion Queue Entry Size 00:15:08.246 Max: 16 00:15:08.246 Min: 16 00:15:08.246 Number of Namespaces: 32 00:15:08.246 Compare Command: Supported 00:15:08.246 Write Uncorrectable Command: Not Supported 00:15:08.246 Dataset Management Command: Supported 00:15:08.246 Write Zeroes Command: Supported 00:15:08.246 Set Features Save Field: Not Supported 00:15:08.246 Reservations: Not Supported 00:15:08.246 Timestamp: Not Supported 00:15:08.246 Copy: Supported 00:15:08.246 Volatile Write Cache: Present 00:15:08.246 Atomic Write Unit (Normal): 1 00:15:08.246 Atomic Write Unit (PFail): 1 00:15:08.246 Atomic Compare & Write Unit: 1 00:15:08.246 Fused Compare & Write: Supported 00:15:08.246 Scatter-Gather List 00:15:08.246 SGL Command Set: Supported (Dword aligned) 00:15:08.246 SGL Keyed: Not Supported 00:15:08.246 SGL Bit Bucket Descriptor: Not Supported 00:15:08.246 SGL Metadata Pointer: Not Supported 00:15:08.246 Oversized SGL: Not Supported 00:15:08.246 SGL 
Metadata Address: Not Supported 00:15:08.246 SGL Offset: Not Supported 00:15:08.246 Transport SGL Data Block: Not Supported 00:15:08.246 Replay Protected Memory Block: Not Supported 00:15:08.246 00:15:08.246 Firmware Slot Information 00:15:08.246 ========================= 00:15:08.246 Active slot: 1 00:15:08.246 Slot 1 Firmware Revision: 25.01 00:15:08.246 00:15:08.246 00:15:08.246 Commands Supported and Effects 00:15:08.246 ============================== 00:15:08.246 Admin Commands 00:15:08.246 -------------- 00:15:08.246 Get Log Page (02h): Supported 00:15:08.246 Identify (06h): Supported 00:15:08.246 Abort (08h): Supported 00:15:08.246 Set Features (09h): Supported 00:15:08.246 Get Features (0Ah): Supported 00:15:08.246 Asynchronous Event Request (0Ch): Supported 00:15:08.246 Keep Alive (18h): Supported 00:15:08.246 I/O Commands 00:15:08.246 ------------ 00:15:08.246 Flush (00h): Supported LBA-Change 00:15:08.246 Write (01h): Supported LBA-Change 00:15:08.246 Read (02h): Supported 00:15:08.246 Compare (05h): Supported 00:15:08.246 Write Zeroes (08h): Supported LBA-Change 00:15:08.246 Dataset Management (09h): Supported LBA-Change 00:15:08.246 Copy (19h): Supported LBA-Change 00:15:08.246 00:15:08.246 Error Log 00:15:08.246 ========= 00:15:08.246 00:15:08.246 Arbitration 00:15:08.246 =========== 00:15:08.246 Arbitration Burst: 1 00:15:08.246 00:15:08.246 Power Management 00:15:08.246 ================ 00:15:08.246 Number of Power States: 1 00:15:08.246 Current Power State: Power State #0 00:15:08.246 Power State #0: 00:15:08.246 Max Power: 0.00 W 00:15:08.246 Non-Operational State: Operational 00:15:08.246 Entry Latency: Not Reported 00:15:08.246 Exit Latency: Not Reported 00:15:08.246 Relative Read Throughput: 0 00:15:08.246 Relative Read Latency: 0 00:15:08.246 Relative Write Throughput: 0 00:15:08.246 Relative Write Latency: 0 00:15:08.246 Idle Power: Not Reported 00:15:08.246 Active Power: Not Reported 00:15:08.246 Non-Operational Permissive Mode: Not 
Supported 00:15:08.246 00:15:08.246 Health Information 00:15:08.246 ================== 00:15:08.246 Critical Warnings: 00:15:08.246 Available Spare Space: OK 00:15:08.246 Temperature: OK 00:15:08.246 Device Reliability: OK 00:15:08.246 Read Only: No 00:15:08.246 Volatile Memory Backup: OK 00:15:08.246 Current Temperature: 0 Kelvin (-273 Celsius) 00:15:08.246 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:15:08.246 Available Spare: 0% 00:15:08.246 Available Sp[2024-11-28 08:13:50.382076] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:15:08.246 [2024-11-28 08:13:50.386952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:15:08.246 [2024-11-28 08:13:50.386983] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Prepare to destruct SSD 00:15:08.246 [2024-11-28 08:13:50.386992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:08.246 [2024-11-28 08:13:50.386999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:08.246 [2024-11-28 08:13:50.387004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:08.247 [2024-11-28 08:13:50.387010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:08.247 [2024-11-28 08:13:50.387088] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:15:08.247 [2024-11-28 08:13:50.387099] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:15:08.247 
[2024-11-28 08:13:50.388087] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:08.247 [2024-11-28 08:13:50.388132] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] RTD3E = 0 us 00:15:08.247 [2024-11-28 08:13:50.388138] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] shutdown timeout = 10000 ms 00:15:08.247 [2024-11-28 08:13:50.389093] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:15:08.247 [2024-11-28 08:13:50.389104] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] shutdown complete in 0 milliseconds 00:15:08.247 [2024-11-28 08:13:50.389153] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:15:08.247 [2024-11-28 08:13:50.390139] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:15:08.247 are Threshold: 0% 00:15:08.247 Life Percentage Used: 0% 00:15:08.247 Data Units Read: 0 00:15:08.247 Data Units Written: 0 00:15:08.247 Host Read Commands: 0 00:15:08.247 Host Write Commands: 0 00:15:08.247 Controller Busy Time: 0 minutes 00:15:08.247 Power Cycles: 0 00:15:08.247 Power On Hours: 0 hours 00:15:08.247 Unsafe Shutdowns: 0 00:15:08.247 Unrecoverable Media Errors: 0 00:15:08.247 Lifetime Error Log Entries: 0 00:15:08.247 Warning Temperature Time: 0 minutes 00:15:08.247 Critical Temperature Time: 0 minutes 00:15:08.247 00:15:08.247 Number of Queues 00:15:08.247 ================ 00:15:08.247 Number of I/O Submission Queues: 127 00:15:08.247 Number of I/O Completion Queues: 127 00:15:08.247 00:15:08.247 Active Namespaces 00:15:08.247 ================= 00:15:08.247 Namespace ID:1 00:15:08.247 Error Recovery Timeout: Unlimited 
00:15:08.247 Command Set Identifier: NVM (00h) 00:15:08.247 Deallocate: Supported 00:15:08.247 Deallocated/Unwritten Error: Not Supported 00:15:08.247 Deallocated Read Value: Unknown 00:15:08.247 Deallocate in Write Zeroes: Not Supported 00:15:08.247 Deallocated Guard Field: 0xFFFF 00:15:08.247 Flush: Supported 00:15:08.247 Reservation: Supported 00:15:08.247 Namespace Sharing Capabilities: Multiple Controllers 00:15:08.247 Size (in LBAs): 131072 (0GiB) 00:15:08.247 Capacity (in LBAs): 131072 (0GiB) 00:15:08.247 Utilization (in LBAs): 131072 (0GiB) 00:15:08.247 NGUID: DC683C294F4D49E19FDF3B45E0172C26 00:15:08.247 UUID: dc683c29-4f4d-49e1-9fdf-3b45e0172c26 00:15:08.247 Thin Provisioning: Not Supported 00:15:08.247 Per-NS Atomic Units: Yes 00:15:08.247 Atomic Boundary Size (Normal): 0 00:15:08.247 Atomic Boundary Size (PFail): 0 00:15:08.247 Atomic Boundary Offset: 0 00:15:08.247 Maximum Single Source Range Length: 65535 00:15:08.247 Maximum Copy Length: 65535 00:15:08.247 Maximum Source Range Count: 1 00:15:08.247 NGUID/EUI64 Never Reused: No 00:15:08.247 Namespace Write Protected: No 00:15:08.247 Number of LBA Formats: 1 00:15:08.247 Current LBA Format: LBA Format #00 00:15:08.247 LBA Format #00: Data Size: 512 Metadata Size: 0 00:15:08.247 00:15:08.247 08:13:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:15:08.506 [2024-11-28 08:13:50.617363] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:13.782 Initializing NVMe Controllers 00:15:13.782 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:13.782 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 
00:15:13.782 Initialization complete. Launching workers. 00:15:13.782 ======================================================== 00:15:13.782 Latency(us) 00:15:13.782 Device Information : IOPS MiB/s Average min max 00:15:13.782 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 39879.16 155.78 3210.05 1014.25 10572.25 00:15:13.782 ======================================================== 00:15:13.782 Total : 39879.16 155.78 3210.05 1014.25 10572.25 00:15:13.782 00:15:13.782 [2024-11-28 08:13:55.725226] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:13.782 08:13:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:15:13.782 [2024-11-28 08:13:55.965862] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:19.056 Initializing NVMe Controllers 00:15:19.056 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:19.056 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:15:19.056 Initialization complete. Launching workers. 
00:15:19.056 ======================================================== 00:15:19.056 Latency(us) 00:15:19.056 Device Information : IOPS MiB/s Average min max 00:15:19.056 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 39905.57 155.88 3207.77 1010.41 7569.84 00:15:19.056 ======================================================== 00:15:19.056 Total : 39905.57 155.88 3207.77 1010.41 7569.84 00:15:19.056 00:15:19.056 [2024-11-28 08:14:00.987008] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:19.056 08:14:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:15:19.056 [2024-11-28 08:14:01.192438] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:24.329 [2024-11-28 08:14:06.332040] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:24.329 Initializing NVMe Controllers 00:15:24.329 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:24.329 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:24.329 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:15:24.329 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:15:24.329 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:15:24.329 Initialization complete. Launching workers. 
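The two spdk_nvme_perf latency tables above (read and write, both run with `-o 4096`) report IOPS and MiB/s; the MiB/s column is simply IOPS * io_size / 2^20. A quick consistency check against the figures in the log:

```python
# spdk_nvme_perf was invoked with -o 4096 (4 KiB I/O) in both runs above.
IO_SIZE = 4096

def mib_per_s(iops: float, io_size: int = IO_SIZE) -> float:
    """Convert an IOPS figure to MiB/s for a fixed I/O size."""
    return iops * io_size / (1 << 20)

print(round(mib_per_s(39879.16), 2))  # read run  -> 155.78 (matches the table)
print(round(mib_per_s(39905.57), 2))  # write run -> 155.88 (matches the table)
```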
00:15:24.329 Starting thread on core 2 00:15:24.329 Starting thread on core 3 00:15:24.329 Starting thread on core 1 00:15:24.329 08:14:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:15:24.588 [2024-11-28 08:14:06.630105] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:27.881 [2024-11-28 08:14:09.703629] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:27.881 Initializing NVMe Controllers 00:15:27.881 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:15:27.881 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:15:27.881 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:15:27.881 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:15:27.881 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:15:27.881 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:15:27.881 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:15:27.881 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:15:27.881 Initialization complete. Launching workers. 
00:15:27.881 Starting thread on core 1 with urgent priority queue 00:15:27.881 Starting thread on core 2 with urgent priority queue 00:15:27.881 Starting thread on core 3 with urgent priority queue 00:15:27.881 Starting thread on core 0 with urgent priority queue 00:15:27.881 SPDK bdev Controller (SPDK2 ) core 0: 9516.33 IO/s 10.51 secs/100000 ios 00:15:27.881 SPDK bdev Controller (SPDK2 ) core 1: 8869.33 IO/s 11.27 secs/100000 ios 00:15:27.881 SPDK bdev Controller (SPDK2 ) core 2: 8137.00 IO/s 12.29 secs/100000 ios 00:15:27.881 SPDK bdev Controller (SPDK2 ) core 3: 9642.33 IO/s 10.37 secs/100000 ios 00:15:27.881 ======================================================== 00:15:27.881 00:15:27.881 08:14:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:15:27.881 [2024-11-28 08:14:09.994378] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:27.881 Initializing NVMe Controllers 00:15:27.881 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:15:27.881 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:15:27.881 Namespace ID: 1 size: 0GB 00:15:27.881 Initialization complete. 00:15:27.881 INFO: using host memory buffer for IO 00:15:27.881 Hello world! 
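The arbitration summary above reports per-core throughput as "IO/s" and "secs/100000 ios"; the second column is just 100000 divided by the first. Reproducing the four rows from the log:

```python
def secs_per_100k(iops: float) -> float:
    """Seconds needed to complete 100000 I/Os at the given IOPS."""
    return round(100000 / iops, 2)

# (core, IO/s) pairs copied from the arbitration table above.
for core, iops in [(0, 9516.33), (1, 8869.33), (2, 8137.00), (3, 9642.33)]:
    print(core, secs_per_100k(iops))
# -> 0 10.51, 1 11.27, 2 12.29, 3 10.37 (matching the log)
```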
00:15:27.881 [2024-11-28 08:14:10.004441] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:27.881 08:14:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:15:28.140 [2024-11-28 08:14:10.292513] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:29.518 Initializing NVMe Controllers 00:15:29.518 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:15:29.518 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:15:29.518 Initialization complete. Launching workers. 00:15:29.518 submit (in ns) avg, min, max = 6203.7, 3204.3, 3999957.4 00:15:29.518 complete (in ns) avg, min, max = 22030.0, 1752.2, 3998887.8 00:15:29.518 00:15:29.518 Submit histogram 00:15:29.518 ================ 00:15:29.518 Range in us Cumulative Count 00:15:29.518 3.200 - 3.214: 0.0125% ( 2) 00:15:29.518 3.214 - 3.228: 0.0311% ( 3) 00:15:29.518 3.228 - 3.242: 0.0747% ( 7) 00:15:29.518 3.242 - 3.256: 0.1246% ( 8) 00:15:29.518 3.256 - 3.270: 0.1993% ( 12) 00:15:29.518 3.270 - 3.283: 0.3675% ( 27) 00:15:29.518 3.283 - 3.297: 1.4450% ( 173) 00:15:29.518 3.297 - 3.311: 5.1261% ( 591) 00:15:29.518 3.311 - 3.325: 10.6011% ( 879) 00:15:29.518 3.325 - 3.339: 17.1411% ( 1050) 00:15:29.518 3.339 - 3.353: 23.6998% ( 1053) 00:15:29.518 3.353 - 3.367: 29.8225% ( 983) 00:15:29.518 3.367 - 3.381: 34.8926% ( 814) 00:15:29.518 3.381 - 3.395: 39.8941% ( 803) 00:15:29.518 3.395 - 3.409: 44.5406% ( 746) 00:15:29.518 3.409 - 3.423: 48.5207% ( 639) 00:15:29.518 3.423 - 3.437: 52.4634% ( 633) 00:15:29.518 3.437 - 3.450: 58.3619% ( 947) 00:15:29.518 3.450 - 3.464: 65.8985% ( 1210) 00:15:29.518 3.464 - 3.478: 70.1651% ( 685) 00:15:29.518 3.478 - 3.492: 74.6496% ( 720) 
00:15:29.518 3.492 - 3.506: 79.7882% ( 825) 00:15:29.518 3.506 - 3.520: 83.3448% ( 571) 00:15:29.518 3.520 - 3.534: 85.5684% ( 357) 00:15:29.518 3.534 - 3.548: 86.7331% ( 187) 00:15:29.518 3.548 - 3.562: 87.3061% ( 92) 00:15:29.518 3.562 - 3.590: 88.0037% ( 112) 00:15:29.518 3.590 - 3.617: 89.2557% ( 201) 00:15:29.518 3.617 - 3.645: 91.0184% ( 283) 00:15:29.518 3.645 - 3.673: 92.6939% ( 269) 00:15:29.518 3.673 - 3.701: 94.3195% ( 261) 00:15:29.518 3.701 - 3.729: 95.9950% ( 269) 00:15:29.518 3.729 - 3.757: 97.4712% ( 237) 00:15:29.518 3.757 - 3.784: 98.4055% ( 150) 00:15:29.518 3.784 - 3.812: 99.0159% ( 98) 00:15:29.518 3.812 - 3.840: 99.2962% ( 45) 00:15:29.518 3.840 - 3.868: 99.5453% ( 40) 00:15:29.518 3.868 - 3.896: 99.5827% ( 6) 00:15:29.518 3.896 - 3.923: 99.6076% ( 4) 00:15:29.518 3.923 - 3.951: 99.6138% ( 1) 00:15:29.518 4.925 - 4.953: 99.6201% ( 1) 00:15:29.518 5.037 - 5.064: 99.6263% ( 1) 00:15:29.518 5.064 - 5.092: 99.6325% ( 1) 00:15:29.518 5.120 - 5.148: 99.6387% ( 1) 00:15:29.518 5.287 - 5.315: 99.6450% ( 1) 00:15:29.518 5.315 - 5.343: 99.6512% ( 1) 00:15:29.519 5.454 - 5.482: 99.6637% ( 2) 00:15:29.519 5.482 - 5.510: 99.6699% ( 1) 00:15:29.519 5.510 - 5.537: 99.6823% ( 2) 00:15:29.519 5.565 - 5.593: 99.6948% ( 2) 00:15:29.519 5.593 - 5.621: 99.7010% ( 1) 00:15:29.519 5.788 - 5.816: 99.7135% ( 2) 00:15:29.519 5.816 - 5.843: 99.7259% ( 2) 00:15:29.519 5.843 - 5.871: 99.7322% ( 1) 00:15:29.519 5.899 - 5.927: 99.7384% ( 1) 00:15:29.519 6.010 - 6.038: 99.7446% ( 1) 00:15:29.519 6.038 - 6.066: 99.7571% ( 2) 00:15:29.519 6.150 - 6.177: 99.7633% ( 1) 00:15:29.519 6.205 - 6.233: 99.7695% ( 1) 00:15:29.519 6.233 - 6.261: 99.7758% ( 1) 00:15:29.519 6.289 - 6.317: 99.7820% ( 1) 00:15:29.519 6.317 - 6.344: 99.7882% ( 1) 00:15:29.519 6.372 - 6.400: 99.7945% ( 1) 00:15:29.519 6.428 - 6.456: 99.8007% ( 1) 00:15:29.519 6.456 - 6.483: 99.8069% ( 1) 00:15:29.519 6.483 - 6.511: 99.8131% ( 1) 00:15:29.519 6.595 - 6.623: 99.8194% ( 1) 00:15:29.519 6.706 - 6.734: 99.8256% ( 
1) 00:15:29.519 6.734 - 6.762: 99.8318% ( 1) 00:15:29.519 6.845 - 6.873: 99.8381% ( 1) 00:15:29.519 6.901 - 6.929: 99.8443% ( 1) 00:15:29.519 7.012 - 7.040: 99.8505% ( 1) 00:15:29.519 7.123 - 7.179: 99.8567% ( 1) 00:15:29.519 7.179 - 7.235: 99.8630% ( 1) 00:15:29.519 7.290 - 7.346: 99.8692% ( 1) 00:15:29.519 7.346 - 7.402: 99.8817% ( 2) 00:15:29.519 7.402 - 7.457: 99.8879% ( 1) 00:15:29.519 8.070 - 8.125: 99.8941% ( 1) 00:15:29.519 8.570 - 8.626: 99.9003% ( 1) 00:15:29.519 [2024-11-28 08:14:11.388010] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:29.519 10.129 - 10.184: 99.9066% ( 1) 00:15:29.519 13.802 - 13.857: 99.9128% ( 1) 00:15:29.519 15.360 - 15.471: 99.9190% ( 1) 00:15:29.519 26.713 - 26.824: 99.9253% ( 1) 00:15:29.519 195.005 - 195.896: 99.9315% ( 1) 00:15:29.519 3989.148 - 4017.642: 100.0000% ( 11) 00:15:29.519 00:15:29.519 Complete histogram 00:15:29.519 ================== 00:15:29.519 Range in us Cumulative Count 00:15:29.519 1.746 - 1.753: 0.0062% ( 1) 00:15:29.519 1.760 - 1.767: 0.0934% ( 14) 00:15:29.519 1.767 - 1.774: 0.1931% ( 16) 00:15:29.519 1.774 - 1.781: 0.3426% ( 24) 00:15:29.519 1.781 - 1.795: 0.4235% ( 13) 00:15:29.519 1.795 - 1.809: 0.5045% ( 13) 00:15:29.519 1.809 - 1.823: 8.4024% ( 1268) 00:15:29.519 1.823 - 1.837: 37.2781% ( 4636) 00:15:29.519 1.837 - 1.850: 45.3504% ( 1296) 00:15:29.519 1.850 - 1.864: 48.6951% ( 537) 00:15:29.519 1.864 - 1.878: 69.4861% ( 3338) 00:15:29.519 1.878 - 1.892: 92.3637% ( 3673) 00:15:29.519 1.892 - 1.906: 96.5556% ( 673) 00:15:29.519 1.906 - 1.920: 98.0629% ( 242) 00:15:29.519 1.920 - 1.934: 98.4366% ( 60) 00:15:29.519 1.934 - 1.948: 98.7418% ( 49) 00:15:29.519 1.948 - 1.962: 99.0221% ( 45) 00:15:29.519 1.962 - 1.976: 99.2214% ( 32) 00:15:29.519 1.976 - 1.990: 99.2775% ( 9) 00:15:29.519 1.990 - 2.003: 99.2837% ( 1) 00:15:29.519 2.017 - 2.031: 99.2899% ( 1) 00:15:29.519 2.045 - 2.059: 99.2962% ( 1) 00:15:29.519 2.059 - 2.073: 99.3024% ( 1) 
00:15:29.519 2.087 - 2.101: 99.3086% ( 1) 00:15:29.519 3.464 - 3.478: 99.3149% ( 1) 00:15:29.519 3.478 - 3.492: 99.3211% ( 1) 00:15:29.519 3.492 - 3.506: 99.3273% ( 1) 00:15:29.519 3.562 - 3.590: 99.3335% ( 1) 00:15:29.519 3.590 - 3.617: 99.3398% ( 1) 00:15:29.519 3.673 - 3.701: 99.3460% ( 1) 00:15:29.519 3.729 - 3.757: 99.3585% ( 2) 00:15:29.519 4.063 - 4.090: 99.3647% ( 1) 00:15:29.519 4.146 - 4.174: 99.3709% ( 1) 00:15:29.519 4.480 - 4.508: 99.3771% ( 1) 00:15:29.519 4.508 - 4.536: 99.3834% ( 1) 00:15:29.519 4.591 - 4.619: 99.3896% ( 1) 00:15:29.519 4.758 - 4.786: 99.3958% ( 1) 00:15:29.519 4.786 - 4.814: 99.4021% ( 1) 00:15:29.519 4.842 - 4.870: 99.4083% ( 1) 00:15:29.519 5.120 - 5.148: 99.4145% ( 1) 00:15:29.519 5.148 - 5.176: 99.4207% ( 1) 00:15:29.519 5.203 - 5.231: 99.4270% ( 1) 00:15:29.519 5.398 - 5.426: 99.4332% ( 1) 00:15:29.519 5.816 - 5.843: 99.4394% ( 1) 00:15:29.519 6.066 - 6.094: 99.4457% ( 1) 00:15:29.519 6.595 - 6.623: 99.4519% ( 1) 00:15:29.519 8.014 - 8.070: 99.4581% ( 1) 00:15:29.519 10.129 - 10.184: 99.4643% ( 1) 00:15:29.519 13.134 - 13.190: 99.4706% ( 1) 00:15:29.519 17.697 - 17.809: 99.4768% ( 1) 00:15:29.519 39.624 - 39.847: 99.4830% ( 1) 00:15:29.519 49.642 - 49.864: 99.4893% ( 1) 00:15:29.519 146.031 - 146.922: 99.4955% ( 1) 00:15:29.519 3989.148 - 4017.642: 100.0000% ( 81) 00:15:29.519 00:15:29.519 08:14:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:15:29.519 08:14:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:15:29.519 08:14:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:15:29.519 08:14:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:15:29.519 08:14:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user 
-- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:15:29.519 [ 00:15:29.519 { 00:15:29.519 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:29.519 "subtype": "Discovery", 00:15:29.519 "listen_addresses": [], 00:15:29.519 "allow_any_host": true, 00:15:29.519 "hosts": [] 00:15:29.519 }, 00:15:29.519 { 00:15:29.519 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:15:29.519 "subtype": "NVMe", 00:15:29.519 "listen_addresses": [ 00:15:29.519 { 00:15:29.519 "trtype": "VFIOUSER", 00:15:29.519 "adrfam": "IPv4", 00:15:29.519 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:15:29.519 "trsvcid": "0" 00:15:29.519 } 00:15:29.519 ], 00:15:29.519 "allow_any_host": true, 00:15:29.519 "hosts": [], 00:15:29.519 "serial_number": "SPDK1", 00:15:29.519 "model_number": "SPDK bdev Controller", 00:15:29.519 "max_namespaces": 32, 00:15:29.519 "min_cntlid": 1, 00:15:29.519 "max_cntlid": 65519, 00:15:29.519 "namespaces": [ 00:15:29.519 { 00:15:29.519 "nsid": 1, 00:15:29.519 "bdev_name": "Malloc1", 00:15:29.519 "name": "Malloc1", 00:15:29.519 "nguid": "0145181954CB4F079C76E02943BB9D3C", 00:15:29.519 "uuid": "01451819-54cb-4f07-9c76-e02943bb9d3c" 00:15:29.519 }, 00:15:29.519 { 00:15:29.519 "nsid": 2, 00:15:29.519 "bdev_name": "Malloc3", 00:15:29.519 "name": "Malloc3", 00:15:29.519 "nguid": "6D456BCBBABC47DEB2478A64A3273629", 00:15:29.519 "uuid": "6d456bcb-babc-47de-b247-8a64a3273629" 00:15:29.519 } 00:15:29.519 ] 00:15:29.519 }, 00:15:29.519 { 00:15:29.519 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:15:29.519 "subtype": "NVMe", 00:15:29.519 "listen_addresses": [ 00:15:29.519 { 00:15:29.519 "trtype": "VFIOUSER", 00:15:29.519 "adrfam": "IPv4", 00:15:29.519 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:15:29.519 "trsvcid": "0" 00:15:29.519 } 00:15:29.519 ], 00:15:29.519 "allow_any_host": true, 00:15:29.519 "hosts": [], 00:15:29.519 "serial_number": "SPDK2", 00:15:29.519 "model_number": "SPDK bdev Controller", 
00:15:29.519 "max_namespaces": 32, 00:15:29.519 "min_cntlid": 1, 00:15:29.519 "max_cntlid": 65519, 00:15:29.519 "namespaces": [ 00:15:29.519 { 00:15:29.519 "nsid": 1, 00:15:29.519 "bdev_name": "Malloc2", 00:15:29.519 "name": "Malloc2", 00:15:29.519 "nguid": "DC683C294F4D49E19FDF3B45E0172C26", 00:15:29.519 "uuid": "dc683c29-4f4d-49e1-9fdf-3b45e0172c26" 00:15:29.519 } 00:15:29.519 ] 00:15:29.519 } 00:15:29.519 ] 00:15:29.519 08:14:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:15:29.519 08:14:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:15:29.519 08:14:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=1331880 00:15:29.519 08:14:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:15:29.519 08:14:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1269 -- # local i=0 00:15:29.519 08:14:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:15:29.519 08:14:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:15:29.519 08:14:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1280 -- # return 0 00:15:29.519 08:14:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:15:29.519 08:14:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:15:29.779 [2024-11-28 08:14:11.785365] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:29.779 Malloc4 00:15:29.779 08:14:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:15:29.779 [2024-11-28 08:14:12.035276] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:30.049 08:14:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:15:30.049 Asynchronous Event Request test 00:15:30.049 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:15:30.049 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:15:30.049 Registering asynchronous event callbacks... 00:15:30.049 Starting namespace attribute notice tests for all controllers... 00:15:30.049 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:15:30.049 aer_cb - Changed Namespace 00:15:30.049 Cleaning up... 
00:15:30.049 [ 00:15:30.049 { 00:15:30.049 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:30.049 "subtype": "Discovery", 00:15:30.049 "listen_addresses": [], 00:15:30.049 "allow_any_host": true, 00:15:30.049 "hosts": [] 00:15:30.049 }, 00:15:30.049 { 00:15:30.049 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:15:30.049 "subtype": "NVMe", 00:15:30.049 "listen_addresses": [ 00:15:30.049 { 00:15:30.049 "trtype": "VFIOUSER", 00:15:30.049 "adrfam": "IPv4", 00:15:30.049 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:15:30.049 "trsvcid": "0" 00:15:30.049 } 00:15:30.049 ], 00:15:30.049 "allow_any_host": true, 00:15:30.049 "hosts": [], 00:15:30.049 "serial_number": "SPDK1", 00:15:30.049 "model_number": "SPDK bdev Controller", 00:15:30.049 "max_namespaces": 32, 00:15:30.049 "min_cntlid": 1, 00:15:30.049 "max_cntlid": 65519, 00:15:30.049 "namespaces": [ 00:15:30.049 { 00:15:30.049 "nsid": 1, 00:15:30.049 "bdev_name": "Malloc1", 00:15:30.049 "name": "Malloc1", 00:15:30.049 "nguid": "0145181954CB4F079C76E02943BB9D3C", 00:15:30.049 "uuid": "01451819-54cb-4f07-9c76-e02943bb9d3c" 00:15:30.049 }, 00:15:30.049 { 00:15:30.049 "nsid": 2, 00:15:30.049 "bdev_name": "Malloc3", 00:15:30.049 "name": "Malloc3", 00:15:30.049 "nguid": "6D456BCBBABC47DEB2478A64A3273629", 00:15:30.049 "uuid": "6d456bcb-babc-47de-b247-8a64a3273629" 00:15:30.049 } 00:15:30.049 ] 00:15:30.049 }, 00:15:30.049 { 00:15:30.049 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:15:30.049 "subtype": "NVMe", 00:15:30.049 "listen_addresses": [ 00:15:30.049 { 00:15:30.049 "trtype": "VFIOUSER", 00:15:30.049 "adrfam": "IPv4", 00:15:30.049 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:15:30.049 "trsvcid": "0" 00:15:30.049 } 00:15:30.049 ], 00:15:30.049 "allow_any_host": true, 00:15:30.049 "hosts": [], 00:15:30.049 "serial_number": "SPDK2", 00:15:30.049 "model_number": "SPDK bdev Controller", 00:15:30.049 "max_namespaces": 32, 00:15:30.049 "min_cntlid": 1, 00:15:30.049 "max_cntlid": 65519, 00:15:30.049 "namespaces": [ 
00:15:30.049 { 00:15:30.049 "nsid": 1, 00:15:30.049 "bdev_name": "Malloc2", 00:15:30.049 "name": "Malloc2", 00:15:30.049 "nguid": "DC683C294F4D49E19FDF3B45E0172C26", 00:15:30.049 "uuid": "dc683c29-4f4d-49e1-9fdf-3b45e0172c26" 00:15:30.049 }, 00:15:30.049 { 00:15:30.049 "nsid": 2, 00:15:30.049 "bdev_name": "Malloc4", 00:15:30.049 "name": "Malloc4", 00:15:30.049 "nguid": "579657E939754968AB63B2AEAA22110F", 00:15:30.049 "uuid": "579657e9-3975-4968-ab63-b2aeaa22110f" 00:15:30.049 } 00:15:30.049 ] 00:15:30.049 } 00:15:30.049 ] 00:15:30.049 08:14:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 1331880 00:15:30.049 08:14:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:15:30.049 08:14:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 1324262 00:15:30.049 08:14:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # '[' -z 1324262 ']' 00:15:30.049 08:14:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # kill -0 1324262 00:15:30.049 08:14:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # uname 00:15:30.049 08:14:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:30.049 08:14:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1324262 00:15:30.309 08:14:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:30.309 08:14:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:30.309 08:14:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1324262' 00:15:30.309 killing process with pid 1324262 00:15:30.309 08:14:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
common/autotest_common.sh@973 -- # kill 1324262 00:15:30.309 08:14:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@978 -- # wait 1324262 00:15:30.309 08:14:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:15:30.309 08:14:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:15:30.309 08:14:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:15:30.309 08:14:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:15:30.309 08:14:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:15:30.309 08:14:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=1332112 00:15:30.309 08:14:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:15:30.309 08:14:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 1332112' 00:15:30.309 Process pid: 1332112 00:15:30.309 08:14:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:15:30.309 08:14:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 1332112 00:15:30.309 08:14:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # '[' -z 1332112 ']' 00:15:30.309 08:14:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:30.309 08:14:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:30.309 
08:14:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:30.309 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:30.309 08:14:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:30.309 08:14:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:15:30.569 [2024-11-28 08:14:12.617890] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:15:30.569 [2024-11-28 08:14:12.618802] Starting SPDK v25.01-pre git sha1 27aaaa748 / DPDK 24.03.0 initialization... 00:15:30.569 [2024-11-28 08:14:12.618843] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:30.569 [2024-11-28 08:14:12.681099] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:30.569 [2024-11-28 08:14:12.718824] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:30.569 [2024-11-28 08:14:12.718863] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:30.569 [2024-11-28 08:14:12.718871] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:30.569 [2024-11-28 08:14:12.718877] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:30.569 [2024-11-28 08:14:12.718882] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
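The AER test above verifies its result by diffing two `rpc.py nvmf_get_subsystems` dumps: after the hot-add, cnode2 gains Malloc4 as nsid 2. A minimal sketch of how that JSON can be post-processed; the JSON literal below is abridged from the second dump in the log (most fields dropped for brevity):

```python
import json

# Abridged nvmf_get_subsystems output, values copied from the log above.
RPC_OUTPUT = '''
[
  {"nqn": "nqn.2014-08.org.nvmexpress.discovery", "subtype": "Discovery"},
  {"nqn": "nqn.2019-07.io.spdk:cnode2", "subtype": "NVMe",
   "namespaces": [
     {"nsid": 1, "bdev_name": "Malloc2",
      "uuid": "dc683c29-4f4d-49e1-9fdf-3b45e0172c26"},
     {"nsid": 2, "bdev_name": "Malloc4",
      "uuid": "579657e9-3975-4968-ab63-b2aeaa22110f"}
   ]}
]
'''

def namespaces_by_nqn(rpc_json: str) -> dict:
    """Map each subsystem NQN to its {nsid: bdev_name} namespaces."""
    return {s["nqn"]: {ns["nsid"]: ns["bdev_name"]
                       for ns in s.get("namespaces", [])}
            for s in json.loads(rpc_json)}

nsmap = namespaces_by_nqn(RPC_OUTPUT)
print(nsmap["nqn.2019-07.io.spdk:cnode2"][2])  # -> Malloc4 (the hot-added ns)
```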
00:15:30.569 [2024-11-28 08:14:12.720398] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:30.569 [2024-11-28 08:14:12.720494] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:30.569 [2024-11-28 08:14:12.720582] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:15:30.569 [2024-11-28 08:14:12.720584] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:30.569 [2024-11-28 08:14:12.789708] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:15:30.569 [2024-11-28 08:14:12.789823] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:15:30.569 [2024-11-28 08:14:12.790044] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:15:30.569 [2024-11-28 08:14:12.790307] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:15:30.569 [2024-11-28 08:14:12.790487] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
00:15:30.569 08:14:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:30.569 08:14:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@868 -- # return 0 00:15:30.569 08:14:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:15:31.950 08:14:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:15:31.950 08:14:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:15:31.950 08:14:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:15:31.950 08:14:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:31.950 08:14:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:15:31.950 08:14:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:15:31.950 Malloc1 00:15:32.209 08:14:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:15:32.209 08:14:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:15:32.468 08:14:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 
-s 0 00:15:32.726 08:14:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:32.726 08:14:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:15:32.726 08:14:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:15:32.986 Malloc2 00:15:32.986 08:14:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:15:33.244 08:14:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:15:33.244 08:14:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:15:33.503 08:14:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:15:33.503 08:14:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 1332112 00:15:33.503 08:14:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # '[' -z 1332112 ']' 00:15:33.503 08:14:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # kill -0 1332112 00:15:33.503 08:14:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # uname 00:15:33.503 08:14:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:33.503 08:14:15 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1332112 00:15:33.503 08:14:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:33.503 08:14:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:33.503 08:14:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1332112' 00:15:33.503 killing process with pid 1332112 00:15:33.503 08:14:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@973 -- # kill 1332112 00:15:33.503 08:14:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@978 -- # wait 1332112 00:15:33.762 08:14:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:15:33.762 08:14:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:15:33.762 00:15:33.762 real 0m50.839s 00:15:33.762 user 3m16.827s 00:15:33.762 sys 0m3.271s 00:15:33.762 08:14:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:33.762 08:14:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:15:33.762 ************************************ 00:15:33.762 END TEST nvmf_vfio_user 00:15:33.762 ************************************ 00:15:33.762 08:14:15 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@32 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:15:33.762 08:14:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:33.762 08:14:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:33.762 08:14:15 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@10 -- # set +x 00:15:33.762 ************************************ 00:15:33.762 START TEST nvmf_vfio_user_nvme_compliance 00:15:33.762 ************************************ 00:15:33.762 08:14:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:15:34.022 * Looking for test storage... 00:15:34.022 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:15:34.022 08:14:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:15:34.022 08:14:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1693 -- # lcov --version 00:15:34.022 08:14:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:15:34.022 08:14:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:15:34.022 08:14:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:34.022 08:14:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:34.022 08:14:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:34.022 08:14:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # IFS=.-: 00:15:34.022 08:14:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # read -ra ver1 00:15:34.022 08:14:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # IFS=.-: 00:15:34.022 08:14:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # read -ra ver2 00:15:34.022 08:14:16 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@338 -- # local 'op=<' 00:15:34.022 08:14:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@340 -- # ver1_l=2 00:15:34.022 08:14:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@341 -- # ver2_l=1 00:15:34.022 08:14:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:34.022 08:14:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@344 -- # case "$op" in 00:15:34.022 08:14:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@345 -- # : 1 00:15:34.022 08:14:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:34.022 08:14:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:34.022 08:14:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # decimal 1 00:15:34.022 08:14:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=1 00:15:34.022 08:14:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:34.022 08:14:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 1 00:15:34.022 08:14:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # ver1[v]=1 00:15:34.022 08:14:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # decimal 2 00:15:34.022 08:14:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=2 00:15:34.022 08:14:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:34.022 08:14:16 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 2 00:15:34.023 08:14:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # ver2[v]=2 00:15:34.023 08:14:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:34.023 08:14:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:34.023 08:14:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # return 0 00:15:34.023 08:14:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:34.023 08:14:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:15:34.023 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:34.023 --rc genhtml_branch_coverage=1 00:15:34.023 --rc genhtml_function_coverage=1 00:15:34.023 --rc genhtml_legend=1 00:15:34.023 --rc geninfo_all_blocks=1 00:15:34.023 --rc geninfo_unexecuted_blocks=1 00:15:34.023 00:15:34.023 ' 00:15:34.023 08:14:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:15:34.023 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:34.023 --rc genhtml_branch_coverage=1 00:15:34.023 --rc genhtml_function_coverage=1 00:15:34.023 --rc genhtml_legend=1 00:15:34.023 --rc geninfo_all_blocks=1 00:15:34.023 --rc geninfo_unexecuted_blocks=1 00:15:34.023 00:15:34.023 ' 00:15:34.023 08:14:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:15:34.023 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:34.023 --rc genhtml_branch_coverage=1 00:15:34.023 --rc genhtml_function_coverage=1 00:15:34.023 --rc 
genhtml_legend=1 00:15:34.023 --rc geninfo_all_blocks=1 00:15:34.023 --rc geninfo_unexecuted_blocks=1 00:15:34.023 00:15:34.023 ' 00:15:34.023 08:14:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:15:34.023 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:34.023 --rc genhtml_branch_coverage=1 00:15:34.023 --rc genhtml_function_coverage=1 00:15:34.023 --rc genhtml_legend=1 00:15:34.023 --rc geninfo_all_blocks=1 00:15:34.023 --rc geninfo_unexecuted_blocks=1 00:15:34.023 00:15:34.023 ' 00:15:34.023 08:14:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:34.023 08:14:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:15:34.023 08:14:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:34.023 08:14:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:34.023 08:14:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:34.023 08:14:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:34.023 08:14:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:34.023 08:14:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:34.023 08:14:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:34.023 08:14:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:34.023 08:14:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- 
nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:34.023 08:14:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:34.023 08:14:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:34.023 08:14:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:15:34.023 08:14:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:34.023 08:14:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:34.023 08:14:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:34.023 08:14:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:34.023 08:14:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:34.023 08:14:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@15 -- # shopt -s extglob 00:15:34.023 08:14:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:34.023 08:14:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:34.023 08:14:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:34.023 08:14:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:34.023 08:14:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:34.023 08:14:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:34.023 08:14:16 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:15:34.023 08:14:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:34.023 08:14:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # : 0 00:15:34.023 08:14:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:34.023 08:14:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:34.023 08:14:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:34.023 08:14:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:34.023 08:14:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:34.023 08:14:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:34.023 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:34.023 08:14:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:34.023 08:14:16 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:34.023 08:14:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:34.023 08:14:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:34.023 08:14:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:34.023 08:14:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:15:34.023 08:14:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:15:34.023 08:14:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:15:34.023 08:14:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@20 -- # nvmfpid=1332679 00:15:34.023 08:14:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 1332679' 00:15:34.023 Process pid: 1332679 00:15:34.023 08:14:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:15:34.023 08:14:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:15:34.023 08:14:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 1332679 00:15:34.023 08:14:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@835 -- # '[' -z 1332679 ']' 00:15:34.023 08:14:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:34.023 08:14:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:34.023 08:14:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:34.023 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:34.023 08:14:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:34.023 08:14:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:34.023 [2024-11-28 08:14:16.219233] Starting SPDK v25.01-pre git sha1 27aaaa748 / DPDK 24.03.0 initialization... 00:15:34.024 [2024-11-28 08:14:16.219282] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:34.024 [2024-11-28 08:14:16.283473] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:34.283 [2024-11-28 08:14:16.324013] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:34.283 [2024-11-28 08:14:16.324053] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:34.284 [2024-11-28 08:14:16.324060] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:34.284 [2024-11-28 08:14:16.324066] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:34.284 [2024-11-28 08:14:16.324071] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:15:34.284 [2024-11-28 08:14:16.325377] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:34.284 [2024-11-28 08:14:16.325471] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:34.284 [2024-11-28 08:14:16.325473] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:34.284 08:14:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:34.284 08:14:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@868 -- # return 0 00:15:34.284 08:14:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:15:35.221 08:14:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:15:35.221 08:14:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:15:35.221 08:14:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:15:35.221 08:14:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.221 08:14:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:35.221 08:14:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.221 08:14:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:15:35.221 08:14:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:15:35.221 08:14:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.221 08:14:17 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:35.221 malloc0 00:15:35.221 08:14:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.221 08:14:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:15:35.221 08:14:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.221 08:14:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:35.221 08:14:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.221 08:14:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:15:35.221 08:14:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.221 08:14:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:35.480 08:14:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.480 08:14:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:15:35.480 08:14:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.480 08:14:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:35.480 08:14:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:15:35.480 08:14:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:15:35.480 00:15:35.480 00:15:35.480 CUnit - A unit testing framework for C - Version 2.1-3 00:15:35.480 http://cunit.sourceforge.net/ 00:15:35.480 00:15:35.480 00:15:35.480 Suite: nvme_compliance 00:15:35.480 Test: admin_identify_ctrlr_verify_dptr ...[2024-11-28 08:14:17.661757] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:35.480 [2024-11-28 08:14:17.663146] vfio_user.c: 807:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:15:35.480 [2024-11-28 08:14:17.663168] vfio_user.c:5511:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:15:35.480 [2024-11-28 08:14:17.663177] vfio_user.c:5604:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:15:35.480 [2024-11-28 08:14:17.664784] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:35.480 passed 00:15:35.480 Test: admin_identify_ctrlr_verify_fused ...[2024-11-28 08:14:17.742320] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:35.480 [2024-11-28 08:14:17.747348] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:35.739 passed 00:15:35.739 Test: admin_identify_ns ...[2024-11-28 08:14:17.825423] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:35.739 [2024-11-28 08:14:17.888958] ctrlr.c:2752:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:15:35.739 [2024-11-28 08:14:17.896962] ctrlr.c:2752:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:15:35.739 [2024-11-28 08:14:17.918047] vfio_user.c:2802:disable_ctrlr: *NOTICE*: 
/var/run/vfio-user: disabling controller 00:15:35.739 passed 00:15:35.739 Test: admin_get_features_mandatory_features ...[2024-11-28 08:14:17.992248] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:35.739 [2024-11-28 08:14:17.995273] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:35.997 passed 00:15:35.997 Test: admin_get_features_optional_features ...[2024-11-28 08:14:18.073783] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:35.997 [2024-11-28 08:14:18.078815] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:35.998 passed 00:15:35.998 Test: admin_set_features_number_of_queues ...[2024-11-28 08:14:18.157404] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:35.998 [2024-11-28 08:14:18.263030] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:36.256 passed 00:15:36.256 Test: admin_get_log_page_mandatory_logs ...[2024-11-28 08:14:18.340216] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:36.256 [2024-11-28 08:14:18.343242] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:36.256 passed 00:15:36.256 Test: admin_get_log_page_with_lpo ...[2024-11-28 08:14:18.419428] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:36.256 [2024-11-28 08:14:18.488956] ctrlr.c:2699:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:15:36.256 [2024-11-28 08:14:18.502014] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:36.514 passed 00:15:36.515 Test: fabric_property_get ...[2024-11-28 08:14:18.576175] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:36.515 [2024-11-28 08:14:18.577415] vfio_user.c:5604:handle_cmd_req: *ERROR*: 
/var/run/vfio-user: process NVMe command opc 0x7f failed 00:15:36.515 [2024-11-28 08:14:18.579198] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:36.515 passed 00:15:36.515 Test: admin_delete_io_sq_use_admin_qid ...[2024-11-28 08:14:18.660723] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:36.515 [2024-11-28 08:14:18.661966] vfio_user.c:2312:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:15:36.515 [2024-11-28 08:14:18.663745] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:36.515 passed 00:15:36.515 Test: admin_delete_io_sq_delete_sq_twice ...[2024-11-28 08:14:18.742388] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:36.773 [2024-11-28 08:14:18.826953] vfio_user.c:2312:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:15:36.773 [2024-11-28 08:14:18.842956] vfio_user.c:2312:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:15:36.773 [2024-11-28 08:14:18.848049] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:36.773 passed 00:15:36.773 Test: admin_delete_io_cq_use_admin_qid ...[2024-11-28 08:14:18.922099] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:36.773 [2024-11-28 08:14:18.923336] vfio_user.c:2312:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:15:36.774 [2024-11-28 08:14:18.925122] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:36.774 passed 00:15:36.774 Test: admin_delete_io_cq_delete_cq_first ...[2024-11-28 08:14:19.003136] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:37.032 [2024-11-28 08:14:19.081958] vfio_user.c:2322:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:15:37.032 [2024-11-28 
08:14:19.105954] vfio_user.c:2312:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:15:37.032 [2024-11-28 08:14:19.111042] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:37.032 passed 00:15:37.032 Test: admin_create_io_cq_verify_iv_pc ...[2024-11-28 08:14:19.187123] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:37.032 [2024-11-28 08:14:19.188353] vfio_user.c:2161:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:15:37.032 [2024-11-28 08:14:19.188378] vfio_user.c:2155:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:15:37.032 [2024-11-28 08:14:19.190144] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:37.032 passed 00:15:37.032 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-11-28 08:14:19.268461] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:37.292 [2024-11-28 08:14:19.360953] vfio_user.c:2243:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 1 00:15:37.292 [2024-11-28 08:14:19.368957] vfio_user.c:2243:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:15:37.292 [2024-11-28 08:14:19.376955] vfio_user.c:2041:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:15:37.292 [2024-11-28 08:14:19.384955] vfio_user.c:2041:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:15:37.292 [2024-11-28 08:14:19.414035] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:37.292 passed 00:15:37.292 Test: admin_create_io_sq_verify_pc ...[2024-11-28 08:14:19.493002] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:37.292 [2024-11-28 08:14:19.509963] vfio_user.c:2054:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:15:37.292 [2024-11-28 08:14:19.527192] 
vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:37.292 passed 00:15:37.551 Test: admin_create_io_qp_max_qps ...[2024-11-28 08:14:19.606746] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:38.487 [2024-11-28 08:14:20.708959] nvme_ctrlr.c:5523:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user, 0] No free I/O queue IDs 00:15:39.055 [2024-11-28 08:14:21.072174] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:39.055 passed 00:15:39.055 Test: admin_create_io_sq_shared_cq ...[2024-11-28 08:14:21.149315] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:39.055 [2024-11-28 08:14:21.281955] vfio_user.c:2322:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:15:39.055 [2024-11-28 08:14:21.319008] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:39.313 passed 00:15:39.313 00:15:39.313 Run Summary: Type Total Ran Passed Failed Inactive 00:15:39.313 suites 1 1 n/a 0 0 00:15:39.313 tests 18 18 18 0 0 00:15:39.313 asserts 360 360 360 0 n/a 00:15:39.313 00:15:39.313 Elapsed time = 1.501 seconds 00:15:39.313 08:14:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 1332679 00:15:39.314 08:14:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # '[' -z 1332679 ']' 00:15:39.314 08:14:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@958 -- # kill -0 1332679 00:15:39.314 08:14:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@959 -- # uname 00:15:39.314 08:14:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:39.314 08:14:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1332679 00:15:39.314 08:14:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:39.314 08:14:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:39.314 08:14:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1332679' 00:15:39.314 killing process with pid 1332679 00:15:39.314 08:14:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@973 -- # kill 1332679 00:15:39.314 08:14:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@978 -- # wait 1332679 00:15:39.576 08:14:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:15:39.576 08:14:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:15:39.576 00:15:39.576 real 0m5.633s 00:15:39.576 user 0m15.770s 00:15:39.576 sys 0m0.519s 00:15:39.576 08:14:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:39.576 08:14:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:39.576 ************************************ 00:15:39.576 END TEST nvmf_vfio_user_nvme_compliance 00:15:39.576 ************************************ 00:15:39.576 08:14:21 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@33 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:15:39.576 08:14:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:39.576 08:14:21 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:15:39.576 08:14:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:39.576 ************************************ 00:15:39.576 START TEST nvmf_vfio_user_fuzz 00:15:39.576 ************************************ 00:15:39.576 08:14:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:15:39.576 * Looking for test storage... 00:15:39.576 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:39.576 08:14:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:15:39.576 08:14:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1693 -- # lcov --version 00:15:39.576 08:14:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:15:39.576 08:14:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:15:39.576 08:14:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:39.576 08:14:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:39.576 08:14:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:39.576 08:14:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:15:39.576 08:14:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:15:39.576 08:14:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:15:39.576 08:14:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:15:39.576 08:14:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz 
-- scripts/common.sh@338 -- # local 'op=<' 00:15:39.576 08:14:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:15:39.576 08:14:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:15:39.576 08:14:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:39.576 08:14:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:15:39.576 08:14:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@345 -- # : 1 00:15:39.576 08:14:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:39.576 08:14:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:39.576 08:14:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # decimal 1 00:15:39.576 08:14:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=1 00:15:39.576 08:14:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:39.576 08:14:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 1 00:15:39.576 08:14:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:15:39.576 08:14:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # decimal 2 00:15:39.576 08:14:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=2 00:15:39.576 08:14:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:39.576 08:14:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 2 00:15:39.576 08:14:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:15:39.576 08:14:21 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:39.576 08:14:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:39.576 08:14:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # return 0 00:15:39.576 08:14:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:39.576 08:14:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:15:39.576 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:39.576 --rc genhtml_branch_coverage=1 00:15:39.576 --rc genhtml_function_coverage=1 00:15:39.576 --rc genhtml_legend=1 00:15:39.576 --rc geninfo_all_blocks=1 00:15:39.576 --rc geninfo_unexecuted_blocks=1 00:15:39.576 00:15:39.576 ' 00:15:39.576 08:14:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:15:39.576 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:39.576 --rc genhtml_branch_coverage=1 00:15:39.576 --rc genhtml_function_coverage=1 00:15:39.576 --rc genhtml_legend=1 00:15:39.576 --rc geninfo_all_blocks=1 00:15:39.576 --rc geninfo_unexecuted_blocks=1 00:15:39.576 00:15:39.576 ' 00:15:39.576 08:14:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:15:39.576 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:39.576 --rc genhtml_branch_coverage=1 00:15:39.576 --rc genhtml_function_coverage=1 00:15:39.576 --rc genhtml_legend=1 00:15:39.576 --rc geninfo_all_blocks=1 00:15:39.576 --rc geninfo_unexecuted_blocks=1 00:15:39.576 00:15:39.576 ' 00:15:39.576 08:14:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:15:39.576 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:15:39.576 --rc genhtml_branch_coverage=1 00:15:39.576 --rc genhtml_function_coverage=1 00:15:39.576 --rc genhtml_legend=1 00:15:39.576 --rc geninfo_all_blocks=1 00:15:39.576 --rc geninfo_unexecuted_blocks=1 00:15:39.576 00:15:39.576 ' 00:15:39.576 08:14:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:39.576 08:14:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:15:39.576 08:14:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:39.576 08:14:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:39.576 08:14:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:39.576 08:14:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:39.576 08:14:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:39.576 08:14:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:39.576 08:14:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:39.576 08:14:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:39.841 08:14:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:39.841 08:14:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:39.841 08:14:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:39.841 08:14:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # 
NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:15:39.841 08:14:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:39.841 08:14:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:39.841 08:14:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:39.841 08:14:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:39.841 08:14:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:39.841 08:14:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@15 -- # shopt -s extglob 00:15:39.842 08:14:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:39.842 08:14:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:39.842 08:14:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:39.842 08:14:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:39.842 08:14:21 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:39.842 08:14:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:39.842 08:14:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:15:39.842 08:14:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:39.842 08:14:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # : 0 00:15:39.842 08:14:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:39.842 08:14:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:39.842 08:14:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:39.842 08:14:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:39.842 08:14:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:39.842 08:14:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:39.842 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:39.842 08:14:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:39.842 08:14:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:39.842 08:14:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:39.842 08:14:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # 
MALLOC_BDEV_SIZE=64 00:15:39.842 08:14:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:15:39.842 08:14:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:15:39.842 08:14:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:15:39.842 08:14:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:15:39.842 08:14:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:15:39.842 08:14:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:15:39.842 08:14:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=1333716 00:15:39.842 08:14:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 1333716' 00:15:39.842 Process pid: 1333716 00:15:39.842 08:14:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:15:39.842 08:14:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:15:39.842 08:14:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 1333716 00:15:39.842 08:14:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@835 -- # '[' -z 1333716 ']' 00:15:39.842 08:14:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:39.842 08:14:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:39.842 08:14:21 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:39.842 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:39.842 08:14:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:39.842 08:14:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:40.101 08:14:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:40.101 08:14:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@868 -- # return 0 00:15:40.101 08:14:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:15:41.064 08:14:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:15:41.064 08:14:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.064 08:14:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:41.064 08:14:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.064 08:14:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:15:41.064 08:14:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:15:41.064 08:14:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.064 08:14:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:41.064 malloc0 00:15:41.064 08:14:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.064 08:14:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:15:41.064 08:14:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.064 08:14:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:41.064 08:14:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.064 08:14:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:15:41.064 08:14:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.064 08:14:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:41.064 08:14:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.064 08:14:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:15:41.064 08:14:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.064 08:14:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:41.064 08:14:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.064 08:14:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 00:15:41.064 08:14:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:16:13.150 Fuzzing completed. Shutting down the fuzz application 00:16:13.150 00:16:13.150 Dumping successful admin opcodes: 00:16:13.150 9, 10, 00:16:13.150 Dumping successful io opcodes: 00:16:13.150 0, 00:16:13.150 NS: 0x20000081ef00 I/O qp, Total commands completed: 996894, total successful commands: 3901, random_seed: 2801688000 00:16:13.151 NS: 0x20000081ef00 admin qp, Total commands completed: 245456, total successful commands: 57, random_seed: 3787005568 00:16:13.151 08:14:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:16:13.151 08:14:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.151 08:14:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:13.151 08:14:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.151 08:14:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 1333716 00:16:13.151 08:14:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # '[' -z 1333716 ']' 00:16:13.151 08:14:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@958 -- # kill -0 1333716 00:16:13.151 08:14:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@959 -- # uname 00:16:13.151 08:14:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:13.151 08:14:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1333716 00:16:13.151 08:14:53 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:13.151 08:14:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:13.151 08:14:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1333716' 00:16:13.151 killing process with pid 1333716 00:16:13.151 08:14:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@973 -- # kill 1333716 00:16:13.151 08:14:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@978 -- # wait 1333716 00:16:13.151 08:14:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:16:13.151 08:14:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:16:13.151 00:16:13.151 real 0m32.212s 00:16:13.151 user 0m29.214s 00:16:13.151 sys 0m31.757s 00:16:13.151 08:14:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:13.151 08:14:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:13.151 ************************************ 00:16:13.151 END TEST nvmf_vfio_user_fuzz 00:16:13.151 ************************************ 00:16:13.151 08:14:53 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:16:13.151 08:14:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:13.151 08:14:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 
00:16:13.151 08:14:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:13.151 ************************************ 00:16:13.151 START TEST nvmf_auth_target 00:16:13.151 ************************************ 00:16:13.151 08:14:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:16:13.151 * Looking for test storage... 00:16:13.151 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:13.151 08:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:16:13.151 08:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:16:13.151 08:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1693 -- # lcov --version 00:16:13.151 08:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:16:13.151 08:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:13.151 08:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:13.151 08:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:13.151 08:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # IFS=.-: 00:16:13.151 08:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # read -ra ver1 00:16:13.151 08:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # IFS=.-: 00:16:13.151 08:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # read -ra ver2 00:16:13.151 08:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@338 -- # local 'op=<' 00:16:13.151 08:14:54 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@340 -- # ver1_l=2 00:16:13.151 08:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@341 -- # ver2_l=1 00:16:13.151 08:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:13.151 08:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@344 -- # case "$op" in 00:16:13.151 08:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@345 -- # : 1 00:16:13.151 08:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:13.151 08:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:16:13.151 08:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # decimal 1 00:16:13.151 08:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=1 00:16:13.151 08:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:13.151 08:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 1 00:16:13.151 08:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # ver1[v]=1 00:16:13.151 08:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # decimal 2 00:16:13.151 08:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=2 00:16:13.151 08:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:13.151 08:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 2 00:16:13.151 08:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # ver2[v]=2 00:16:13.151 08:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:13.151 08:14:54 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:13.151 08:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # return 0 00:16:13.151 08:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:13.151 08:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:16:13.151 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:13.151 --rc genhtml_branch_coverage=1 00:16:13.151 --rc genhtml_function_coverage=1 00:16:13.151 --rc genhtml_legend=1 00:16:13.151 --rc geninfo_all_blocks=1 00:16:13.151 --rc geninfo_unexecuted_blocks=1 00:16:13.151 00:16:13.151 ' 00:16:13.151 08:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:16:13.151 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:13.151 --rc genhtml_branch_coverage=1 00:16:13.151 --rc genhtml_function_coverage=1 00:16:13.151 --rc genhtml_legend=1 00:16:13.151 --rc geninfo_all_blocks=1 00:16:13.151 --rc geninfo_unexecuted_blocks=1 00:16:13.151 00:16:13.151 ' 00:16:13.151 08:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:16:13.151 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:13.151 --rc genhtml_branch_coverage=1 00:16:13.151 --rc genhtml_function_coverage=1 00:16:13.151 --rc genhtml_legend=1 00:16:13.151 --rc geninfo_all_blocks=1 00:16:13.151 --rc geninfo_unexecuted_blocks=1 00:16:13.151 00:16:13.151 ' 00:16:13.151 08:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:16:13.151 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:13.151 --rc genhtml_branch_coverage=1 00:16:13.151 --rc genhtml_function_coverage=1 00:16:13.151 --rc genhtml_legend=1 00:16:13.151 
--rc geninfo_all_blocks=1 00:16:13.151 --rc geninfo_unexecuted_blocks=1 00:16:13.151 00:16:13.151 ' 00:16:13.151 08:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:13.151 08:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:16:13.151 08:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:13.151 08:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:13.151 08:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:13.151 08:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:13.151 08:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:13.151 08:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:13.151 08:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:13.151 08:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:13.151 08:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:13.151 08:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:13.151 08:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:13.151 08:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:16:13.151 08:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:13.151 
08:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:13.151 08:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:13.151 08:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:13.151 08:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:13.151 08:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@15 -- # shopt -s extglob 00:16:13.151 08:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:13.151 08:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:13.152 08:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:13.152 08:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:13.152 08:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:13.152 08:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:13.152 08:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:16:13.152 08:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:13.152 08:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # : 0 00:16:13.152 08:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:13.152 08:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:13.152 08:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:13.152 08:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:13.152 08:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:13.152 08:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:13.152 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:13.152 08:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:13.152 08:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:13.152 08:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:13.152 08:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:16:13.152 08:14:54 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:16:13.152 08:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:16:13.152 08:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:13.152 08:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:16:13.152 08:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:16:13.152 08:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:16:13.152 08:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # nvmftestinit 00:16:13.152 08:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:16:13.152 08:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:13.152 08:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:16:13.152 08:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:16:13.152 08:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:16:13.152 08:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:13.152 08:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:13.152 08:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:13.152 08:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:16:13.152 08:14:54 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:16:13.152 08:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@309 -- # xtrace_disable 00:16:13.152 08:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:17.346 08:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:17.346 08:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # pci_devs=() 00:16:17.346 08:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:16:17.346 08:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:16:17.346 08:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:16:17.346 08:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:16:17.346 08:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:16:17.346 08:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # net_devs=() 00:16:17.346 08:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:16:17.346 08:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # e810=() 00:16:17.346 08:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # local -ga e810 00:16:17.346 08:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # x722=() 00:16:17.346 08:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # local -ga x722 00:16:17.346 08:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # mlx=() 00:16:17.346 08:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # local -ga mlx 00:16:17.346 08:14:59 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:17.346 08:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:17.346 08:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:17.346 08:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:17.346 08:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:17.346 08:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:17.346 08:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:17.346 08:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:16:17.346 08:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:17.346 08:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:17.346 08:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:17.346 08:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:17.346 08:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:16:17.346 08:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:16:17.346 08:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:16:17.346 08:14:59 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:16:17.346 08:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:16:17.346 08:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:16:17.346 08:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:17.346 08:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:16:17.346 Found 0000:86:00.0 (0x8086 - 0x159b) 00:16:17.346 08:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:17.346 08:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:17.346 08:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:17.346 08:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:17.346 08:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:17.346 08:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:17.346 08:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:16:17.346 Found 0000:86:00.1 (0x8086 - 0x159b) 00:16:17.346 08:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:17.346 08:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:17.346 08:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:17.346 08:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:17.346 
08:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:17.346 08:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:16:17.346 08:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:16:17.346 08:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:16:17.346 08:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:17.346 08:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:17.346 08:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:16:17.346 08:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:17.346 08:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:16:17.346 08:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:17.346 08:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:17.346 08:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:16:17.346 Found net devices under 0000:86:00.0: cvl_0_0 00:16:17.346 08:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:16:17.346 08:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:17.346 08:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:17.346 08:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:16:17.346 
08:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:17.347 08:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:16:17.347 08:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:17.347 08:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:17.347 08:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:16:17.347 Found net devices under 0000:86:00.1: cvl_0_1 00:16:17.347 08:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:16:17.347 08:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:16:17.347 08:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # is_hw=yes 00:16:17.347 08:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:16:17.347 08:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:16:17.347 08:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:16:17.347 08:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:17.347 08:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:17.347 08:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:17.347 08:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:17.347 08:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:16:17.347 08:14:59 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:17.347 08:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:17.347 08:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:16:17.347 08:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:16:17.347 08:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:17.347 08:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:17.347 08:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:16:17.347 08:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:16:17.347 08:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:16:17.347 08:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:17.347 08:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:17.347 08:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:17.347 08:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:16:17.347 08:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:17.606 08:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:17.606 08:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:17.606 08:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:16:17.606 08:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:16:17.606 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:17.606 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.407 ms 00:16:17.606 00:16:17.606 --- 10.0.0.2 ping statistics --- 00:16:17.606 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:17.606 rtt min/avg/max/mdev = 0.407/0.407/0.407/0.000 ms 00:16:17.606 08:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:17.606 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:17.606 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.184 ms 00:16:17.606 00:16:17.606 --- 10.0.0.1 ping statistics --- 00:16:17.606 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:17.606 rtt min/avg/max/mdev = 0.184/0.184/0.184/0.000 ms 00:16:17.606 08:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:17.606 08:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@450 -- # return 0 00:16:17.606 08:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:16:17.606 08:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:17.606 08:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:16:17.606 08:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:16:17.606 08:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:17.606 08:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:16:17.606 08:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:16:17.606 08:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@87 -- # nvmfappstart -L nvmf_auth 00:16:17.606 08:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:17.606 08:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:17.606 08:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:17.606 08:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=1342174 00:16:17.606 08:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 1342174 00:16:17.606 08:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:16:17.606 08:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 1342174 ']' 00:16:17.606 08:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:17.606 08:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:17.606 08:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:16:17.606 08:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:17.606 08:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:17.865 08:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:17.865 08:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:16:17.865 08:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:17.865 08:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:17.865 08:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:17.865 08:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:17.865 08:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@89 -- # hostpid=1342209 00:16:17.865 08:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:16:17.865 08:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:16:17.865 08:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key null 48 00:16:17.865 08:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:16:17.865 08:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:17.865 08:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:16:17.865 08:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- nvmf/common.sh@754 -- # digest=null 00:16:17.865 08:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:16:17.865 08:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:16:17.865 08:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=a3ed670fae1345c6553307a0dde0743b20ba59c350f69f91 00:16:17.865 08:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:16:17.865 08:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.w7p 00:16:17.865 08:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key a3ed670fae1345c6553307a0dde0743b20ba59c350f69f91 0 00:16:17.865 08:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 a3ed670fae1345c6553307a0dde0743b20ba59c350f69f91 0 00:16:17.865 08:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:16:17.865 08:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:16:17.865 08:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=a3ed670fae1345c6553307a0dde0743b20ba59c350f69f91 00:16:17.865 08:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=0 00:16:17.865 08:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:16:17.865 08:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.w7p 00:16:17.865 08:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.w7p 00:16:17.865 08:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # keys[0]=/tmp/spdk.key-null.w7p 00:16:17.865 08:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@94 -- # gen_dhchap_key sha512 64 00:16:17.865 08:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:16:17.865 08:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:17.865 08:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:16:17.865 08:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:16:17.865 08:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:16:17.865 08:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:16:17.865 08:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=c6007afa5d6fc1d1b139129ed90083668f1867d9f47ec2698b9cd7d99523b4a7 00:16:17.865 08:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:16:17.865 08:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.PWA 00:16:17.865 08:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key c6007afa5d6fc1d1b139129ed90083668f1867d9f47ec2698b9cd7d99523b4a7 3 00:16:17.865 08:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 c6007afa5d6fc1d1b139129ed90083668f1867d9f47ec2698b9cd7d99523b4a7 3 00:16:17.865 08:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:16:17.865 08:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:16:17.865 08:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=c6007afa5d6fc1d1b139129ed90083668f1867d9f47ec2698b9cd7d99523b4a7 00:16:17.865 08:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@732 -- # digest=3
00:16:17.865 08:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python -
00:16:18.124 08:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.PWA
00:16:18.124 08:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.PWA
00:16:18.124 08:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # ckeys[0]=/tmp/spdk.key-sha512.PWA
00:16:18.124 08:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha256 32
00:16:18.125 08:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key
00:16:18.125 08:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3')
00:16:18.125 08:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests
00:16:18.125 08:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256
00:16:18.125 08:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32
00:16:18.125 08:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom
00:16:18.125 08:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=ba9264e8d5e520fd3a68d155f7982fb0
00:16:18.125 08:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX
00:16:18.125 08:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.I28
00:16:18.125 08:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key ba9264e8d5e520fd3a68d155f7982fb0 1
00:16:18.125 08:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 ba9264e8d5e520fd3a68d155f7982fb0 1
00:16:18.125 08:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest
00:16:18.125 08:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1
00:16:18.125 08:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=ba9264e8d5e520fd3a68d155f7982fb0
00:16:18.125 08:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1
00:16:18.125 08:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python -
00:16:18.125 08:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.I28
00:16:18.125 08:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.I28
00:16:18.125 08:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # keys[1]=/tmp/spdk.key-sha256.I28
00:16:18.125 08:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha384 48
00:16:18.125 08:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key
00:16:18.125 08:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3')
00:16:18.125 08:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests
00:16:18.125 08:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384
00:16:18.125 08:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48
00:16:18.125 08:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom
00:16:18.125 08:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=e5233688cead0d3083371b9fcdecf9ff486a6dd6e7e8e13c
00:16:18.125 08:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX
00:16:18.125 08:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.Y2J
00:16:18.125 08:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key e5233688cead0d3083371b9fcdecf9ff486a6dd6e7e8e13c 2
00:16:18.125 08:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 e5233688cead0d3083371b9fcdecf9ff486a6dd6e7e8e13c 2
00:16:18.125 08:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest
00:16:18.125 08:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1
00:16:18.125 08:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=e5233688cead0d3083371b9fcdecf9ff486a6dd6e7e8e13c
00:16:18.125 08:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2
00:16:18.125 08:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python -
00:16:18.125 08:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.Y2J
00:16:18.125 08:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.Y2J
00:16:18.125 08:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # ckeys[1]=/tmp/spdk.key-sha384.Y2J
00:16:18.125 08:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha384 48
00:16:18.125 08:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key
00:16:18.125 08:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3')
00:16:18.125 08:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests
00:16:18.125 08:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384
00:16:18.125 08:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48
00:16:18.125 08:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom
00:16:18.125 08:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=62f3d9a4f36a639bc09b872b4ca492786168236d53b77a42
00:16:18.125 08:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX
00:16:18.125 08:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.Mdy
00:16:18.125 08:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 62f3d9a4f36a639bc09b872b4ca492786168236d53b77a42 2
00:16:18.125 08:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 62f3d9a4f36a639bc09b872b4ca492786168236d53b77a42 2
00:16:18.125 08:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest
00:16:18.125 08:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1
00:16:18.125 08:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=62f3d9a4f36a639bc09b872b4ca492786168236d53b77a42
00:16:18.125 08:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2
00:16:18.125 08:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python -
00:16:18.125 08:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.Mdy
00:16:18.125 08:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.Mdy
00:16:18.125 08:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # keys[2]=/tmp/spdk.key-sha384.Mdy
00:16:18.125 08:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha256 32
00:16:18.125 08:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key
00:16:18.125 08:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3')
00:16:18.125 08:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests
00:16:18.125 08:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256
00:16:18.125 08:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32
00:16:18.125 08:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom
00:16:18.125 08:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=87e968961991cdbcd4740c3aa1b4fa08
00:16:18.125 08:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX
00:16:18.125 08:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.A09
00:16:18.125 08:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 87e968961991cdbcd4740c3aa1b4fa08 1
00:16:18.125 08:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 87e968961991cdbcd4740c3aa1b4fa08 1
00:16:18.125 08:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest
00:16:18.125 08:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1
00:16:18.125 08:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=87e968961991cdbcd4740c3aa1b4fa08
00:16:18.125 08:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1
00:16:18.125 08:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python -
00:16:18.384 08:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.A09
00:16:18.384 08:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.A09
00:16:18.384 08:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # ckeys[2]=/tmp/spdk.key-sha256.A09
00:16:18.384 08:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # gen_dhchap_key sha512 64
00:16:18.384 08:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key
00:16:18.384 08:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3')
00:16:18.384 08:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests
00:16:18.384 08:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512
00:16:18.384 08:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64
00:16:18.384 08:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom
00:16:18.384 08:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=8fb6a4b9bb39d9e7eeff5f8f795818f902417cbdabdaf29a9cf45d7c35b31b3e
00:16:18.384 08:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX
00:16:18.384 08:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.bxI
00:16:18.384 08:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 8fb6a4b9bb39d9e7eeff5f8f795818f902417cbdabdaf29a9cf45d7c35b31b3e 3
00:16:18.384 08:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 8fb6a4b9bb39d9e7eeff5f8f795818f902417cbdabdaf29a9cf45d7c35b31b3e 3
00:16:18.384 08:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest
00:16:18.384 08:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1
00:16:18.384 08:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=8fb6a4b9bb39d9e7eeff5f8f795818f902417cbdabdaf29a9cf45d7c35b31b3e
00:16:18.384 08:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3
00:16:18.384 08:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python -
00:16:18.384 08:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.bxI
00:16:18.384 08:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.bxI
00:16:18.384 08:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # keys[3]=/tmp/spdk.key-sha512.bxI
00:16:18.384 08:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # ckeys[3]=
00:16:18.384 08:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@99 -- # waitforlisten 1342174
00:16:18.384 08:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 1342174 ']'
00:16:18.384 08:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:16:18.384 08:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100
00:16:18.384 08:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:16:18.384 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:16:18.384 08:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable
00:16:18.384 08:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:18.644 08:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:16:18.644 08:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0
00:16:18.644 08:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@100 -- # waitforlisten 1342209 /var/tmp/host.sock
00:16:18.644 08:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 1342209 ']'
00:16:18.644 08:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock
00:16:18.644 08:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100
00:16:18.644 08:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...'
00:16:18.644 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...
00:16:18.644 08:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable
00:16:18.644 08:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:18.644 08:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:16:18.644 08:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0
00:16:18.644 08:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@101 -- # rpc_cmd
00:16:18.644 08:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:18.644 08:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:18.644 08:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:18.644 08:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}"
00:16:18.644 08:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.w7p
00:16:18.644 08:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:18.644 08:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:18.644 08:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:18.644 08:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.w7p
00:16:18.644 08:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.w7p
00:16:18.903 08:15:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha512.PWA ]]
00:16:18.903 08:15:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.PWA
00:16:18.903 08:15:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:18.903 08:15:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:18.903 08:15:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:18.903 08:15:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.PWA
00:16:18.903 08:15:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.PWA
00:16:19.162 08:15:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}"
00:16:19.162 08:15:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.I28
00:16:19.162 08:15:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:19.162 08:15:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:19.162 08:15:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:19.162 08:15:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.I28
00:16:19.162 08:15:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.I28
00:16:19.421 08:15:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha384.Y2J ]]
00:16:19.421 08:15:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.Y2J
00:16:19.421 08:15:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:19.421 08:15:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:19.422 08:15:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:19.422 08:15:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.Y2J
00:16:19.422 08:15:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.Y2J
00:16:19.422 08:15:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}"
00:16:19.422 08:15:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.Mdy
00:16:19.422 08:15:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:19.422 08:15:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:19.422 08:15:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:19.422 08:15:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.Mdy
00:16:19.422 08:15:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.Mdy
00:16:19.681 08:15:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha256.A09 ]]
00:16:19.681 08:15:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.A09
00:16:19.681 08:15:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:19.681 08:15:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:19.681 08:15:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:19.681 08:15:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.A09
00:16:19.681 08:15:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.A09
00:16:19.940 08:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}"
00:16:19.940 08:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.bxI
00:16:19.940 08:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:19.940 08:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:19.940 08:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:19.940 08:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.bxI
00:16:19.940 08:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.bxI
00:16:20.199 08:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n '' ]]
00:16:20.200 08:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}"
00:16:20.200 08:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}"
00:16:20.200 08:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:16:20.200 08:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null
00:16:20.200 08:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null
00:16:20.200 08:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 0
00:16:20.200 08:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:16:20.200 08:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:16:20.200 08:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null
00:16:20.200 08:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0
00:16:20.200 08:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:16:20.200 08:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:16:20.200 08:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:20.200 08:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:20.200 08:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:20.200 08:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:16:20.200 08:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:16:20.200 08:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:16:20.459
00:16:20.459 08:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:16:20.459 08:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:16:20.459 08:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:16:20.718 08:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:16:20.718 08:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:16:20.718 08:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:20.718 08:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:20.718 08:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:20.718 08:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:16:20.718 {
00:16:20.718 "cntlid": 1,
00:16:20.718 "qid": 0,
00:16:20.718 "state": "enabled",
00:16:20.718 "thread": "nvmf_tgt_poll_group_000",
00:16:20.718 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562",
00:16:20.718 "listen_address": {
00:16:20.718 "trtype": "TCP",
00:16:20.718 "adrfam": "IPv4",
00:16:20.718 "traddr": "10.0.0.2",
00:16:20.718 "trsvcid": "4420"
00:16:20.718 },
00:16:20.718 "peer_address": {
00:16:20.718 "trtype": "TCP",
00:16:20.718 "adrfam": "IPv4",
00:16:20.718 "traddr": "10.0.0.1",
00:16:20.718 "trsvcid": "36606"
00:16:20.718 },
00:16:20.718 "auth": {
00:16:20.718 "state": "completed",
00:16:20.718 "digest": "sha256",
00:16:20.718 "dhgroup": "null"
00:16:20.718 }
00:16:20.718 }
00:16:20.718 ]'
00:16:20.718 08:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:16:20.718 08:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:16:20.718 08:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:16:20.718 08:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]]
00:16:20.718 08:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:16:20.977 08:15:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:16:20.977 08:15:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:16:20.977 08:15:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:16:20.977 08:15:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YTNlZDY3MGZhZTEzNDVjNjU1MzMwN2EwZGRlMDc0M2IyMGJhNTljMzUwZjY5ZjkxwRaKIg==: --dhchap-ctrl-secret DHHC-1:03:YzYwMDdhZmE1ZDZmYzFkMWIxMzkxMjllZDkwMDgzNjY4ZjE4NjdkOWY0N2VjMjY5OGI5Y2Q3ZDk5NTIzYjRhN62pg6g=:
00:16:20.977 08:15:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:YTNlZDY3MGZhZTEzNDVjNjU1MzMwN2EwZGRlMDc0M2IyMGJhNTljMzUwZjY5ZjkxwRaKIg==: --dhchap-ctrl-secret DHHC-1:03:YzYwMDdhZmE1ZDZmYzFkMWIxMzkxMjllZDkwMDgzNjY4ZjE4NjdkOWY0N2VjMjY5OGI5Y2Q3ZDk5NTIzYjRhN62pg6g=:
00:16:21.544 08:15:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:16:21.544 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:16:21.544 08:15:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
00:16:21.544 08:15:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:21.544 08:15:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:21.544 08:15:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:21.544 08:15:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:16:21.544 08:15:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null
00:16:21.544 08:15:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null
00:16:21.803 08:15:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 1
00:16:21.803 08:15:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:16:21.803 08:15:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:16:21.803 08:15:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null
00:16:21.803 08:15:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1
00:16:21.803 08:15:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:16:21.803 08:15:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:16:21.803 08:15:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:21.803 08:15:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:21.803 08:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:21.803 08:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:16:21.803 08:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:16:21.803 08:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:16:22.062
00:16:22.062 08:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:16:22.062 08:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:16:22.062 08:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:16:22.321 08:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:16:22.321 08:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:16:22.321 08:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:22.321 08:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:22.321 08:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:22.321 08:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:16:22.321 {
00:16:22.321 "cntlid": 3,
00:16:22.321 "qid": 0,
00:16:22.321 "state": "enabled",
00:16:22.321 "thread": "nvmf_tgt_poll_group_000",
00:16:22.321 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562",
00:16:22.321 "listen_address": {
00:16:22.321 "trtype": "TCP",
00:16:22.321 "adrfam": "IPv4",
00:16:22.321 "traddr": "10.0.0.2",
00:16:22.321 "trsvcid": "4420"
00:16:22.321 },
00:16:22.321 "peer_address": {
00:16:22.321 "trtype": "TCP",
00:16:22.321 "adrfam": "IPv4",
00:16:22.321 "traddr": "10.0.0.1",
00:16:22.321 "trsvcid": "32960"
00:16:22.321 },
00:16:22.321 "auth": {
00:16:22.321 "state": "completed",
00:16:22.321 "digest": "sha256",
00:16:22.321 "dhgroup": "null"
00:16:22.321 }
00:16:22.321 }
00:16:22.321 ]'
00:16:22.321 08:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:16:22.321 08:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:16:22.321 08:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:16:22.321 08:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]]
00:16:22.321 08:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:16:22.321 08:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:16:22.321 08:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:16:22.580 08:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:16:22.580 08:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YmE5MjY0ZThkNWU1MjBmZDNhNjhkMTU1Zjc5ODJmYjByjmi0: --dhchap-ctrl-secret DHHC-1:02:ZTUyMzM2ODhjZWFkMGQzMDgzMzcxYjlmY2RlY2Y5ZmY0ODZhNmRkNmU3ZThlMTNjzBAxdA==:
00:16:22.580 08:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:YmE5MjY0ZThkNWU1MjBmZDNhNjhkMTU1Zjc5ODJmYjByjmi0: --dhchap-ctrl-secret DHHC-1:02:ZTUyMzM2ODhjZWFkMGQzMDgzMzcxYjlmY2RlY2Y5ZmY0ODZhNmRkNmU3ZThlMTNjzBAxdA==:
00:16:23.145 08:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:16:23.146 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:16:23.146 08:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
00:16:23.146 08:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:23.146 08:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:23.146 08:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:23.146 08:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:16:23.146 08:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null
00:16:23.146 08:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null
00:16:23.404 08:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 2
00:16:23.404 08:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:16:23.404 08:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:16:23.404 08:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67
-- # dhgroup=null 00:16:23.405 08:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:23.405 08:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:23.405 08:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:23.405 08:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.405 08:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:23.405 08:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.405 08:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:23.405 08:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:23.405 08:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:23.664 00:16:23.664 08:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:23.664 08:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:23.664 
08:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:23.922 08:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:23.922 08:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:23.922 08:15:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.922 08:15:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:23.922 08:15:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.922 08:15:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:23.922 { 00:16:23.922 "cntlid": 5, 00:16:23.922 "qid": 0, 00:16:23.922 "state": "enabled", 00:16:23.922 "thread": "nvmf_tgt_poll_group_000", 00:16:23.922 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:23.922 "listen_address": { 00:16:23.922 "trtype": "TCP", 00:16:23.922 "adrfam": "IPv4", 00:16:23.922 "traddr": "10.0.0.2", 00:16:23.922 "trsvcid": "4420" 00:16:23.922 }, 00:16:23.922 "peer_address": { 00:16:23.922 "trtype": "TCP", 00:16:23.922 "adrfam": "IPv4", 00:16:23.922 "traddr": "10.0.0.1", 00:16:23.922 "trsvcid": "32982" 00:16:23.922 }, 00:16:23.922 "auth": { 00:16:23.922 "state": "completed", 00:16:23.922 "digest": "sha256", 00:16:23.922 "dhgroup": "null" 00:16:23.922 } 00:16:23.922 } 00:16:23.922 ]' 00:16:23.922 08:15:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:23.922 08:15:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:23.922 08:15:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 
-- # jq -r '.[0].auth.dhgroup' 00:16:23.922 08:15:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:23.922 08:15:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:23.922 08:15:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:23.922 08:15:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:23.922 08:15:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:24.181 08:15:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NjJmM2Q5YTRmMzZhNjM5YmMwOWI4NzJiNGNhNDkyNzg2MTY4MjM2ZDUzYjc3YTQyErGN6A==: --dhchap-ctrl-secret DHHC-1:01:ODdlOTY4OTYxOTkxY2RiY2Q0NzQwYzNhYTFiNGZhMDiF1NUp: 00:16:24.181 08:15:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NjJmM2Q5YTRmMzZhNjM5YmMwOWI4NzJiNGNhNDkyNzg2MTY4MjM2ZDUzYjc3YTQyErGN6A==: --dhchap-ctrl-secret DHHC-1:01:ODdlOTY4OTYxOTkxY2RiY2Q0NzQwYzNhYTFiNGZhMDiF1NUp: 00:16:24.749 08:15:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:24.749 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:24.749 08:15:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:24.749 08:15:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.749 08:15:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:24.749 08:15:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.749 08:15:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:24.749 08:15:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:24.749 08:15:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:25.009 08:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 3 00:16:25.009 08:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:25.009 08:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:25.009 08:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:25.009 08:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:25.009 08:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:25.009 08:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:16:25.009 08:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.009 08:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:16:25.009 08:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.009 08:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:25.009 08:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:25.009 08:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:25.268 00:16:25.268 08:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:25.268 08:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:25.268 08:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:25.527 08:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:25.527 08:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:25.527 08:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.527 08:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:25.527 08:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.527 
08:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:25.527 { 00:16:25.527 "cntlid": 7, 00:16:25.527 "qid": 0, 00:16:25.527 "state": "enabled", 00:16:25.527 "thread": "nvmf_tgt_poll_group_000", 00:16:25.527 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:25.527 "listen_address": { 00:16:25.527 "trtype": "TCP", 00:16:25.527 "adrfam": "IPv4", 00:16:25.527 "traddr": "10.0.0.2", 00:16:25.527 "trsvcid": "4420" 00:16:25.527 }, 00:16:25.527 "peer_address": { 00:16:25.527 "trtype": "TCP", 00:16:25.527 "adrfam": "IPv4", 00:16:25.527 "traddr": "10.0.0.1", 00:16:25.527 "trsvcid": "33028" 00:16:25.527 }, 00:16:25.527 "auth": { 00:16:25.527 "state": "completed", 00:16:25.527 "digest": "sha256", 00:16:25.527 "dhgroup": "null" 00:16:25.527 } 00:16:25.527 } 00:16:25.527 ]' 00:16:25.527 08:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:25.527 08:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:25.527 08:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:25.527 08:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:25.527 08:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:25.527 08:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:25.527 08:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:25.527 08:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:25.786 08:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OGZiNmE0YjliYjM5ZDllN2VlZmY1ZjhmNzk1ODE4ZjkwMjQxN2NiZGFiZGFmMjlhOWNmNDVkN2MzNWIzMWIzZQOqQEk=: 00:16:25.786 08:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:OGZiNmE0YjliYjM5ZDllN2VlZmY1ZjhmNzk1ODE4ZjkwMjQxN2NiZGFiZGFmMjlhOWNmNDVkN2MzNWIzMWIzZQOqQEk=: 00:16:26.354 08:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:26.354 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:26.354 08:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:26.354 08:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.354 08:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:26.354 08:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.354 08:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:26.354 08:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:26.354 08:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:26.354 08:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 
--dhchap-dhgroups ffdhe2048 00:16:26.613 08:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 0 00:16:26.613 08:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:26.613 08:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:26.613 08:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:26.613 08:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:26.613 08:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:26.613 08:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:26.613 08:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.613 08:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:26.614 08:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.614 08:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:26.614 08:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:26.614 08:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:26.873 00:16:26.873 08:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:26.873 08:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:26.873 08:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:27.132 08:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:27.132 08:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:27.132 08:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.132 08:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:27.132 08:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.132 08:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:27.132 { 00:16:27.132 "cntlid": 9, 00:16:27.132 "qid": 0, 00:16:27.132 "state": "enabled", 00:16:27.132 "thread": "nvmf_tgt_poll_group_000", 00:16:27.132 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:27.132 "listen_address": { 00:16:27.132 "trtype": "TCP", 00:16:27.132 "adrfam": "IPv4", 00:16:27.132 "traddr": "10.0.0.2", 00:16:27.132 "trsvcid": "4420" 00:16:27.132 }, 00:16:27.132 "peer_address": { 00:16:27.132 "trtype": "TCP", 00:16:27.132 "adrfam": "IPv4", 00:16:27.132 "traddr": "10.0.0.1", 00:16:27.132 "trsvcid": "33050" 00:16:27.132 
}, 00:16:27.132 "auth": { 00:16:27.132 "state": "completed", 00:16:27.132 "digest": "sha256", 00:16:27.132 "dhgroup": "ffdhe2048" 00:16:27.132 } 00:16:27.132 } 00:16:27.132 ]' 00:16:27.132 08:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:27.132 08:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:27.132 08:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:27.132 08:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:27.132 08:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:27.133 08:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:27.133 08:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:27.133 08:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:27.391 08:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YTNlZDY3MGZhZTEzNDVjNjU1MzMwN2EwZGRlMDc0M2IyMGJhNTljMzUwZjY5ZjkxwRaKIg==: --dhchap-ctrl-secret DHHC-1:03:YzYwMDdhZmE1ZDZmYzFkMWIxMzkxMjllZDkwMDgzNjY4ZjE4NjdkOWY0N2VjMjY5OGI5Y2Q3ZDk5NTIzYjRhN62pg6g=: 00:16:27.391 08:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:YTNlZDY3MGZhZTEzNDVjNjU1MzMwN2EwZGRlMDc0M2IyMGJhNTljMzUwZjY5ZjkxwRaKIg==: --dhchap-ctrl-secret 
DHHC-1:03:YzYwMDdhZmE1ZDZmYzFkMWIxMzkxMjllZDkwMDgzNjY4ZjE4NjdkOWY0N2VjMjY5OGI5Y2Q3ZDk5NTIzYjRhN62pg6g=: 00:16:27.959 08:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:27.959 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:27.959 08:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:27.959 08:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.959 08:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:27.959 08:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.959 08:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:27.959 08:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:27.959 08:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:28.218 08:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 1 00:16:28.218 08:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:28.218 08:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:28.218 08:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:28.218 08:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- 
# key=key1 00:16:28.218 08:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:28.218 08:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:28.218 08:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.218 08:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:28.218 08:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.218 08:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:28.218 08:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:28.218 08:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:28.477 00:16:28.477 08:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:28.477 08:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:28.477 08:15:10 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:28.737 08:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:28.737 08:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:28.737 08:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.737 08:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:28.737 08:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.737 08:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:28.737 { 00:16:28.737 "cntlid": 11, 00:16:28.737 "qid": 0, 00:16:28.737 "state": "enabled", 00:16:28.737 "thread": "nvmf_tgt_poll_group_000", 00:16:28.737 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:28.737 "listen_address": { 00:16:28.737 "trtype": "TCP", 00:16:28.737 "adrfam": "IPv4", 00:16:28.737 "traddr": "10.0.0.2", 00:16:28.737 "trsvcid": "4420" 00:16:28.737 }, 00:16:28.737 "peer_address": { 00:16:28.737 "trtype": "TCP", 00:16:28.737 "adrfam": "IPv4", 00:16:28.737 "traddr": "10.0.0.1", 00:16:28.737 "trsvcid": "33072" 00:16:28.737 }, 00:16:28.737 "auth": { 00:16:28.737 "state": "completed", 00:16:28.737 "digest": "sha256", 00:16:28.737 "dhgroup": "ffdhe2048" 00:16:28.737 } 00:16:28.737 } 00:16:28.737 ]' 00:16:28.737 08:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:28.737 08:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:28.737 08:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:28.737 08:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:28.737 08:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:28.737 08:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:28.737 08:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:28.737 08:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:28.995 08:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YmE5MjY0ZThkNWU1MjBmZDNhNjhkMTU1Zjc5ODJmYjByjmi0: --dhchap-ctrl-secret DHHC-1:02:ZTUyMzM2ODhjZWFkMGQzMDgzMzcxYjlmY2RlY2Y5ZmY0ODZhNmRkNmU3ZThlMTNjzBAxdA==: 00:16:28.995 08:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:YmE5MjY0ZThkNWU1MjBmZDNhNjhkMTU1Zjc5ODJmYjByjmi0: --dhchap-ctrl-secret DHHC-1:02:ZTUyMzM2ODhjZWFkMGQzMDgzMzcxYjlmY2RlY2Y5ZmY0ODZhNmRkNmU3ZThlMTNjzBAxdA==: 00:16:29.563 08:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:29.563 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:29.563 08:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:29.563 08:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.563 08:15:11 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:29.563 08:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.563 08:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:29.563 08:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:29.563 08:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:29.823 08:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 2 00:16:29.823 08:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:29.823 08:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:29.823 08:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:29.823 08:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:29.823 08:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:29.823 08:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:29.823 08:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.823 08:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:29.823 08:15:11 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.823 08:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:29.823 08:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:29.823 08:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:30.083 00:16:30.083 08:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:30.083 08:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:30.083 08:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:30.341 08:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:30.341 08:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:30.341 08:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.341 08:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:30.341 08:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.342 08:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:30.342 { 00:16:30.342 "cntlid": 13, 00:16:30.342 "qid": 0, 00:16:30.342 "state": "enabled", 00:16:30.342 "thread": "nvmf_tgt_poll_group_000", 00:16:30.342 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:30.342 "listen_address": { 00:16:30.342 "trtype": "TCP", 00:16:30.342 "adrfam": "IPv4", 00:16:30.342 "traddr": "10.0.0.2", 00:16:30.342 "trsvcid": "4420" 00:16:30.342 }, 00:16:30.342 "peer_address": { 00:16:30.342 "trtype": "TCP", 00:16:30.342 "adrfam": "IPv4", 00:16:30.342 "traddr": "10.0.0.1", 00:16:30.342 "trsvcid": "33094" 00:16:30.342 }, 00:16:30.342 "auth": { 00:16:30.342 "state": "completed", 00:16:30.342 "digest": "sha256", 00:16:30.342 "dhgroup": "ffdhe2048" 00:16:30.342 } 00:16:30.342 } 00:16:30.342 ]' 00:16:30.342 08:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:30.342 08:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:30.342 08:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:30.342 08:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:30.342 08:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:30.342 08:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:30.342 08:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:30.342 08:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 
00:16:30.600 08:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NjJmM2Q5YTRmMzZhNjM5YmMwOWI4NzJiNGNhNDkyNzg2MTY4MjM2ZDUzYjc3YTQyErGN6A==: --dhchap-ctrl-secret DHHC-1:01:ODdlOTY4OTYxOTkxY2RiY2Q0NzQwYzNhYTFiNGZhMDiF1NUp: 00:16:30.600 08:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NjJmM2Q5YTRmMzZhNjM5YmMwOWI4NzJiNGNhNDkyNzg2MTY4MjM2ZDUzYjc3YTQyErGN6A==: --dhchap-ctrl-secret DHHC-1:01:ODdlOTY4OTYxOTkxY2RiY2Q0NzQwYzNhYTFiNGZhMDiF1NUp: 00:16:31.167 08:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:31.167 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:31.167 08:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:31.167 08:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.167 08:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:31.167 08:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.167 08:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:31.167 08:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:31.167 08:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:31.426 08:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 3 00:16:31.426 08:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:31.426 08:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:31.426 08:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:31.426 08:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:31.426 08:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:31.426 08:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:16:31.426 08:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.426 08:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:31.426 08:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.426 08:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:31.426 08:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:31.426 08:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:31.686 00:16:31.686 08:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:31.686 08:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:31.686 08:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:31.686 08:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:31.945 08:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:31.945 08:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.945 08:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:31.945 08:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.945 08:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:31.945 { 00:16:31.945 "cntlid": 15, 00:16:31.945 "qid": 0, 00:16:31.945 "state": "enabled", 00:16:31.945 "thread": "nvmf_tgt_poll_group_000", 00:16:31.945 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:31.945 "listen_address": { 00:16:31.945 "trtype": "TCP", 00:16:31.945 "adrfam": "IPv4", 00:16:31.945 "traddr": "10.0.0.2", 00:16:31.945 "trsvcid": "4420" 00:16:31.945 }, 00:16:31.945 "peer_address": { 00:16:31.945 "trtype": "TCP", 00:16:31.945 "adrfam": "IPv4", 00:16:31.945 "traddr": "10.0.0.1", 00:16:31.945 "trsvcid": "33110" 00:16:31.945 }, 00:16:31.945 "auth": { 00:16:31.945 
"state": "completed", 00:16:31.945 "digest": "sha256", 00:16:31.945 "dhgroup": "ffdhe2048" 00:16:31.945 } 00:16:31.945 } 00:16:31.945 ]' 00:16:31.945 08:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:31.945 08:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:31.945 08:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:31.945 08:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:31.945 08:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:31.945 08:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:31.945 08:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:31.945 08:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:32.204 08:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OGZiNmE0YjliYjM5ZDllN2VlZmY1ZjhmNzk1ODE4ZjkwMjQxN2NiZGFiZGFmMjlhOWNmNDVkN2MzNWIzMWIzZQOqQEk=: 00:16:32.204 08:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:OGZiNmE0YjliYjM5ZDllN2VlZmY1ZjhmNzk1ODE4ZjkwMjQxN2NiZGFiZGFmMjlhOWNmNDVkN2MzNWIzMWIzZQOqQEk=: 00:16:32.772 08:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:32.772 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:32.772 08:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:32.772 08:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.772 08:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:32.772 08:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.772 08:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:32.772 08:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:32.772 08:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:32.772 08:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:33.030 08:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 0 00:16:33.030 08:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:33.030 08:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:33.030 08:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:33.030 08:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:33.030 08:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:33.030 08:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:33.030 08:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.030 08:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:33.030 08:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.030 08:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:33.030 08:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:33.030 08:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:33.289 00:16:33.289 08:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:33.289 08:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:33.289 08:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:33.289 
08:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:33.289 08:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:33.289 08:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.289 08:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:33.548 08:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.548 08:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:33.548 { 00:16:33.548 "cntlid": 17, 00:16:33.548 "qid": 0, 00:16:33.548 "state": "enabled", 00:16:33.549 "thread": "nvmf_tgt_poll_group_000", 00:16:33.549 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:33.549 "listen_address": { 00:16:33.549 "trtype": "TCP", 00:16:33.549 "adrfam": "IPv4", 00:16:33.549 "traddr": "10.0.0.2", 00:16:33.549 "trsvcid": "4420" 00:16:33.549 }, 00:16:33.549 "peer_address": { 00:16:33.549 "trtype": "TCP", 00:16:33.549 "adrfam": "IPv4", 00:16:33.549 "traddr": "10.0.0.1", 00:16:33.549 "trsvcid": "48296" 00:16:33.549 }, 00:16:33.549 "auth": { 00:16:33.549 "state": "completed", 00:16:33.549 "digest": "sha256", 00:16:33.549 "dhgroup": "ffdhe3072" 00:16:33.549 } 00:16:33.549 } 00:16:33.549 ]' 00:16:33.549 08:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:33.549 08:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:33.549 08:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:33.549 08:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:33.549 08:15:15 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:33.549 08:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:33.549 08:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:33.549 08:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:33.807 08:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YTNlZDY3MGZhZTEzNDVjNjU1MzMwN2EwZGRlMDc0M2IyMGJhNTljMzUwZjY5ZjkxwRaKIg==: --dhchap-ctrl-secret DHHC-1:03:YzYwMDdhZmE1ZDZmYzFkMWIxMzkxMjllZDkwMDgzNjY4ZjE4NjdkOWY0N2VjMjY5OGI5Y2Q3ZDk5NTIzYjRhN62pg6g=: 00:16:33.807 08:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:YTNlZDY3MGZhZTEzNDVjNjU1MzMwN2EwZGRlMDc0M2IyMGJhNTljMzUwZjY5ZjkxwRaKIg==: --dhchap-ctrl-secret DHHC-1:03:YzYwMDdhZmE1ZDZmYzFkMWIxMzkxMjllZDkwMDgzNjY4ZjE4NjdkOWY0N2VjMjY5OGI5Y2Q3ZDk5NTIzYjRhN62pg6g=: 00:16:34.375 08:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:34.375 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:34.375 08:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:34.375 08:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.375 08:15:16 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:34.375 08:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.375 08:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:34.375 08:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:34.375 08:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:34.634 08:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 1 00:16:34.634 08:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:34.634 08:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:34.634 08:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:34.634 08:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:34.634 08:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:34.634 08:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:34.634 08:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.634 08:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:34.634 08:15:16 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.634 08:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:34.634 08:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:34.634 08:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:34.894 00:16:34.894 08:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:34.894 08:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:34.894 08:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:35.153 08:15:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:35.153 08:15:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:35.153 08:15:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.153 08:15:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:35.153 08:15:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.153 08:15:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:35.153 { 00:16:35.153 "cntlid": 19, 00:16:35.153 "qid": 0, 00:16:35.153 "state": "enabled", 00:16:35.153 "thread": "nvmf_tgt_poll_group_000", 00:16:35.153 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:35.153 "listen_address": { 00:16:35.153 "trtype": "TCP", 00:16:35.153 "adrfam": "IPv4", 00:16:35.153 "traddr": "10.0.0.2", 00:16:35.153 "trsvcid": "4420" 00:16:35.153 }, 00:16:35.153 "peer_address": { 00:16:35.153 "trtype": "TCP", 00:16:35.153 "adrfam": "IPv4", 00:16:35.153 "traddr": "10.0.0.1", 00:16:35.153 "trsvcid": "48312" 00:16:35.153 }, 00:16:35.153 "auth": { 00:16:35.153 "state": "completed", 00:16:35.153 "digest": "sha256", 00:16:35.153 "dhgroup": "ffdhe3072" 00:16:35.153 } 00:16:35.153 } 00:16:35.153 ]' 00:16:35.153 08:15:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:35.153 08:15:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:35.153 08:15:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:35.153 08:15:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:35.153 08:15:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:35.153 08:15:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:35.153 08:15:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:35.153 08:15:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 
00:16:35.412 08:15:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YmE5MjY0ZThkNWU1MjBmZDNhNjhkMTU1Zjc5ODJmYjByjmi0: --dhchap-ctrl-secret DHHC-1:02:ZTUyMzM2ODhjZWFkMGQzMDgzMzcxYjlmY2RlY2Y5ZmY0ODZhNmRkNmU3ZThlMTNjzBAxdA==: 00:16:35.412 08:15:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:YmE5MjY0ZThkNWU1MjBmZDNhNjhkMTU1Zjc5ODJmYjByjmi0: --dhchap-ctrl-secret DHHC-1:02:ZTUyMzM2ODhjZWFkMGQzMDgzMzcxYjlmY2RlY2Y5ZmY0ODZhNmRkNmU3ZThlMTNjzBAxdA==: 00:16:35.980 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:35.980 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:35.980 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:35.980 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.980 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:35.980 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.980 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:35.980 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:35.980 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:36.239 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 2 00:16:36.239 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:36.239 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:36.239 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:36.239 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:36.239 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:36.239 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:36.239 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.239 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:36.239 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.239 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:36.239 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:36.239 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:36.499 00:16:36.499 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:36.499 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:36.499 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:36.759 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:36.759 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:36.759 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.759 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:36.759 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.759 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:36.759 { 00:16:36.759 "cntlid": 21, 00:16:36.759 "qid": 0, 00:16:36.759 "state": "enabled", 00:16:36.759 "thread": "nvmf_tgt_poll_group_000", 00:16:36.759 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:36.759 "listen_address": { 00:16:36.759 "trtype": "TCP", 00:16:36.759 "adrfam": "IPv4", 00:16:36.759 "traddr": "10.0.0.2", 00:16:36.759 "trsvcid": "4420" 00:16:36.759 }, 00:16:36.759 "peer_address": { 00:16:36.759 "trtype": "TCP", 00:16:36.759 "adrfam": "IPv4", 
00:16:36.759 "traddr": "10.0.0.1", 00:16:36.759 "trsvcid": "48346" 00:16:36.759 }, 00:16:36.759 "auth": { 00:16:36.759 "state": "completed", 00:16:36.759 "digest": "sha256", 00:16:36.759 "dhgroup": "ffdhe3072" 00:16:36.759 } 00:16:36.759 } 00:16:36.759 ]' 00:16:36.759 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:36.759 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:36.759 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:36.759 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:36.759 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:36.759 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:36.759 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:36.759 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:37.019 08:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NjJmM2Q5YTRmMzZhNjM5YmMwOWI4NzJiNGNhNDkyNzg2MTY4MjM2ZDUzYjc3YTQyErGN6A==: --dhchap-ctrl-secret DHHC-1:01:ODdlOTY4OTYxOTkxY2RiY2Q0NzQwYzNhYTFiNGZhMDiF1NUp: 00:16:37.019 08:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret 
DHHC-1:02:NjJmM2Q5YTRmMzZhNjM5YmMwOWI4NzJiNGNhNDkyNzg2MTY4MjM2ZDUzYjc3YTQyErGN6A==: --dhchap-ctrl-secret DHHC-1:01:ODdlOTY4OTYxOTkxY2RiY2Q0NzQwYzNhYTFiNGZhMDiF1NUp: 00:16:37.587 08:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:37.587 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:37.587 08:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:37.587 08:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.587 08:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:37.587 08:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.587 08:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:37.587 08:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:37.587 08:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:37.846 08:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 3 00:16:37.846 08:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:37.846 08:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:37.846 08:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:37.846 08:15:19 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:37.846 08:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:37.846 08:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:16:37.846 08:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.846 08:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:37.846 08:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.846 08:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:37.846 08:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:37.846 08:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:38.105 00:16:38.105 08:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:38.105 08:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:38.106 08:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:38.365 08:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:38.365 08:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:38.365 08:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.365 08:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:38.365 08:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.365 08:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:38.365 { 00:16:38.365 "cntlid": 23, 00:16:38.365 "qid": 0, 00:16:38.365 "state": "enabled", 00:16:38.365 "thread": "nvmf_tgt_poll_group_000", 00:16:38.365 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:38.365 "listen_address": { 00:16:38.365 "trtype": "TCP", 00:16:38.365 "adrfam": "IPv4", 00:16:38.365 "traddr": "10.0.0.2", 00:16:38.365 "trsvcid": "4420" 00:16:38.365 }, 00:16:38.365 "peer_address": { 00:16:38.365 "trtype": "TCP", 00:16:38.365 "adrfam": "IPv4", 00:16:38.365 "traddr": "10.0.0.1", 00:16:38.365 "trsvcid": "48372" 00:16:38.365 }, 00:16:38.365 "auth": { 00:16:38.365 "state": "completed", 00:16:38.365 "digest": "sha256", 00:16:38.365 "dhgroup": "ffdhe3072" 00:16:38.365 } 00:16:38.365 } 00:16:38.365 ]' 00:16:38.365 08:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:38.365 08:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:38.365 08:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:38.365 08:15:20 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:38.365 08:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:38.365 08:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:38.365 08:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:38.365 08:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:38.624 08:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OGZiNmE0YjliYjM5ZDllN2VlZmY1ZjhmNzk1ODE4ZjkwMjQxN2NiZGFiZGFmMjlhOWNmNDVkN2MzNWIzMWIzZQOqQEk=: 00:16:38.624 08:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:OGZiNmE0YjliYjM5ZDllN2VlZmY1ZjhmNzk1ODE4ZjkwMjQxN2NiZGFiZGFmMjlhOWNmNDVkN2MzNWIzMWIzZQOqQEk=: 00:16:39.193 08:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:39.193 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:39.193 08:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:39.193 08:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.193 08:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:16:39.193 08:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.193 08:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:39.193 08:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:39.193 08:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:39.193 08:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:39.453 08:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 0 00:16:39.453 08:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:39.453 08:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:39.453 08:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:39.453 08:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:39.453 08:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:39.453 08:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:39.453 08:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.453 08:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:16:39.453 08:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.453 08:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:39.453 08:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:39.453 08:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:39.712 00:16:39.712 08:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:39.712 08:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:39.712 08:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:39.971 08:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:39.971 08:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:39.971 08:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.971 08:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:39.971 08:15:22 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.971 08:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:39.971 { 00:16:39.971 "cntlid": 25, 00:16:39.971 "qid": 0, 00:16:39.971 "state": "enabled", 00:16:39.971 "thread": "nvmf_tgt_poll_group_000", 00:16:39.971 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:39.971 "listen_address": { 00:16:39.971 "trtype": "TCP", 00:16:39.971 "adrfam": "IPv4", 00:16:39.971 "traddr": "10.0.0.2", 00:16:39.971 "trsvcid": "4420" 00:16:39.971 }, 00:16:39.971 "peer_address": { 00:16:39.971 "trtype": "TCP", 00:16:39.971 "adrfam": "IPv4", 00:16:39.971 "traddr": "10.0.0.1", 00:16:39.971 "trsvcid": "48406" 00:16:39.971 }, 00:16:39.971 "auth": { 00:16:39.971 "state": "completed", 00:16:39.971 "digest": "sha256", 00:16:39.971 "dhgroup": "ffdhe4096" 00:16:39.971 } 00:16:39.971 } 00:16:39.971 ]' 00:16:39.971 08:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:39.971 08:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:39.971 08:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:39.971 08:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:39.971 08:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:39.971 08:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:39.971 08:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:39.971 08:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:40.230 08:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YTNlZDY3MGZhZTEzNDVjNjU1MzMwN2EwZGRlMDc0M2IyMGJhNTljMzUwZjY5ZjkxwRaKIg==: --dhchap-ctrl-secret DHHC-1:03:YzYwMDdhZmE1ZDZmYzFkMWIxMzkxMjllZDkwMDgzNjY4ZjE4NjdkOWY0N2VjMjY5OGI5Y2Q3ZDk5NTIzYjRhN62pg6g=: 00:16:40.230 08:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:YTNlZDY3MGZhZTEzNDVjNjU1MzMwN2EwZGRlMDc0M2IyMGJhNTljMzUwZjY5ZjkxwRaKIg==: --dhchap-ctrl-secret DHHC-1:03:YzYwMDdhZmE1ZDZmYzFkMWIxMzkxMjllZDkwMDgzNjY4ZjE4NjdkOWY0N2VjMjY5OGI5Y2Q3ZDk5NTIzYjRhN62pg6g=: 00:16:40.799 08:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:40.799 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:40.799 08:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:40.799 08:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.799 08:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:40.799 08:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.799 08:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:40.799 08:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:40.799 08:15:22 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:41.059 08:15:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 1 00:16:41.059 08:15:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:41.059 08:15:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:41.059 08:15:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:41.059 08:15:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:41.059 08:15:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:41.059 08:15:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:41.059 08:15:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.059 08:15:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:41.059 08:15:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.059 08:15:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:41.059 08:15:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:41.059 08:15:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:41.319 00:16:41.319 08:15:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:41.319 08:15:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:41.319 08:15:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:41.579 08:15:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:41.579 08:15:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:41.579 08:15:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.579 08:15:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:41.579 08:15:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.579 08:15:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:41.579 { 00:16:41.579 "cntlid": 27, 00:16:41.579 "qid": 0, 00:16:41.579 "state": "enabled", 00:16:41.579 "thread": "nvmf_tgt_poll_group_000", 00:16:41.579 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:41.579 "listen_address": { 00:16:41.579 "trtype": "TCP", 00:16:41.579 "adrfam": "IPv4", 00:16:41.579 "traddr": "10.0.0.2", 00:16:41.579 
"trsvcid": "4420" 00:16:41.579 }, 00:16:41.579 "peer_address": { 00:16:41.579 "trtype": "TCP", 00:16:41.579 "adrfam": "IPv4", 00:16:41.579 "traddr": "10.0.0.1", 00:16:41.579 "trsvcid": "48432" 00:16:41.579 }, 00:16:41.579 "auth": { 00:16:41.579 "state": "completed", 00:16:41.579 "digest": "sha256", 00:16:41.579 "dhgroup": "ffdhe4096" 00:16:41.579 } 00:16:41.579 } 00:16:41.579 ]' 00:16:41.579 08:15:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:41.579 08:15:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:41.579 08:15:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:41.579 08:15:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:41.579 08:15:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:41.579 08:15:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:41.579 08:15:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:41.579 08:15:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:41.838 08:15:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YmE5MjY0ZThkNWU1MjBmZDNhNjhkMTU1Zjc5ODJmYjByjmi0: --dhchap-ctrl-secret DHHC-1:02:ZTUyMzM2ODhjZWFkMGQzMDgzMzcxYjlmY2RlY2Y5ZmY0ODZhNmRkNmU3ZThlMTNjzBAxdA==: 00:16:41.838 08:15:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 
80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:YmE5MjY0ZThkNWU1MjBmZDNhNjhkMTU1Zjc5ODJmYjByjmi0: --dhchap-ctrl-secret DHHC-1:02:ZTUyMzM2ODhjZWFkMGQzMDgzMzcxYjlmY2RlY2Y5ZmY0ODZhNmRkNmU3ZThlMTNjzBAxdA==: 00:16:42.406 08:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:42.406 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:42.406 08:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:42.406 08:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.406 08:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:42.406 08:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.406 08:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:42.406 08:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:42.406 08:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:42.665 08:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 2 00:16:42.665 08:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:42.665 08:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:42.665 08:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:42.665 08:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:42.665 08:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:42.665 08:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:42.665 08:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.665 08:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:42.665 08:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.665 08:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:42.665 08:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:42.665 08:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:42.925 00:16:42.925 08:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:42.925 08:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r 
'.[].name' 00:16:42.925 08:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:43.184 08:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:43.184 08:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:43.184 08:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.184 08:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:43.184 08:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.184 08:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:43.184 { 00:16:43.184 "cntlid": 29, 00:16:43.184 "qid": 0, 00:16:43.184 "state": "enabled", 00:16:43.184 "thread": "nvmf_tgt_poll_group_000", 00:16:43.184 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:43.184 "listen_address": { 00:16:43.184 "trtype": "TCP", 00:16:43.184 "adrfam": "IPv4", 00:16:43.184 "traddr": "10.0.0.2", 00:16:43.184 "trsvcid": "4420" 00:16:43.184 }, 00:16:43.184 "peer_address": { 00:16:43.184 "trtype": "TCP", 00:16:43.184 "adrfam": "IPv4", 00:16:43.184 "traddr": "10.0.0.1", 00:16:43.184 "trsvcid": "54168" 00:16:43.184 }, 00:16:43.184 "auth": { 00:16:43.184 "state": "completed", 00:16:43.184 "digest": "sha256", 00:16:43.184 "dhgroup": "ffdhe4096" 00:16:43.184 } 00:16:43.184 } 00:16:43.184 ]' 00:16:43.184 08:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:43.184 08:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:43.184 08:15:25 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:43.184 08:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:43.184 08:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:43.184 08:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:43.184 08:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:43.184 08:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:43.442 08:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NjJmM2Q5YTRmMzZhNjM5YmMwOWI4NzJiNGNhNDkyNzg2MTY4MjM2ZDUzYjc3YTQyErGN6A==: --dhchap-ctrl-secret DHHC-1:01:ODdlOTY4OTYxOTkxY2RiY2Q0NzQwYzNhYTFiNGZhMDiF1NUp: 00:16:43.443 08:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NjJmM2Q5YTRmMzZhNjM5YmMwOWI4NzJiNGNhNDkyNzg2MTY4MjM2ZDUzYjc3YTQyErGN6A==: --dhchap-ctrl-secret DHHC-1:01:ODdlOTY4OTYxOTkxY2RiY2Q0NzQwYzNhYTFiNGZhMDiF1NUp: 00:16:44.033 08:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:44.033 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:44.033 08:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:44.033 08:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.033 08:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:44.033 08:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.033 08:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:44.033 08:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:44.033 08:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:44.290 08:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 3 00:16:44.290 08:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:44.290 08:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:44.290 08:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:44.290 08:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:44.290 08:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:44.290 08:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:16:44.290 08:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.290 08:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:44.290 08:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.290 08:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:44.290 08:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:44.290 08:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:44.548 00:16:44.548 08:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:44.548 08:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:44.548 08:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:44.806 08:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:44.806 08:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:44.806 08:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.806 08:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:16:44.806 08:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.806 08:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:44.806 { 00:16:44.806 "cntlid": 31, 00:16:44.806 "qid": 0, 00:16:44.806 "state": "enabled", 00:16:44.806 "thread": "nvmf_tgt_poll_group_000", 00:16:44.806 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:44.806 "listen_address": { 00:16:44.806 "trtype": "TCP", 00:16:44.806 "adrfam": "IPv4", 00:16:44.806 "traddr": "10.0.0.2", 00:16:44.806 "trsvcid": "4420" 00:16:44.806 }, 00:16:44.806 "peer_address": { 00:16:44.806 "trtype": "TCP", 00:16:44.806 "adrfam": "IPv4", 00:16:44.806 "traddr": "10.0.0.1", 00:16:44.806 "trsvcid": "54206" 00:16:44.806 }, 00:16:44.806 "auth": { 00:16:44.806 "state": "completed", 00:16:44.806 "digest": "sha256", 00:16:44.806 "dhgroup": "ffdhe4096" 00:16:44.806 } 00:16:44.806 } 00:16:44.806 ]' 00:16:44.806 08:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:44.806 08:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:44.806 08:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:44.806 08:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:44.806 08:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:44.806 08:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:44.806 08:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:44.806 08:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:45.064 08:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OGZiNmE0YjliYjM5ZDllN2VlZmY1ZjhmNzk1ODE4ZjkwMjQxN2NiZGFiZGFmMjlhOWNmNDVkN2MzNWIzMWIzZQOqQEk=: 00:16:45.064 08:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:OGZiNmE0YjliYjM5ZDllN2VlZmY1ZjhmNzk1ODE4ZjkwMjQxN2NiZGFiZGFmMjlhOWNmNDVkN2MzNWIzMWIzZQOqQEk=: 00:16:45.630 08:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:45.630 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:45.630 08:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:45.630 08:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.630 08:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:45.630 08:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.630 08:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:45.630 08:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:45.630 08:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:45.630 08:15:27 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:45.889 08:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 0 00:16:45.889 08:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:45.889 08:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:45.889 08:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:45.889 08:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:45.889 08:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:45.889 08:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:45.889 08:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.889 08:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:45.889 08:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.889 08:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:45.889 08:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:45.889 08:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:46.147 00:16:46.147 08:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:46.147 08:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:46.147 08:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:46.404 08:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:46.404 08:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:46.404 08:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.404 08:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:46.404 08:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.404 08:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:46.404 { 00:16:46.404 "cntlid": 33, 00:16:46.404 "qid": 0, 00:16:46.404 "state": "enabled", 00:16:46.404 "thread": "nvmf_tgt_poll_group_000", 00:16:46.404 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:46.404 "listen_address": { 00:16:46.404 "trtype": "TCP", 00:16:46.404 "adrfam": "IPv4", 00:16:46.404 "traddr": "10.0.0.2", 00:16:46.404 
"trsvcid": "4420" 00:16:46.404 }, 00:16:46.404 "peer_address": { 00:16:46.404 "trtype": "TCP", 00:16:46.404 "adrfam": "IPv4", 00:16:46.404 "traddr": "10.0.0.1", 00:16:46.404 "trsvcid": "54240" 00:16:46.404 }, 00:16:46.404 "auth": { 00:16:46.404 "state": "completed", 00:16:46.404 "digest": "sha256", 00:16:46.404 "dhgroup": "ffdhe6144" 00:16:46.404 } 00:16:46.404 } 00:16:46.404 ]' 00:16:46.404 08:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:46.404 08:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:46.404 08:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:46.404 08:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:46.404 08:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:46.404 08:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:46.404 08:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:46.404 08:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:46.661 08:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YTNlZDY3MGZhZTEzNDVjNjU1MzMwN2EwZGRlMDc0M2IyMGJhNTljMzUwZjY5ZjkxwRaKIg==: --dhchap-ctrl-secret DHHC-1:03:YzYwMDdhZmE1ZDZmYzFkMWIxMzkxMjllZDkwMDgzNjY4ZjE4NjdkOWY0N2VjMjY5OGI5Y2Q3ZDk5NTIzYjRhN62pg6g=: 00:16:46.661 08:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:YTNlZDY3MGZhZTEzNDVjNjU1MzMwN2EwZGRlMDc0M2IyMGJhNTljMzUwZjY5ZjkxwRaKIg==: --dhchap-ctrl-secret DHHC-1:03:YzYwMDdhZmE1ZDZmYzFkMWIxMzkxMjllZDkwMDgzNjY4ZjE4NjdkOWY0N2VjMjY5OGI5Y2Q3ZDk5NTIzYjRhN62pg6g=: 00:16:47.227 08:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:47.227 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:47.227 08:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:47.227 08:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.227 08:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:47.227 08:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.227 08:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:47.227 08:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:47.227 08:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:47.485 08:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 1 00:16:47.485 08:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:47.485 08:15:29 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:47.485 08:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:47.485 08:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:47.485 08:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:47.485 08:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:47.485 08:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.485 08:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:47.485 08:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.485 08:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:47.485 08:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:47.485 08:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:47.743 00:16:48.002 08:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:48.002 08:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:48.002 08:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:48.002 08:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:48.002 08:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:48.002 08:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.002 08:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:48.002 08:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.002 08:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:48.002 { 00:16:48.002 "cntlid": 35, 00:16:48.002 "qid": 0, 00:16:48.002 "state": "enabled", 00:16:48.002 "thread": "nvmf_tgt_poll_group_000", 00:16:48.002 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:48.002 "listen_address": { 00:16:48.002 "trtype": "TCP", 00:16:48.002 "adrfam": "IPv4", 00:16:48.002 "traddr": "10.0.0.2", 00:16:48.002 "trsvcid": "4420" 00:16:48.002 }, 00:16:48.002 "peer_address": { 00:16:48.002 "trtype": "TCP", 00:16:48.002 "adrfam": "IPv4", 00:16:48.002 "traddr": "10.0.0.1", 00:16:48.002 "trsvcid": "54260" 00:16:48.002 }, 00:16:48.002 "auth": { 00:16:48.002 "state": "completed", 00:16:48.002 "digest": "sha256", 00:16:48.002 "dhgroup": "ffdhe6144" 00:16:48.002 } 00:16:48.002 } 00:16:48.002 ]' 00:16:48.002 08:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:48.261 08:15:30 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:48.261 08:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:48.261 08:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:48.261 08:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:48.261 08:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:48.261 08:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:48.261 08:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:48.520 08:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YmE5MjY0ZThkNWU1MjBmZDNhNjhkMTU1Zjc5ODJmYjByjmi0: --dhchap-ctrl-secret DHHC-1:02:ZTUyMzM2ODhjZWFkMGQzMDgzMzcxYjlmY2RlY2Y5ZmY0ODZhNmRkNmU3ZThlMTNjzBAxdA==: 00:16:48.520 08:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:YmE5MjY0ZThkNWU1MjBmZDNhNjhkMTU1Zjc5ODJmYjByjmi0: --dhchap-ctrl-secret DHHC-1:02:ZTUyMzM2ODhjZWFkMGQzMDgzMzcxYjlmY2RlY2Y5ZmY0ODZhNmRkNmU3ZThlMTNjzBAxdA==: 00:16:49.087 08:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:49.087 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:49.087 08:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:49.087 08:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.087 08:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:49.087 08:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.087 08:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:49.087 08:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:49.087 08:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:49.087 08:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 2 00:16:49.087 08:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:49.087 08:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:49.087 08:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:49.087 08:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:49.087 08:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:49.087 08:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
00:16:49.087 08:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.087 08:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:49.087 08:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.087 08:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:49.087 08:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:49.087 08:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:49.654 00:16:49.654 08:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:49.654 08:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:49.654 08:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:49.654 08:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:49.654 08:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:49.654 08:15:31 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.654 08:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:49.654 08:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.913 08:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:49.913 { 00:16:49.913 "cntlid": 37, 00:16:49.913 "qid": 0, 00:16:49.913 "state": "enabled", 00:16:49.913 "thread": "nvmf_tgt_poll_group_000", 00:16:49.913 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:49.913 "listen_address": { 00:16:49.913 "trtype": "TCP", 00:16:49.913 "adrfam": "IPv4", 00:16:49.913 "traddr": "10.0.0.2", 00:16:49.913 "trsvcid": "4420" 00:16:49.913 }, 00:16:49.913 "peer_address": { 00:16:49.913 "trtype": "TCP", 00:16:49.913 "adrfam": "IPv4", 00:16:49.913 "traddr": "10.0.0.1", 00:16:49.913 "trsvcid": "54278" 00:16:49.913 }, 00:16:49.913 "auth": { 00:16:49.913 "state": "completed", 00:16:49.913 "digest": "sha256", 00:16:49.913 "dhgroup": "ffdhe6144" 00:16:49.913 } 00:16:49.913 } 00:16:49.913 ]' 00:16:49.913 08:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:49.913 08:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:49.913 08:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:49.913 08:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:49.913 08:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:49.913 08:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:49.913 08:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:49.913 08:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:50.172 08:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NjJmM2Q5YTRmMzZhNjM5YmMwOWI4NzJiNGNhNDkyNzg2MTY4MjM2ZDUzYjc3YTQyErGN6A==: --dhchap-ctrl-secret DHHC-1:01:ODdlOTY4OTYxOTkxY2RiY2Q0NzQwYzNhYTFiNGZhMDiF1NUp: 00:16:50.172 08:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NjJmM2Q5YTRmMzZhNjM5YmMwOWI4NzJiNGNhNDkyNzg2MTY4MjM2ZDUzYjc3YTQyErGN6A==: --dhchap-ctrl-secret DHHC-1:01:ODdlOTY4OTYxOTkxY2RiY2Q0NzQwYzNhYTFiNGZhMDiF1NUp: 00:16:50.740 08:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:50.740 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:50.740 08:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:50.740 08:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.740 08:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:50.740 08:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.740 08:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:50.740 08:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:50.740 08:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:50.740 08:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 3 00:16:50.740 08:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:50.740 08:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:50.999 08:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:50.999 08:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:50.999 08:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:50.999 08:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:16:50.999 08:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.999 08:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:50.999 08:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.999 08:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:50.999 08:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:50.999 08:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:51.258 00:16:51.258 08:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:51.258 08:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:51.258 08:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:51.516 08:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:51.517 08:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:51.517 08:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.517 08:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:51.517 08:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.517 08:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:51.517 { 00:16:51.517 "cntlid": 39, 00:16:51.517 "qid": 0, 00:16:51.517 "state": "enabled", 00:16:51.517 "thread": "nvmf_tgt_poll_group_000", 00:16:51.517 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:51.517 "listen_address": { 00:16:51.517 "trtype": "TCP", 00:16:51.517 "adrfam": 
"IPv4", 00:16:51.517 "traddr": "10.0.0.2", 00:16:51.517 "trsvcid": "4420" 00:16:51.517 }, 00:16:51.517 "peer_address": { 00:16:51.517 "trtype": "TCP", 00:16:51.517 "adrfam": "IPv4", 00:16:51.517 "traddr": "10.0.0.1", 00:16:51.517 "trsvcid": "54296" 00:16:51.517 }, 00:16:51.517 "auth": { 00:16:51.517 "state": "completed", 00:16:51.517 "digest": "sha256", 00:16:51.517 "dhgroup": "ffdhe6144" 00:16:51.517 } 00:16:51.517 } 00:16:51.517 ]' 00:16:51.517 08:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:51.517 08:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:51.517 08:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:51.517 08:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:51.517 08:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:51.517 08:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:51.517 08:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:51.517 08:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:51.776 08:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OGZiNmE0YjliYjM5ZDllN2VlZmY1ZjhmNzk1ODE4ZjkwMjQxN2NiZGFiZGFmMjlhOWNmNDVkN2MzNWIzMWIzZQOqQEk=: 00:16:51.776 08:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 
80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:OGZiNmE0YjliYjM5ZDllN2VlZmY1ZjhmNzk1ODE4ZjkwMjQxN2NiZGFiZGFmMjlhOWNmNDVkN2MzNWIzMWIzZQOqQEk=: 00:16:52.344 08:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:52.344 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:52.344 08:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:52.344 08:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.344 08:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:52.344 08:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.344 08:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:52.344 08:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:52.344 08:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:52.344 08:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:52.603 08:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 0 00:16:52.603 08:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:52.603 08:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:52.603 
08:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:52.603 08:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:52.603 08:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:52.603 08:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:52.603 08:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.603 08:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:52.603 08:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.603 08:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:52.603 08:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:52.603 08:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:53.170 00:16:53.170 08:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:53.170 08:15:35 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:53.170 08:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:53.170 08:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:53.170 08:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:53.170 08:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.170 08:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:53.170 08:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.170 08:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:53.170 { 00:16:53.170 "cntlid": 41, 00:16:53.170 "qid": 0, 00:16:53.170 "state": "enabled", 00:16:53.170 "thread": "nvmf_tgt_poll_group_000", 00:16:53.170 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:53.170 "listen_address": { 00:16:53.170 "trtype": "TCP", 00:16:53.170 "adrfam": "IPv4", 00:16:53.170 "traddr": "10.0.0.2", 00:16:53.170 "trsvcid": "4420" 00:16:53.170 }, 00:16:53.170 "peer_address": { 00:16:53.170 "trtype": "TCP", 00:16:53.170 "adrfam": "IPv4", 00:16:53.170 "traddr": "10.0.0.1", 00:16:53.170 "trsvcid": "43308" 00:16:53.170 }, 00:16:53.170 "auth": { 00:16:53.170 "state": "completed", 00:16:53.170 "digest": "sha256", 00:16:53.170 "dhgroup": "ffdhe8192" 00:16:53.170 } 00:16:53.170 } 00:16:53.170 ]' 00:16:53.170 08:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:53.170 08:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 
== \s\h\a\2\5\6 ]] 00:16:53.170 08:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:53.428 08:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:53.428 08:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:53.428 08:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:53.428 08:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:53.428 08:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:53.686 08:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YTNlZDY3MGZhZTEzNDVjNjU1MzMwN2EwZGRlMDc0M2IyMGJhNTljMzUwZjY5ZjkxwRaKIg==: --dhchap-ctrl-secret DHHC-1:03:YzYwMDdhZmE1ZDZmYzFkMWIxMzkxMjllZDkwMDgzNjY4ZjE4NjdkOWY0N2VjMjY5OGI5Y2Q3ZDk5NTIzYjRhN62pg6g=: 00:16:53.686 08:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:YTNlZDY3MGZhZTEzNDVjNjU1MzMwN2EwZGRlMDc0M2IyMGJhNTljMzUwZjY5ZjkxwRaKIg==: --dhchap-ctrl-secret DHHC-1:03:YzYwMDdhZmE1ZDZmYzFkMWIxMzkxMjllZDkwMDgzNjY4ZjE4NjdkOWY0N2VjMjY5OGI5Y2Q3ZDk5NTIzYjRhN62pg6g=: 00:16:54.253 08:15:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:54.253 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:54.253 08:15:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # 
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:54.253 08:15:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.253 08:15:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:54.253 08:15:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.253 08:15:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:54.253 08:15:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:54.253 08:15:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:54.253 08:15:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 1 00:16:54.253 08:15:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:54.253 08:15:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:54.253 08:15:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:54.253 08:15:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:54.253 08:15:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:54.253 08:15:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 
--dhchap-ctrlr-key ckey1 00:16:54.253 08:15:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.253 08:15:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:54.512 08:15:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.512 08:15:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:54.512 08:15:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:54.512 08:15:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:54.770 00:16:54.770 08:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:54.770 08:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:54.770 08:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:55.029 08:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:55.029 08:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:55.029 08:15:37 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.029 08:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:55.029 08:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.029 08:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:55.029 { 00:16:55.029 "cntlid": 43, 00:16:55.029 "qid": 0, 00:16:55.029 "state": "enabled", 00:16:55.029 "thread": "nvmf_tgt_poll_group_000", 00:16:55.029 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:55.029 "listen_address": { 00:16:55.029 "trtype": "TCP", 00:16:55.029 "adrfam": "IPv4", 00:16:55.029 "traddr": "10.0.0.2", 00:16:55.029 "trsvcid": "4420" 00:16:55.029 }, 00:16:55.029 "peer_address": { 00:16:55.029 "trtype": "TCP", 00:16:55.029 "adrfam": "IPv4", 00:16:55.029 "traddr": "10.0.0.1", 00:16:55.029 "trsvcid": "43328" 00:16:55.029 }, 00:16:55.029 "auth": { 00:16:55.029 "state": "completed", 00:16:55.029 "digest": "sha256", 00:16:55.029 "dhgroup": "ffdhe8192" 00:16:55.029 } 00:16:55.029 } 00:16:55.029 ]' 00:16:55.029 08:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:55.029 08:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:55.029 08:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:55.288 08:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:55.288 08:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:55.288 08:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:55.288 08:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:55.288 08:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:55.547 08:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YmE5MjY0ZThkNWU1MjBmZDNhNjhkMTU1Zjc5ODJmYjByjmi0: --dhchap-ctrl-secret DHHC-1:02:ZTUyMzM2ODhjZWFkMGQzMDgzMzcxYjlmY2RlY2Y5ZmY0ODZhNmRkNmU3ZThlMTNjzBAxdA==: 00:16:55.547 08:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:YmE5MjY0ZThkNWU1MjBmZDNhNjhkMTU1Zjc5ODJmYjByjmi0: --dhchap-ctrl-secret DHHC-1:02:ZTUyMzM2ODhjZWFkMGQzMDgzMzcxYjlmY2RlY2Y5ZmY0ODZhNmRkNmU3ZThlMTNjzBAxdA==: 00:16:56.115 08:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:56.115 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:56.115 08:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:56.115 08:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.115 08:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:56.115 08:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.115 08:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:56.115 08:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:56.115 08:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:56.115 08:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 2 00:16:56.115 08:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:56.115 08:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:56.115 08:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:56.115 08:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:56.115 08:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:56.115 08:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:56.115 08:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.115 08:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:56.115 08:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.115 08:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:56.115 08:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 
-s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:56.115 08:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:56.684 00:16:56.684 08:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:56.684 08:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:56.684 08:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:56.943 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:56.943 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:56.943 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.943 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:56.943 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.943 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:56.943 { 00:16:56.943 "cntlid": 45, 00:16:56.943 "qid": 0, 00:16:56.943 "state": "enabled", 00:16:56.943 "thread": "nvmf_tgt_poll_group_000", 00:16:56.943 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:56.943 
"listen_address": { 00:16:56.943 "trtype": "TCP", 00:16:56.943 "adrfam": "IPv4", 00:16:56.943 "traddr": "10.0.0.2", 00:16:56.943 "trsvcid": "4420" 00:16:56.943 }, 00:16:56.943 "peer_address": { 00:16:56.943 "trtype": "TCP", 00:16:56.943 "adrfam": "IPv4", 00:16:56.943 "traddr": "10.0.0.1", 00:16:56.943 "trsvcid": "43348" 00:16:56.943 }, 00:16:56.943 "auth": { 00:16:56.943 "state": "completed", 00:16:56.943 "digest": "sha256", 00:16:56.943 "dhgroup": "ffdhe8192" 00:16:56.943 } 00:16:56.943 } 00:16:56.943 ]' 00:16:56.943 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:56.943 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:56.943 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:56.943 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:56.943 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:56.943 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:56.943 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:56.943 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:57.202 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NjJmM2Q5YTRmMzZhNjM5YmMwOWI4NzJiNGNhNDkyNzg2MTY4MjM2ZDUzYjc3YTQyErGN6A==: --dhchap-ctrl-secret DHHC-1:01:ODdlOTY4OTYxOTkxY2RiY2Q0NzQwYzNhYTFiNGZhMDiF1NUp: 00:16:57.202 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 
-n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NjJmM2Q5YTRmMzZhNjM5YmMwOWI4NzJiNGNhNDkyNzg2MTY4MjM2ZDUzYjc3YTQyErGN6A==: --dhchap-ctrl-secret DHHC-1:01:ODdlOTY4OTYxOTkxY2RiY2Q0NzQwYzNhYTFiNGZhMDiF1NUp: 00:16:57.769 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:57.769 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:57.769 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:57.769 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.769 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:57.769 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.769 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:57.769 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:57.769 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:58.028 08:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 3 00:16:58.028 08:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:58.028 08:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha256 00:16:58.028 08:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:58.028 08:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:58.028 08:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:58.028 08:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:16:58.028 08:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.028 08:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:58.028 08:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.028 08:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:58.028 08:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:58.028 08:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:58.596 00:16:58.596 08:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:58.596 08:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # 
jq -r '.[].name' 00:16:58.596 08:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:58.855 08:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:58.855 08:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:58.855 08:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.855 08:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:58.855 08:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.855 08:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:58.855 { 00:16:58.855 "cntlid": 47, 00:16:58.855 "qid": 0, 00:16:58.855 "state": "enabled", 00:16:58.855 "thread": "nvmf_tgt_poll_group_000", 00:16:58.855 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:58.855 "listen_address": { 00:16:58.855 "trtype": "TCP", 00:16:58.855 "adrfam": "IPv4", 00:16:58.855 "traddr": "10.0.0.2", 00:16:58.855 "trsvcid": "4420" 00:16:58.855 }, 00:16:58.855 "peer_address": { 00:16:58.855 "trtype": "TCP", 00:16:58.855 "adrfam": "IPv4", 00:16:58.855 "traddr": "10.0.0.1", 00:16:58.855 "trsvcid": "43376" 00:16:58.855 }, 00:16:58.855 "auth": { 00:16:58.855 "state": "completed", 00:16:58.855 "digest": "sha256", 00:16:58.855 "dhgroup": "ffdhe8192" 00:16:58.855 } 00:16:58.855 } 00:16:58.855 ]' 00:16:58.855 08:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:58.855 08:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:58.855 08:15:40 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:58.855 08:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:58.855 08:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:58.855 08:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:58.855 08:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:58.855 08:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:59.118 08:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OGZiNmE0YjliYjM5ZDllN2VlZmY1ZjhmNzk1ODE4ZjkwMjQxN2NiZGFiZGFmMjlhOWNmNDVkN2MzNWIzMWIzZQOqQEk=: 00:16:59.118 08:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:OGZiNmE0YjliYjM5ZDllN2VlZmY1ZjhmNzk1ODE4ZjkwMjQxN2NiZGFiZGFmMjlhOWNmNDVkN2MzNWIzMWIzZQOqQEk=: 00:16:59.686 08:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:59.686 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:59.687 08:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:59.687 08:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 
00:16:59.687 08:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:59.687 08:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.687 08:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:16:59.687 08:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:59.687 08:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:59.687 08:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:59.687 08:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:59.946 08:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 0 00:16:59.946 08:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:59.946 08:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:59.946 08:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:59.946 08:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:59.946 08:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:59.946 08:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:59.946 
08:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.946 08:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:59.946 08:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.946 08:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:59.946 08:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:59.946 08:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:59.946 00:17:00.206 08:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:00.206 08:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:00.206 08:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:00.207 08:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:00.207 08:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:00.207 08:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.207 08:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:00.207 08:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.207 08:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:00.207 { 00:17:00.207 "cntlid": 49, 00:17:00.207 "qid": 0, 00:17:00.207 "state": "enabled", 00:17:00.207 "thread": "nvmf_tgt_poll_group_000", 00:17:00.207 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:00.207 "listen_address": { 00:17:00.207 "trtype": "TCP", 00:17:00.207 "adrfam": "IPv4", 00:17:00.207 "traddr": "10.0.0.2", 00:17:00.207 "trsvcid": "4420" 00:17:00.207 }, 00:17:00.207 "peer_address": { 00:17:00.207 "trtype": "TCP", 00:17:00.207 "adrfam": "IPv4", 00:17:00.207 "traddr": "10.0.0.1", 00:17:00.207 "trsvcid": "43392" 00:17:00.207 }, 00:17:00.207 "auth": { 00:17:00.207 "state": "completed", 00:17:00.207 "digest": "sha384", 00:17:00.207 "dhgroup": "null" 00:17:00.207 } 00:17:00.207 } 00:17:00.207 ]' 00:17:00.207 08:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:00.207 08:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:00.207 08:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:00.466 08:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:00.466 08:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:00.466 08:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:00.466 08:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 
00:17:00.466 08:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:00.725 08:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YTNlZDY3MGZhZTEzNDVjNjU1MzMwN2EwZGRlMDc0M2IyMGJhNTljMzUwZjY5ZjkxwRaKIg==: --dhchap-ctrl-secret DHHC-1:03:YzYwMDdhZmE1ZDZmYzFkMWIxMzkxMjllZDkwMDgzNjY4ZjE4NjdkOWY0N2VjMjY5OGI5Y2Q3ZDk5NTIzYjRhN62pg6g=: 00:17:00.725 08:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:YTNlZDY3MGZhZTEzNDVjNjU1MzMwN2EwZGRlMDc0M2IyMGJhNTljMzUwZjY5ZjkxwRaKIg==: --dhchap-ctrl-secret DHHC-1:03:YzYwMDdhZmE1ZDZmYzFkMWIxMzkxMjllZDkwMDgzNjY4ZjE4NjdkOWY0N2VjMjY5OGI5Y2Q3ZDk5NTIzYjRhN62pg6g=: 00:17:01.294 08:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:01.294 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:01.294 08:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:01.294 08:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.294 08:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:01.294 08:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.294 08:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:01.294 08:15:43 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:01.294 08:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:01.294 08:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 1 00:17:01.294 08:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:01.294 08:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:01.294 08:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:01.294 08:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:01.294 08:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:01.294 08:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:01.294 08:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.294 08:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:01.294 08:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.294 08:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:01.294 08:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller 
-t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:01.294 08:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:01.552 00:17:01.552 08:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:01.552 08:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:01.552 08:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:01.812 08:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:01.812 08:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:01.812 08:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.812 08:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:01.812 08:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.812 08:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:01.812 { 00:17:01.812 "cntlid": 51, 00:17:01.812 "qid": 0, 00:17:01.812 "state": "enabled", 00:17:01.812 "thread": "nvmf_tgt_poll_group_000", 00:17:01.812 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:01.812 "listen_address": { 00:17:01.812 "trtype": "TCP", 00:17:01.812 "adrfam": "IPv4", 00:17:01.812 "traddr": "10.0.0.2", 00:17:01.812 "trsvcid": "4420" 00:17:01.812 }, 00:17:01.812 "peer_address": { 00:17:01.812 "trtype": "TCP", 00:17:01.812 "adrfam": "IPv4", 00:17:01.812 "traddr": "10.0.0.1", 00:17:01.812 "trsvcid": "41506" 00:17:01.812 }, 00:17:01.812 "auth": { 00:17:01.812 "state": "completed", 00:17:01.812 "digest": "sha384", 00:17:01.812 "dhgroup": "null" 00:17:01.812 } 00:17:01.812 } 00:17:01.812 ]' 00:17:01.812 08:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:01.812 08:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:01.812 08:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:02.071 08:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:02.071 08:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:02.071 08:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:02.071 08:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:02.071 08:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:02.071 08:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YmE5MjY0ZThkNWU1MjBmZDNhNjhkMTU1Zjc5ODJmYjByjmi0: --dhchap-ctrl-secret DHHC-1:02:ZTUyMzM2ODhjZWFkMGQzMDgzMzcxYjlmY2RlY2Y5ZmY0ODZhNmRkNmU3ZThlMTNjzBAxdA==: 00:17:02.071 08:15:44 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:YmE5MjY0ZThkNWU1MjBmZDNhNjhkMTU1Zjc5ODJmYjByjmi0: --dhchap-ctrl-secret DHHC-1:02:ZTUyMzM2ODhjZWFkMGQzMDgzMzcxYjlmY2RlY2Y5ZmY0ODZhNmRkNmU3ZThlMTNjzBAxdA==: 00:17:02.638 08:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:02.896 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:02.896 08:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:02.896 08:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.896 08:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:02.896 08:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.896 08:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:02.896 08:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:02.896 08:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:02.896 08:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 2 00:17:02.896 08:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup 
key ckey qpairs 00:17:02.896 08:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:02.896 08:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:02.896 08:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:02.896 08:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:02.896 08:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:02.896 08:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.896 08:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:02.896 08:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.897 08:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:02.897 08:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:02.897 08:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:03.155 00:17:03.155 08:15:45 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:03.155 08:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:03.155 08:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:03.415 08:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:03.415 08:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:03.415 08:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.415 08:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:03.415 08:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.415 08:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:03.415 { 00:17:03.415 "cntlid": 53, 00:17:03.415 "qid": 0, 00:17:03.415 "state": "enabled", 00:17:03.415 "thread": "nvmf_tgt_poll_group_000", 00:17:03.415 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:03.415 "listen_address": { 00:17:03.415 "trtype": "TCP", 00:17:03.415 "adrfam": "IPv4", 00:17:03.415 "traddr": "10.0.0.2", 00:17:03.415 "trsvcid": "4420" 00:17:03.415 }, 00:17:03.415 "peer_address": { 00:17:03.415 "trtype": "TCP", 00:17:03.415 "adrfam": "IPv4", 00:17:03.415 "traddr": "10.0.0.1", 00:17:03.415 "trsvcid": "41532" 00:17:03.415 }, 00:17:03.415 "auth": { 00:17:03.415 "state": "completed", 00:17:03.415 "digest": "sha384", 00:17:03.415 "dhgroup": "null" 00:17:03.415 } 00:17:03.415 } 00:17:03.415 ]' 00:17:03.415 08:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r 
'.[0].auth.digest' 00:17:03.415 08:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:03.415 08:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:03.415 08:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:03.415 08:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:03.674 08:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:03.674 08:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:03.674 08:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:03.674 08:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NjJmM2Q5YTRmMzZhNjM5YmMwOWI4NzJiNGNhNDkyNzg2MTY4MjM2ZDUzYjc3YTQyErGN6A==: --dhchap-ctrl-secret DHHC-1:01:ODdlOTY4OTYxOTkxY2RiY2Q0NzQwYzNhYTFiNGZhMDiF1NUp: 00:17:03.674 08:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NjJmM2Q5YTRmMzZhNjM5YmMwOWI4NzJiNGNhNDkyNzg2MTY4MjM2ZDUzYjc3YTQyErGN6A==: --dhchap-ctrl-secret DHHC-1:01:ODdlOTY4OTYxOTkxY2RiY2Q0NzQwYzNhYTFiNGZhMDiF1NUp: 00:17:04.242 08:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:04.242 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:04.242 08:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:04.242 08:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.242 08:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:04.242 08:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.242 08:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:04.242 08:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:04.242 08:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:04.502 08:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 3 00:17:04.502 08:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:04.502 08:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:04.502 08:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:04.502 08:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:04.502 08:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:04.502 08:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:17:04.502 
08:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.502 08:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:04.502 08:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.502 08:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:04.502 08:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:04.502 08:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:04.761 00:17:04.761 08:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:04.761 08:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:04.761 08:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:05.020 08:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:05.020 08:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:05.020 08:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.020 08:15:47 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:05.020 08:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.020 08:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:05.020 { 00:17:05.020 "cntlid": 55, 00:17:05.020 "qid": 0, 00:17:05.020 "state": "enabled", 00:17:05.020 "thread": "nvmf_tgt_poll_group_000", 00:17:05.020 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:05.020 "listen_address": { 00:17:05.020 "trtype": "TCP", 00:17:05.020 "adrfam": "IPv4", 00:17:05.020 "traddr": "10.0.0.2", 00:17:05.020 "trsvcid": "4420" 00:17:05.020 }, 00:17:05.020 "peer_address": { 00:17:05.020 "trtype": "TCP", 00:17:05.020 "adrfam": "IPv4", 00:17:05.020 "traddr": "10.0.0.1", 00:17:05.020 "trsvcid": "41558" 00:17:05.020 }, 00:17:05.020 "auth": { 00:17:05.020 "state": "completed", 00:17:05.020 "digest": "sha384", 00:17:05.020 "dhgroup": "null" 00:17:05.020 } 00:17:05.020 } 00:17:05.020 ]' 00:17:05.020 08:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:05.020 08:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:05.020 08:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:05.020 08:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:05.020 08:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:05.020 08:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:05.020 08:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:05.020 08:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:05.280 08:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OGZiNmE0YjliYjM5ZDllN2VlZmY1ZjhmNzk1ODE4ZjkwMjQxN2NiZGFiZGFmMjlhOWNmNDVkN2MzNWIzMWIzZQOqQEk=: 00:17:05.280 08:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:OGZiNmE0YjliYjM5ZDllN2VlZmY1ZjhmNzk1ODE4ZjkwMjQxN2NiZGFiZGFmMjlhOWNmNDVkN2MzNWIzMWIzZQOqQEk=: 00:17:05.848 08:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:05.848 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:05.848 08:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:05.848 08:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.848 08:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:05.848 08:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.848 08:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:05.848 08:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:05.848 08:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:05.848 08:15:48 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:06.108 08:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 0 00:17:06.108 08:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:06.108 08:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:06.108 08:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:06.108 08:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:06.108 08:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:06.108 08:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:06.108 08:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.108 08:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:06.108 08:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.108 08:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:06.108 08:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:06.108 08:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:06.367 00:17:06.367 08:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:06.367 08:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:06.367 08:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:06.627 08:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:06.627 08:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:06.627 08:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.627 08:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:06.627 08:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.627 08:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:06.627 { 00:17:06.627 "cntlid": 57, 00:17:06.627 "qid": 0, 00:17:06.627 "state": "enabled", 00:17:06.627 "thread": "nvmf_tgt_poll_group_000", 00:17:06.627 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:06.627 "listen_address": { 00:17:06.627 "trtype": "TCP", 00:17:06.627 "adrfam": "IPv4", 00:17:06.627 "traddr": "10.0.0.2", 00:17:06.627 
"trsvcid": "4420" 00:17:06.627 }, 00:17:06.627 "peer_address": { 00:17:06.627 "trtype": "TCP", 00:17:06.627 "adrfam": "IPv4", 00:17:06.627 "traddr": "10.0.0.1", 00:17:06.627 "trsvcid": "41600" 00:17:06.627 }, 00:17:06.627 "auth": { 00:17:06.627 "state": "completed", 00:17:06.627 "digest": "sha384", 00:17:06.627 "dhgroup": "ffdhe2048" 00:17:06.627 } 00:17:06.627 } 00:17:06.627 ]' 00:17:06.627 08:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:06.627 08:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:06.627 08:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:06.627 08:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:06.627 08:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:06.627 08:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:06.627 08:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:06.627 08:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:06.886 08:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YTNlZDY3MGZhZTEzNDVjNjU1MzMwN2EwZGRlMDc0M2IyMGJhNTljMzUwZjY5ZjkxwRaKIg==: --dhchap-ctrl-secret DHHC-1:03:YzYwMDdhZmE1ZDZmYzFkMWIxMzkxMjllZDkwMDgzNjY4ZjE4NjdkOWY0N2VjMjY5OGI5Y2Q3ZDk5NTIzYjRhN62pg6g=: 00:17:06.886 08:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:YTNlZDY3MGZhZTEzNDVjNjU1MzMwN2EwZGRlMDc0M2IyMGJhNTljMzUwZjY5ZjkxwRaKIg==: --dhchap-ctrl-secret DHHC-1:03:YzYwMDdhZmE1ZDZmYzFkMWIxMzkxMjllZDkwMDgzNjY4ZjE4NjdkOWY0N2VjMjY5OGI5Y2Q3ZDk5NTIzYjRhN62pg6g=: 00:17:07.454 08:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:07.454 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:07.454 08:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:07.454 08:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.454 08:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:07.454 08:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.454 08:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:07.454 08:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:07.454 08:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:07.713 08:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 1 00:17:07.713 08:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:07.713 08:15:49 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:07.713 08:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:07.713 08:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:07.713 08:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:07.713 08:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:07.713 08:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.713 08:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:07.713 08:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.713 08:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:07.713 08:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:07.713 08:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:07.971 00:17:07.971 08:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:07.971 08:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:07.971 08:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:08.230 08:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:08.230 08:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:08.230 08:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.230 08:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:08.230 08:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.230 08:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:08.230 { 00:17:08.230 "cntlid": 59, 00:17:08.230 "qid": 0, 00:17:08.230 "state": "enabled", 00:17:08.230 "thread": "nvmf_tgt_poll_group_000", 00:17:08.230 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:08.230 "listen_address": { 00:17:08.230 "trtype": "TCP", 00:17:08.230 "adrfam": "IPv4", 00:17:08.230 "traddr": "10.0.0.2", 00:17:08.230 "trsvcid": "4420" 00:17:08.230 }, 00:17:08.230 "peer_address": { 00:17:08.230 "trtype": "TCP", 00:17:08.230 "adrfam": "IPv4", 00:17:08.230 "traddr": "10.0.0.1", 00:17:08.230 "trsvcid": "41624" 00:17:08.230 }, 00:17:08.230 "auth": { 00:17:08.230 "state": "completed", 00:17:08.230 "digest": "sha384", 00:17:08.230 "dhgroup": "ffdhe2048" 00:17:08.230 } 00:17:08.230 } 00:17:08.230 ]' 00:17:08.230 08:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:08.230 08:15:50 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:08.230 08:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:08.230 08:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:08.230 08:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:08.230 08:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:08.230 08:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:08.230 08:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:08.489 08:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YmE5MjY0ZThkNWU1MjBmZDNhNjhkMTU1Zjc5ODJmYjByjmi0: --dhchap-ctrl-secret DHHC-1:02:ZTUyMzM2ODhjZWFkMGQzMDgzMzcxYjlmY2RlY2Y5ZmY0ODZhNmRkNmU3ZThlMTNjzBAxdA==: 00:17:08.489 08:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:YmE5MjY0ZThkNWU1MjBmZDNhNjhkMTU1Zjc5ODJmYjByjmi0: --dhchap-ctrl-secret DHHC-1:02:ZTUyMzM2ODhjZWFkMGQzMDgzMzcxYjlmY2RlY2Y5ZmY0ODZhNmRkNmU3ZThlMTNjzBAxdA==: 00:17:09.056 08:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:09.056 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:09.056 08:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:09.056 08:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.056 08:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:09.056 08:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.056 08:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:09.056 08:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:09.056 08:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:09.315 08:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 2 00:17:09.315 08:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:09.315 08:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:09.315 08:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:09.315 08:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:09.315 08:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:09.315 08:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
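For context, the xtrace records above come from `target/auth.sh`'s `connect_authenticate` loop: for each digest/dhgroup/key id, the host NQN is registered on the subsystem with `nvmf_subsystem_add_host`, a host-side controller is attached with the matching DH-HMAC-CHAP key(s), and the resulting qpair's `auth` state is then checked with `nvmf_subsystem_get_qpairs` and `jq`. A minimal standalone sketch of that iteration follows, with the real `scripts/rpc.py -s /var/tmp/host.sock` invocation replaced by an `echo` stub so it runs without an SPDK target (the NQNs and addresses are copied from the log for illustration only):

```shell
#!/usr/bin/env bash
# Sketch of the per-key DH-HMAC-CHAP cycle seen in the trace above.
# rpc() only echoes the command line; in the real script it is
# /path/to/spdk/scripts/rpc.py -s /var/tmp/host.sock.
rpc() { echo "rpc.py $*"; }

subnqn=nqn.2024-03.io.spdk:cnode0
hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562

# Controller keys: key ids with no entry here get no --dhchap-ctrlr-key flag,
# matching the ${ckeys[$3]:+...} expansion in auth.sh (key3 in the log).
declare -A ckeys=([0]=c0 [1]=c1 [2]=c2)

connect_authenticate() {
  local digest=$1 dhgroup=$2 keyid=$3
  # The :+ alternate-value expansion emits both words only when a
  # controller key is defined for this key id, else an empty array.
  local ckey=(${ckeys[$keyid]:+--dhchap-ctrlr-key "ckey$keyid"})
  rpc nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
      --dhchap-key "key$keyid" "${ckey[@]}"
  rpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
      -q "$hostnqn" -n "$subnqn" -b nvme0 \
      --dhchap-key "key$keyid" "${ckey[@]}"
}

connect_authenticate sha384 ffdhe2048 2
```

After the attach, the script verifies `.auth.digest`, `.auth.dhgroup`, and `.auth.state == "completed"` in the qpair JSON, detaches the controller, repeats the handshake through `nvme connect`/`nvme disconnect`, and removes the host before moving to the next key id.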
00:17:09.315 08:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.315 08:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:09.315 08:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.315 08:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:09.315 08:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:09.316 08:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:09.574 00:17:09.574 08:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:09.574 08:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:09.575 08:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:09.833 08:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:09.833 08:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:09.833 08:15:51 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.833 08:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:09.833 08:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.833 08:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:09.833 { 00:17:09.833 "cntlid": 61, 00:17:09.833 "qid": 0, 00:17:09.833 "state": "enabled", 00:17:09.833 "thread": "nvmf_tgt_poll_group_000", 00:17:09.833 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:09.833 "listen_address": { 00:17:09.833 "trtype": "TCP", 00:17:09.833 "adrfam": "IPv4", 00:17:09.833 "traddr": "10.0.0.2", 00:17:09.833 "trsvcid": "4420" 00:17:09.833 }, 00:17:09.833 "peer_address": { 00:17:09.833 "trtype": "TCP", 00:17:09.833 "adrfam": "IPv4", 00:17:09.833 "traddr": "10.0.0.1", 00:17:09.833 "trsvcid": "41656" 00:17:09.833 }, 00:17:09.833 "auth": { 00:17:09.833 "state": "completed", 00:17:09.833 "digest": "sha384", 00:17:09.833 "dhgroup": "ffdhe2048" 00:17:09.833 } 00:17:09.833 } 00:17:09.833 ]' 00:17:09.833 08:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:09.834 08:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:09.834 08:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:09.834 08:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:09.834 08:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:09.834 08:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:09.834 08:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:09.834 08:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:10.092 08:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NjJmM2Q5YTRmMzZhNjM5YmMwOWI4NzJiNGNhNDkyNzg2MTY4MjM2ZDUzYjc3YTQyErGN6A==: --dhchap-ctrl-secret DHHC-1:01:ODdlOTY4OTYxOTkxY2RiY2Q0NzQwYzNhYTFiNGZhMDiF1NUp: 00:17:10.092 08:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NjJmM2Q5YTRmMzZhNjM5YmMwOWI4NzJiNGNhNDkyNzg2MTY4MjM2ZDUzYjc3YTQyErGN6A==: --dhchap-ctrl-secret DHHC-1:01:ODdlOTY4OTYxOTkxY2RiY2Q0NzQwYzNhYTFiNGZhMDiF1NUp: 00:17:10.659 08:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:10.659 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:10.659 08:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:10.659 08:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.659 08:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:10.659 08:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.659 08:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:10.659 08:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:10.659 08:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:10.918 08:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 3 00:17:10.918 08:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:10.918 08:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:10.918 08:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:10.918 08:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:10.918 08:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:10.918 08:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:17:10.918 08:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.918 08:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:10.918 08:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.918 08:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:10.918 08:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:10.918 08:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:11.177 00:17:11.177 08:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:11.177 08:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:11.177 08:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:11.435 08:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:11.435 08:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:11.435 08:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.435 08:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:11.435 08:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.435 08:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:11.435 { 00:17:11.435 "cntlid": 63, 00:17:11.435 "qid": 0, 00:17:11.435 "state": "enabled", 00:17:11.435 "thread": "nvmf_tgt_poll_group_000", 00:17:11.435 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:11.435 "listen_address": { 00:17:11.435 "trtype": "TCP", 00:17:11.435 "adrfam": 
"IPv4", 00:17:11.435 "traddr": "10.0.0.2", 00:17:11.435 "trsvcid": "4420" 00:17:11.435 }, 00:17:11.435 "peer_address": { 00:17:11.435 "trtype": "TCP", 00:17:11.435 "adrfam": "IPv4", 00:17:11.435 "traddr": "10.0.0.1", 00:17:11.435 "trsvcid": "41694" 00:17:11.435 }, 00:17:11.435 "auth": { 00:17:11.435 "state": "completed", 00:17:11.435 "digest": "sha384", 00:17:11.435 "dhgroup": "ffdhe2048" 00:17:11.435 } 00:17:11.435 } 00:17:11.435 ]' 00:17:11.435 08:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:11.435 08:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:11.435 08:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:11.435 08:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:11.435 08:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:11.435 08:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:11.435 08:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:11.435 08:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:11.696 08:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OGZiNmE0YjliYjM5ZDllN2VlZmY1ZjhmNzk1ODE4ZjkwMjQxN2NiZGFiZGFmMjlhOWNmNDVkN2MzNWIzMWIzZQOqQEk=: 00:17:11.696 08:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 
80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:OGZiNmE0YjliYjM5ZDllN2VlZmY1ZjhmNzk1ODE4ZjkwMjQxN2NiZGFiZGFmMjlhOWNmNDVkN2MzNWIzMWIzZQOqQEk=: 00:17:12.263 08:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:12.263 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:12.263 08:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:12.263 08:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.263 08:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:12.263 08:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.263 08:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:12.263 08:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:12.263 08:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:12.263 08:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:12.523 08:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 0 00:17:12.523 08:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:12.523 08:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:12.523 
08:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:12.523 08:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:12.523 08:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:12.523 08:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:12.523 08:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.523 08:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:12.523 08:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.523 08:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:12.523 08:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:12.523 08:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:12.782 00:17:12.782 08:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:12.782 08:15:54 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:12.782 08:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:13.040 08:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:13.040 08:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:13.040 08:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.040 08:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:13.040 08:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.040 08:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:13.040 { 00:17:13.040 "cntlid": 65, 00:17:13.040 "qid": 0, 00:17:13.040 "state": "enabled", 00:17:13.040 "thread": "nvmf_tgt_poll_group_000", 00:17:13.040 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:13.040 "listen_address": { 00:17:13.040 "trtype": "TCP", 00:17:13.040 "adrfam": "IPv4", 00:17:13.040 "traddr": "10.0.0.2", 00:17:13.040 "trsvcid": "4420" 00:17:13.040 }, 00:17:13.040 "peer_address": { 00:17:13.040 "trtype": "TCP", 00:17:13.040 "adrfam": "IPv4", 00:17:13.040 "traddr": "10.0.0.1", 00:17:13.040 "trsvcid": "36016" 00:17:13.040 }, 00:17:13.040 "auth": { 00:17:13.040 "state": "completed", 00:17:13.040 "digest": "sha384", 00:17:13.040 "dhgroup": "ffdhe3072" 00:17:13.040 } 00:17:13.040 } 00:17:13.040 ]' 00:17:13.040 08:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:13.040 08:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 
== \s\h\a\3\8\4 ]] 00:17:13.041 08:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:13.041 08:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:13.041 08:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:13.041 08:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:13.041 08:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:13.041 08:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:13.299 08:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YTNlZDY3MGZhZTEzNDVjNjU1MzMwN2EwZGRlMDc0M2IyMGJhNTljMzUwZjY5ZjkxwRaKIg==: --dhchap-ctrl-secret DHHC-1:03:YzYwMDdhZmE1ZDZmYzFkMWIxMzkxMjllZDkwMDgzNjY4ZjE4NjdkOWY0N2VjMjY5OGI5Y2Q3ZDk5NTIzYjRhN62pg6g=: 00:17:13.299 08:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:YTNlZDY3MGZhZTEzNDVjNjU1MzMwN2EwZGRlMDc0M2IyMGJhNTljMzUwZjY5ZjkxwRaKIg==: --dhchap-ctrl-secret DHHC-1:03:YzYwMDdhZmE1ZDZmYzFkMWIxMzkxMjllZDkwMDgzNjY4ZjE4NjdkOWY0N2VjMjY5OGI5Y2Q3ZDk5NTIzYjRhN62pg6g=: 00:17:13.865 08:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:13.865 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:13.865 08:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # 
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:13.865 08:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.865 08:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:13.865 08:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.865 08:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:13.865 08:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:13.865 08:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:14.124 08:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 1 00:17:14.124 08:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:14.124 08:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:14.124 08:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:14.124 08:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:14.124 08:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:14.124 08:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 
--dhchap-ctrlr-key ckey1 00:17:14.124 08:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.124 08:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:14.124 08:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.124 08:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:14.124 08:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:14.124 08:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:14.383 00:17:14.383 08:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:14.383 08:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:14.383 08:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:14.642 08:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:14.642 08:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:14.642 08:15:56 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.642 08:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:14.642 08:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.642 08:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:14.642 { 00:17:14.642 "cntlid": 67, 00:17:14.642 "qid": 0, 00:17:14.642 "state": "enabled", 00:17:14.642 "thread": "nvmf_tgt_poll_group_000", 00:17:14.642 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:14.642 "listen_address": { 00:17:14.642 "trtype": "TCP", 00:17:14.642 "adrfam": "IPv4", 00:17:14.642 "traddr": "10.0.0.2", 00:17:14.642 "trsvcid": "4420" 00:17:14.642 }, 00:17:14.642 "peer_address": { 00:17:14.642 "trtype": "TCP", 00:17:14.642 "adrfam": "IPv4", 00:17:14.642 "traddr": "10.0.0.1", 00:17:14.642 "trsvcid": "36042" 00:17:14.642 }, 00:17:14.642 "auth": { 00:17:14.642 "state": "completed", 00:17:14.642 "digest": "sha384", 00:17:14.642 "dhgroup": "ffdhe3072" 00:17:14.642 } 00:17:14.642 } 00:17:14.642 ]' 00:17:14.642 08:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:14.643 08:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:14.643 08:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:14.643 08:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:14.643 08:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:14.643 08:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:14.643 08:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:14.643 08:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:14.901 08:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YmE5MjY0ZThkNWU1MjBmZDNhNjhkMTU1Zjc5ODJmYjByjmi0: --dhchap-ctrl-secret DHHC-1:02:ZTUyMzM2ODhjZWFkMGQzMDgzMzcxYjlmY2RlY2Y5ZmY0ODZhNmRkNmU3ZThlMTNjzBAxdA==: 00:17:14.901 08:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:YmE5MjY0ZThkNWU1MjBmZDNhNjhkMTU1Zjc5ODJmYjByjmi0: --dhchap-ctrl-secret DHHC-1:02:ZTUyMzM2ODhjZWFkMGQzMDgzMzcxYjlmY2RlY2Y5ZmY0ODZhNmRkNmU3ZThlMTNjzBAxdA==: 00:17:15.469 08:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:15.469 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:15.469 08:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:15.469 08:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.469 08:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:15.469 08:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.469 08:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:15.469 08:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:15.469 08:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:15.727 08:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 2 00:17:15.727 08:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:15.727 08:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:15.727 08:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:15.727 08:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:15.727 08:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:15.728 08:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:15.728 08:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.728 08:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:15.728 08:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.728 08:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:15.728 08:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 
-s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:15.728 08:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:15.986 00:17:15.986 08:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:15.986 08:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:15.986 08:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:16.246 08:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:16.246 08:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:16.246 08:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.246 08:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:16.246 08:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.246 08:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:16.246 { 00:17:16.246 "cntlid": 69, 00:17:16.246 "qid": 0, 00:17:16.246 "state": "enabled", 00:17:16.246 "thread": "nvmf_tgt_poll_group_000", 00:17:16.246 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:16.246 
"listen_address": { 00:17:16.246 "trtype": "TCP", 00:17:16.246 "adrfam": "IPv4", 00:17:16.246 "traddr": "10.0.0.2", 00:17:16.246 "trsvcid": "4420" 00:17:16.246 }, 00:17:16.246 "peer_address": { 00:17:16.246 "trtype": "TCP", 00:17:16.246 "adrfam": "IPv4", 00:17:16.246 "traddr": "10.0.0.1", 00:17:16.246 "trsvcid": "36052" 00:17:16.246 }, 00:17:16.246 "auth": { 00:17:16.246 "state": "completed", 00:17:16.246 "digest": "sha384", 00:17:16.246 "dhgroup": "ffdhe3072" 00:17:16.246 } 00:17:16.246 } 00:17:16.246 ]' 00:17:16.246 08:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:16.246 08:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:16.246 08:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:16.246 08:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:16.246 08:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:16.246 08:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:16.246 08:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:16.246 08:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:16.506 08:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NjJmM2Q5YTRmMzZhNjM5YmMwOWI4NzJiNGNhNDkyNzg2MTY4MjM2ZDUzYjc3YTQyErGN6A==: --dhchap-ctrl-secret DHHC-1:01:ODdlOTY4OTYxOTkxY2RiY2Q0NzQwYzNhYTFiNGZhMDiF1NUp: 00:17:16.506 08:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 
-n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NjJmM2Q5YTRmMzZhNjM5YmMwOWI4NzJiNGNhNDkyNzg2MTY4MjM2ZDUzYjc3YTQyErGN6A==: --dhchap-ctrl-secret DHHC-1:01:ODdlOTY4OTYxOTkxY2RiY2Q0NzQwYzNhYTFiNGZhMDiF1NUp: 00:17:17.076 08:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:17.076 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:17.076 08:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:17.076 08:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.076 08:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:17.076 08:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.076 08:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:17.076 08:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:17.076 08:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:17.336 08:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 3 00:17:17.336 08:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:17.336 08:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha384 00:17:17.336 08:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:17.336 08:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:17.336 08:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:17.336 08:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:17:17.336 08:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.336 08:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:17.336 08:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.336 08:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:17.336 08:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:17.336 08:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:17.595 00:17:17.595 08:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:17.595 08:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # 
jq -r '.[].name' 00:17:17.595 08:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:17.854 08:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:17.854 08:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:17.854 08:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.854 08:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:17.854 08:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.854 08:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:17.854 { 00:17:17.854 "cntlid": 71, 00:17:17.854 "qid": 0, 00:17:17.854 "state": "enabled", 00:17:17.854 "thread": "nvmf_tgt_poll_group_000", 00:17:17.854 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:17.854 "listen_address": { 00:17:17.854 "trtype": "TCP", 00:17:17.854 "adrfam": "IPv4", 00:17:17.854 "traddr": "10.0.0.2", 00:17:17.854 "trsvcid": "4420" 00:17:17.854 }, 00:17:17.854 "peer_address": { 00:17:17.854 "trtype": "TCP", 00:17:17.854 "adrfam": "IPv4", 00:17:17.854 "traddr": "10.0.0.1", 00:17:17.854 "trsvcid": "36070" 00:17:17.854 }, 00:17:17.854 "auth": { 00:17:17.854 "state": "completed", 00:17:17.854 "digest": "sha384", 00:17:17.854 "dhgroup": "ffdhe3072" 00:17:17.854 } 00:17:17.854 } 00:17:17.854 ]' 00:17:17.854 08:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:17.854 08:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:17.854 08:15:59 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:17.854 08:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:17.854 08:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:17.854 08:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:17.854 08:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:17.854 08:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:18.113 08:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OGZiNmE0YjliYjM5ZDllN2VlZmY1ZjhmNzk1ODE4ZjkwMjQxN2NiZGFiZGFmMjlhOWNmNDVkN2MzNWIzMWIzZQOqQEk=: 00:17:18.113 08:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:OGZiNmE0YjliYjM5ZDllN2VlZmY1ZjhmNzk1ODE4ZjkwMjQxN2NiZGFiZGFmMjlhOWNmNDVkN2MzNWIzMWIzZQOqQEk=: 00:17:18.681 08:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:18.681 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:18.681 08:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:18.681 08:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 
00:17:18.681 08:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:18.681 08:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.681 08:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:18.681 08:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:18.681 08:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:18.681 08:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:18.940 08:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 0 00:17:18.940 08:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:18.940 08:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:18.940 08:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:18.940 08:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:18.940 08:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:18.940 08:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:18.940 08:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:17:18.940 08:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:18.940 08:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.940 08:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:18.940 08:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:18.940 08:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:19.199 00:17:19.199 08:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:19.199 08:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:19.199 08:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:19.459 08:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:19.459 08:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:19.459 08:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.459 08:16:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:19.459 08:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.459 08:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:19.459 { 00:17:19.459 "cntlid": 73, 00:17:19.459 "qid": 0, 00:17:19.459 "state": "enabled", 00:17:19.459 "thread": "nvmf_tgt_poll_group_000", 00:17:19.459 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:19.459 "listen_address": { 00:17:19.459 "trtype": "TCP", 00:17:19.459 "adrfam": "IPv4", 00:17:19.459 "traddr": "10.0.0.2", 00:17:19.459 "trsvcid": "4420" 00:17:19.459 }, 00:17:19.459 "peer_address": { 00:17:19.459 "trtype": "TCP", 00:17:19.459 "adrfam": "IPv4", 00:17:19.459 "traddr": "10.0.0.1", 00:17:19.459 "trsvcid": "36100" 00:17:19.459 }, 00:17:19.459 "auth": { 00:17:19.459 "state": "completed", 00:17:19.459 "digest": "sha384", 00:17:19.459 "dhgroup": "ffdhe4096" 00:17:19.459 } 00:17:19.459 } 00:17:19.459 ]' 00:17:19.459 08:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:19.459 08:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:19.459 08:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:19.459 08:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:19.459 08:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:19.459 08:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:19.459 08:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:19.459 08:16:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:19.717 08:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YTNlZDY3MGZhZTEzNDVjNjU1MzMwN2EwZGRlMDc0M2IyMGJhNTljMzUwZjY5ZjkxwRaKIg==: --dhchap-ctrl-secret DHHC-1:03:YzYwMDdhZmE1ZDZmYzFkMWIxMzkxMjllZDkwMDgzNjY4ZjE4NjdkOWY0N2VjMjY5OGI5Y2Q3ZDk5NTIzYjRhN62pg6g=: 00:17:19.717 08:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:YTNlZDY3MGZhZTEzNDVjNjU1MzMwN2EwZGRlMDc0M2IyMGJhNTljMzUwZjY5ZjkxwRaKIg==: --dhchap-ctrl-secret DHHC-1:03:YzYwMDdhZmE1ZDZmYzFkMWIxMzkxMjllZDkwMDgzNjY4ZjE4NjdkOWY0N2VjMjY5OGI5Y2Q3ZDk5NTIzYjRhN62pg6g=: 00:17:20.285 08:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:20.285 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:20.285 08:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:20.285 08:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.285 08:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:20.285 08:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.285 08:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:20.285 08:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:20.285 08:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:20.543 08:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 1 00:17:20.543 08:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:20.543 08:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:20.543 08:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:20.543 08:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:20.543 08:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:20.543 08:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:20.543 08:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.543 08:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:20.543 08:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.543 08:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:20.543 08:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 
-s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:20.543 08:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:20.803 00:17:20.803 08:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:20.803 08:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:20.803 08:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:21.062 08:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:21.062 08:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:21.062 08:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.062 08:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:21.062 08:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.062 08:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:21.062 { 00:17:21.062 "cntlid": 75, 00:17:21.062 "qid": 0, 00:17:21.062 "state": "enabled", 00:17:21.062 "thread": "nvmf_tgt_poll_group_000", 00:17:21.062 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:21.062 
"listen_address": { 00:17:21.062 "trtype": "TCP", 00:17:21.062 "adrfam": "IPv4", 00:17:21.062 "traddr": "10.0.0.2", 00:17:21.062 "trsvcid": "4420" 00:17:21.062 }, 00:17:21.062 "peer_address": { 00:17:21.062 "trtype": "TCP", 00:17:21.062 "adrfam": "IPv4", 00:17:21.062 "traddr": "10.0.0.1", 00:17:21.062 "trsvcid": "36138" 00:17:21.062 }, 00:17:21.062 "auth": { 00:17:21.062 "state": "completed", 00:17:21.062 "digest": "sha384", 00:17:21.062 "dhgroup": "ffdhe4096" 00:17:21.062 } 00:17:21.062 } 00:17:21.062 ]' 00:17:21.062 08:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:21.062 08:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:21.062 08:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:21.062 08:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:21.062 08:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:21.062 08:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:21.062 08:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:21.062 08:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:21.321 08:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YmE5MjY0ZThkNWU1MjBmZDNhNjhkMTU1Zjc5ODJmYjByjmi0: --dhchap-ctrl-secret DHHC-1:02:ZTUyMzM2ODhjZWFkMGQzMDgzMzcxYjlmY2RlY2Y5ZmY0ODZhNmRkNmU3ZThlMTNjzBAxdA==: 00:17:21.321 08:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 
-n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:YmE5MjY0ZThkNWU1MjBmZDNhNjhkMTU1Zjc5ODJmYjByjmi0: --dhchap-ctrl-secret DHHC-1:02:ZTUyMzM2ODhjZWFkMGQzMDgzMzcxYjlmY2RlY2Y5ZmY0ODZhNmRkNmU3ZThlMTNjzBAxdA==: 00:17:21.889 08:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:21.889 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:21.889 08:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:21.889 08:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.889 08:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:21.889 08:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.889 08:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:21.889 08:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:21.889 08:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:22.148 08:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 2 00:17:22.148 08:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:22.148 08:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha384 00:17:22.148 08:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:22.148 08:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:22.148 08:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:22.148 08:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:22.148 08:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.148 08:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:22.148 08:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.148 08:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:22.148 08:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:22.148 08:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:22.407 00:17:22.407 08:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc 
bdev_nvme_get_controllers 00:17:22.407 08:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:22.407 08:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:22.665 08:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:22.665 08:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:22.665 08:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.665 08:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:22.665 08:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.665 08:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:22.665 { 00:17:22.665 "cntlid": 77, 00:17:22.665 "qid": 0, 00:17:22.665 "state": "enabled", 00:17:22.665 "thread": "nvmf_tgt_poll_group_000", 00:17:22.665 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:22.665 "listen_address": { 00:17:22.665 "trtype": "TCP", 00:17:22.665 "adrfam": "IPv4", 00:17:22.665 "traddr": "10.0.0.2", 00:17:22.665 "trsvcid": "4420" 00:17:22.665 }, 00:17:22.665 "peer_address": { 00:17:22.665 "trtype": "TCP", 00:17:22.665 "adrfam": "IPv4", 00:17:22.665 "traddr": "10.0.0.1", 00:17:22.665 "trsvcid": "39124" 00:17:22.665 }, 00:17:22.665 "auth": { 00:17:22.665 "state": "completed", 00:17:22.665 "digest": "sha384", 00:17:22.665 "dhgroup": "ffdhe4096" 00:17:22.665 } 00:17:22.665 } 00:17:22.665 ]' 00:17:22.665 08:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:22.665 08:16:04 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:22.665 08:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:22.665 08:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:22.665 08:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:22.665 08:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:22.665 08:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:22.666 08:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:22.925 08:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NjJmM2Q5YTRmMzZhNjM5YmMwOWI4NzJiNGNhNDkyNzg2MTY4MjM2ZDUzYjc3YTQyErGN6A==: --dhchap-ctrl-secret DHHC-1:01:ODdlOTY4OTYxOTkxY2RiY2Q0NzQwYzNhYTFiNGZhMDiF1NUp: 00:17:22.925 08:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NjJmM2Q5YTRmMzZhNjM5YmMwOWI4NzJiNGNhNDkyNzg2MTY4MjM2ZDUzYjc3YTQyErGN6A==: --dhchap-ctrl-secret DHHC-1:01:ODdlOTY4OTYxOTkxY2RiY2Q0NzQwYzNhYTFiNGZhMDiF1NUp: 00:17:23.493 08:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:23.493 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:23.493 08:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:23.493 08:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.493 08:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:23.493 08:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.493 08:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:23.493 08:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:23.493 08:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:23.752 08:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 3 00:17:23.752 08:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:23.752 08:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:23.752 08:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:23.752 08:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:23.752 08:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:23.752 08:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:17:23.752 08:16:05 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.752 08:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:23.752 08:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.752 08:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:23.752 08:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:23.752 08:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:24.011 00:17:24.011 08:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:24.011 08:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:24.011 08:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:24.270 08:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:24.270 08:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:24.270 08:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.270 08:16:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:24.270 08:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.270 08:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:24.270 { 00:17:24.270 "cntlid": 79, 00:17:24.270 "qid": 0, 00:17:24.270 "state": "enabled", 00:17:24.270 "thread": "nvmf_tgt_poll_group_000", 00:17:24.270 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:24.270 "listen_address": { 00:17:24.270 "trtype": "TCP", 00:17:24.270 "adrfam": "IPv4", 00:17:24.270 "traddr": "10.0.0.2", 00:17:24.270 "trsvcid": "4420" 00:17:24.270 }, 00:17:24.270 "peer_address": { 00:17:24.270 "trtype": "TCP", 00:17:24.270 "adrfam": "IPv4", 00:17:24.270 "traddr": "10.0.0.1", 00:17:24.270 "trsvcid": "39164" 00:17:24.270 }, 00:17:24.270 "auth": { 00:17:24.270 "state": "completed", 00:17:24.270 "digest": "sha384", 00:17:24.270 "dhgroup": "ffdhe4096" 00:17:24.270 } 00:17:24.270 } 00:17:24.270 ]' 00:17:24.270 08:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:24.270 08:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:24.270 08:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:24.270 08:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:24.270 08:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:24.270 08:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:24.270 08:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:24.270 08:16:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:24.529 08:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OGZiNmE0YjliYjM5ZDllN2VlZmY1ZjhmNzk1ODE4ZjkwMjQxN2NiZGFiZGFmMjlhOWNmNDVkN2MzNWIzMWIzZQOqQEk=: 00:17:24.529 08:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:OGZiNmE0YjliYjM5ZDllN2VlZmY1ZjhmNzk1ODE4ZjkwMjQxN2NiZGFiZGFmMjlhOWNmNDVkN2MzNWIzMWIzZQOqQEk=: 00:17:25.098 08:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:25.098 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:25.098 08:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:25.098 08:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.098 08:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:25.098 08:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.098 08:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:25.098 08:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:25.098 08:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 
--dhchap-dhgroups ffdhe6144 00:17:25.098 08:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:25.358 08:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 0 00:17:25.358 08:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:25.358 08:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:25.358 08:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:25.358 08:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:25.358 08:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:25.358 08:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:25.358 08:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.358 08:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:25.358 08:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.358 08:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:25.358 08:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:25.358 08:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:25.617 00:17:25.617 08:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:25.617 08:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:25.617 08:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:25.876 08:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:25.876 08:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:25.876 08:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.876 08:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:25.876 08:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.876 08:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:25.876 { 00:17:25.876 "cntlid": 81, 00:17:25.876 "qid": 0, 00:17:25.876 "state": "enabled", 00:17:25.876 "thread": "nvmf_tgt_poll_group_000", 00:17:25.876 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:25.876 "listen_address": { 
00:17:25.876 "trtype": "TCP", 00:17:25.876 "adrfam": "IPv4", 00:17:25.876 "traddr": "10.0.0.2", 00:17:25.876 "trsvcid": "4420" 00:17:25.876 }, 00:17:25.876 "peer_address": { 00:17:25.876 "trtype": "TCP", 00:17:25.876 "adrfam": "IPv4", 00:17:25.876 "traddr": "10.0.0.1", 00:17:25.876 "trsvcid": "39194" 00:17:25.876 }, 00:17:25.876 "auth": { 00:17:25.876 "state": "completed", 00:17:25.876 "digest": "sha384", 00:17:25.876 "dhgroup": "ffdhe6144" 00:17:25.876 } 00:17:25.876 } 00:17:25.876 ]' 00:17:25.876 08:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:25.876 08:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:25.876 08:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:26.136 08:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:26.136 08:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:26.136 08:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:26.136 08:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:26.136 08:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:26.136 08:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YTNlZDY3MGZhZTEzNDVjNjU1MzMwN2EwZGRlMDc0M2IyMGJhNTljMzUwZjY5ZjkxwRaKIg==: --dhchap-ctrl-secret DHHC-1:03:YzYwMDdhZmE1ZDZmYzFkMWIxMzkxMjllZDkwMDgzNjY4ZjE4NjdkOWY0N2VjMjY5OGI5Y2Q3ZDk5NTIzYjRhN62pg6g=: 00:17:26.136 08:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme 
connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:YTNlZDY3MGZhZTEzNDVjNjU1MzMwN2EwZGRlMDc0M2IyMGJhNTljMzUwZjY5ZjkxwRaKIg==: --dhchap-ctrl-secret DHHC-1:03:YzYwMDdhZmE1ZDZmYzFkMWIxMzkxMjllZDkwMDgzNjY4ZjE4NjdkOWY0N2VjMjY5OGI5Y2Q3ZDk5NTIzYjRhN62pg6g=: 00:17:26.704 08:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:26.704 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:26.704 08:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:26.704 08:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.704 08:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:26.964 08:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.964 08:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:26.964 08:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:26.964 08:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:26.964 08:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 1 00:17:26.964 08:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 
00:17:26.964 08:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:26.964 08:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:26.964 08:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:26.964 08:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:26.964 08:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:26.964 08:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.964 08:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:26.964 08:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.964 08:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:26.964 08:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:26.964 08:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:27.531 00:17:27.532 08:16:09 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:27.532 08:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:27.532 08:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:27.532 08:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:27.532 08:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:27.532 08:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.532 08:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:27.532 08:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.532 08:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:27.532 { 00:17:27.532 "cntlid": 83, 00:17:27.532 "qid": 0, 00:17:27.532 "state": "enabled", 00:17:27.532 "thread": "nvmf_tgt_poll_group_000", 00:17:27.532 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:27.532 "listen_address": { 00:17:27.532 "trtype": "TCP", 00:17:27.532 "adrfam": "IPv4", 00:17:27.532 "traddr": "10.0.0.2", 00:17:27.532 "trsvcid": "4420" 00:17:27.532 }, 00:17:27.532 "peer_address": { 00:17:27.532 "trtype": "TCP", 00:17:27.532 "adrfam": "IPv4", 00:17:27.532 "traddr": "10.0.0.1", 00:17:27.532 "trsvcid": "39226" 00:17:27.532 }, 00:17:27.532 "auth": { 00:17:27.532 "state": "completed", 00:17:27.532 "digest": "sha384", 00:17:27.532 "dhgroup": "ffdhe6144" 00:17:27.532 } 00:17:27.532 } 00:17:27.532 ]' 00:17:27.532 08:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq 
-r '.[0].auth.digest' 00:17:27.532 08:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:27.796 08:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:27.796 08:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:27.796 08:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:27.796 08:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:27.796 08:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:27.796 08:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:28.055 08:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YmE5MjY0ZThkNWU1MjBmZDNhNjhkMTU1Zjc5ODJmYjByjmi0: --dhchap-ctrl-secret DHHC-1:02:ZTUyMzM2ODhjZWFkMGQzMDgzMzcxYjlmY2RlY2Y5ZmY0ODZhNmRkNmU3ZThlMTNjzBAxdA==: 00:17:28.055 08:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:YmE5MjY0ZThkNWU1MjBmZDNhNjhkMTU1Zjc5ODJmYjByjmi0: --dhchap-ctrl-secret DHHC-1:02:ZTUyMzM2ODhjZWFkMGQzMDgzMzcxYjlmY2RlY2Y5ZmY0ODZhNmRkNmU3ZThlMTNjzBAxdA==: 00:17:28.621 08:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:28.621 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:28.621 08:16:10 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:28.621 08:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.621 08:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:28.621 08:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.621 08:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:28.621 08:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:28.621 08:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:28.621 08:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 2 00:17:28.621 08:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:28.621 08:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:28.621 08:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:28.622 08:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:28.622 08:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:28.622 08:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:28.622 08:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.622 08:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:28.880 08:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.880 08:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:28.880 08:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:28.880 08:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:29.139 00:17:29.139 08:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:29.139 08:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:29.139 08:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:29.398 08:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:29.398 08:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:29.398 08:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:29.398 08:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:29.398 08:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:29.398 08:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:29.398 { 00:17:29.398 "cntlid": 85, 00:17:29.398 "qid": 0, 00:17:29.398 "state": "enabled", 00:17:29.398 "thread": "nvmf_tgt_poll_group_000", 00:17:29.398 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:29.398 "listen_address": { 00:17:29.398 "trtype": "TCP", 00:17:29.398 "adrfam": "IPv4", 00:17:29.398 "traddr": "10.0.0.2", 00:17:29.398 "trsvcid": "4420" 00:17:29.398 }, 00:17:29.398 "peer_address": { 00:17:29.398 "trtype": "TCP", 00:17:29.398 "adrfam": "IPv4", 00:17:29.398 "traddr": "10.0.0.1", 00:17:29.398 "trsvcid": "39250" 00:17:29.398 }, 00:17:29.398 "auth": { 00:17:29.398 "state": "completed", 00:17:29.398 "digest": "sha384", 00:17:29.398 "dhgroup": "ffdhe6144" 00:17:29.398 } 00:17:29.398 } 00:17:29.398 ]' 00:17:29.398 08:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:29.398 08:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:29.398 08:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:29.398 08:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:29.399 08:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:29.399 08:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d 
]] 00:17:29.399 08:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:29.399 08:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:29.657 08:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NjJmM2Q5YTRmMzZhNjM5YmMwOWI4NzJiNGNhNDkyNzg2MTY4MjM2ZDUzYjc3YTQyErGN6A==: --dhchap-ctrl-secret DHHC-1:01:ODdlOTY4OTYxOTkxY2RiY2Q0NzQwYzNhYTFiNGZhMDiF1NUp: 00:17:29.658 08:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NjJmM2Q5YTRmMzZhNjM5YmMwOWI4NzJiNGNhNDkyNzg2MTY4MjM2ZDUzYjc3YTQyErGN6A==: --dhchap-ctrl-secret DHHC-1:01:ODdlOTY4OTYxOTkxY2RiY2Q0NzQwYzNhYTFiNGZhMDiF1NUp: 00:17:30.225 08:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:30.225 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:30.225 08:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:30.225 08:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.225 08:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:30.225 08:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.225 08:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 
00:17:30.225 08:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:30.225 08:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:30.485 08:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 3 00:17:30.485 08:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:30.485 08:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:30.485 08:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:30.485 08:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:30.485 08:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:30.485 08:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:17:30.485 08:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.485 08:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:30.485 08:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.485 08:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:30.485 08:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp 
-f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:30.485 08:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:30.744 00:17:30.744 08:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:30.744 08:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:30.744 08:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:31.004 08:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:31.004 08:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:31.004 08:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.004 08:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:31.004 08:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.004 08:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:31.004 { 00:17:31.004 "cntlid": 87, 00:17:31.004 "qid": 0, 00:17:31.004 "state": "enabled", 00:17:31.004 "thread": "nvmf_tgt_poll_group_000", 00:17:31.004 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:31.004 "listen_address": { 00:17:31.004 "trtype": 
"TCP", 00:17:31.004 "adrfam": "IPv4", 00:17:31.004 "traddr": "10.0.0.2", 00:17:31.004 "trsvcid": "4420" 00:17:31.004 }, 00:17:31.004 "peer_address": { 00:17:31.004 "trtype": "TCP", 00:17:31.004 "adrfam": "IPv4", 00:17:31.004 "traddr": "10.0.0.1", 00:17:31.004 "trsvcid": "39276" 00:17:31.004 }, 00:17:31.004 "auth": { 00:17:31.004 "state": "completed", 00:17:31.004 "digest": "sha384", 00:17:31.004 "dhgroup": "ffdhe6144" 00:17:31.004 } 00:17:31.004 } 00:17:31.004 ]' 00:17:31.004 08:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:31.004 08:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:31.004 08:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:31.004 08:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:31.004 08:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:31.004 08:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:31.004 08:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:31.004 08:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:31.263 08:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OGZiNmE0YjliYjM5ZDllN2VlZmY1ZjhmNzk1ODE4ZjkwMjQxN2NiZGFiZGFmMjlhOWNmNDVkN2MzNWIzMWIzZQOqQEk=: 00:17:31.263 08:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:OGZiNmE0YjliYjM5ZDllN2VlZmY1ZjhmNzk1ODE4ZjkwMjQxN2NiZGFiZGFmMjlhOWNmNDVkN2MzNWIzMWIzZQOqQEk=: 00:17:31.831 08:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:31.831 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:31.831 08:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:31.831 08:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.831 08:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:31.831 08:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.831 08:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:31.831 08:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:31.831 08:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:31.831 08:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:32.090 08:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 0 00:17:32.090 08:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:32.090 08:16:14 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:32.090 08:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:32.090 08:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:32.090 08:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:32.090 08:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:32.090 08:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.090 08:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:32.090 08:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:32.090 08:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:32.090 08:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:32.090 08:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:32.658 00:17:32.658 08:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:32.658 08:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:32.658 08:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:32.658 08:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:32.658 08:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:32.658 08:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.658 08:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:32.658 08:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:32.658 08:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:32.658 { 00:17:32.658 "cntlid": 89, 00:17:32.658 "qid": 0, 00:17:32.658 "state": "enabled", 00:17:32.658 "thread": "nvmf_tgt_poll_group_000", 00:17:32.658 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:32.658 "listen_address": { 00:17:32.658 "trtype": "TCP", 00:17:32.658 "adrfam": "IPv4", 00:17:32.658 "traddr": "10.0.0.2", 00:17:32.658 "trsvcid": "4420" 00:17:32.658 }, 00:17:32.658 "peer_address": { 00:17:32.658 "trtype": "TCP", 00:17:32.658 "adrfam": "IPv4", 00:17:32.658 "traddr": "10.0.0.1", 00:17:32.658 "trsvcid": "56180" 00:17:32.658 }, 00:17:32.658 "auth": { 00:17:32.658 "state": "completed", 00:17:32.658 "digest": "sha384", 00:17:32.658 "dhgroup": "ffdhe8192" 00:17:32.658 } 00:17:32.658 } 00:17:32.658 ]' 00:17:32.658 08:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:32.918 08:16:14 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:32.918 08:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:32.918 08:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:32.918 08:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:32.918 08:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:32.918 08:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:32.918 08:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:33.176 08:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YTNlZDY3MGZhZTEzNDVjNjU1MzMwN2EwZGRlMDc0M2IyMGJhNTljMzUwZjY5ZjkxwRaKIg==: --dhchap-ctrl-secret DHHC-1:03:YzYwMDdhZmE1ZDZmYzFkMWIxMzkxMjllZDkwMDgzNjY4ZjE4NjdkOWY0N2VjMjY5OGI5Y2Q3ZDk5NTIzYjRhN62pg6g=: 00:17:33.176 08:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:YTNlZDY3MGZhZTEzNDVjNjU1MzMwN2EwZGRlMDc0M2IyMGJhNTljMzUwZjY5ZjkxwRaKIg==: --dhchap-ctrl-secret DHHC-1:03:YzYwMDdhZmE1ZDZmYzFkMWIxMzkxMjllZDkwMDgzNjY4ZjE4NjdkOWY0N2VjMjY5OGI5Y2Q3ZDk5NTIzYjRhN62pg6g=: 00:17:33.744 08:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:33.744 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
00:17:33.744 08:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:33.744 08:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.744 08:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:33.744 08:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.744 08:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:33.744 08:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:33.744 08:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:34.003 08:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 1 00:17:34.003 08:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:34.003 08:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:34.003 08:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:34.003 08:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:34.003 08:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:34.003 08:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:34.003 08:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.003 08:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:34.003 08:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.003 08:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:34.003 08:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:34.003 08:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:34.263 00:17:34.263 08:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:34.263 08:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:34.263 08:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:34.522 08:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:34.522 08:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:34.522 08:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.522 08:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:34.522 08:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.522 08:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:34.522 { 00:17:34.522 "cntlid": 91, 00:17:34.522 "qid": 0, 00:17:34.522 "state": "enabled", 00:17:34.522 "thread": "nvmf_tgt_poll_group_000", 00:17:34.522 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:34.522 "listen_address": { 00:17:34.522 "trtype": "TCP", 00:17:34.522 "adrfam": "IPv4", 00:17:34.522 "traddr": "10.0.0.2", 00:17:34.522 "trsvcid": "4420" 00:17:34.522 }, 00:17:34.522 "peer_address": { 00:17:34.522 "trtype": "TCP", 00:17:34.522 "adrfam": "IPv4", 00:17:34.522 "traddr": "10.0.0.1", 00:17:34.522 "trsvcid": "56206" 00:17:34.522 }, 00:17:34.522 "auth": { 00:17:34.522 "state": "completed", 00:17:34.522 "digest": "sha384", 00:17:34.522 "dhgroup": "ffdhe8192" 00:17:34.522 } 00:17:34.522 } 00:17:34.522 ]' 00:17:34.522 08:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:34.522 08:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:34.522 08:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:34.782 08:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:34.782 08:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:34.782 08:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d 
]] 00:17:34.782 08:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:34.782 08:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:34.782 08:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YmE5MjY0ZThkNWU1MjBmZDNhNjhkMTU1Zjc5ODJmYjByjmi0: --dhchap-ctrl-secret DHHC-1:02:ZTUyMzM2ODhjZWFkMGQzMDgzMzcxYjlmY2RlY2Y5ZmY0ODZhNmRkNmU3ZThlMTNjzBAxdA==: 00:17:34.782 08:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:YmE5MjY0ZThkNWU1MjBmZDNhNjhkMTU1Zjc5ODJmYjByjmi0: --dhchap-ctrl-secret DHHC-1:02:ZTUyMzM2ODhjZWFkMGQzMDgzMzcxYjlmY2RlY2Y5ZmY0ODZhNmRkNmU3ZThlMTNjzBAxdA==: 00:17:35.348 08:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:35.348 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:35.348 08:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:35.348 08:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.348 08:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:35.348 08:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.348 08:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 
00:17:35.348 08:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:35.348 08:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:35.607 08:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 2 00:17:35.607 08:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:35.607 08:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:35.607 08:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:35.607 08:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:35.607 08:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:35.607 08:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:35.607 08:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.607 08:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:35.607 08:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.607 08:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:35.607 08:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:35.608 08:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:36.176 00:17:36.176 08:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:36.176 08:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:36.176 08:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:36.435 08:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:36.435 08:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:36.435 08:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:36.435 08:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:36.435 08:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:36.435 08:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:36.435 { 00:17:36.435 "cntlid": 93, 00:17:36.435 "qid": 0, 00:17:36.435 "state": "enabled", 00:17:36.435 "thread": "nvmf_tgt_poll_group_000", 00:17:36.435 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:36.435 "listen_address": { 00:17:36.435 "trtype": "TCP", 00:17:36.435 "adrfam": "IPv4", 00:17:36.435 "traddr": "10.0.0.2", 00:17:36.435 "trsvcid": "4420" 00:17:36.435 }, 00:17:36.435 "peer_address": { 00:17:36.435 "trtype": "TCP", 00:17:36.435 "adrfam": "IPv4", 00:17:36.435 "traddr": "10.0.0.1", 00:17:36.435 "trsvcid": "56240" 00:17:36.435 }, 00:17:36.435 "auth": { 00:17:36.435 "state": "completed", 00:17:36.435 "digest": "sha384", 00:17:36.435 "dhgroup": "ffdhe8192" 00:17:36.435 } 00:17:36.435 } 00:17:36.435 ]' 00:17:36.435 08:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:36.435 08:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:36.435 08:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:36.435 08:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:36.435 08:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:36.435 08:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:36.435 08:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:36.435 08:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:36.694 08:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NjJmM2Q5YTRmMzZhNjM5YmMwOWI4NzJiNGNhNDkyNzg2MTY4MjM2ZDUzYjc3YTQyErGN6A==: --dhchap-ctrl-secret DHHC-1:01:ODdlOTY4OTYxOTkxY2RiY2Q0NzQwYzNhYTFiNGZhMDiF1NUp: 00:17:36.694 08:16:18 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NjJmM2Q5YTRmMzZhNjM5YmMwOWI4NzJiNGNhNDkyNzg2MTY4MjM2ZDUzYjc3YTQyErGN6A==: --dhchap-ctrl-secret DHHC-1:01:ODdlOTY4OTYxOTkxY2RiY2Q0NzQwYzNhYTFiNGZhMDiF1NUp: 00:17:37.260 08:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:37.260 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:37.260 08:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:37.260 08:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:37.260 08:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:37.260 08:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:37.260 08:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:37.260 08:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:37.260 08:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:37.518 08:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 3 00:17:37.518 08:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local 
digest dhgroup key ckey qpairs 00:17:37.518 08:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:37.518 08:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:37.518 08:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:37.518 08:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:37.518 08:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:17:37.518 08:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:37.518 08:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:37.518 08:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:37.518 08:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:37.518 08:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:37.518 08:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:38.085 00:17:38.085 08:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc 
bdev_nvme_get_controllers 00:17:38.085 08:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:38.085 08:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:38.085 08:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:38.085 08:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:38.085 08:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.085 08:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:38.085 08:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.085 08:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:38.085 { 00:17:38.085 "cntlid": 95, 00:17:38.085 "qid": 0, 00:17:38.085 "state": "enabled", 00:17:38.085 "thread": "nvmf_tgt_poll_group_000", 00:17:38.085 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:38.085 "listen_address": { 00:17:38.085 "trtype": "TCP", 00:17:38.085 "adrfam": "IPv4", 00:17:38.085 "traddr": "10.0.0.2", 00:17:38.085 "trsvcid": "4420" 00:17:38.085 }, 00:17:38.085 "peer_address": { 00:17:38.085 "trtype": "TCP", 00:17:38.085 "adrfam": "IPv4", 00:17:38.085 "traddr": "10.0.0.1", 00:17:38.085 "trsvcid": "56266" 00:17:38.085 }, 00:17:38.085 "auth": { 00:17:38.085 "state": "completed", 00:17:38.085 "digest": "sha384", 00:17:38.085 "dhgroup": "ffdhe8192" 00:17:38.085 } 00:17:38.085 } 00:17:38.085 ]' 00:17:38.085 08:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:38.343 08:16:20 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:38.343 08:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:38.343 08:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:38.343 08:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:38.343 08:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:38.343 08:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:38.343 08:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:38.602 08:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OGZiNmE0YjliYjM5ZDllN2VlZmY1ZjhmNzk1ODE4ZjkwMjQxN2NiZGFiZGFmMjlhOWNmNDVkN2MzNWIzMWIzZQOqQEk=: 00:17:38.602 08:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:OGZiNmE0YjliYjM5ZDllN2VlZmY1ZjhmNzk1ODE4ZjkwMjQxN2NiZGFiZGFmMjlhOWNmNDVkN2MzNWIzMWIzZQOqQEk=: 00:17:39.170 08:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:39.170 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:39.170 08:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:39.170 08:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.170 08:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:39.170 08:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.170 08:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:17:39.170 08:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:39.170 08:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:39.170 08:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:39.170 08:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:39.428 08:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 0 00:17:39.428 08:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:39.428 08:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:39.428 08:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:39.428 08:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:39.428 08:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:39.428 08:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 
-- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:39.428 08:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.428 08:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:39.428 08:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.428 08:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:39.428 08:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:39.428 08:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:39.686 00:17:39.686 08:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:39.686 08:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:39.686 08:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:39.686 08:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:39.686 08:16:21 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:39.686 08:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.686 08:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:39.686 08:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.686 08:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:39.686 { 00:17:39.686 "cntlid": 97, 00:17:39.686 "qid": 0, 00:17:39.686 "state": "enabled", 00:17:39.686 "thread": "nvmf_tgt_poll_group_000", 00:17:39.686 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:39.686 "listen_address": { 00:17:39.686 "trtype": "TCP", 00:17:39.686 "adrfam": "IPv4", 00:17:39.686 "traddr": "10.0.0.2", 00:17:39.686 "trsvcid": "4420" 00:17:39.686 }, 00:17:39.686 "peer_address": { 00:17:39.686 "trtype": "TCP", 00:17:39.686 "adrfam": "IPv4", 00:17:39.686 "traddr": "10.0.0.1", 00:17:39.686 "trsvcid": "56280" 00:17:39.686 }, 00:17:39.686 "auth": { 00:17:39.686 "state": "completed", 00:17:39.686 "digest": "sha512", 00:17:39.686 "dhgroup": "null" 00:17:39.686 } 00:17:39.686 } 00:17:39.686 ]' 00:17:39.686 08:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:39.944 08:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:39.944 08:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:39.944 08:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:39.944 08:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:39.944 08:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:39.944 08:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:39.944 08:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:40.202 08:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YTNlZDY3MGZhZTEzNDVjNjU1MzMwN2EwZGRlMDc0M2IyMGJhNTljMzUwZjY5ZjkxwRaKIg==: --dhchap-ctrl-secret DHHC-1:03:YzYwMDdhZmE1ZDZmYzFkMWIxMzkxMjllZDkwMDgzNjY4ZjE4NjdkOWY0N2VjMjY5OGI5Y2Q3ZDk5NTIzYjRhN62pg6g=: 00:17:40.202 08:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:YTNlZDY3MGZhZTEzNDVjNjU1MzMwN2EwZGRlMDc0M2IyMGJhNTljMzUwZjY5ZjkxwRaKIg==: --dhchap-ctrl-secret DHHC-1:03:YzYwMDdhZmE1ZDZmYzFkMWIxMzkxMjllZDkwMDgzNjY4ZjE4NjdkOWY0N2VjMjY5OGI5Y2Q3ZDk5NTIzYjRhN62pg6g=: 00:17:40.770 08:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:40.770 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:40.770 08:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:40.770 08:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:40.770 08:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:40.770 08:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:40.770 08:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:40.771 08:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:40.771 08:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:40.771 08:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 1 00:17:40.771 08:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:40.771 08:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:40.771 08:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:40.771 08:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:40.771 08:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:40.771 08:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:40.771 08:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:40.771 08:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:40.771 08:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:40.771 08:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:40.771 08:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:40.771 08:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:41.029 00:17:41.029 08:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:41.029 08:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:41.029 08:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:41.288 08:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:41.288 08:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:41.288 08:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:41.288 08:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:41.288 08:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:41.288 08:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:41.288 { 00:17:41.288 "cntlid": 99, 
00:17:41.288 "qid": 0, 00:17:41.288 "state": "enabled", 00:17:41.288 "thread": "nvmf_tgt_poll_group_000", 00:17:41.288 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:41.288 "listen_address": { 00:17:41.288 "trtype": "TCP", 00:17:41.288 "adrfam": "IPv4", 00:17:41.288 "traddr": "10.0.0.2", 00:17:41.288 "trsvcid": "4420" 00:17:41.288 }, 00:17:41.288 "peer_address": { 00:17:41.288 "trtype": "TCP", 00:17:41.288 "adrfam": "IPv4", 00:17:41.288 "traddr": "10.0.0.1", 00:17:41.288 "trsvcid": "56304" 00:17:41.288 }, 00:17:41.288 "auth": { 00:17:41.288 "state": "completed", 00:17:41.288 "digest": "sha512", 00:17:41.288 "dhgroup": "null" 00:17:41.288 } 00:17:41.288 } 00:17:41.288 ]' 00:17:41.288 08:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:41.288 08:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:41.288 08:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:41.288 08:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:41.288 08:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:41.548 08:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:41.548 08:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:41.548 08:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:41.548 08:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YmE5MjY0ZThkNWU1MjBmZDNhNjhkMTU1Zjc5ODJmYjByjmi0: --dhchap-ctrl-secret 
DHHC-1:02:ZTUyMzM2ODhjZWFkMGQzMDgzMzcxYjlmY2RlY2Y5ZmY0ODZhNmRkNmU3ZThlMTNjzBAxdA==: 00:17:41.548 08:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:YmE5MjY0ZThkNWU1MjBmZDNhNjhkMTU1Zjc5ODJmYjByjmi0: --dhchap-ctrl-secret DHHC-1:02:ZTUyMzM2ODhjZWFkMGQzMDgzMzcxYjlmY2RlY2Y5ZmY0ODZhNmRkNmU3ZThlMTNjzBAxdA==: 00:17:42.115 08:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:42.115 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:42.115 08:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:42.115 08:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.115 08:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:42.374 08:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.374 08:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:42.374 08:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:42.374 08:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:42.374 08:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 2 
00:17:42.374 08:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:42.374 08:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:42.374 08:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:42.374 08:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:42.374 08:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:42.374 08:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:42.374 08:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.374 08:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:42.374 08:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.374 08:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:42.374 08:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:42.374 08:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:42.633 00:17:42.633 08:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:42.633 08:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:42.633 08:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:42.893 08:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:42.893 08:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:42.893 08:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.893 08:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:42.893 08:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.894 08:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:42.894 { 00:17:42.894 "cntlid": 101, 00:17:42.894 "qid": 0, 00:17:42.894 "state": "enabled", 00:17:42.894 "thread": "nvmf_tgt_poll_group_000", 00:17:42.894 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:42.894 "listen_address": { 00:17:42.894 "trtype": "TCP", 00:17:42.894 "adrfam": "IPv4", 00:17:42.894 "traddr": "10.0.0.2", 00:17:42.894 "trsvcid": "4420" 00:17:42.894 }, 00:17:42.894 "peer_address": { 00:17:42.894 "trtype": "TCP", 00:17:42.894 "adrfam": "IPv4", 00:17:42.894 "traddr": "10.0.0.1", 00:17:42.894 "trsvcid": "45396" 00:17:42.894 }, 00:17:42.894 "auth": { 00:17:42.894 "state": "completed", 00:17:42.894 "digest": "sha512", 00:17:42.894 "dhgroup": "null" 00:17:42.894 } 00:17:42.894 } 
00:17:42.894 ]' 00:17:42.894 08:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:42.894 08:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:42.894 08:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:42.894 08:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:42.894 08:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:43.152 08:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:43.152 08:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:43.152 08:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:43.152 08:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NjJmM2Q5YTRmMzZhNjM5YmMwOWI4NzJiNGNhNDkyNzg2MTY4MjM2ZDUzYjc3YTQyErGN6A==: --dhchap-ctrl-secret DHHC-1:01:ODdlOTY4OTYxOTkxY2RiY2Q0NzQwYzNhYTFiNGZhMDiF1NUp: 00:17:43.152 08:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NjJmM2Q5YTRmMzZhNjM5YmMwOWI4NzJiNGNhNDkyNzg2MTY4MjM2ZDUzYjc3YTQyErGN6A==: --dhchap-ctrl-secret DHHC-1:01:ODdlOTY4OTYxOTkxY2RiY2Q0NzQwYzNhYTFiNGZhMDiF1NUp: 00:17:43.720 08:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:43.720 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:43.720 08:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:43.720 08:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:43.720 08:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:43.720 08:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:43.720 08:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:43.720 08:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:43.720 08:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:43.979 08:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 3 00:17:43.979 08:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:43.979 08:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:43.979 08:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:43.979 08:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:43.979 08:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:43.979 08:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:17:43.979 08:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:43.979 08:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:43.979 08:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:43.979 08:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:43.979 08:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:43.979 08:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:44.238 00:17:44.238 08:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:44.238 08:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:44.238 08:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:44.496 08:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:44.496 08:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:17:44.496 08:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:44.496 08:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:44.496 08:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:44.496 08:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:44.496 { 00:17:44.496 "cntlid": 103, 00:17:44.496 "qid": 0, 00:17:44.496 "state": "enabled", 00:17:44.496 "thread": "nvmf_tgt_poll_group_000", 00:17:44.496 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:44.496 "listen_address": { 00:17:44.496 "trtype": "TCP", 00:17:44.496 "adrfam": "IPv4", 00:17:44.496 "traddr": "10.0.0.2", 00:17:44.496 "trsvcid": "4420" 00:17:44.496 }, 00:17:44.496 "peer_address": { 00:17:44.496 "trtype": "TCP", 00:17:44.496 "adrfam": "IPv4", 00:17:44.496 "traddr": "10.0.0.1", 00:17:44.496 "trsvcid": "45418" 00:17:44.497 }, 00:17:44.497 "auth": { 00:17:44.497 "state": "completed", 00:17:44.497 "digest": "sha512", 00:17:44.497 "dhgroup": "null" 00:17:44.497 } 00:17:44.497 } 00:17:44.497 ]' 00:17:44.497 08:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:44.497 08:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:44.497 08:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:44.497 08:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:44.497 08:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:44.497 08:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:44.497 08:16:26 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:44.497 08:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:44.756 08:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OGZiNmE0YjliYjM5ZDllN2VlZmY1ZjhmNzk1ODE4ZjkwMjQxN2NiZGFiZGFmMjlhOWNmNDVkN2MzNWIzMWIzZQOqQEk=: 00:17:44.756 08:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:OGZiNmE0YjliYjM5ZDllN2VlZmY1ZjhmNzk1ODE4ZjkwMjQxN2NiZGFiZGFmMjlhOWNmNDVkN2MzNWIzMWIzZQOqQEk=: 00:17:45.325 08:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:45.325 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:45.325 08:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:45.325 08:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.325 08:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:45.325 08:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.325 08:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:45.325 08:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:45.325 08:16:27 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:45.325 08:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:45.585 08:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 0 00:17:45.585 08:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:45.585 08:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:45.585 08:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:45.585 08:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:45.585 08:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:45.585 08:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:45.585 08:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.585 08:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:45.585 08:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.585 08:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:45.585 08:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:45.585 08:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:45.845 00:17:45.845 08:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:45.845 08:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:45.845 08:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:46.104 08:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:46.104 08:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:46.104 08:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:46.104 08:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:46.104 08:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:46.104 08:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:46.104 { 00:17:46.104 "cntlid": 105, 00:17:46.104 "qid": 0, 00:17:46.104 "state": "enabled", 00:17:46.104 "thread": "nvmf_tgt_poll_group_000", 00:17:46.104 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:46.104 "listen_address": { 00:17:46.104 "trtype": "TCP", 00:17:46.104 "adrfam": "IPv4", 00:17:46.104 "traddr": "10.0.0.2", 00:17:46.104 "trsvcid": "4420" 00:17:46.104 }, 00:17:46.104 "peer_address": { 00:17:46.104 "trtype": "TCP", 00:17:46.104 "adrfam": "IPv4", 00:17:46.104 "traddr": "10.0.0.1", 00:17:46.104 "trsvcid": "45446" 00:17:46.104 }, 00:17:46.104 "auth": { 00:17:46.104 "state": "completed", 00:17:46.104 "digest": "sha512", 00:17:46.104 "dhgroup": "ffdhe2048" 00:17:46.104 } 00:17:46.104 } 00:17:46.104 ]' 00:17:46.104 08:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:46.104 08:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:46.104 08:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:46.104 08:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:46.104 08:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:46.104 08:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:46.104 08:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:46.104 08:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:46.363 08:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YTNlZDY3MGZhZTEzNDVjNjU1MzMwN2EwZGRlMDc0M2IyMGJhNTljMzUwZjY5ZjkxwRaKIg==: --dhchap-ctrl-secret 
DHHC-1:03:YzYwMDdhZmE1ZDZmYzFkMWIxMzkxMjllZDkwMDgzNjY4ZjE4NjdkOWY0N2VjMjY5OGI5Y2Q3ZDk5NTIzYjRhN62pg6g=: 00:17:46.363 08:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:YTNlZDY3MGZhZTEzNDVjNjU1MzMwN2EwZGRlMDc0M2IyMGJhNTljMzUwZjY5ZjkxwRaKIg==: --dhchap-ctrl-secret DHHC-1:03:YzYwMDdhZmE1ZDZmYzFkMWIxMzkxMjllZDkwMDgzNjY4ZjE4NjdkOWY0N2VjMjY5OGI5Y2Q3ZDk5NTIzYjRhN62pg6g=: 00:17:46.931 08:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:46.931 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:46.931 08:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:46.931 08:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:46.931 08:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:46.931 08:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:46.931 08:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:46.931 08:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:46.931 08:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:47.190 08:16:29 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 1 00:17:47.190 08:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:47.190 08:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:47.190 08:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:47.190 08:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:47.190 08:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:47.190 08:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:47.190 08:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:47.190 08:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:47.190 08:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:47.190 08:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:47.190 08:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:47.191 08:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f 
ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:47.450 00:17:47.450 08:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:47.450 08:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:47.450 08:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:47.450 08:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:47.450 08:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:47.450 08:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:47.450 08:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:47.450 08:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:47.450 08:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:47.450 { 00:17:47.450 "cntlid": 107, 00:17:47.450 "qid": 0, 00:17:47.450 "state": "enabled", 00:17:47.450 "thread": "nvmf_tgt_poll_group_000", 00:17:47.450 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:47.450 "listen_address": { 00:17:47.450 "trtype": "TCP", 00:17:47.450 "adrfam": "IPv4", 00:17:47.450 "traddr": "10.0.0.2", 00:17:47.450 "trsvcid": "4420" 00:17:47.450 }, 00:17:47.450 "peer_address": { 00:17:47.450 "trtype": "TCP", 00:17:47.450 "adrfam": "IPv4", 00:17:47.450 "traddr": "10.0.0.1", 00:17:47.450 "trsvcid": "45458" 00:17:47.450 }, 00:17:47.450 "auth": { 00:17:47.450 "state": 
"completed", 00:17:47.450 "digest": "sha512", 00:17:47.450 "dhgroup": "ffdhe2048" 00:17:47.450 } 00:17:47.450 } 00:17:47.450 ]' 00:17:47.450 08:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:47.710 08:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:47.710 08:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:47.710 08:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:47.710 08:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:47.710 08:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:47.710 08:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:47.710 08:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:47.969 08:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YmE5MjY0ZThkNWU1MjBmZDNhNjhkMTU1Zjc5ODJmYjByjmi0: --dhchap-ctrl-secret DHHC-1:02:ZTUyMzM2ODhjZWFkMGQzMDgzMzcxYjlmY2RlY2Y5ZmY0ODZhNmRkNmU3ZThlMTNjzBAxdA==: 00:17:47.969 08:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:YmE5MjY0ZThkNWU1MjBmZDNhNjhkMTU1Zjc5ODJmYjByjmi0: --dhchap-ctrl-secret DHHC-1:02:ZTUyMzM2ODhjZWFkMGQzMDgzMzcxYjlmY2RlY2Y5ZmY0ODZhNmRkNmU3ZThlMTNjzBAxdA==: 00:17:48.536 08:16:30 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:48.536 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:48.536 08:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:48.536 08:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.536 08:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:48.536 08:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.536 08:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:48.536 08:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:48.536 08:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:48.536 08:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 2 00:17:48.536 08:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:48.536 08:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:48.536 08:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:48.536 08:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:48.536 08:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:48.536 08:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:48.536 08:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.537 08:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:48.537 08:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.537 08:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:48.537 08:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:48.537 08:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:48.795 00:17:48.795 08:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:48.795 08:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:48.795 08:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:49.053 
08:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:49.053 08:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:49.053 08:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.053 08:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:49.053 08:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.053 08:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:49.053 { 00:17:49.053 "cntlid": 109, 00:17:49.053 "qid": 0, 00:17:49.053 "state": "enabled", 00:17:49.053 "thread": "nvmf_tgt_poll_group_000", 00:17:49.053 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:49.053 "listen_address": { 00:17:49.053 "trtype": "TCP", 00:17:49.053 "adrfam": "IPv4", 00:17:49.053 "traddr": "10.0.0.2", 00:17:49.053 "trsvcid": "4420" 00:17:49.053 }, 00:17:49.053 "peer_address": { 00:17:49.053 "trtype": "TCP", 00:17:49.053 "adrfam": "IPv4", 00:17:49.053 "traddr": "10.0.0.1", 00:17:49.053 "trsvcid": "45482" 00:17:49.053 }, 00:17:49.053 "auth": { 00:17:49.053 "state": "completed", 00:17:49.053 "digest": "sha512", 00:17:49.053 "dhgroup": "ffdhe2048" 00:17:49.053 } 00:17:49.053 } 00:17:49.053 ]' 00:17:49.053 08:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:49.053 08:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:49.053 08:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:49.312 08:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:49.312 08:16:31 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:49.312 08:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:49.312 08:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:49.312 08:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:49.312 08:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NjJmM2Q5YTRmMzZhNjM5YmMwOWI4NzJiNGNhNDkyNzg2MTY4MjM2ZDUzYjc3YTQyErGN6A==: --dhchap-ctrl-secret DHHC-1:01:ODdlOTY4OTYxOTkxY2RiY2Q0NzQwYzNhYTFiNGZhMDiF1NUp: 00:17:49.312 08:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NjJmM2Q5YTRmMzZhNjM5YmMwOWI4NzJiNGNhNDkyNzg2MTY4MjM2ZDUzYjc3YTQyErGN6A==: --dhchap-ctrl-secret DHHC-1:01:ODdlOTY4OTYxOTkxY2RiY2Q0NzQwYzNhYTFiNGZhMDiF1NUp: 00:17:49.879 08:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:49.879 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:49.879 08:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:49.879 08:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.879 08:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:50.138 
08:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.138 08:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:50.138 08:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:50.138 08:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:50.138 08:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 3 00:17:50.138 08:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:50.138 08:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:50.138 08:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:50.138 08:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:50.138 08:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:50.138 08:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:17:50.138 08:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.138 08:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:50.138 08:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.138 08:16:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:50.138 08:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:50.138 08:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:50.397 00:17:50.397 08:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:50.397 08:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:50.397 08:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:50.656 08:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:50.656 08:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:50.656 08:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.656 08:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:50.656 08:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.656 08:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:50.656 { 00:17:50.656 "cntlid": 111, 
00:17:50.656 "qid": 0, 00:17:50.656 "state": "enabled", 00:17:50.656 "thread": "nvmf_tgt_poll_group_000", 00:17:50.656 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:50.656 "listen_address": { 00:17:50.656 "trtype": "TCP", 00:17:50.656 "adrfam": "IPv4", 00:17:50.656 "traddr": "10.0.0.2", 00:17:50.656 "trsvcid": "4420" 00:17:50.656 }, 00:17:50.656 "peer_address": { 00:17:50.656 "trtype": "TCP", 00:17:50.656 "adrfam": "IPv4", 00:17:50.656 "traddr": "10.0.0.1", 00:17:50.656 "trsvcid": "45514" 00:17:50.656 }, 00:17:50.656 "auth": { 00:17:50.656 "state": "completed", 00:17:50.656 "digest": "sha512", 00:17:50.656 "dhgroup": "ffdhe2048" 00:17:50.656 } 00:17:50.656 } 00:17:50.656 ]' 00:17:50.656 08:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:50.656 08:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:50.656 08:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:50.656 08:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:50.656 08:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:50.915 08:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:50.915 08:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:50.915 08:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:50.915 08:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:OGZiNmE0YjliYjM5ZDllN2VlZmY1ZjhmNzk1ODE4ZjkwMjQxN2NiZGFiZGFmMjlhOWNmNDVkN2MzNWIzMWIzZQOqQEk=: 00:17:50.915 08:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:OGZiNmE0YjliYjM5ZDllN2VlZmY1ZjhmNzk1ODE4ZjkwMjQxN2NiZGFiZGFmMjlhOWNmNDVkN2MzNWIzMWIzZQOqQEk=: 00:17:51.482 08:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:51.482 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:51.482 08:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:51.482 08:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.482 08:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:51.482 08:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.482 08:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:51.482 08:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:51.482 08:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:51.482 08:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:51.741 08:16:33 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 0 00:17:51.741 08:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:51.741 08:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:51.741 08:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:51.741 08:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:51.741 08:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:51.741 08:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:51.741 08:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.741 08:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:51.741 08:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.741 08:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:51.741 08:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:51.741 08:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f 
ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:52.000 00:17:52.000 08:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:52.000 08:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:52.000 08:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:52.258 08:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:52.258 08:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:52.258 08:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:52.258 08:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:52.258 08:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:52.258 08:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:52.258 { 00:17:52.258 "cntlid": 113, 00:17:52.258 "qid": 0, 00:17:52.258 "state": "enabled", 00:17:52.258 "thread": "nvmf_tgt_poll_group_000", 00:17:52.258 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:52.258 "listen_address": { 00:17:52.258 "trtype": "TCP", 00:17:52.258 "adrfam": "IPv4", 00:17:52.258 "traddr": "10.0.0.2", 00:17:52.258 "trsvcid": "4420" 00:17:52.258 }, 00:17:52.258 "peer_address": { 00:17:52.258 "trtype": "TCP", 00:17:52.258 "adrfam": "IPv4", 00:17:52.258 "traddr": "10.0.0.1", 00:17:52.258 "trsvcid": "35198" 00:17:52.258 }, 00:17:52.258 "auth": { 00:17:52.258 "state": 
"completed", 00:17:52.258 "digest": "sha512", 00:17:52.258 "dhgroup": "ffdhe3072" 00:17:52.258 } 00:17:52.258 } 00:17:52.258 ]' 00:17:52.258 08:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:52.258 08:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:52.258 08:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:52.518 08:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:52.518 08:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:52.518 08:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:52.518 08:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:52.518 08:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:52.518 08:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YTNlZDY3MGZhZTEzNDVjNjU1MzMwN2EwZGRlMDc0M2IyMGJhNTljMzUwZjY5ZjkxwRaKIg==: --dhchap-ctrl-secret DHHC-1:03:YzYwMDdhZmE1ZDZmYzFkMWIxMzkxMjllZDkwMDgzNjY4ZjE4NjdkOWY0N2VjMjY5OGI5Y2Q3ZDk5NTIzYjRhN62pg6g=: 00:17:52.518 08:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:YTNlZDY3MGZhZTEzNDVjNjU1MzMwN2EwZGRlMDc0M2IyMGJhNTljMzUwZjY5ZjkxwRaKIg==: --dhchap-ctrl-secret 
DHHC-1:03:YzYwMDdhZmE1ZDZmYzFkMWIxMzkxMjllZDkwMDgzNjY4ZjE4NjdkOWY0N2VjMjY5OGI5Y2Q3ZDk5NTIzYjRhN62pg6g=: 00:17:53.083 08:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:53.342 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:53.342 08:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:53.343 08:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.343 08:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:53.343 08:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.343 08:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:53.343 08:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:53.343 08:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:53.343 08:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 1 00:17:53.343 08:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:53.343 08:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:53.343 08:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:53.343 08:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- 
# key=key1 00:17:53.343 08:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:53.343 08:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:53.343 08:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.343 08:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:53.343 08:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.343 08:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:53.343 08:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:53.343 08:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:53.602 00:17:53.602 08:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:53.602 08:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:53.602 08:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:53.861 08:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:53.861 08:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:53.861 08:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.861 08:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:53.861 08:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.861 08:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:53.861 { 00:17:53.861 "cntlid": 115, 00:17:53.861 "qid": 0, 00:17:53.861 "state": "enabled", 00:17:53.861 "thread": "nvmf_tgt_poll_group_000", 00:17:53.861 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:53.861 "listen_address": { 00:17:53.861 "trtype": "TCP", 00:17:53.861 "adrfam": "IPv4", 00:17:53.861 "traddr": "10.0.0.2", 00:17:53.861 "trsvcid": "4420" 00:17:53.861 }, 00:17:53.861 "peer_address": { 00:17:53.861 "trtype": "TCP", 00:17:53.861 "adrfam": "IPv4", 00:17:53.862 "traddr": "10.0.0.1", 00:17:53.862 "trsvcid": "35228" 00:17:53.862 }, 00:17:53.862 "auth": { 00:17:53.862 "state": "completed", 00:17:53.862 "digest": "sha512", 00:17:53.862 "dhgroup": "ffdhe3072" 00:17:53.862 } 00:17:53.862 } 00:17:53.862 ]' 00:17:53.862 08:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:53.862 08:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:53.862 08:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:54.120 08:16:36 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:54.120 08:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:54.120 08:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:54.120 08:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:54.120 08:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:54.120 08:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YmE5MjY0ZThkNWU1MjBmZDNhNjhkMTU1Zjc5ODJmYjByjmi0: --dhchap-ctrl-secret DHHC-1:02:ZTUyMzM2ODhjZWFkMGQzMDgzMzcxYjlmY2RlY2Y5ZmY0ODZhNmRkNmU3ZThlMTNjzBAxdA==: 00:17:54.120 08:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:YmE5MjY0ZThkNWU1MjBmZDNhNjhkMTU1Zjc5ODJmYjByjmi0: --dhchap-ctrl-secret DHHC-1:02:ZTUyMzM2ODhjZWFkMGQzMDgzMzcxYjlmY2RlY2Y5ZmY0ODZhNmRkNmU3ZThlMTNjzBAxdA==: 00:17:54.688 08:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:54.688 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:54.688 08:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:54.688 08:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:17:54.688 08:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:54.688 08:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.688 08:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:54.688 08:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:54.688 08:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:54.947 08:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 2 00:17:54.947 08:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:54.947 08:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:54.947 08:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:54.947 08:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:54.947 08:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:54.947 08:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:54.947 08:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.947 08:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:17:54.947 08:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.947 08:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:54.947 08:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:54.947 08:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:55.206 00:17:55.206 08:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:55.206 08:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:55.206 08:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:55.466 08:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:55.466 08:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:55.466 08:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:55.466 08:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:55.466 08:16:37 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:55.466 08:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:55.466 { 00:17:55.466 "cntlid": 117, 00:17:55.466 "qid": 0, 00:17:55.466 "state": "enabled", 00:17:55.466 "thread": "nvmf_tgt_poll_group_000", 00:17:55.466 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:55.466 "listen_address": { 00:17:55.466 "trtype": "TCP", 00:17:55.466 "adrfam": "IPv4", 00:17:55.466 "traddr": "10.0.0.2", 00:17:55.466 "trsvcid": "4420" 00:17:55.466 }, 00:17:55.466 "peer_address": { 00:17:55.466 "trtype": "TCP", 00:17:55.466 "adrfam": "IPv4", 00:17:55.466 "traddr": "10.0.0.1", 00:17:55.466 "trsvcid": "35250" 00:17:55.466 }, 00:17:55.466 "auth": { 00:17:55.466 "state": "completed", 00:17:55.466 "digest": "sha512", 00:17:55.466 "dhgroup": "ffdhe3072" 00:17:55.466 } 00:17:55.466 } 00:17:55.466 ]' 00:17:55.466 08:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:55.466 08:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:55.466 08:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:55.466 08:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:55.466 08:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:55.725 08:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:55.725 08:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:55.725 08:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:55.725 08:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NjJmM2Q5YTRmMzZhNjM5YmMwOWI4NzJiNGNhNDkyNzg2MTY4MjM2ZDUzYjc3YTQyErGN6A==: --dhchap-ctrl-secret DHHC-1:01:ODdlOTY4OTYxOTkxY2RiY2Q0NzQwYzNhYTFiNGZhMDiF1NUp: 00:17:55.725 08:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NjJmM2Q5YTRmMzZhNjM5YmMwOWI4NzJiNGNhNDkyNzg2MTY4MjM2ZDUzYjc3YTQyErGN6A==: --dhchap-ctrl-secret DHHC-1:01:ODdlOTY4OTYxOTkxY2RiY2Q0NzQwYzNhYTFiNGZhMDiF1NUp: 00:17:56.293 08:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:56.293 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:56.293 08:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:56.293 08:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.293 08:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:56.293 08:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.293 08:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:56.293 08:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:56.293 08:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:56.552 08:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 3 00:17:56.552 08:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:56.552 08:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:56.552 08:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:56.552 08:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:56.552 08:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:56.552 08:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:17:56.552 08:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.552 08:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:56.552 08:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.552 08:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:56.552 08:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:56.552 08:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:56.811 00:17:56.811 08:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:56.811 08:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:56.811 08:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:57.070 08:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:57.070 08:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:57.070 08:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:57.070 08:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:57.070 08:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:57.070 08:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:57.070 { 00:17:57.070 "cntlid": 119, 00:17:57.070 "qid": 0, 00:17:57.070 "state": "enabled", 00:17:57.070 "thread": "nvmf_tgt_poll_group_000", 00:17:57.070 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:57.070 "listen_address": { 00:17:57.070 "trtype": "TCP", 00:17:57.070 "adrfam": "IPv4", 00:17:57.070 "traddr": "10.0.0.2", 00:17:57.070 "trsvcid": "4420" 00:17:57.070 }, 00:17:57.070 "peer_address": { 00:17:57.070 "trtype": "TCP", 00:17:57.070 "adrfam": "IPv4", 00:17:57.070 "traddr": "10.0.0.1", 
00:17:57.070 "trsvcid": "35276" 00:17:57.070 }, 00:17:57.070 "auth": { 00:17:57.070 "state": "completed", 00:17:57.070 "digest": "sha512", 00:17:57.070 "dhgroup": "ffdhe3072" 00:17:57.070 } 00:17:57.070 } 00:17:57.070 ]' 00:17:57.070 08:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:57.070 08:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:57.070 08:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:57.070 08:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:57.070 08:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:57.329 08:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:57.330 08:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:57.330 08:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:57.330 08:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OGZiNmE0YjliYjM5ZDllN2VlZmY1ZjhmNzk1ODE4ZjkwMjQxN2NiZGFiZGFmMjlhOWNmNDVkN2MzNWIzMWIzZQOqQEk=: 00:17:57.330 08:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:OGZiNmE0YjliYjM5ZDllN2VlZmY1ZjhmNzk1ODE4ZjkwMjQxN2NiZGFiZGFmMjlhOWNmNDVkN2MzNWIzMWIzZQOqQEk=: 00:17:57.897 08:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:57.897 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:57.897 08:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:57.897 08:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:57.897 08:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:57.897 08:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:57.897 08:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:57.897 08:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:57.897 08:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:57.897 08:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:58.156 08:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 0 00:17:58.156 08:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:58.156 08:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:58.156 08:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:58.156 08:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:58.156 08:16:40 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:58.157 08:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:58.157 08:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.157 08:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:58.157 08:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.157 08:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:58.157 08:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:58.157 08:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:58.416 00:17:58.416 08:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:58.416 08:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:58.416 08:16:40 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:58.675 08:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:58.675 08:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:58.675 08:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.675 08:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:58.675 08:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.675 08:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:58.675 { 00:17:58.675 "cntlid": 121, 00:17:58.675 "qid": 0, 00:17:58.675 "state": "enabled", 00:17:58.675 "thread": "nvmf_tgt_poll_group_000", 00:17:58.675 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:58.675 "listen_address": { 00:17:58.675 "trtype": "TCP", 00:17:58.675 "adrfam": "IPv4", 00:17:58.675 "traddr": "10.0.0.2", 00:17:58.675 "trsvcid": "4420" 00:17:58.675 }, 00:17:58.675 "peer_address": { 00:17:58.675 "trtype": "TCP", 00:17:58.675 "adrfam": "IPv4", 00:17:58.675 "traddr": "10.0.0.1", 00:17:58.675 "trsvcid": "35310" 00:17:58.675 }, 00:17:58.675 "auth": { 00:17:58.675 "state": "completed", 00:17:58.675 "digest": "sha512", 00:17:58.675 "dhgroup": "ffdhe4096" 00:17:58.675 } 00:17:58.675 } 00:17:58.675 ]' 00:17:58.675 08:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:58.675 08:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:58.675 08:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:58.675 08:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:58.675 08:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:58.934 08:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:58.934 08:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:58.934 08:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:58.934 08:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YTNlZDY3MGZhZTEzNDVjNjU1MzMwN2EwZGRlMDc0M2IyMGJhNTljMzUwZjY5ZjkxwRaKIg==: --dhchap-ctrl-secret DHHC-1:03:YzYwMDdhZmE1ZDZmYzFkMWIxMzkxMjllZDkwMDgzNjY4ZjE4NjdkOWY0N2VjMjY5OGI5Y2Q3ZDk5NTIzYjRhN62pg6g=: 00:17:58.934 08:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:YTNlZDY3MGZhZTEzNDVjNjU1MzMwN2EwZGRlMDc0M2IyMGJhNTljMzUwZjY5ZjkxwRaKIg==: --dhchap-ctrl-secret DHHC-1:03:YzYwMDdhZmE1ZDZmYzFkMWIxMzkxMjllZDkwMDgzNjY4ZjE4NjdkOWY0N2VjMjY5OGI5Y2Q3ZDk5NTIzYjRhN62pg6g=: 00:17:59.502 08:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:59.502 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:59.502 08:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:59.502 08:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.502 08:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:59.502 08:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.502 08:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:59.502 08:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:59.502 08:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:59.762 08:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 1 00:17:59.762 08:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:59.762 08:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:59.762 08:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:59.762 08:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:59.762 08:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:59.762 08:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:59.762 08:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.762 08:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:17:59.762 08:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.762 08:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:59.762 08:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:59.762 08:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:00.022 00:18:00.022 08:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:00.022 08:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:00.022 08:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:00.282 08:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:00.282 08:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:00.282 08:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.282 08:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:00.282 
08:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.282 08:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:00.282 { 00:18:00.282 "cntlid": 123, 00:18:00.282 "qid": 0, 00:18:00.282 "state": "enabled", 00:18:00.282 "thread": "nvmf_tgt_poll_group_000", 00:18:00.282 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:18:00.282 "listen_address": { 00:18:00.282 "trtype": "TCP", 00:18:00.282 "adrfam": "IPv4", 00:18:00.282 "traddr": "10.0.0.2", 00:18:00.282 "trsvcid": "4420" 00:18:00.282 }, 00:18:00.282 "peer_address": { 00:18:00.282 "trtype": "TCP", 00:18:00.282 "adrfam": "IPv4", 00:18:00.282 "traddr": "10.0.0.1", 00:18:00.282 "trsvcid": "35342" 00:18:00.282 }, 00:18:00.282 "auth": { 00:18:00.282 "state": "completed", 00:18:00.282 "digest": "sha512", 00:18:00.282 "dhgroup": "ffdhe4096" 00:18:00.282 } 00:18:00.282 } 00:18:00.282 ]' 00:18:00.282 08:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:00.282 08:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:00.282 08:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:00.282 08:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:00.282 08:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:00.541 08:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:00.541 08:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:00.542 08:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
-s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:00.542 08:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YmE5MjY0ZThkNWU1MjBmZDNhNjhkMTU1Zjc5ODJmYjByjmi0: --dhchap-ctrl-secret DHHC-1:02:ZTUyMzM2ODhjZWFkMGQzMDgzMzcxYjlmY2RlY2Y5ZmY0ODZhNmRkNmU3ZThlMTNjzBAxdA==: 00:18:00.542 08:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:YmE5MjY0ZThkNWU1MjBmZDNhNjhkMTU1Zjc5ODJmYjByjmi0: --dhchap-ctrl-secret DHHC-1:02:ZTUyMzM2ODhjZWFkMGQzMDgzMzcxYjlmY2RlY2Y5ZmY0ODZhNmRkNmU3ZThlMTNjzBAxdA==: 00:18:01.109 08:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:01.109 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:01.109 08:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:01.109 08:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.109 08:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:01.109 08:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.109 08:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:01.109 08:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:01.109 08:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:01.368 08:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 2 00:18:01.368 08:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:01.368 08:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:01.368 08:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:18:01.368 08:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:01.368 08:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:01.368 08:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:01.368 08:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.368 08:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:01.368 08:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.368 08:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:01.368 08:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:01.368 08:16:43 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:01.627 00:18:01.627 08:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:01.627 08:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:01.627 08:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:01.886 08:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:01.886 08:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:01.886 08:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.886 08:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:01.886 08:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.886 08:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:01.886 { 00:18:01.886 "cntlid": 125, 00:18:01.886 "qid": 0, 00:18:01.886 "state": "enabled", 00:18:01.886 "thread": "nvmf_tgt_poll_group_000", 00:18:01.886 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:18:01.886 "listen_address": { 00:18:01.886 "trtype": "TCP", 00:18:01.886 "adrfam": "IPv4", 00:18:01.886 "traddr": "10.0.0.2", 00:18:01.886 "trsvcid": "4420" 00:18:01.886 }, 00:18:01.886 "peer_address": { 
00:18:01.886 "trtype": "TCP", 00:18:01.886 "adrfam": "IPv4", 00:18:01.886 "traddr": "10.0.0.1", 00:18:01.886 "trsvcid": "47072" 00:18:01.886 }, 00:18:01.886 "auth": { 00:18:01.886 "state": "completed", 00:18:01.886 "digest": "sha512", 00:18:01.886 "dhgroup": "ffdhe4096" 00:18:01.886 } 00:18:01.886 } 00:18:01.886 ]' 00:18:01.886 08:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:01.886 08:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:01.886 08:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:01.886 08:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:01.886 08:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:01.886 08:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:01.886 08:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:01.886 08:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:02.145 08:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NjJmM2Q5YTRmMzZhNjM5YmMwOWI4NzJiNGNhNDkyNzg2MTY4MjM2ZDUzYjc3YTQyErGN6A==: --dhchap-ctrl-secret DHHC-1:01:ODdlOTY4OTYxOTkxY2RiY2Q0NzQwYzNhYTFiNGZhMDiF1NUp: 00:18:02.145 08:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret 
DHHC-1:02:NjJmM2Q5YTRmMzZhNjM5YmMwOWI4NzJiNGNhNDkyNzg2MTY4MjM2ZDUzYjc3YTQyErGN6A==: --dhchap-ctrl-secret DHHC-1:01:ODdlOTY4OTYxOTkxY2RiY2Q0NzQwYzNhYTFiNGZhMDiF1NUp: 00:18:02.713 08:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:02.713 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:02.713 08:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:02.713 08:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.713 08:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:02.713 08:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.713 08:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:02.713 08:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:02.713 08:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:02.970 08:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 3 00:18:02.970 08:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:02.970 08:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:02.970 08:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:18:02.970 08:16:45 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:02.970 08:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:02.970 08:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:18:02.970 08:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.970 08:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:02.970 08:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.970 08:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:02.970 08:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:02.970 08:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:03.229 00:18:03.229 08:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:03.229 08:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:03.229 08:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:03.488 08:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:03.488 08:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:03.488 08:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.488 08:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:03.488 08:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.488 08:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:03.488 { 00:18:03.488 "cntlid": 127, 00:18:03.488 "qid": 0, 00:18:03.488 "state": "enabled", 00:18:03.488 "thread": "nvmf_tgt_poll_group_000", 00:18:03.488 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:18:03.488 "listen_address": { 00:18:03.488 "trtype": "TCP", 00:18:03.488 "adrfam": "IPv4", 00:18:03.488 "traddr": "10.0.0.2", 00:18:03.488 "trsvcid": "4420" 00:18:03.488 }, 00:18:03.488 "peer_address": { 00:18:03.488 "trtype": "TCP", 00:18:03.488 "adrfam": "IPv4", 00:18:03.488 "traddr": "10.0.0.1", 00:18:03.488 "trsvcid": "47100" 00:18:03.488 }, 00:18:03.488 "auth": { 00:18:03.488 "state": "completed", 00:18:03.488 "digest": "sha512", 00:18:03.488 "dhgroup": "ffdhe4096" 00:18:03.488 } 00:18:03.488 } 00:18:03.488 ]' 00:18:03.488 08:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:03.488 08:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:03.488 08:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:03.488 08:16:45 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:03.488 08:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:03.488 08:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:03.488 08:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:03.488 08:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:03.748 08:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OGZiNmE0YjliYjM5ZDllN2VlZmY1ZjhmNzk1ODE4ZjkwMjQxN2NiZGFiZGFmMjlhOWNmNDVkN2MzNWIzMWIzZQOqQEk=: 00:18:03.748 08:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:OGZiNmE0YjliYjM5ZDllN2VlZmY1ZjhmNzk1ODE4ZjkwMjQxN2NiZGFiZGFmMjlhOWNmNDVkN2MzNWIzMWIzZQOqQEk=: 00:18:04.316 08:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:04.316 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:04.316 08:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:04.316 08:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:04.316 08:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:18:04.316 08:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:04.316 08:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:04.316 08:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:04.316 08:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:04.316 08:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:04.575 08:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 0 00:18:04.575 08:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:04.575 08:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:04.575 08:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:18:04.575 08:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:04.575 08:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:04.575 08:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:04.575 08:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:04.575 08:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:18:04.576 08:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:04.576 08:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:04.576 08:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:04.576 08:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:04.834 00:18:04.834 08:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:04.834 08:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:04.834 08:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:05.092 08:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:05.092 08:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:05.092 08:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.092 08:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:05.092 08:16:47 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.092 08:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:05.092 { 00:18:05.092 "cntlid": 129, 00:18:05.092 "qid": 0, 00:18:05.092 "state": "enabled", 00:18:05.092 "thread": "nvmf_tgt_poll_group_000", 00:18:05.092 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:18:05.092 "listen_address": { 00:18:05.092 "trtype": "TCP", 00:18:05.092 "adrfam": "IPv4", 00:18:05.092 "traddr": "10.0.0.2", 00:18:05.092 "trsvcid": "4420" 00:18:05.092 }, 00:18:05.092 "peer_address": { 00:18:05.092 "trtype": "TCP", 00:18:05.092 "adrfam": "IPv4", 00:18:05.092 "traddr": "10.0.0.1", 00:18:05.092 "trsvcid": "47124" 00:18:05.092 }, 00:18:05.092 "auth": { 00:18:05.092 "state": "completed", 00:18:05.092 "digest": "sha512", 00:18:05.092 "dhgroup": "ffdhe6144" 00:18:05.092 } 00:18:05.092 } 00:18:05.092 ]' 00:18:05.092 08:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:05.092 08:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:05.092 08:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:05.092 08:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:05.351 08:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:05.351 08:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:05.351 08:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:05.351 08:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:05.351 08:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YTNlZDY3MGZhZTEzNDVjNjU1MzMwN2EwZGRlMDc0M2IyMGJhNTljMzUwZjY5ZjkxwRaKIg==: --dhchap-ctrl-secret DHHC-1:03:YzYwMDdhZmE1ZDZmYzFkMWIxMzkxMjllZDkwMDgzNjY4ZjE4NjdkOWY0N2VjMjY5OGI5Y2Q3ZDk5NTIzYjRhN62pg6g=: 00:18:05.351 08:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:YTNlZDY3MGZhZTEzNDVjNjU1MzMwN2EwZGRlMDc0M2IyMGJhNTljMzUwZjY5ZjkxwRaKIg==: --dhchap-ctrl-secret DHHC-1:03:YzYwMDdhZmE1ZDZmYzFkMWIxMzkxMjllZDkwMDgzNjY4ZjE4NjdkOWY0N2VjMjY5OGI5Y2Q3ZDk5NTIzYjRhN62pg6g=: 00:18:05.918 08:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:05.918 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:05.918 08:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:05.918 08:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.918 08:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:05.918 08:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.918 08:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:05.918 08:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:05.918 08:16:48 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:06.178 08:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 1 00:18:06.178 08:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:06.178 08:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:06.178 08:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:18:06.178 08:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:06.178 08:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:06.178 08:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:06.178 08:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.178 08:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:06.178 08:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.178 08:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:06.178 08:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:06.178 08:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:06.746 00:18:06.746 08:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:06.746 08:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:06.746 08:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:06.746 08:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:06.746 08:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:06.746 08:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.746 08:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:06.746 08:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.746 08:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:06.746 { 00:18:06.746 "cntlid": 131, 00:18:06.746 "qid": 0, 00:18:06.746 "state": "enabled", 00:18:06.746 "thread": "nvmf_tgt_poll_group_000", 00:18:06.746 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:18:06.746 "listen_address": { 00:18:06.746 "trtype": "TCP", 00:18:06.746 "adrfam": "IPv4", 00:18:06.746 "traddr": "10.0.0.2", 00:18:06.746 
"trsvcid": "4420" 00:18:06.746 }, 00:18:06.746 "peer_address": { 00:18:06.746 "trtype": "TCP", 00:18:06.746 "adrfam": "IPv4", 00:18:06.746 "traddr": "10.0.0.1", 00:18:06.746 "trsvcid": "47142" 00:18:06.746 }, 00:18:06.746 "auth": { 00:18:06.746 "state": "completed", 00:18:06.746 "digest": "sha512", 00:18:06.746 "dhgroup": "ffdhe6144" 00:18:06.746 } 00:18:06.746 } 00:18:06.746 ]' 00:18:06.746 08:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:06.746 08:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:06.746 08:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:06.746 08:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:07.005 08:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:07.005 08:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:07.005 08:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:07.005 08:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:07.005 08:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YmE5MjY0ZThkNWU1MjBmZDNhNjhkMTU1Zjc5ODJmYjByjmi0: --dhchap-ctrl-secret DHHC-1:02:ZTUyMzM2ODhjZWFkMGQzMDgzMzcxYjlmY2RlY2Y5ZmY0ODZhNmRkNmU3ZThlMTNjzBAxdA==: 00:18:07.005 08:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 
80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:YmE5MjY0ZThkNWU1MjBmZDNhNjhkMTU1Zjc5ODJmYjByjmi0: --dhchap-ctrl-secret DHHC-1:02:ZTUyMzM2ODhjZWFkMGQzMDgzMzcxYjlmY2RlY2Y5ZmY0ODZhNmRkNmU3ZThlMTNjzBAxdA==: 00:18:07.573 08:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:07.573 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:07.573 08:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:07.573 08:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.573 08:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:07.573 08:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.573 08:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:07.573 08:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:07.573 08:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:07.832 08:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 2 00:18:07.832 08:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:07.832 08:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:07.832 08:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe6144 00:18:07.832 08:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:07.832 08:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:07.832 08:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:07.832 08:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.832 08:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:07.832 08:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.832 08:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:07.832 08:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:07.832 08:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:08.400 00:18:08.400 08:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:08.400 08:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r 
'.[].name' 00:18:08.400 08:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:08.400 08:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:08.400 08:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:08.400 08:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:08.400 08:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:08.400 08:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:08.400 08:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:08.400 { 00:18:08.400 "cntlid": 133, 00:18:08.400 "qid": 0, 00:18:08.400 "state": "enabled", 00:18:08.400 "thread": "nvmf_tgt_poll_group_000", 00:18:08.400 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:18:08.400 "listen_address": { 00:18:08.400 "trtype": "TCP", 00:18:08.400 "adrfam": "IPv4", 00:18:08.400 "traddr": "10.0.0.2", 00:18:08.400 "trsvcid": "4420" 00:18:08.400 }, 00:18:08.400 "peer_address": { 00:18:08.400 "trtype": "TCP", 00:18:08.400 "adrfam": "IPv4", 00:18:08.400 "traddr": "10.0.0.1", 00:18:08.400 "trsvcid": "47154" 00:18:08.400 }, 00:18:08.400 "auth": { 00:18:08.400 "state": "completed", 00:18:08.400 "digest": "sha512", 00:18:08.400 "dhgroup": "ffdhe6144" 00:18:08.400 } 00:18:08.400 } 00:18:08.400 ]' 00:18:08.400 08:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:08.659 08:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:08.659 08:16:50 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:08.659 08:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:08.659 08:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:08.659 08:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:08.659 08:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:08.659 08:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:08.937 08:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NjJmM2Q5YTRmMzZhNjM5YmMwOWI4NzJiNGNhNDkyNzg2MTY4MjM2ZDUzYjc3YTQyErGN6A==: --dhchap-ctrl-secret DHHC-1:01:ODdlOTY4OTYxOTkxY2RiY2Q0NzQwYzNhYTFiNGZhMDiF1NUp: 00:18:08.937 08:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NjJmM2Q5YTRmMzZhNjM5YmMwOWI4NzJiNGNhNDkyNzg2MTY4MjM2ZDUzYjc3YTQyErGN6A==: --dhchap-ctrl-secret DHHC-1:01:ODdlOTY4OTYxOTkxY2RiY2Q0NzQwYzNhYTFiNGZhMDiF1NUp: 00:18:09.271 08:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:09.560 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:09.560 08:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:09.560 08:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.560 08:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:09.560 08:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.560 08:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:09.560 08:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:09.560 08:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:09.560 08:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 3 00:18:09.560 08:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:09.560 08:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:09.560 08:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:18:09.560 08:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:09.560 08:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:09.560 08:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:18:09.560 08:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.561 08:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:09.561 08:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.561 08:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:09.561 08:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:09.561 08:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:10.182 00:18:10.182 08:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:10.182 08:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:10.182 08:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:10.182 08:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:10.182 08:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:10.182 08:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:10.182 08:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:18:10.182 08:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:10.182 08:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:10.182 { 00:18:10.182 "cntlid": 135, 00:18:10.182 "qid": 0, 00:18:10.182 "state": "enabled", 00:18:10.182 "thread": "nvmf_tgt_poll_group_000", 00:18:10.182 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:18:10.182 "listen_address": { 00:18:10.182 "trtype": "TCP", 00:18:10.182 "adrfam": "IPv4", 00:18:10.182 "traddr": "10.0.0.2", 00:18:10.182 "trsvcid": "4420" 00:18:10.182 }, 00:18:10.182 "peer_address": { 00:18:10.182 "trtype": "TCP", 00:18:10.182 "adrfam": "IPv4", 00:18:10.183 "traddr": "10.0.0.1", 00:18:10.183 "trsvcid": "47174" 00:18:10.183 }, 00:18:10.183 "auth": { 00:18:10.183 "state": "completed", 00:18:10.183 "digest": "sha512", 00:18:10.183 "dhgroup": "ffdhe6144" 00:18:10.183 } 00:18:10.183 } 00:18:10.183 ]' 00:18:10.183 08:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:10.183 08:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:10.183 08:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:10.183 08:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:10.183 08:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:10.474 08:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:10.474 08:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:10.474 08:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:10.474 08:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OGZiNmE0YjliYjM5ZDllN2VlZmY1ZjhmNzk1ODE4ZjkwMjQxN2NiZGFiZGFmMjlhOWNmNDVkN2MzNWIzMWIzZQOqQEk=: 00:18:10.475 08:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:OGZiNmE0YjliYjM5ZDllN2VlZmY1ZjhmNzk1ODE4ZjkwMjQxN2NiZGFiZGFmMjlhOWNmNDVkN2MzNWIzMWIzZQOqQEk=: 00:18:11.103 08:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:11.103 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:11.103 08:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:11.103 08:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.103 08:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:11.103 08:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.103 08:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:11.103 08:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:11.103 08:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:11.103 08:16:53 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:11.388 08:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 0 00:18:11.388 08:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:11.388 08:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:11.388 08:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:11.388 08:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:11.388 08:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:11.388 08:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:11.388 08:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.388 08:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:11.388 08:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.388 08:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:11.388 08:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:11.388 08:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:11.658 00:18:11.968 08:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:11.968 08:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:11.968 08:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:11.968 08:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:11.968 08:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:11.968 08:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.968 08:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:11.968 08:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.968 08:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:11.968 { 00:18:11.968 "cntlid": 137, 00:18:11.968 "qid": 0, 00:18:11.968 "state": "enabled", 00:18:11.968 "thread": "nvmf_tgt_poll_group_000", 00:18:11.968 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:18:11.968 "listen_address": { 00:18:11.968 "trtype": "TCP", 00:18:11.968 "adrfam": "IPv4", 00:18:11.968 "traddr": "10.0.0.2", 00:18:11.968 
"trsvcid": "4420" 00:18:11.968 }, 00:18:11.968 "peer_address": { 00:18:11.968 "trtype": "TCP", 00:18:11.968 "adrfam": "IPv4", 00:18:11.968 "traddr": "10.0.0.1", 00:18:11.968 "trsvcid": "47204" 00:18:11.968 }, 00:18:11.968 "auth": { 00:18:11.968 "state": "completed", 00:18:11.968 "digest": "sha512", 00:18:11.968 "dhgroup": "ffdhe8192" 00:18:11.968 } 00:18:11.968 } 00:18:11.968 ]' 00:18:11.968 08:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:11.968 08:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:11.968 08:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:11.968 08:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:11.968 08:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:12.251 08:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:12.251 08:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:12.251 08:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:12.251 08:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YTNlZDY3MGZhZTEzNDVjNjU1MzMwN2EwZGRlMDc0M2IyMGJhNTljMzUwZjY5ZjkxwRaKIg==: --dhchap-ctrl-secret DHHC-1:03:YzYwMDdhZmE1ZDZmYzFkMWIxMzkxMjllZDkwMDgzNjY4ZjE4NjdkOWY0N2VjMjY5OGI5Y2Q3ZDk5NTIzYjRhN62pg6g=: 00:18:12.251 08:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:YTNlZDY3MGZhZTEzNDVjNjU1MzMwN2EwZGRlMDc0M2IyMGJhNTljMzUwZjY5ZjkxwRaKIg==: --dhchap-ctrl-secret DHHC-1:03:YzYwMDdhZmE1ZDZmYzFkMWIxMzkxMjllZDkwMDgzNjY4ZjE4NjdkOWY0N2VjMjY5OGI5Y2Q3ZDk5NTIzYjRhN62pg6g=: 00:18:12.865 08:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:12.865 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:12.865 08:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:12.865 08:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:12.865 08:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:12.865 08:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:12.865 08:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:12.865 08:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:12.865 08:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:13.159 08:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 1 00:18:13.159 08:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:13.159 08:16:55 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:13.159 08:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:13.159 08:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:13.159 08:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:13.159 08:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:13.159 08:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.159 08:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:13.159 08:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.159 08:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:13.159 08:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:13.159 08:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:13.728 00:18:13.728 08:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:13.728 08:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:13.728 08:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:13.728 08:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:13.728 08:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:13.728 08:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.728 08:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:13.728 08:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.728 08:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:13.728 { 00:18:13.728 "cntlid": 139, 00:18:13.728 "qid": 0, 00:18:13.728 "state": "enabled", 00:18:13.728 "thread": "nvmf_tgt_poll_group_000", 00:18:13.728 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:18:13.728 "listen_address": { 00:18:13.728 "trtype": "TCP", 00:18:13.728 "adrfam": "IPv4", 00:18:13.728 "traddr": "10.0.0.2", 00:18:13.728 "trsvcid": "4420" 00:18:13.728 }, 00:18:13.728 "peer_address": { 00:18:13.728 "trtype": "TCP", 00:18:13.728 "adrfam": "IPv4", 00:18:13.728 "traddr": "10.0.0.1", 00:18:13.728 "trsvcid": "56478" 00:18:13.728 }, 00:18:13.728 "auth": { 00:18:13.728 "state": "completed", 00:18:13.728 "digest": "sha512", 00:18:13.728 "dhgroup": "ffdhe8192" 00:18:13.728 } 00:18:13.728 } 00:18:13.728 ]' 00:18:13.728 08:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:13.728 08:16:55 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:13.728 08:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:13.987 08:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:13.987 08:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:13.987 08:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:13.987 08:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:13.987 08:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:14.245 08:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YmE5MjY0ZThkNWU1MjBmZDNhNjhkMTU1Zjc5ODJmYjByjmi0: --dhchap-ctrl-secret DHHC-1:02:ZTUyMzM2ODhjZWFkMGQzMDgzMzcxYjlmY2RlY2Y5ZmY0ODZhNmRkNmU3ZThlMTNjzBAxdA==: 00:18:14.246 08:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:YmE5MjY0ZThkNWU1MjBmZDNhNjhkMTU1Zjc5ODJmYjByjmi0: --dhchap-ctrl-secret DHHC-1:02:ZTUyMzM2ODhjZWFkMGQzMDgzMzcxYjlmY2RlY2Y5ZmY0ODZhNmRkNmU3ZThlMTNjzBAxdA==: 00:18:14.814 08:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:14.815 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:14.815 08:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:14.815 08:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:14.815 08:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:14.815 08:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:14.815 08:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:14.815 08:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:14.815 08:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:14.815 08:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 2 00:18:14.815 08:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:14.815 08:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:14.815 08:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:14.815 08:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:14.815 08:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:14.815 08:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
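The trace above repeats one round per key: restrict the host's DH-HMAC-CHAP options, register the host NQN on the subsystem with that key, attach a controller (which runs the handshake), check the qpair's auth state, then detach and deregister. A minimal dry-run sketch of that loop is below; it only echoes the RPC invocations rather than executing them, and the rpc.py path, socket path, addresses, and NQNs are assumptions copied from this log, not something to rely on elsewhere.

```shell
#!/usr/bin/env bash
# Dry-run sketch of the per-key auth round seen in the trace above.
# Paths, NQNs, and addresses are assumptions taken from this log.
set -euo pipefail

rpc="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock"
subnqn="nqn.2024-03.io.spdk:cnode0"
hostnqn="nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562"

run() {  # print the command instead of executing it, so the sketch is safe anywhere
    echo "+ $*"
}

for keyid in 0 1 2 3; do
    # Limit the host to one digest/dhgroup combination (auth.sh@121 in the trace)
    run $rpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
    # Register the host on the subsystem with the key under test (auth.sh@70)
    run $rpc nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key "key$keyid"
    # Attach a controller; this is where the DH-HMAC-CHAP handshake runs (auth.sh@71)
    run $rpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
        -q "$hostnqn" -n "$subnqn" -b nvme0 --dhchap-key "key$keyid"
    # Verify auth state on the qpair, then tear down (auth.sh@74..83)
    run $rpc nvmf_subsystem_get_qpairs "$subnqn"
    run $rpc bdev_nvme_detach_controller nvme0
    run $rpc nvmf_subsystem_remove_host "$subnqn" "$hostnqn"
done
```

In the real script the qpair check pipes `nvmf_subsystem_get_qpairs` through `jq -r '.[0].auth.state'` and compares against `completed`, as the `[[ completed == \c\o\m\p\l\e\t\e\d ]]` lines in the trace show.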
00:18:14.815 08:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:14.815 08:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:14.815 08:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:14.815 08:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:14.815 08:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:14.815 08:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:15.382 00:18:15.382 08:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:15.382 08:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:15.382 08:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:15.642 08:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:15.642 08:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:15.642 08:16:57 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:15.642 08:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:15.642 08:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:15.642 08:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:15.642 { 00:18:15.642 "cntlid": 141, 00:18:15.642 "qid": 0, 00:18:15.642 "state": "enabled", 00:18:15.642 "thread": "nvmf_tgt_poll_group_000", 00:18:15.642 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:18:15.642 "listen_address": { 00:18:15.642 "trtype": "TCP", 00:18:15.642 "adrfam": "IPv4", 00:18:15.642 "traddr": "10.0.0.2", 00:18:15.642 "trsvcid": "4420" 00:18:15.642 }, 00:18:15.642 "peer_address": { 00:18:15.642 "trtype": "TCP", 00:18:15.642 "adrfam": "IPv4", 00:18:15.642 "traddr": "10.0.0.1", 00:18:15.642 "trsvcid": "56508" 00:18:15.642 }, 00:18:15.642 "auth": { 00:18:15.642 "state": "completed", 00:18:15.642 "digest": "sha512", 00:18:15.642 "dhgroup": "ffdhe8192" 00:18:15.642 } 00:18:15.642 } 00:18:15.642 ]' 00:18:15.642 08:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:15.642 08:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:15.642 08:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:15.642 08:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:15.642 08:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:15.642 08:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:15.642 08:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:15.642 08:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:15.901 08:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NjJmM2Q5YTRmMzZhNjM5YmMwOWI4NzJiNGNhNDkyNzg2MTY4MjM2ZDUzYjc3YTQyErGN6A==: --dhchap-ctrl-secret DHHC-1:01:ODdlOTY4OTYxOTkxY2RiY2Q0NzQwYzNhYTFiNGZhMDiF1NUp: 00:18:15.901 08:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NjJmM2Q5YTRmMzZhNjM5YmMwOWI4NzJiNGNhNDkyNzg2MTY4MjM2ZDUzYjc3YTQyErGN6A==: --dhchap-ctrl-secret DHHC-1:01:ODdlOTY4OTYxOTkxY2RiY2Q0NzQwYzNhYTFiNGZhMDiF1NUp: 00:18:16.470 08:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:16.470 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:16.470 08:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:16.470 08:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.470 08:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:16.470 08:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.470 08:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:16.470 08:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:16.470 08:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:16.730 08:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 3 00:18:16.730 08:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:16.730 08:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:16.730 08:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:16.730 08:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:16.730 08:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:16.730 08:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:18:16.730 08:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.730 08:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:16.730 08:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.730 08:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:16.730 08:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:16.730 08:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:17.299 00:18:17.299 08:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:17.299 08:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:17.299 08:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:17.299 08:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:17.558 08:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:17.558 08:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.558 08:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:17.558 08:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.558 08:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:17.558 { 00:18:17.558 "cntlid": 143, 00:18:17.558 "qid": 0, 00:18:17.558 "state": "enabled", 00:18:17.558 "thread": "nvmf_tgt_poll_group_000", 00:18:17.558 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:18:17.558 "listen_address": { 00:18:17.558 "trtype": "TCP", 00:18:17.558 "adrfam": 
"IPv4", 00:18:17.558 "traddr": "10.0.0.2", 00:18:17.558 "trsvcid": "4420" 00:18:17.558 }, 00:18:17.558 "peer_address": { 00:18:17.558 "trtype": "TCP", 00:18:17.558 "adrfam": "IPv4", 00:18:17.558 "traddr": "10.0.0.1", 00:18:17.558 "trsvcid": "56526" 00:18:17.558 }, 00:18:17.558 "auth": { 00:18:17.558 "state": "completed", 00:18:17.558 "digest": "sha512", 00:18:17.558 "dhgroup": "ffdhe8192" 00:18:17.558 } 00:18:17.558 } 00:18:17.558 ]' 00:18:17.558 08:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:17.558 08:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:17.558 08:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:17.558 08:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:17.558 08:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:17.558 08:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:17.558 08:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:17.558 08:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:17.818 08:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OGZiNmE0YjliYjM5ZDllN2VlZmY1ZjhmNzk1ODE4ZjkwMjQxN2NiZGFiZGFmMjlhOWNmNDVkN2MzNWIzMWIzZQOqQEk=: 00:18:17.818 08:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 
80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:OGZiNmE0YjliYjM5ZDllN2VlZmY1ZjhmNzk1ODE4ZjkwMjQxN2NiZGFiZGFmMjlhOWNmNDVkN2MzNWIzMWIzZQOqQEk=: 00:18:18.386 08:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:18.386 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:18.386 08:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:18.386 08:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:18.386 08:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:18.386 08:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:18.386 08:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:18:18.386 08:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s sha256,sha384,sha512 00:18:18.386 08:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:18:18.386 08:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:18.386 08:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:18.386 08:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:18.645 08:17:00 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@141 -- # connect_authenticate sha512 ffdhe8192 0 00:18:18.645 08:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:18.645 08:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:18.645 08:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:18.645 08:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:18.645 08:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:18.645 08:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:18.645 08:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:18.645 08:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:18.645 08:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:18.645 08:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:18.645 08:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:18.645 08:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f 
ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:19.213 00:18:19.213 08:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:19.213 08:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:19.213 08:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:19.213 08:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:19.213 08:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:19.213 08:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:19.213 08:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:19.213 08:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:19.213 08:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:19.213 { 00:18:19.213 "cntlid": 145, 00:18:19.213 "qid": 0, 00:18:19.213 "state": "enabled", 00:18:19.213 "thread": "nvmf_tgt_poll_group_000", 00:18:19.213 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:18:19.213 "listen_address": { 00:18:19.213 "trtype": "TCP", 00:18:19.213 "adrfam": "IPv4", 00:18:19.213 "traddr": "10.0.0.2", 00:18:19.213 "trsvcid": "4420" 00:18:19.213 }, 00:18:19.213 "peer_address": { 00:18:19.213 "trtype": "TCP", 00:18:19.213 "adrfam": "IPv4", 00:18:19.213 "traddr": "10.0.0.1", 00:18:19.213 "trsvcid": "56554" 00:18:19.213 }, 00:18:19.213 "auth": { 00:18:19.213 "state": 
"completed", 00:18:19.213 "digest": "sha512", 00:18:19.213 "dhgroup": "ffdhe8192" 00:18:19.213 } 00:18:19.213 } 00:18:19.213 ]' 00:18:19.213 08:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:19.213 08:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:19.213 08:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:19.472 08:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:19.472 08:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:19.472 08:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:19.472 08:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:19.472 08:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:19.472 08:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YTNlZDY3MGZhZTEzNDVjNjU1MzMwN2EwZGRlMDc0M2IyMGJhNTljMzUwZjY5ZjkxwRaKIg==: --dhchap-ctrl-secret DHHC-1:03:YzYwMDdhZmE1ZDZmYzFkMWIxMzkxMjllZDkwMDgzNjY4ZjE4NjdkOWY0N2VjMjY5OGI5Y2Q3ZDk5NTIzYjRhN62pg6g=: 00:18:19.472 08:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:YTNlZDY3MGZhZTEzNDVjNjU1MzMwN2EwZGRlMDc0M2IyMGJhNTljMzUwZjY5ZjkxwRaKIg==: --dhchap-ctrl-secret 
DHHC-1:03:YzYwMDdhZmE1ZDZmYzFkMWIxMzkxMjllZDkwMDgzNjY4ZjE4NjdkOWY0N2VjMjY5OGI5Y2Q3ZDk5NTIzYjRhN62pg6g=: 00:18:20.040 08:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:20.300 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:20.300 08:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:20.300 08:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.300 08:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:20.300 08:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.300 08:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@144 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 00:18:20.300 08:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.300 08:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:20.300 08:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.300 08:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@145 -- # NOT bdev_connect -b nvme0 --dhchap-key key2 00:18:20.300 08:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:18:20.300 08:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key2 00:18:20.300 08:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local 
arg=bdev_connect 00:18:20.300 08:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:20.300 08:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:18:20.300 08:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:20.300 08:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key2 00:18:20.300 08:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:18:20.300 08:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:18:20.560 request: 00:18:20.560 { 00:18:20.560 "name": "nvme0", 00:18:20.560 "trtype": "tcp", 00:18:20.560 "traddr": "10.0.0.2", 00:18:20.560 "adrfam": "ipv4", 00:18:20.560 "trsvcid": "4420", 00:18:20.560 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:20.560 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:18:20.560 "prchk_reftag": false, 00:18:20.560 "prchk_guard": false, 00:18:20.560 "hdgst": false, 00:18:20.560 "ddgst": false, 00:18:20.560 "dhchap_key": "key2", 00:18:20.560 "allow_unrecognized_csi": false, 00:18:20.560 "method": "bdev_nvme_attach_controller", 00:18:20.560 "req_id": 1 00:18:20.560 } 00:18:20.560 Got JSON-RPC error response 00:18:20.560 response: 00:18:20.560 { 00:18:20.560 "code": -5, 00:18:20.560 "message": 
"Input/output error" 00:18:20.560 } 00:18:20.560 08:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:18:20.560 08:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:20.560 08:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:20.560 08:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:20.560 08:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@146 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:20.560 08:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.560 08:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:20.560 08:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.560 08:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@149 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:20.560 08:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.560 08:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:20.560 08:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.560 08:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@150 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:20.560 08:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:18:20.560 08:17:02 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:20.560 08:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:18:20.560 08:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:20.560 08:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:18:20.560 08:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:20.560 08:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:20.560 08:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:20.560 08:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:21.128 request: 00:18:21.128 { 00:18:21.128 "name": "nvme0", 00:18:21.128 "trtype": "tcp", 00:18:21.128 "traddr": "10.0.0.2", 00:18:21.128 "adrfam": "ipv4", 00:18:21.128 "trsvcid": "4420", 00:18:21.128 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:21.128 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:18:21.128 "prchk_reftag": false, 00:18:21.128 "prchk_guard": false, 00:18:21.128 "hdgst": 
false, 00:18:21.128 "ddgst": false, 00:18:21.128 "dhchap_key": "key1", 00:18:21.128 "dhchap_ctrlr_key": "ckey2", 00:18:21.128 "allow_unrecognized_csi": false, 00:18:21.128 "method": "bdev_nvme_attach_controller", 00:18:21.128 "req_id": 1 00:18:21.128 } 00:18:21.128 Got JSON-RPC error response 00:18:21.128 response: 00:18:21.128 { 00:18:21.128 "code": -5, 00:18:21.128 "message": "Input/output error" 00:18:21.128 } 00:18:21.128 08:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:18:21.128 08:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:21.128 08:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:21.128 08:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:21.128 08:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@151 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:21.128 08:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.128 08:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:21.128 08:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.128 08:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@154 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 00:18:21.128 08:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.128 08:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:21.128 08:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.128 08:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@155 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:21.128 08:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:18:21.128 08:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:21.128 08:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:18:21.129 08:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:21.129 08:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:18:21.129 08:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:21.129 08:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:21.129 08:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:21.129 08:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:21.696 request: 00:18:21.696 { 00:18:21.696 "name": "nvme0", 00:18:21.696 "trtype": 
"tcp", 00:18:21.696 "traddr": "10.0.0.2", 00:18:21.696 "adrfam": "ipv4", 00:18:21.696 "trsvcid": "4420", 00:18:21.696 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:21.696 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:18:21.696 "prchk_reftag": false, 00:18:21.696 "prchk_guard": false, 00:18:21.696 "hdgst": false, 00:18:21.696 "ddgst": false, 00:18:21.696 "dhchap_key": "key1", 00:18:21.696 "dhchap_ctrlr_key": "ckey1", 00:18:21.696 "allow_unrecognized_csi": false, 00:18:21.696 "method": "bdev_nvme_attach_controller", 00:18:21.696 "req_id": 1 00:18:21.696 } 00:18:21.696 Got JSON-RPC error response 00:18:21.696 response: 00:18:21.696 { 00:18:21.696 "code": -5, 00:18:21.696 "message": "Input/output error" 00:18:21.696 } 00:18:21.696 08:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:18:21.696 08:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:21.696 08:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:21.696 08:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:21.696 08:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:21.697 08:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.697 08:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:21.697 08:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.697 08:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@159 -- # killprocess 1342174 00:18:21.697 08:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@954 -- # '[' -z 1342174 ']' 00:18:21.697 08:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 1342174 00:18:21.697 08:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:18:21.697 08:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:21.697 08:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1342174 00:18:21.697 08:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:21.697 08:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:21.697 08:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1342174' 00:18:21.697 killing process with pid 1342174 00:18:21.697 08:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 1342174 00:18:21.697 08:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 1342174 00:18:21.956 08:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@160 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:18:21.956 08:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:21.956 08:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:21.956 08:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:21.956 08:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=1364212 00:18:21.956 08:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:18:21.956 08:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 1364212 00:18:21.956 08:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 1364212 ']' 00:18:21.956 08:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:21.956 08:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:21.956 08:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:21.956 08:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:21.956 08:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:21.956 08:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:21.956 08:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:18:21.956 08:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:21.956 08:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:21.956 08:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:21.956 08:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:21.956 08:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@161 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:18:21.956 08:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@163 -- # waitforlisten 1364212 00:18:21.956 08:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 1364212 ']' 00:18:21.956 08:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:21.956 08:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:21.956 08:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:21.956 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:21.956 08:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:21.956 08:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:22.215 08:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:22.215 08:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:18:22.215 08:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # rpc_cmd 00:18:22.215 08:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:22.215 08:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:22.475 null0 00:18:22.475 08:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:22.475 08:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:18:22.475 08:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.w7p 00:18:22.475 08:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:18:22.475 08:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:22.475 08:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:22.475 08:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha512.PWA ]] 00:18:22.475 08:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.PWA 00:18:22.475 08:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:22.475 08:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:22.475 08:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:22.475 08:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:18:22.475 08:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.I28 00:18:22.475 08:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:22.475 08:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:22.475 08:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:22.475 08:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha384.Y2J ]] 00:18:22.475 08:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.Y2J 00:18:22.475 08:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:22.475 08:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:18:22.475 08:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:22.475 08:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:18:22.475 08:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.Mdy 00:18:22.475 08:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:22.475 08:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:22.475 08:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:22.475 08:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha256.A09 ]] 00:18:22.475 08:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.A09 00:18:22.475 08:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:22.475 08:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:22.475 08:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:22.475 08:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:18:22.475 08:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.bxI 00:18:22.475 08:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:22.475 08:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:22.475 08:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:18:22.475 08:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n '' ]] 00:18:22.475 08:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@179 -- # connect_authenticate sha512 ffdhe8192 3 00:18:22.475 08:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:22.475 08:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:22.475 08:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:22.475 08:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:22.475 08:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:22.475 08:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:18:22.475 08:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:22.475 08:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:22.475 08:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:22.475 08:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:22.475 08:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:22.475 08:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:23.412 nvme0n1 00:18:23.412 08:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:23.413 08:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:23.413 08:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:23.413 08:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:23.413 08:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:23.413 08:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.413 08:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:23.413 08:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.413 08:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:23.413 { 00:18:23.413 "cntlid": 1, 00:18:23.413 "qid": 0, 00:18:23.413 "state": "enabled", 00:18:23.413 "thread": "nvmf_tgt_poll_group_000", 00:18:23.413 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:18:23.413 "listen_address": { 00:18:23.413 "trtype": "TCP", 00:18:23.413 "adrfam": "IPv4", 00:18:23.413 "traddr": "10.0.0.2", 00:18:23.413 "trsvcid": "4420" 00:18:23.413 }, 00:18:23.413 "peer_address": { 00:18:23.413 "trtype": "TCP", 00:18:23.413 "adrfam": "IPv4", 00:18:23.413 "traddr": 
"10.0.0.1", 00:18:23.413 "trsvcid": "45610" 00:18:23.413 }, 00:18:23.413 "auth": { 00:18:23.413 "state": "completed", 00:18:23.413 "digest": "sha512", 00:18:23.413 "dhgroup": "ffdhe8192" 00:18:23.413 } 00:18:23.413 } 00:18:23.413 ]' 00:18:23.413 08:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:23.413 08:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:23.413 08:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:23.413 08:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:23.413 08:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:23.672 08:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:23.672 08:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:23.672 08:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:23.672 08:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OGZiNmE0YjliYjM5ZDllN2VlZmY1ZjhmNzk1ODE4ZjkwMjQxN2NiZGFiZGFmMjlhOWNmNDVkN2MzNWIzMWIzZQOqQEk=: 00:18:23.672 08:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:OGZiNmE0YjliYjM5ZDllN2VlZmY1ZjhmNzk1ODE4ZjkwMjQxN2NiZGFiZGFmMjlhOWNmNDVkN2MzNWIzMWIzZQOqQEk=: 00:18:24.239 08:17:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:24.239 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:24.239 08:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:24.239 08:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:24.239 08:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:24.240 08:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:24.240 08:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@182 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:18:24.240 08:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:24.240 08:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:24.498 08:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:24.498 08:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@183 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:18:24.498 08:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:18:24.499 08:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@184 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:18:24.499 08:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:18:24.499 08:17:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:18:24.499 08:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:18:24.499 08:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:24.499 08:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:18:24.499 08:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:24.499 08:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:24.499 08:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:24.499 08:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:24.757 request: 00:18:24.757 { 00:18:24.757 "name": "nvme0", 00:18:24.757 "trtype": "tcp", 00:18:24.757 "traddr": "10.0.0.2", 00:18:24.757 "adrfam": "ipv4", 00:18:24.757 "trsvcid": "4420", 00:18:24.757 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:24.757 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:18:24.757 "prchk_reftag": false, 00:18:24.757 "prchk_guard": false, 00:18:24.757 "hdgst": false, 00:18:24.757 "ddgst": false, 00:18:24.757 "dhchap_key": "key3", 00:18:24.757 
"allow_unrecognized_csi": false, 00:18:24.757 "method": "bdev_nvme_attach_controller", 00:18:24.757 "req_id": 1 00:18:24.757 } 00:18:24.757 Got JSON-RPC error response 00:18:24.757 response: 00:18:24.757 { 00:18:24.757 "code": -5, 00:18:24.757 "message": "Input/output error" 00:18:24.757 } 00:18:24.757 08:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:18:24.757 08:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:24.757 08:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:24.757 08:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:24.757 08:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # IFS=, 00:18:24.757 08:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # printf %s sha256,sha384,sha512 00:18:24.757 08:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:18:24.757 08:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:18:25.017 08:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@193 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:18:25.017 08:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:18:25.017 08:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:18:25.017 08:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:18:25.017 08:17:07 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:25.017 08:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:18:25.017 08:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:25.017 08:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:25.017 08:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:25.017 08:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:25.276 request: 00:18:25.276 { 00:18:25.276 "name": "nvme0", 00:18:25.276 "trtype": "tcp", 00:18:25.276 "traddr": "10.0.0.2", 00:18:25.276 "adrfam": "ipv4", 00:18:25.276 "trsvcid": "4420", 00:18:25.276 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:25.276 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:18:25.276 "prchk_reftag": false, 00:18:25.276 "prchk_guard": false, 00:18:25.276 "hdgst": false, 00:18:25.276 "ddgst": false, 00:18:25.276 "dhchap_key": "key3", 00:18:25.276 "allow_unrecognized_csi": false, 00:18:25.276 "method": "bdev_nvme_attach_controller", 00:18:25.276 "req_id": 1 00:18:25.276 } 00:18:25.276 Got JSON-RPC error response 00:18:25.276 response: 00:18:25.276 { 00:18:25.276 "code": -5, 00:18:25.276 "message": "Input/output error" 00:18:25.276 } 00:18:25.276 
08:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:18:25.276 08:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:25.276 08:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:25.276 08:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:25.276 08:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:18:25.276 08:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s sha256,sha384,sha512 00:18:25.276 08:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:18:25.276 08:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:25.276 08:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:25.276 08:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:25.276 08:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@208 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:25.276 08:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:25.276 08:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:25.536 08:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:25.536 08:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@209 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:25.536 08:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:25.536 08:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:25.536 08:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:25.536 08:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@210 -- # NOT bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:25.536 08:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:18:25.536 08:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:25.536 08:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:18:25.536 08:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:25.536 08:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:18:25.536 08:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:25.536 08:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:25.536 08:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:25.536 08:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:25.795 request: 00:18:25.795 { 00:18:25.795 "name": "nvme0", 00:18:25.795 "trtype": "tcp", 00:18:25.795 "traddr": "10.0.0.2", 00:18:25.795 "adrfam": "ipv4", 00:18:25.795 "trsvcid": "4420", 00:18:25.795 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:25.795 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:18:25.795 "prchk_reftag": false, 00:18:25.795 "prchk_guard": false, 00:18:25.795 "hdgst": false, 00:18:25.795 "ddgst": false, 00:18:25.795 "dhchap_key": "key0", 00:18:25.795 "dhchap_ctrlr_key": "key1", 00:18:25.795 "allow_unrecognized_csi": false, 00:18:25.795 "method": "bdev_nvme_attach_controller", 00:18:25.795 "req_id": 1 00:18:25.795 } 00:18:25.795 Got JSON-RPC error response 00:18:25.795 response: 00:18:25.795 { 00:18:25.795 "code": -5, 00:18:25.795 "message": "Input/output error" 00:18:25.795 } 00:18:25.795 08:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:18:25.795 08:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:25.795 08:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:25.795 08:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:25.795 08:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@213 -- # bdev_connect -b nvme0 --dhchap-key key0 00:18:25.795 08:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:18:25.795 08:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:18:26.055 nvme0n1 00:18:26.055 08:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # hostrpc bdev_nvme_get_controllers 00:18:26.055 08:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # jq -r '.[].name' 00:18:26.055 08:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:26.314 08:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:26.314 08:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@215 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:26.314 08:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:26.314 08:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@218 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 00:18:26.314 08:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:26.314 08:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:18:26.314 08:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:26.314 08:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@219 -- # bdev_connect -b nvme0 --dhchap-key key1 00:18:26.314 08:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:18:26.314 08:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:18:27.252 nvme0n1 00:18:27.252 08:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # hostrpc bdev_nvme_get_controllers 00:18:27.252 08:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # jq -r '.[].name' 00:18:27.252 08:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:27.252 08:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:27.252 08:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@222 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:27.252 08:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:27.252 08:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:27.511 
08:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:27.511 08:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # jq -r '.[].name' 00:18:27.511 08:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # hostrpc bdev_nvme_get_controllers 00:18:27.511 08:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:27.511 08:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:27.511 08:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@225 -- # nvme_connect --dhchap-secret DHHC-1:02:NjJmM2Q5YTRmMzZhNjM5YmMwOWI4NzJiNGNhNDkyNzg2MTY4MjM2ZDUzYjc3YTQyErGN6A==: --dhchap-ctrl-secret DHHC-1:03:OGZiNmE0YjliYjM5ZDllN2VlZmY1ZjhmNzk1ODE4ZjkwMjQxN2NiZGFiZGFmMjlhOWNmNDVkN2MzNWIzMWIzZQOqQEk=: 00:18:27.511 08:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NjJmM2Q5YTRmMzZhNjM5YmMwOWI4NzJiNGNhNDkyNzg2MTY4MjM2ZDUzYjc3YTQyErGN6A==: --dhchap-ctrl-secret DHHC-1:03:OGZiNmE0YjliYjM5ZDllN2VlZmY1ZjhmNzk1ODE4ZjkwMjQxN2NiZGFiZGFmMjlhOWNmNDVkN2MzNWIzMWIzZQOqQEk=: 00:18:28.079 08:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nvme_get_ctrlr 00:18:28.079 08:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@41 -- # local dev 00:18:28.079 08:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@43 -- # for dev in /sys/devices/virtual/nvme-fabrics/ctl/nvme* 00:18:28.079 08:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nqn.2024-03.io.spdk:cnode0 == 
\n\q\n\.\2\0\2\4\-\0\3\.\i\o\.\s\p\d\k\:\c\n\o\d\e\0 ]] 00:18:28.079 08:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # echo nvme0 00:18:28.079 08:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # break 00:18:28.079 08:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nctrlr=nvme0 00:18:28.079 08:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@227 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:28.079 08:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:28.338 08:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@228 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 00:18:28.338 08:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:18:28.338 08:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 00:18:28.338 08:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:18:28.339 08:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:28.339 08:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:18:28.339 08:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:28.339 08:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 00:18:28.339 08:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:18:28.339 08:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:18:28.907 request: 00:18:28.907 { 00:18:28.907 "name": "nvme0", 00:18:28.907 "trtype": "tcp", 00:18:28.907 "traddr": "10.0.0.2", 00:18:28.907 "adrfam": "ipv4", 00:18:28.907 "trsvcid": "4420", 00:18:28.907 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:28.907 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:18:28.907 "prchk_reftag": false, 00:18:28.907 "prchk_guard": false, 00:18:28.907 "hdgst": false, 00:18:28.907 "ddgst": false, 00:18:28.907 "dhchap_key": "key1", 00:18:28.907 "allow_unrecognized_csi": false, 00:18:28.907 "method": "bdev_nvme_attach_controller", 00:18:28.907 "req_id": 1 00:18:28.907 } 00:18:28.907 Got JSON-RPC error response 00:18:28.907 response: 00:18:28.907 { 00:18:28.907 "code": -5, 00:18:28.907 "message": "Input/output error" 00:18:28.907 } 00:18:28.907 08:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:18:28.907 08:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:28.907 08:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:28.907 08:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:28.907 08:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@229 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:28.907 08:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:28.907 08:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:29.476 nvme0n1 00:18:29.476 08:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # hostrpc bdev_nvme_get_controllers 00:18:29.476 08:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # jq -r '.[].name' 00:18:29.476 08:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:29.734 08:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:29.734 08:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@231 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:29.734 08:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:29.992 08:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@233 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:29.992 08:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:29.992 08:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:18:29.992 08:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:29.992 08:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@234 -- # bdev_connect -b nvme0 00:18:29.992 08:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:18:29.992 08:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:18:30.249 nvme0n1 00:18:30.250 08:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # hostrpc bdev_nvme_get_controllers 00:18:30.250 08:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # jq -r '.[].name' 00:18:30.250 08:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:30.508 08:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:30.508 08:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@236 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:30.508 08:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:30.766 08:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@239 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key key3 00:18:30.766 08:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:30.766 08:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:30.766 08:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:30.766 08:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@240 -- # nvme_set_keys nvme0 DHHC-1:01:YmE5MjY0ZThkNWU1MjBmZDNhNjhkMTU1Zjc5ODJmYjByjmi0: '' 2s 00:18:30.766 08:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:18:30.766 08:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:18:30.766 08:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key=DHHC-1:01:YmE5MjY0ZThkNWU1MjBmZDNhNjhkMTU1Zjc5ODJmYjByjmi0: 00:18:30.766 08:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey= 00:18:30.766 08:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:18:30.766 08:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:18:30.766 08:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z DHHC-1:01:YmE5MjY0ZThkNWU1MjBmZDNhNjhkMTU1Zjc5ODJmYjByjmi0: ]] 00:18:30.766 08:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # echo DHHC-1:01:YmE5MjY0ZThkNWU1MjBmZDNhNjhkMTU1Zjc5ODJmYjByjmi0: 00:18:30.766 08:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z '' ]] 00:18:30.766 08:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:18:30.766 08:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:18:32.666 
08:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@241 -- # waitforblk nvme0n1 00:18:32.666 08:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:18:32.666 08:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:18:32.666 08:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:18:32.666 08:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:18:32.666 08:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:18:32.666 08:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:18:32.666 08:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@243 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key key2 00:18:32.666 08:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:32.666 08:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:32.666 08:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:32.666 08:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@244 -- # nvme_set_keys nvme0 '' DHHC-1:02:NjJmM2Q5YTRmMzZhNjM5YmMwOWI4NzJiNGNhNDkyNzg2MTY4MjM2ZDUzYjc3YTQyErGN6A==: 2s 00:18:32.666 08:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:18:32.666 08:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:18:32.666 08:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key= 00:18:32.666 08:17:14 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey=DHHC-1:02:NjJmM2Q5YTRmMzZhNjM5YmMwOWI4NzJiNGNhNDkyNzg2MTY4MjM2ZDUzYjc3YTQyErGN6A==: 00:18:32.666 08:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:18:32.666 08:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:18:32.666 08:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z '' ]] 00:18:32.666 08:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z DHHC-1:02:NjJmM2Q5YTRmMzZhNjM5YmMwOWI4NzJiNGNhNDkyNzg2MTY4MjM2ZDUzYjc3YTQyErGN6A==: ]] 00:18:32.666 08:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # echo DHHC-1:02:NjJmM2Q5YTRmMzZhNjM5YmMwOWI4NzJiNGNhNDkyNzg2MTY4MjM2ZDUzYjc3YTQyErGN6A==: 00:18:32.666 08:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:18:32.666 08:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:18:35.200 08:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@245 -- # waitforblk nvme0n1 00:18:35.200 08:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:18:35.200 08:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:18:35.201 08:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:18:35.201 08:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:18:35.201 08:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:18:35.201 08:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:18:35.201 08:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@246 
-- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:35.201 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:35.201 08:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@249 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:35.201 08:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:35.201 08:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:35.201 08:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:35.201 08:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@250 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:18:35.201 08:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:18:35.201 08:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:18:35.460 nvme0n1 00:18:35.460 08:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@252 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 
--dhchap-key key2 --dhchap-ctrlr-key key3 00:18:35.460 08:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:35.460 08:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:35.460 08:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:35.460 08:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@253 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:35.460 08:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:36.028 08:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # hostrpc bdev_nvme_get_controllers 00:18:36.028 08:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # jq -r '.[].name' 00:18:36.028 08:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:36.288 08:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:36.288 08:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@256 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:36.288 08:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.288 08:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:36.288 08:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.288 08:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@257 -- # hostrpc bdev_nvme_set_keys nvme0 00:18:36.288 08:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 00:18:36.547 08:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # hostrpc bdev_nvme_get_controllers 00:18:36.547 08:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # jq -r '.[].name' 00:18:36.547 08:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:36.547 08:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:36.547 08:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@260 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:36.547 08:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.547 08:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:36.547 08:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.547 08:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@261 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:18:36.547 08:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:18:36.547 08:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:18:36.547 08:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@640 -- # local arg=hostrpc 00:18:36.547 08:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:36.547 08:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:18:36.547 08:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:36.547 08:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:18:36.547 08:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:18:37.116 request: 00:18:37.116 { 00:18:37.116 "name": "nvme0", 00:18:37.116 "dhchap_key": "key1", 00:18:37.116 "dhchap_ctrlr_key": "key3", 00:18:37.116 "method": "bdev_nvme_set_keys", 00:18:37.116 "req_id": 1 00:18:37.116 } 00:18:37.116 Got JSON-RPC error response 00:18:37.116 response: 00:18:37.116 { 00:18:37.116 "code": -13, 00:18:37.116 "message": "Permission denied" 00:18:37.116 } 00:18:37.116 08:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:18:37.116 08:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:37.116 08:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:37.116 08:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:37.116 08:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:18:37.116 08:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:18:37.116 08:17:19 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:37.376 08:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 1 != 0 )) 00:18:37.376 08:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@263 -- # sleep 1s 00:18:38.314 08:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:18:38.314 08:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:18:38.314 08:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:38.574 08:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 0 != 0 )) 00:18:38.574 08:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@267 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:38.574 08:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:38.574 08:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:38.574 08:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:38.574 08:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@268 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:18:38.574 08:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:18:38.574 08:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:18:39.513 nvme0n1 00:18:39.513 08:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@270 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:39.513 08:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.513 08:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:39.513 08:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.513 08:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@271 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:18:39.513 08:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:18:39.513 08:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:18:39.513 08:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 00:18:39.513 08:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:39.513 08:17:21 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:18:39.513 08:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:39.513 08:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:18:39.513 08:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:18:39.772 request: 00:18:39.772 { 00:18:39.772 "name": "nvme0", 00:18:39.772 "dhchap_key": "key2", 00:18:39.772 "dhchap_ctrlr_key": "key0", 00:18:39.772 "method": "bdev_nvme_set_keys", 00:18:39.772 "req_id": 1 00:18:39.772 } 00:18:39.772 Got JSON-RPC error response 00:18:39.772 response: 00:18:39.772 { 00:18:39.772 "code": -13, 00:18:39.772 "message": "Permission denied" 00:18:39.772 } 00:18:39.772 08:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:18:39.772 08:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:39.772 08:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:39.772 08:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:39.772 08:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:18:39.772 08:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:18:39.772 08:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:40.031 08:17:22 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 1 != 0 )) 00:18:40.031 08:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@273 -- # sleep 1s 00:18:40.969 08:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:18:40.969 08:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:18:40.969 08:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:41.228 08:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 0 != 0 )) 00:18:41.228 08:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@276 -- # trap - SIGINT SIGTERM EXIT 00:18:41.228 08:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@277 -- # cleanup 00:18:41.228 08:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 1342209 00:18:41.228 08:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 1342209 ']' 00:18:41.228 08:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 1342209 00:18:41.228 08:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:18:41.228 08:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:41.228 08:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1342209 00:18:41.228 08:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:18:41.228 08:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:18:41.228 08:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- common/autotest_common.sh@972 -- # echo 'killing process with pid 1342209' 00:18:41.228 killing process with pid 1342209 00:18:41.228 08:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 1342209 00:18:41.228 08:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 1342209 00:18:41.488 08:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:18:41.488 08:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:18:41.488 08:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # sync 00:18:41.488 08:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:41.488 08:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set +e 00:18:41.488 08:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:41.488 08:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:41.488 rmmod nvme_tcp 00:18:41.488 rmmod nvme_fabrics 00:18:41.488 rmmod nvme_keyring 00:18:41.488 08:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:41.488 08:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@128 -- # set -e 00:18:41.488 08:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@129 -- # return 0 00:18:41.488 08:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@517 -- # '[' -n 1364212 ']' 00:18:41.488 08:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@518 -- # killprocess 1364212 00:18:41.488 08:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 1364212 ']' 00:18:41.488 08:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 1364212 
00:18:41.488 08:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:18:41.488 08:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:41.488 08:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1364212 00:18:41.748 08:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:41.748 08:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:41.748 08:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1364212' 00:18:41.748 killing process with pid 1364212 00:18:41.748 08:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 1364212 00:18:41.748 08:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 1364212 00:18:41.748 08:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:18:41.748 08:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:18:41.748 08:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:18:41.748 08:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # iptr 00:18:41.748 08:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-save 00:18:41.748 08:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:18:41.748 08:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-restore 00:18:41.748 08:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:41.748 08:17:23 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:18:41.748 08:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:41.748 08:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:41.748 08:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:44.286 08:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:18:44.286 08:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.w7p /tmp/spdk.key-sha256.I28 /tmp/spdk.key-sha384.Mdy /tmp/spdk.key-sha512.bxI /tmp/spdk.key-sha512.PWA /tmp/spdk.key-sha384.Y2J /tmp/spdk.key-sha256.A09 '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:18:44.286 00:18:44.286 real 2m32.083s 00:18:44.286 user 5m50.909s 00:18:44.286 sys 0m24.034s 00:18:44.286 08:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:44.286 08:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:44.286 ************************************ 00:18:44.286 END TEST nvmf_auth_target 00:18:44.286 ************************************ 00:18:44.286 08:17:26 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:18:44.286 08:17:26 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:18:44.286 08:17:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:18:44.286 08:17:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- 
# xtrace_disable 00:18:44.286 08:17:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:44.286 ************************************ 00:18:44.286 START TEST nvmf_bdevio_no_huge 00:18:44.286 ************************************ 00:18:44.286 08:17:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:18:44.286 * Looking for test storage... 00:18:44.286 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:44.286 08:17:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:18:44.286 08:17:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1693 -- # lcov --version 00:18:44.286 08:17:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:18:44.286 08:17:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:18:44.286 08:17:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:44.286 08:17:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:44.286 08:17:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:44.286 08:17:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # IFS=.-: 00:18:44.286 08:17:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # read -ra ver1 00:18:44.286 08:17:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # IFS=.-: 00:18:44.286 08:17:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # read -ra ver2 00:18:44.286 08:17:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@338 -- 
# local 'op=<' 00:18:44.286 08:17:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@340 -- # ver1_l=2 00:18:44.286 08:17:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@341 -- # ver2_l=1 00:18:44.286 08:17:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:44.286 08:17:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@344 -- # case "$op" in 00:18:44.286 08:17:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@345 -- # : 1 00:18:44.286 08:17:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:44.286 08:17:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:44.286 08:17:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # decimal 1 00:18:44.286 08:17:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=1 00:18:44.286 08:17:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:44.286 08:17:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 1 00:18:44.286 08:17:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # ver1[v]=1 00:18:44.286 08:17:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # decimal 2 00:18:44.286 08:17:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=2 00:18:44.286 08:17:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:44.286 08:17:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 2 00:18:44.286 08:17:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # ver2[v]=2 00:18:44.286 08:17:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge 
-- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:44.286 08:17:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:44.286 08:17:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # return 0 00:18:44.286 08:17:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:44.286 08:17:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:18:44.286 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:44.286 --rc genhtml_branch_coverage=1 00:18:44.286 --rc genhtml_function_coverage=1 00:18:44.286 --rc genhtml_legend=1 00:18:44.286 --rc geninfo_all_blocks=1 00:18:44.286 --rc geninfo_unexecuted_blocks=1 00:18:44.286 00:18:44.286 ' 00:18:44.286 08:17:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:18:44.286 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:44.286 --rc genhtml_branch_coverage=1 00:18:44.286 --rc genhtml_function_coverage=1 00:18:44.286 --rc genhtml_legend=1 00:18:44.286 --rc geninfo_all_blocks=1 00:18:44.286 --rc geninfo_unexecuted_blocks=1 00:18:44.286 00:18:44.286 ' 00:18:44.286 08:17:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:18:44.286 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:44.286 --rc genhtml_branch_coverage=1 00:18:44.286 --rc genhtml_function_coverage=1 00:18:44.286 --rc genhtml_legend=1 00:18:44.286 --rc geninfo_all_blocks=1 00:18:44.286 --rc geninfo_unexecuted_blocks=1 00:18:44.286 00:18:44.286 ' 00:18:44.286 08:17:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:18:44.286 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:44.286 --rc genhtml_branch_coverage=1 
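The version-comparison trace above (cmp_versions splitting `1.15` and `2` on `IFS=.-` and walking the fields with `decimal`) can be sketched as a standalone function. This is an illustrative reimplementation, not SPDK's actual scripts/common.sh, and it assumes purely numeric version components; judging by the surrounding trace, SPDK uses the result to decide which lcov coverage options to enable.

```shell
# Illustrative sketch of the "lt 1.15 2" check traced above: split both
# versions on dots/dashes and compare field by field, numerically.
version_lt() {
    local -a ver1 ver2
    local i len
    IFS=.- read -ra ver1 <<< "$1"
    IFS=.- read -ra ver2 <<< "$2"
    len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( i = 0; i < len; i++ )); do
        # Missing components count as 0, so "2" compares like "2.0".
        (( ${ver1[i]:-0} < ${ver2[i]:-0} )) && return 0
        (( ${ver1[i]:-0} > ${ver2[i]:-0} )) && return 1
    done
    return 1  # equal versions are not less-than
}

version_lt 1.15 2 && echo "1.15 < 2"
version_lt 2.1 2.0 || echo "2.1 !< 2.0"
```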
00:18:44.286 --rc genhtml_function_coverage=1 00:18:44.286 --rc genhtml_legend=1 00:18:44.286 --rc geninfo_all_blocks=1 00:18:44.286 --rc geninfo_unexecuted_blocks=1 00:18:44.286 00:18:44.286 ' 00:18:44.286 08:17:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:44.286 08:17:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:18:44.286 08:17:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:44.286 08:17:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:44.286 08:17:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:44.286 08:17:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:44.286 08:17:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:44.286 08:17:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:44.287 08:17:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:44.287 08:17:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:44.287 08:17:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:44.287 08:17:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:44.287 08:17:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:44.287 08:17:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:18:44.287 08:17:26 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:44.287 08:17:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:44.287 08:17:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:44.287 08:17:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:44.287 08:17:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:44.287 08:17:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@15 -- # shopt -s extglob 00:18:44.287 08:17:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:44.287 08:17:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:44.287 08:17:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:44.287 08:17:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:44.287 08:17:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:44.287 08:17:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:44.287 08:17:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:18:44.287 08:17:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:44.287 08:17:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # : 0 00:18:44.287 08:17:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:44.287 08:17:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:44.287 08:17:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:44.287 08:17:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:44.287 08:17:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:44.287 08:17:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:44.287 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:44.287 08:17:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:44.287 08:17:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:44.287 08:17:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:44.287 08:17:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 
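The `[: : integer expression expected` warning from nvmf/common.sh line 33 above comes from evaluating `[ '' -eq 1 ]`: `-eq` requires an integer on both sides, and an unset variable expands to the empty string. A minimal reproduction and the usual guard (the variable name here is illustrative, not taken from nvmf/common.sh):

```shell
maybe_flag=""

# This is the failure mode the trace hits: test(1) rejects '' as an integer,
# so the comparison errors out (status 2) and falls through to else.
if [ "$maybe_flag" -eq 1 ] 2>/dev/null; then
    echo "flag set"
else
    echo "flag unset or non-numeric"
fi

# Defaulting the expansion avoids the error entirely:
if [ "${maybe_flag:-0}" -eq 1 ]; then
    echo "flag set"
else
    echo "flag unset"
fi
```

In this run the warning is harmless: the else branch is what the script wanted anyway, so the test proceeds.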
00:18:44.287 08:17:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:44.287 08:17:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:18:44.287 08:17:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:18:44.287 08:17:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:44.287 08:17:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@476 -- # prepare_net_devs 00:18:44.287 08:17:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@438 -- # local -g is_hw=no 00:18:44.287 08:17:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # remove_spdk_ns 00:18:44.287 08:17:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:44.287 08:17:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:44.287 08:17:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:44.287 08:17:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:18:44.287 08:17:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:18:44.287 08:17:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@309 -- # xtrace_disable 00:18:44.287 08:17:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:49.565 08:17:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:49.565 08:17:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # pci_devs=() 00:18:49.565 08:17:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@315 -- # local -a pci_devs 00:18:49.565 08:17:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # pci_net_devs=() 00:18:49.565 08:17:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:18:49.565 08:17:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # pci_drivers=() 00:18:49.565 08:17:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # local -A pci_drivers 00:18:49.565 08:17:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # net_devs=() 00:18:49.565 08:17:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # local -ga net_devs 00:18:49.565 08:17:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # e810=() 00:18:49.565 08:17:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # local -ga e810 00:18:49.565 08:17:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # x722=() 00:18:49.565 08:17:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # local -ga x722 00:18:49.565 08:17:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # mlx=() 00:18:49.565 08:17:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # local -ga mlx 00:18:49.565 08:17:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:49.565 08:17:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:49.565 08:17:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:49.565 08:17:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:49.565 08:17:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:49.565 08:17:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:49.565 08:17:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:49.565 08:17:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:18:49.565 08:17:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:49.565 08:17:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:49.565 08:17:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:49.565 08:17:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:49.565 08:17:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:18:49.565 08:17:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:18:49.565 08:17:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:18:49.565 08:17:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:18:49.565 08:17:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:18:49.565 08:17:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:18:49.565 08:17:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:49.565 08:17:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 
0x159b)' 00:18:49.565 Found 0000:86:00.0 (0x8086 - 0x159b) 00:18:49.565 08:17:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:49.565 08:17:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:49.565 08:17:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:49.565 08:17:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:49.565 08:17:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:49.565 08:17:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:49.565 08:17:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:18:49.565 Found 0000:86:00.1 (0x8086 - 0x159b) 00:18:49.565 08:17:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:49.565 08:17:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:49.565 08:17:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:49.565 08:17:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:49.565 08:17:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:49.565 08:17:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:18:49.565 08:17:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:18:49.565 08:17:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:18:49.565 08:17:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- 
# for pci in "${pci_devs[@]}" 00:18:49.565 08:17:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:49.565 08:17:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:49.565 08:17:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:49.565 08:17:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:49.565 08:17:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:49.565 08:17:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:49.565 08:17:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:18:49.566 Found net devices under 0000:86:00.0: cvl_0_0 00:18:49.566 08:17:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:49.566 08:17:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:49.566 08:17:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:49.566 08:17:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:49.566 08:17:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:49.566 08:17:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:49.566 08:17:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:49.566 08:17:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:49.566 
08:17:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:18:49.566 Found net devices under 0000:86:00.1: cvl_0_1 00:18:49.566 08:17:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:49.566 08:17:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:18:49.566 08:17:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # is_hw=yes 00:18:49.566 08:17:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:18:49.566 08:17:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:18:49.566 08:17:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:18:49.566 08:17:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:49.566 08:17:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:49.566 08:17:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:49.566 08:17:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:49.566 08:17:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:18:49.566 08:17:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:49.566 08:17:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:49.566 08:17:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:18:49.566 08:17:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 
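The device scan above buckets NICs by PCI vendor:device ID before resolving their net interfaces under /sys/bus/pci/devices/$pci/net/. The mapping can be sketched as a small classifier; the IDs are the ones visible in this trace (Intel 0x159b is the e810-family port that the ice driver claims), and the function itself is illustrative rather than SPDK's:

```shell
# Classify a "vendor:device" PCI ID pair into the NIC families the trace
# builds arrays for (e810, x722, mlx). IDs not in the log map to unknown.
classify_nic() {
    case "$1" in
        0x8086:0x1592|0x8086:0x159b) echo e810 ;;
        0x8086:0x37d2)               echo x722 ;;
        0x15b3:*)                    echo mlx ;;
        *)                           echo unknown ;;
    esac
}

classify_nic 0x8086:0x159b   # the two ports found at 0000:86:00.0/1
classify_nic 0x15b3:0x1017
```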
00:18:49.566 08:17:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:49.566 08:17:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:49.566 08:17:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:18:49.566 08:17:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:18:49.566 08:17:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:18:49.566 08:17:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:49.566 08:17:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:49.566 08:17:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:49.566 08:17:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:18:49.566 08:17:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:49.566 08:17:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:49.566 08:17:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:49.566 08:17:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:18:49.566 08:17:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@290 -- # ping -c 
1 10.0.0.2 00:18:49.566 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:49.566 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.373 ms 00:18:49.566 00:18:49.566 --- 10.0.0.2 ping statistics --- 00:18:49.566 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:49.566 rtt min/avg/max/mdev = 0.373/0.373/0.373/0.000 ms 00:18:49.566 08:17:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:49.566 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:49.566 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.187 ms 00:18:49.566 00:18:49.566 --- 10.0.0.1 ping statistics --- 00:18:49.566 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:49.566 rtt min/avg/max/mdev = 0.187/0.187/0.187/0.000 ms 00:18:49.566 08:17:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:49.566 08:17:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # return 0 00:18:49.566 08:17:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:18:49.566 08:17:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:49.566 08:17:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:18:49.566 08:17:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:18:49.566 08:17:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:49.566 08:17:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:18:49.566 08:17:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:18:49.566 08:17:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart 
-m 0x78 00:18:49.566 08:17:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:49.566 08:17:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:49.566 08:17:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:49.566 08:17:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@509 -- # nvmfpid=1371089 00:18:49.566 08:17:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:18:49.566 08:17:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@510 -- # waitforlisten 1371089 00:18:49.566 08:17:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@835 -- # '[' -z 1371089 ']' 00:18:49.566 08:17:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:49.566 08:17:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:49.566 08:17:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:49.566 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:49.566 08:17:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:49.566 08:17:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:49.566 [2024-11-28 08:17:31.521092] Starting SPDK v25.01-pre git sha1 27aaaa748 / DPDK 24.03.0 initialization... 
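The nvmf_tcp_init sequence traced earlier moves one port (cvl_0_0) into a dedicated network namespace so the target (10.0.0.2) and the initiator (10.0.0.1) exchange traffic over a real TCP path on a single host. Running those commands needs root and the actual interfaces, so this sketch only prints the equivalent plan for arbitrary interface names; addresses and the namespace-name convention follow what the log shows:

```shell
# Emit (without executing) the namespace/addressing plan used by the test:
# target interface goes into <tgt_if>_ns_spdk, initiator stays in the root ns.
print_tcp_init_plan() {
    local tgt_if=$1 ini_if=$2 ns=${1}_ns_spdk
    echo "ip netns add $ns"
    echo "ip link set $tgt_if netns $ns"
    echo "ip addr add 10.0.0.1/24 dev $ini_if"
    echo "ip netns exec $ns ip addr add 10.0.0.2/24 dev $tgt_if"
    echo "ip link set $ini_if up"
    echo "ip netns exec $ns ip link set $tgt_if up"
    echo "ip netns exec $ns ip link set lo up"
}

print_tcp_init_plan cvl_0_0 cvl_0_1
```

The two pings in the log (to 10.0.0.2 from the root namespace and to 10.0.0.1 from inside the namespace) then verify the path in both directions before the target starts.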
00:18:49.566 [2024-11-28 08:17:31.521135] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:18:49.566 [2024-11-28 08:17:31.592861] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:49.566 [2024-11-28 08:17:31.640205] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:49.566 [2024-11-28 08:17:31.640242] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:49.566 [2024-11-28 08:17:31.640249] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:49.566 [2024-11-28 08:17:31.640255] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:49.566 [2024-11-28 08:17:31.640260] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
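The waitforlisten step above blocks until the freshly started nvmf_tgt (pid 1371089) is accepting RPCs on /var/tmp/spdk.sock. The underlying idiom is a bounded poll; this simplified sketch waits for a path to appear, while the real helper additionally checks that the pid is still alive and that the path is a UNIX socket rather than a plain file:

```shell
# Poll for a path with a retry budget (0.1s per attempt); return nonzero
# on timeout so callers can fail the test instead of hanging forever.
wait_for_path() {
    local path=$1 retries=${2:-100}
    while (( retries > 0 )); do
        [ -e "$path" ] && return 0
        sleep 0.1
        retries=$(( retries - 1 ))
    done
    return 1
}

tmpfile=$(mktemp -u)
( sleep 0.3; touch "$tmpfile" ) &
wait_for_path "$tmpfile" && echo "listening"
wait
rm -f "$tmpfile"
```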
00:18:49.566 [2024-11-28 08:17:31.641432] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:18:49.566 [2024-11-28 08:17:31.641542] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:18:49.566 [2024-11-28 08:17:31.641647] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:18:49.566 [2024-11-28 08:17:31.641648] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:18:50.135 08:17:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:50.135 08:17:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@868 -- # return 0 00:18:50.135 08:17:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:50.135 08:17:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:50.135 08:17:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:50.135 08:17:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:50.135 08:17:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:50.135 08:17:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:50.135 08:17:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:50.135 [2024-11-28 08:17:32.401520] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:50.395 08:17:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:50.395 08:17:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:18:50.395 08:17:32 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:50.395 08:17:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:50.395 Malloc0 00:18:50.395 08:17:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:50.395 08:17:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:18:50.395 08:17:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:50.395 08:17:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:50.395 08:17:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:50.395 08:17:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:50.395 08:17:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:50.395 08:17:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:50.395 08:17:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:50.395 08:17:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:50.395 08:17:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:50.395 08:17:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:50.395 [2024-11-28 08:17:32.437788] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:50.395 08:17:32 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:50.395 08:17:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:18:50.395 08:17:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:18:50.395 08:17:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # config=() 00:18:50.395 08:17:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # local subsystem config 00:18:50.395 08:17:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:18:50.395 08:17:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:18:50.395 { 00:18:50.395 "params": { 00:18:50.395 "name": "Nvme$subsystem", 00:18:50.396 "trtype": "$TEST_TRANSPORT", 00:18:50.396 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:50.396 "adrfam": "ipv4", 00:18:50.396 "trsvcid": "$NVMF_PORT", 00:18:50.396 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:50.396 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:50.396 "hdgst": ${hdgst:-false}, 00:18:50.396 "ddgst": ${ddgst:-false} 00:18:50.396 }, 00:18:50.396 "method": "bdev_nvme_attach_controller" 00:18:50.396 } 00:18:50.396 EOF 00:18:50.396 )") 00:18:50.396 08:17:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # cat 00:18:50.396 08:17:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@584 -- # jq . 
00:18:50.396 08:17:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@585 -- # IFS=, 00:18:50.396 08:17:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:18:50.396 "params": { 00:18:50.396 "name": "Nvme1", 00:18:50.396 "trtype": "tcp", 00:18:50.396 "traddr": "10.0.0.2", 00:18:50.396 "adrfam": "ipv4", 00:18:50.396 "trsvcid": "4420", 00:18:50.396 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:50.396 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:50.396 "hdgst": false, 00:18:50.396 "ddgst": false 00:18:50.396 }, 00:18:50.396 "method": "bdev_nvme_attach_controller" 00:18:50.396 }' 00:18:50.396 [2024-11-28 08:17:32.480511] Starting SPDK v25.01-pre git sha1 27aaaa748 / DPDK 24.03.0 initialization... 00:18:50.396 [2024-11-28 08:17:32.480555] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid1371128 ] 00:18:50.396 [2024-11-28 08:17:32.548016] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:18:50.396 [2024-11-28 08:17:32.598063] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:50.396 [2024-11-28 08:17:32.598158] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:50.396 [2024-11-28 08:17:32.598159] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:50.655 I/O targets: 00:18:50.655 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:18:50.655 00:18:50.655 00:18:50.655 CUnit - A unit testing framework for C - Version 2.1-3 00:18:50.655 http://cunit.sourceforge.net/ 00:18:50.655 00:18:50.655 00:18:50.655 Suite: bdevio tests on: Nvme1n1 00:18:50.914 Test: blockdev write read block ...passed 00:18:50.914 Test: blockdev write zeroes read block ...passed 00:18:50.914 Test: blockdev write zeroes read no split ...passed 00:18:50.914 Test: blockdev write zeroes 
read split ...passed 00:18:50.915 Test: blockdev write zeroes read split partial ...passed 00:18:50.915 Test: blockdev reset ...[2024-11-28 08:17:33.003839] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:18:50.915 [2024-11-28 08:17:33.003903] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12198e0 (9): Bad file descriptor 00:18:50.915 [2024-11-28 08:17:33.023892] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 00:18:50.915 passed 00:18:50.915 Test: blockdev write read 8 blocks ...passed 00:18:50.915 Test: blockdev write read size > 128k ...passed 00:18:50.915 Test: blockdev write read invalid size ...passed 00:18:50.915 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:18:50.915 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:18:50.915 Test: blockdev write read max offset ...passed 00:18:51.174 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:18:51.174 Test: blockdev writev readv 8 blocks ...passed 00:18:51.174 Test: blockdev writev readv 30 x 1block ...passed 00:18:51.174 Test: blockdev writev readv block ...passed 00:18:51.174 Test: blockdev writev readv size > 128k ...passed 00:18:51.174 Test: blockdev writev readv size > 128k in two iovs ...passed 00:18:51.174 Test: blockdev comparev and writev ...[2024-11-28 08:17:33.234602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:51.174 [2024-11-28 08:17:33.234630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:51.174 [2024-11-28 08:17:33.234644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:51.174 [2024-11-28 
08:17:33.234653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:51.174 [2024-11-28 08:17:33.234897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:51.174 [2024-11-28 08:17:33.234908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:18:51.174 [2024-11-28 08:17:33.234920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:51.174 [2024-11-28 08:17:33.234928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:18:51.174 [2024-11-28 08:17:33.235172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:51.174 [2024-11-28 08:17:33.235183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:18:51.174 [2024-11-28 08:17:33.235195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:51.174 [2024-11-28 08:17:33.235202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:18:51.174 [2024-11-28 08:17:33.235443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:51.174 [2024-11-28 08:17:33.235454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:18:51.174 [2024-11-28 08:17:33.235474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL 
DATA BLOCK OFFSET 0x0 len:0x200 00:18:51.174 [2024-11-28 08:17:33.235482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:18:51.174 passed 00:18:51.174 Test: blockdev nvme passthru rw ...passed 00:18:51.174 Test: blockdev nvme passthru vendor specific ...[2024-11-28 08:17:33.317306] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:51.174 [2024-11-28 08:17:33.317323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:18:51.174 [2024-11-28 08:17:33.317433] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:51.174 [2024-11-28 08:17:33.317443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:18:51.174 [2024-11-28 08:17:33.317551] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:51.174 [2024-11-28 08:17:33.317561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:18:51.175 [2024-11-28 08:17:33.317668] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:51.175 [2024-11-28 08:17:33.317678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:18:51.175 passed 00:18:51.175 Test: blockdev nvme admin passthru ...passed 00:18:51.175 Test: blockdev copy ...passed 00:18:51.175 00:18:51.175 Run Summary: Type Total Ran Passed Failed Inactive 00:18:51.175 suites 1 1 n/a 0 0 00:18:51.175 tests 23 23 23 0 0 00:18:51.175 asserts 152 152 152 0 n/a 00:18:51.175 00:18:51.175 Elapsed time = 0.981 seconds 
00:18:51.434 08:17:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:51.434 08:17:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:51.434 08:17:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:51.434 08:17:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:51.434 08:17:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:18:51.434 08:17:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:18:51.434 08:17:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@516 -- # nvmfcleanup 00:18:51.434 08:17:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # sync 00:18:51.434 08:17:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:51.434 08:17:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set +e 00:18:51.434 08:17:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:51.434 08:17:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:51.434 rmmod nvme_tcp 00:18:51.434 rmmod nvme_fabrics 00:18:51.434 rmmod nvme_keyring 00:18:51.434 08:17:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:51.434 08:17:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@128 -- # set -e 00:18:51.434 08:17:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@129 -- # return 0 00:18:51.434 08:17:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@517 -- # '[' -n 1371089 ']' 00:18:51.434 08:17:33 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@518 -- # killprocess 1371089 00:18:51.434 08:17:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # '[' -z 1371089 ']' 00:18:51.434 08:17:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # kill -0 1371089 00:18:51.693 08:17:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # uname 00:18:51.693 08:17:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:51.693 08:17:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1371089 00:18:51.693 08:17:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:18:51.693 08:17:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:18:51.693 08:17:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1371089' 00:18:51.693 killing process with pid 1371089 00:18:51.693 08:17:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@973 -- # kill 1371089 00:18:51.693 08:17:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@978 -- # wait 1371089 00:18:51.953 08:17:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:18:51.953 08:17:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:18:51.953 08:17:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:18:51.953 08:17:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # iptr 00:18:51.953 08:17:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-save 00:18:51.953 08:17:34 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:18:51.953 08:17:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-restore 00:18:51.953 08:17:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:51.953 08:17:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # remove_spdk_ns 00:18:51.953 08:17:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:51.953 08:17:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:51.953 08:17:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:54.490 08:17:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:18:54.490 00:18:54.490 real 0m10.058s 00:18:54.490 user 0m13.180s 00:18:54.490 sys 0m4.869s 00:18:54.490 08:17:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:54.490 08:17:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:54.490 ************************************ 00:18:54.490 END TEST nvmf_bdevio_no_huge 00:18:54.490 ************************************ 00:18:54.490 08:17:36 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:18:54.490 08:17:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:54.490 08:17:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:54.490 08:17:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:54.490 
************************************ 00:18:54.490 START TEST nvmf_tls 00:18:54.490 ************************************ 00:18:54.490 08:17:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:18:54.490 * Looking for test storage... 00:18:54.490 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:54.490 08:17:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:18:54.490 08:17:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1693 -- # lcov --version 00:18:54.490 08:17:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:18:54.490 08:17:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:18:54.490 08:17:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:54.490 08:17:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:54.490 08:17:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:54.490 08:17:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # IFS=.-: 00:18:54.490 08:17:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # read -ra ver1 00:18:54.490 08:17:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # IFS=.-: 00:18:54.490 08:17:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # read -ra ver2 00:18:54.490 08:17:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@338 -- # local 'op=<' 00:18:54.490 08:17:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@340 -- # ver1_l=2 00:18:54.490 08:17:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@341 -- # ver2_l=1 00:18:54.490 08:17:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@343 -- 
# local lt=0 gt=0 eq=0 v 00:18:54.490 08:17:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@344 -- # case "$op" in 00:18:54.490 08:17:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@345 -- # : 1 00:18:54.490 08:17:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:54.490 08:17:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:54.490 08:17:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # decimal 1 00:18:54.490 08:17:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=1 00:18:54.490 08:17:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:54.490 08:17:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 1 00:18:54.490 08:17:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # ver1[v]=1 00:18:54.490 08:17:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # decimal 2 00:18:54.490 08:17:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=2 00:18:54.490 08:17:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:54.490 08:17:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 2 00:18:54.490 08:17:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # ver2[v]=2 00:18:54.491 08:17:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:54.491 08:17:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:54.491 08:17:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # return 0 00:18:54.491 08:17:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:54.491 08:17:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:18:54.491 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:54.491 --rc genhtml_branch_coverage=1 00:18:54.491 --rc genhtml_function_coverage=1 00:18:54.491 --rc genhtml_legend=1 00:18:54.491 --rc geninfo_all_blocks=1 00:18:54.491 --rc geninfo_unexecuted_blocks=1 00:18:54.491 00:18:54.491 ' 00:18:54.491 08:17:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:18:54.491 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:54.491 --rc genhtml_branch_coverage=1 00:18:54.491 --rc genhtml_function_coverage=1 00:18:54.491 --rc genhtml_legend=1 00:18:54.491 --rc geninfo_all_blocks=1 00:18:54.491 --rc geninfo_unexecuted_blocks=1 00:18:54.491 00:18:54.491 ' 00:18:54.491 08:17:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:18:54.491 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:54.491 --rc genhtml_branch_coverage=1 00:18:54.491 --rc genhtml_function_coverage=1 00:18:54.491 --rc genhtml_legend=1 00:18:54.491 --rc geninfo_all_blocks=1 00:18:54.491 --rc geninfo_unexecuted_blocks=1 00:18:54.491 00:18:54.491 ' 00:18:54.491 08:17:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:18:54.491 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:54.491 --rc genhtml_branch_coverage=1 00:18:54.491 --rc genhtml_function_coverage=1 00:18:54.491 --rc genhtml_legend=1 00:18:54.491 --rc geninfo_all_blocks=1 00:18:54.491 --rc geninfo_unexecuted_blocks=1 00:18:54.491 00:18:54.491 ' 00:18:54.491 08:17:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:54.491 08:17:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:18:54.491 08:17:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:54.491 
08:17:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:54.491 08:17:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:54.491 08:17:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:54.491 08:17:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:54.491 08:17:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:54.491 08:17:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:54.491 08:17:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:54.491 08:17:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:54.491 08:17:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:54.491 08:17:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:54.491 08:17:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:18:54.491 08:17:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:54.491 08:17:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:54.491 08:17:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:54.491 08:17:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:54.491 08:17:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:54.491 08:17:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@15 -- # shopt -s extglob 
00:18:54.491 08:17:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:54.491 08:17:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:54.491 08:17:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:54.491 08:17:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:54.491 08:17:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:54.491 08:17:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:54.491 08:17:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:18:54.491 08:17:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:54.491 08:17:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # : 0 00:18:54.491 08:17:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:54.491 08:17:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:54.491 08:17:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:54.491 08:17:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:54.491 08:17:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:54.491 08:17:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:54.491 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:54.491 08:17:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:54.491 08:17:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:54.491 08:17:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:54.491 08:17:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:54.491 08:17:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmftestinit 00:18:54.491 08:17:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:18:54.491 08:17:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:54.491 08:17:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@476 -- # prepare_net_devs 00:18:54.491 08:17:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@438 -- # local -g is_hw=no 00:18:54.491 08:17:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # remove_spdk_ns 00:18:54.491 08:17:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:54.491 08:17:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:54.491 08:17:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:54.491 08:17:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:18:54.491 08:17:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:18:54.491 08:17:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@309 -- # xtrace_disable 00:18:54.491 08:17:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:59.766 08:17:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:59.766 08:17:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # pci_devs=() 00:18:59.766 08:17:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # local -a pci_devs 00:18:59.766 08:17:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # pci_net_devs=() 00:18:59.766 08:17:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:18:59.766 08:17:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # pci_drivers=() 00:18:59.766 08:17:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # local -A pci_drivers 00:18:59.766 08:17:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # net_devs=() 00:18:59.766 08:17:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # local -ga net_devs 00:18:59.766 08:17:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # e810=() 00:18:59.766 08:17:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # local -ga e810 00:18:59.766 08:17:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # x722=() 00:18:59.766 08:17:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # local -ga x722 00:18:59.766 08:17:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # mlx=() 00:18:59.766 08:17:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # local -ga mlx 00:18:59.766 08:17:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:59.766 08:17:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:59.766 08:17:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@328 
-- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:59.766 08:17:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:59.766 08:17:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:59.766 08:17:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:59.766 08:17:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:59.766 08:17:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:18:59.766 08:17:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:59.766 08:17:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:59.766 08:17:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:59.766 08:17:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:59.766 08:17:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:18:59.766 08:17:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:18:59.766 08:17:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:18:59.766 08:17:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:18:59.766 08:17:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:18:59.766 08:17:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:18:59.766 08:17:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:59.766 08:17:41 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:18:59.766 Found 0000:86:00.0 (0x8086 - 0x159b) 00:18:59.766 08:17:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:59.766 08:17:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:59.766 08:17:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:59.766 08:17:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:59.766 08:17:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:59.766 08:17:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:59.766 08:17:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:18:59.766 Found 0000:86:00.1 (0x8086 - 0x159b) 00:18:59.766 08:17:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:59.766 08:17:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:59.766 08:17:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:59.766 08:17:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:59.766 08:17:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:59.766 08:17:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:18:59.766 08:17:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:18:59.766 08:17:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:18:59.766 08:17:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:59.766 08:17:41 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:59.766 08:17:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:59.766 08:17:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:59.766 08:17:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:59.766 08:17:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:59.766 08:17:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:59.766 08:17:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:18:59.766 Found net devices under 0000:86:00.0: cvl_0_0 00:18:59.766 08:17:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:59.766 08:17:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:59.766 08:17:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:59.766 08:17:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:59.766 08:17:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:59.766 08:17:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:59.766 08:17:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:59.767 08:17:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:59.767 08:17:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:18:59.767 Found net devices under 0000:86:00.1: cvl_0_1 00:18:59.767 08:17:41 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:59.767 08:17:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:18:59.767 08:17:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # is_hw=yes 00:18:59.767 08:17:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:18:59.767 08:17:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:18:59.767 08:17:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:18:59.767 08:17:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:59.767 08:17:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:59.767 08:17:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:59.767 08:17:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:59.767 08:17:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:18:59.767 08:17:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:59.767 08:17:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:59.767 08:17:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:18:59.767 08:17:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:18:59.767 08:17:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:59.767 08:17:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:59.767 08:17:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:18:59.767 
08:17:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:18:59.767 08:17:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:18:59.767 08:17:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:59.767 08:17:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:59.767 08:17:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:59.767 08:17:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:18:59.767 08:17:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:59.767 08:17:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:59.767 08:17:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:59.767 08:17:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:18:59.767 08:17:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:18:59.767 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:59.767 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.482 ms 00:18:59.767 00:18:59.767 --- 10.0.0.2 ping statistics --- 00:18:59.767 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:59.767 rtt min/avg/max/mdev = 0.482/0.482/0.482/0.000 ms 00:18:59.767 08:17:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:59.767 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:59.767 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.203 ms 00:18:59.767 00:18:59.767 --- 10.0.0.1 ping statistics --- 00:18:59.767 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:59.767 rtt min/avg/max/mdev = 0.203/0.203/0.203/0.000 ms 00:18:59.767 08:17:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:59.767 08:17:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@450 -- # return 0 00:18:59.767 08:17:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:18:59.767 08:17:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:59.767 08:17:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:18:59.767 08:17:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:18:59.767 08:17:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:59.767 08:17:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:18:59.767 08:17:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:18:59.767 08:17:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@64 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:18:59.767 08:17:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:59.767 08:17:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:59.767 08:17:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:59.767 08:17:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=1374875 00:18:59.767 08:17:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 1374875 00:18:59.767 08:17:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # 
'[' -z 1374875 ']' 00:18:59.767 08:17:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:59.767 08:17:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:59.767 08:17:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:59.767 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:59.767 08:17:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:18:59.767 08:17:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:59.767 08:17:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:59.767 [2024-11-28 08:17:41.535022] Starting SPDK v25.01-pre git sha1 27aaaa748 / DPDK 24.03.0 initialization... 00:18:59.767 [2024-11-28 08:17:41.535069] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:59.767 [2024-11-28 08:17:41.603703] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:59.767 [2024-11-28 08:17:41.645440] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:59.767 [2024-11-28 08:17:41.645476] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:18:59.767 [2024-11-28 08:17:41.645483] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:59.767 [2024-11-28 08:17:41.645488] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:59.767 [2024-11-28 08:17:41.645494] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:59.767 [2024-11-28 08:17:41.646064] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:59.767 08:17:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:59.767 08:17:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:59.767 08:17:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:59.767 08:17:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:59.767 08:17:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:59.767 08:17:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:59.767 08:17:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@66 -- # '[' tcp '!=' tcp ']' 00:18:59.767 08:17:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:18:59.767 true 00:18:59.767 08:17:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:59.767 08:17:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # jq -r .tls_version 00:19:00.027 08:17:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # version=0 00:19:00.027 08:17:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@75 -- # [[ 0 != \0 ]] 00:19:00.027 
08:17:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:19:00.286 08:17:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:00.286 08:17:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # jq -r .tls_version 00:19:00.286 08:17:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # version=13 00:19:00.286 08:17:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@83 -- # [[ 13 != \1\3 ]] 00:19:00.286 08:17:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:19:00.545 08:17:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:00.545 08:17:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # jq -r .tls_version 00:19:00.804 08:17:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # version=7 00:19:00.804 08:17:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@91 -- # [[ 7 != \7 ]] 00:19:00.804 08:17:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:00.804 08:17:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # jq -r .enable_ktls 00:19:00.804 08:17:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # ktls=false 00:19:00.804 08:17:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@98 -- # [[ false != \f\a\l\s\e ]] 00:19:00.804 08:17:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 
00:19:01.063 08:17:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:01.063 08:17:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # jq -r .enable_ktls 00:19:01.322 08:17:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # ktls=true 00:19:01.322 08:17:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@106 -- # [[ true != \t\r\u\e ]] 00:19:01.322 08:17:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:19:01.581 08:17:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:01.581 08:17:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # jq -r .enable_ktls 00:19:01.581 08:17:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # ktls=false 00:19:01.581 08:17:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@114 -- # [[ false != \f\a\l\s\e ]] 00:19:01.581 08:17:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:19:01.581 08:17:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:19:01.581 08:17:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:19:01.581 08:17:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:19:01.581 08:17:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:19:01.581 08:17:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:19:01.581 08:17:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:19:01.581 08:17:43 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:19:01.581 08:17:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:19:01.581 08:17:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:19:01.581 08:17:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:19:01.581 08:17:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:19:01.581 08:17:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=ffeeddccbbaa99887766554433221100 00:19:01.581 08:17:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:19:01.581 08:17:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:19:01.840 08:17:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:19:01.840 08:17:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:19:01.840 08:17:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_path=/tmp/tmp.iFpordaGHp 00:19:01.840 08:17:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # mktemp 00:19:01.840 08:17:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # key_2_path=/tmp/tmp.Ene3nLod55 00:19:01.840 08:17:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:19:01.840 08:17:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@126 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:19:01.840 08:17:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.iFpordaGHp 00:19:01.840 08:17:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@129 -- # chmod 0600 /tmp/tmp.Ene3nLod55 00:19:01.840 08:17:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:19:01.840 08:17:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@132 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:19:02.099 08:17:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@134 -- # setup_nvmf_tgt /tmp/tmp.iFpordaGHp 00:19:02.099 08:17:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.iFpordaGHp 00:19:02.099 08:17:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:02.358 [2024-11-28 08:17:44.509753] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:02.358 08:17:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:02.616 08:17:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:19:02.876 [2024-11-28 08:17:44.886730] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:02.876 [2024-11-28 08:17:44.886936] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:02.876 08:17:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:02.876 malloc0 00:19:02.876 08:17:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:03.135 08:17:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.iFpordaGHp 00:19:03.393 08:17:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:19:03.393 08:17:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@138 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.iFpordaGHp 00:19:15.601 Initializing NVMe Controllers 00:19:15.601 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:19:15.601 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:19:15.601 Initialization complete. Launching workers. 
00:19:15.601 ======================================================== 00:19:15.601 Latency(us) 00:19:15.601 Device Information : IOPS MiB/s Average min max 00:19:15.601 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 16320.06 63.75 3921.65 854.41 5375.97 00:19:15.601 ======================================================== 00:19:15.601 Total : 16320.06 63.75 3921.65 854.41 5375.97 00:19:15.601 00:19:15.601 08:17:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@144 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.iFpordaGHp 00:19:15.601 08:17:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:15.601 08:17:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:15.601 08:17:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:15.601 08:17:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.iFpordaGHp 00:19:15.601 08:17:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:15.601 08:17:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1377228 00:19:15.601 08:17:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:15.601 08:17:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1377228 /var/tmp/bdevperf.sock 00:19:15.601 08:17:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1377228 ']' 00:19:15.601 08:17:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:15.601 08:17:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:15.601 08:17:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 
'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:15.601 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:15.601 08:17:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:15.601 08:17:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:15.601 08:17:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:15.601 [2024-11-28 08:17:55.799473] Starting SPDK v25.01-pre git sha1 27aaaa748 / DPDK 24.03.0 initialization... 00:19:15.601 [2024-11-28 08:17:55.799523] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1377228 ] 00:19:15.601 [2024-11-28 08:17:55.856932] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:15.601 [2024-11-28 08:17:55.899349] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:15.601 08:17:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:15.601 08:17:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:15.601 08:17:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.iFpordaGHp 00:19:15.601 08:17:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q 
nqn.2016-06.io.spdk:host1 --psk key0 00:19:15.601 [2024-11-28 08:17:56.335140] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:15.601 TLSTESTn1 00:19:15.601 08:17:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:19:15.601 Running I/O for 10 seconds... 00:19:16.538 4893.00 IOPS, 19.11 MiB/s [2024-11-28T07:17:59.742Z] 4491.50 IOPS, 17.54 MiB/s [2024-11-28T07:18:00.678Z] 4340.67 IOPS, 16.96 MiB/s [2024-11-28T07:18:01.614Z] 4253.00 IOPS, 16.61 MiB/s [2024-11-28T07:18:02.547Z] 4153.00 IOPS, 16.22 MiB/s [2024-11-28T07:18:03.923Z] 4087.50 IOPS, 15.97 MiB/s [2024-11-28T07:18:04.860Z] 4048.00 IOPS, 15.81 MiB/s [2024-11-28T07:18:05.796Z] 4018.62 IOPS, 15.70 MiB/s [2024-11-28T07:18:06.734Z] 3988.44 IOPS, 15.58 MiB/s [2024-11-28T07:18:06.734Z] 3967.70 IOPS, 15.50 MiB/s 00:19:24.465 Latency(us) 00:19:24.465 [2024-11-28T07:18:06.734Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:24.465 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:24.465 Verification LBA range: start 0x0 length 0x2000 00:19:24.465 TLSTESTn1 : 10.03 3968.12 15.50 0.00 0.00 32194.15 6952.51 40575.33 00:19:24.465 [2024-11-28T07:18:06.734Z] =================================================================================================================== 00:19:24.465 [2024-11-28T07:18:06.734Z] Total : 3968.12 15.50 0.00 0.00 32194.15 6952.51 40575.33 00:19:24.465 { 00:19:24.465 "results": [ 00:19:24.465 { 00:19:24.465 "job": "TLSTESTn1", 00:19:24.465 "core_mask": "0x4", 00:19:24.465 "workload": "verify", 00:19:24.465 "status": "finished", 00:19:24.465 "verify_range": { 00:19:24.465 "start": 0, 00:19:24.465 "length": 8192 00:19:24.465 }, 00:19:24.465 "queue_depth": 128, 00:19:24.465 "io_size": 4096, 00:19:24.465 "runtime": 10.030948, 
00:19:24.465 "iops": 3968.1194638831744, 00:19:24.465 "mibps": 15.50046665579365, 00:19:24.465 "io_failed": 0, 00:19:24.465 "io_timeout": 0, 00:19:24.465 "avg_latency_us": 32194.146951366038, 00:19:24.465 "min_latency_us": 6952.514782608696, 00:19:24.465 "max_latency_us": 40575.33217391304 00:19:24.465 } 00:19:24.465 ], 00:19:24.465 "core_count": 1 00:19:24.465 } 00:19:24.465 08:18:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:24.465 08:18:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 1377228 00:19:24.465 08:18:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1377228 ']' 00:19:24.465 08:18:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1377228 00:19:24.465 08:18:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:24.465 08:18:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:24.465 08:18:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1377228 00:19:24.465 08:18:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:19:24.465 08:18:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:19:24.465 08:18:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1377228' 00:19:24.465 killing process with pid 1377228 00:19:24.465 08:18:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1377228 00:19:24.465 Received shutdown signal, test time was about 10.000000 seconds 00:19:24.465 00:19:24.465 Latency(us) 00:19:24.465 [2024-11-28T07:18:06.734Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:24.465 [2024-11-28T07:18:06.734Z] 
=================================================================================================================== 00:19:24.465 [2024-11-28T07:18:06.734Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:24.465 08:18:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1377228 00:19:24.725 08:18:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@147 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.Ene3nLod55 00:19:24.725 08:18:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:19:24.725 08:18:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.Ene3nLod55 00:19:24.725 08:18:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:19:24.725 08:18:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:24.725 08:18:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:19:24.725 08:18:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:24.725 08:18:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.Ene3nLod55 00:19:24.725 08:18:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:24.725 08:18:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:24.725 08:18:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:24.725 08:18:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.Ene3nLod55 00:19:24.725 08:18:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # 
bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:24.725 08:18:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1379186 00:19:24.725 08:18:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:24.725 08:18:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:24.725 08:18:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1379186 /var/tmp/bdevperf.sock 00:19:24.725 08:18:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1379186 ']' 00:19:24.725 08:18:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:24.725 08:18:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:24.725 08:18:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:24.725 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:24.725 08:18:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:24.725 08:18:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:24.725 [2024-11-28 08:18:06.836889] Starting SPDK v25.01-pre git sha1 27aaaa748 / DPDK 24.03.0 initialization... 
00:19:24.725 [2024-11-28 08:18:06.836938] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1379186 ] 00:19:24.725 [2024-11-28 08:18:06.896326] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:24.725 [2024-11-28 08:18:06.933905] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:24.984 08:18:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:24.984 08:18:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:24.984 08:18:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.Ene3nLod55 00:19:24.984 08:18:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:19:25.243 [2024-11-28 08:18:07.382610] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:25.243 [2024-11-28 08:18:07.390550] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:19:25.243 [2024-11-28 08:18:07.391112] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5e71a0 (107): Transport endpoint is not connected 00:19:25.243 [2024-11-28 08:18:07.392106] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5e71a0 (9): Bad file descriptor 00:19:25.243 [2024-11-28 
08:18:07.393107] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:19:25.243 [2024-11-28 08:18:07.393118] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:19:25.243 [2024-11-28 08:18:07.393126] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:19:25.243 [2024-11-28 08:18:07.393135] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 00:19:25.243 request: 00:19:25.243 { 00:19:25.243 "name": "TLSTEST", 00:19:25.243 "trtype": "tcp", 00:19:25.243 "traddr": "10.0.0.2", 00:19:25.243 "adrfam": "ipv4", 00:19:25.243 "trsvcid": "4420", 00:19:25.243 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:25.243 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:25.243 "prchk_reftag": false, 00:19:25.243 "prchk_guard": false, 00:19:25.243 "hdgst": false, 00:19:25.243 "ddgst": false, 00:19:25.243 "psk": "key0", 00:19:25.243 "allow_unrecognized_csi": false, 00:19:25.243 "method": "bdev_nvme_attach_controller", 00:19:25.243 "req_id": 1 00:19:25.243 } 00:19:25.243 Got JSON-RPC error response 00:19:25.243 response: 00:19:25.243 { 00:19:25.243 "code": -5, 00:19:25.243 "message": "Input/output error" 00:19:25.244 } 00:19:25.244 08:18:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 1379186 00:19:25.244 08:18:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1379186 ']' 00:19:25.244 08:18:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1379186 00:19:25.244 08:18:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:25.244 08:18:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:25.244 08:18:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1379186 00:19:25.244 08:18:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:19:25.244 08:18:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:19:25.244 08:18:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1379186' 00:19:25.244 killing process with pid 1379186 00:19:25.244 08:18:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1379186 00:19:25.244 Received shutdown signal, test time was about 10.000000 seconds 00:19:25.244 00:19:25.244 Latency(us) 00:19:25.244 [2024-11-28T07:18:07.513Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:25.244 [2024-11-28T07:18:07.513Z] =================================================================================================================== 00:19:25.244 [2024-11-28T07:18:07.513Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:25.244 08:18:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1379186 00:19:25.503 08:18:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:19:25.503 08:18:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:19:25.503 08:18:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:25.503 08:18:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:25.503 08:18:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:25.503 08:18:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@150 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.iFpordaGHp 00:19:25.504 08:18:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 
00:19:25.504 08:18:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.iFpordaGHp 00:19:25.504 08:18:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:19:25.504 08:18:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:25.504 08:18:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:19:25.504 08:18:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:25.504 08:18:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.iFpordaGHp 00:19:25.504 08:18:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:25.504 08:18:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:25.504 08:18:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:19:25.504 08:18:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.iFpordaGHp 00:19:25.504 08:18:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:25.504 08:18:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1379223 00:19:25.504 08:18:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:25.504 08:18:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:25.504 08:18:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1379223 
/var/tmp/bdevperf.sock 00:19:25.504 08:18:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1379223 ']' 00:19:25.504 08:18:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:25.504 08:18:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:25.504 08:18:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:25.504 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:25.504 08:18:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:25.504 08:18:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:25.504 [2024-11-28 08:18:07.659001] Starting SPDK v25.01-pre git sha1 27aaaa748 / DPDK 24.03.0 initialization... 
00:19:25.504 [2024-11-28 08:18:07.659053] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1379223 ] 00:19:25.504 [2024-11-28 08:18:07.719068] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:25.504 [2024-11-28 08:18:07.759059] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:25.763 08:18:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:25.763 08:18:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:25.763 08:18:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.iFpordaGHp 00:19:26.023 08:18:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk key0 00:19:26.023 [2024-11-28 08:18:08.216083] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:26.023 [2024-11-28 08:18:08.222584] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:19:26.023 [2024-11-28 08:18:08.222607] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:19:26.023 [2024-11-28 08:18:08.222630] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not 
connected 00:19:26.023 [2024-11-28 08:18:08.223506] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11c91a0 (107): Transport endpoint is not connected 00:19:26.023 [2024-11-28 08:18:08.224501] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11c91a0 (9): Bad file descriptor 00:19:26.023 [2024-11-28 08:18:08.225502] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:19:26.023 [2024-11-28 08:18:08.225512] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:19:26.023 [2024-11-28 08:18:08.225519] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:19:26.023 [2024-11-28 08:18:08.225527] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 00:19:26.023 request: 00:19:26.023 { 00:19:26.023 "name": "TLSTEST", 00:19:26.023 "trtype": "tcp", 00:19:26.023 "traddr": "10.0.0.2", 00:19:26.023 "adrfam": "ipv4", 00:19:26.023 "trsvcid": "4420", 00:19:26.023 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:26.023 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:19:26.023 "prchk_reftag": false, 00:19:26.023 "prchk_guard": false, 00:19:26.023 "hdgst": false, 00:19:26.023 "ddgst": false, 00:19:26.023 "psk": "key0", 00:19:26.023 "allow_unrecognized_csi": false, 00:19:26.023 "method": "bdev_nvme_attach_controller", 00:19:26.023 "req_id": 1 00:19:26.023 } 00:19:26.023 Got JSON-RPC error response 00:19:26.023 response: 00:19:26.023 { 00:19:26.023 "code": -5, 00:19:26.023 "message": "Input/output error" 00:19:26.023 } 00:19:26.023 08:18:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 1379223 00:19:26.023 08:18:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1379223 ']' 00:19:26.023 08:18:08 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1379223 00:19:26.023 08:18:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:26.023 08:18:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:26.023 08:18:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1379223 00:19:26.282 08:18:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:19:26.282 08:18:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:19:26.282 08:18:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1379223' 00:19:26.282 killing process with pid 1379223 00:19:26.282 08:18:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1379223 00:19:26.282 Received shutdown signal, test time was about 10.000000 seconds 00:19:26.282 00:19:26.282 Latency(us) 00:19:26.282 [2024-11-28T07:18:08.551Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:26.282 [2024-11-28T07:18:08.551Z] =================================================================================================================== 00:19:26.282 [2024-11-28T07:18:08.551Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:26.282 08:18:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1379223 00:19:26.282 08:18:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:19:26.283 08:18:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:19:26.283 08:18:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:26.283 08:18:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:26.283 08:18:08 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:26.283 08:18:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@153 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.iFpordaGHp 00:19:26.283 08:18:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:19:26.283 08:18:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.iFpordaGHp 00:19:26.283 08:18:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:19:26.283 08:18:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:26.283 08:18:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:19:26.283 08:18:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:26.283 08:18:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.iFpordaGHp 00:19:26.283 08:18:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:26.283 08:18:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:19:26.283 08:18:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:26.283 08:18:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.iFpordaGHp 00:19:26.283 08:18:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:26.283 08:18:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1379535 00:19:26.283 08:18:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 
'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:26.283 08:18:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1379535 /var/tmp/bdevperf.sock 00:19:26.283 08:18:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1379535 ']' 00:19:26.283 08:18:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:26.283 08:18:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:26.283 08:18:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:26.283 08:18:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:26.283 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:26.283 08:18:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:26.283 08:18:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:26.283 [2024-11-28 08:18:08.498894] Starting SPDK v25.01-pre git sha1 27aaaa748 / DPDK 24.03.0 initialization... 
00:19:26.283 [2024-11-28 08:18:08.498945] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1379535 ] 00:19:26.542 [2024-11-28 08:18:08.557370] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:26.542 [2024-11-28 08:18:08.600196] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:26.542 08:18:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:26.542 08:18:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:26.542 08:18:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.iFpordaGHp 00:19:26.800 08:18:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk key0 00:19:26.800 [2024-11-28 08:18:09.036402] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:26.801 [2024-11-28 08:18:09.044684] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:19:26.801 [2024-11-28 08:18:09.044706] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:19:26.801 [2024-11-28 08:18:09.044745] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not 
connected 00:19:26.801 [2024-11-28 08:18:09.045676] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22dd1a0 (107): Transport endpoint is not connected 00:19:26.801 [2024-11-28 08:18:09.046668] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22dd1a0 (9): Bad file descriptor 00:19:26.801 [2024-11-28 08:18:09.047670] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] Ctrlr is in error state 00:19:26.801 [2024-11-28 08:18:09.047680] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:19:26.801 [2024-11-28 08:18:09.047687] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode2, Operation not permitted 00:19:26.801 [2024-11-28 08:18:09.047695] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] in failed state. 00:19:26.801 request: 00:19:26.801 { 00:19:26.801 "name": "TLSTEST", 00:19:26.801 "trtype": "tcp", 00:19:26.801 "traddr": "10.0.0.2", 00:19:26.801 "adrfam": "ipv4", 00:19:26.801 "trsvcid": "4420", 00:19:26.801 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:19:26.801 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:26.801 "prchk_reftag": false, 00:19:26.801 "prchk_guard": false, 00:19:26.801 "hdgst": false, 00:19:26.801 "ddgst": false, 00:19:26.801 "psk": "key0", 00:19:26.801 "allow_unrecognized_csi": false, 00:19:26.801 "method": "bdev_nvme_attach_controller", 00:19:26.801 "req_id": 1 00:19:26.801 } 00:19:26.801 Got JSON-RPC error response 00:19:26.801 response: 00:19:26.801 { 00:19:26.801 "code": -5, 00:19:26.801 "message": "Input/output error" 00:19:26.801 } 00:19:26.801 08:18:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 1379535 00:19:26.801 08:18:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1379535 ']' 00:19:26.801 08:18:09 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1379535 00:19:26.801 08:18:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:27.060 08:18:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:27.060 08:18:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1379535 00:19:27.060 08:18:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:19:27.060 08:18:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:19:27.060 08:18:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1379535' 00:19:27.060 killing process with pid 1379535 00:19:27.060 08:18:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1379535 00:19:27.060 Received shutdown signal, test time was about 10.000000 seconds 00:19:27.060 00:19:27.060 Latency(us) 00:19:27.060 [2024-11-28T07:18:09.329Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:27.060 [2024-11-28T07:18:09.329Z] =================================================================================================================== 00:19:27.060 [2024-11-28T07:18:09.329Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:27.060 08:18:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1379535 00:19:27.060 08:18:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:19:27.060 08:18:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:19:27.060 08:18:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:27.060 08:18:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:27.060 08:18:09 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:27.060 08:18:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@156 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:19:27.060 08:18:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:19:27.060 08:18:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:19:27.060 08:18:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:19:27.060 08:18:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:27.060 08:18:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:19:27.060 08:18:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:27.060 08:18:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:19:27.060 08:18:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:27.060 08:18:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:27.060 08:18:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:27.060 08:18:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:19:27.060 08:18:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:27.060 08:18:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1379864 00:19:27.060 08:18:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:27.060 08:18:09 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1379864 /var/tmp/bdevperf.sock 00:19:27.060 08:18:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1379864 ']' 00:19:27.060 08:18:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:27.060 08:18:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:27.060 08:18:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:27.060 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:27.060 08:18:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:27.060 08:18:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:27.060 08:18:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:27.060 [2024-11-28 08:18:09.317646] Starting SPDK v25.01-pre git sha1 27aaaa748 / DPDK 24.03.0 initialization... 
00:19:27.060 [2024-11-28 08:18:09.317696] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1379864 ] 00:19:27.319 [2024-11-28 08:18:09.376199] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:27.319 [2024-11-28 08:18:09.419058] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:27.319 08:18:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:27.319 08:18:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:27.319 08:18:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 '' 00:19:27.579 [2024-11-28 08:18:09.671957] keyring.c: 24:keyring_file_check_path: *ERROR*: Non-absolute paths are not allowed: 00:19:27.579 [2024-11-28 08:18:09.671986] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:19:27.579 request: 00:19:27.579 { 00:19:27.579 "name": "key0", 00:19:27.579 "path": "", 00:19:27.579 "method": "keyring_file_add_key", 00:19:27.579 "req_id": 1 00:19:27.579 } 00:19:27.579 Got JSON-RPC error response 00:19:27.579 response: 00:19:27.579 { 00:19:27.579 "code": -1, 00:19:27.579 "message": "Operation not permitted" 00:19:27.579 } 00:19:27.579 08:18:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:19:27.838 [2024-11-28 08:18:09.872566] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 
00:19:27.838 [2024-11-28 08:18:09.872595] bdev_nvme.c:6722:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:19:27.838 request: 00:19:27.838 { 00:19:27.838 "name": "TLSTEST", 00:19:27.838 "trtype": "tcp", 00:19:27.838 "traddr": "10.0.0.2", 00:19:27.838 "adrfam": "ipv4", 00:19:27.838 "trsvcid": "4420", 00:19:27.838 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:27.838 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:27.838 "prchk_reftag": false, 00:19:27.838 "prchk_guard": false, 00:19:27.838 "hdgst": false, 00:19:27.838 "ddgst": false, 00:19:27.838 "psk": "key0", 00:19:27.838 "allow_unrecognized_csi": false, 00:19:27.838 "method": "bdev_nvme_attach_controller", 00:19:27.838 "req_id": 1 00:19:27.838 } 00:19:27.838 Got JSON-RPC error response 00:19:27.838 response: 00:19:27.838 { 00:19:27.838 "code": -126, 00:19:27.838 "message": "Required key not available" 00:19:27.838 } 00:19:27.838 08:18:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 1379864 00:19:27.838 08:18:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1379864 ']' 00:19:27.838 08:18:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1379864 00:19:27.838 08:18:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:27.838 08:18:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:27.838 08:18:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1379864 00:19:27.838 08:18:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:19:27.838 08:18:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:19:27.838 08:18:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1379864' 00:19:27.839 killing process with pid 1379864 
00:19:27.839 08:18:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1379864 00:19:27.839 Received shutdown signal, test time was about 10.000000 seconds 00:19:27.839 00:19:27.839 Latency(us) 00:19:27.839 [2024-11-28T07:18:10.108Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:27.839 [2024-11-28T07:18:10.108Z] =================================================================================================================== 00:19:27.839 [2024-11-28T07:18:10.108Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:27.839 08:18:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1379864 00:19:27.839 08:18:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:19:27.839 08:18:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:19:27.839 08:18:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:27.839 08:18:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:27.839 08:18:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:27.839 08:18:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # killprocess 1374875 00:19:27.839 08:18:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1374875 ']' 00:19:27.839 08:18:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1374875 00:19:27.839 08:18:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:28.098 08:18:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:28.098 08:18:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1374875 00:19:28.098 08:18:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- 
# process_name=reactor_1 00:19:28.098 08:18:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:19:28.098 08:18:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1374875' 00:19:28.098 killing process with pid 1374875 00:19:28.098 08:18:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1374875 00:19:28.098 08:18:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1374875 00:19:28.098 08:18:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:19:28.099 08:18:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:19:28.099 08:18:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:19:28.099 08:18:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:19:28.099 08:18:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:19:28.099 08:18:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=2 00:19:28.099 08:18:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:19:28.358 08:18:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:19:28.358 08:18:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # mktemp 00:19:28.358 08:18:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # key_long_path=/tmp/tmp.FsRSIrfvHn 00:19:28.358 08:18:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:19:28.358 08:18:10 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # chmod 0600 /tmp/tmp.FsRSIrfvHn 00:19:28.358 08:18:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@164 -- # nvmfappstart -m 0x2 00:19:28.358 08:18:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:28.358 08:18:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:28.358 08:18:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:28.358 08:18:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=1380091 00:19:28.358 08:18:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 1380091 00:19:28.358 08:18:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:28.358 08:18:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1380091 ']' 00:19:28.358 08:18:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:28.358 08:18:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:28.358 08:18:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:28.358 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:28.358 08:18:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:28.358 08:18:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:28.358 [2024-11-28 08:18:10.437976] Starting SPDK v25.01-pre git sha1 27aaaa748 / DPDK 24.03.0 initialization... 
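The format_interchange_psk step above (nvmf/common.sh format_key) produces the key_long value NVMeTLSkey-1:02:…: used for the rest of this run. A minimal sketch of that derivation, assuming the interchange payload is the configured PSK bytes followed by a little-endian CRC32 (the endianness of the appended CRC is an assumption here, not something this log confirms):

```python
import base64
import struct
import zlib

def format_interchange_psk(key: str, digest: int) -> str:
    # Sketch of SPDK's format_key helper: wrap base64(PSK || CRC32(PSK))
    # in the NVMeTLSkey-1:<digest>:...: frame. The "<I" (little-endian)
    # CRC append is an assumption about the interchange format.
    raw = key.encode()
    payload = raw + struct.pack("<I", zlib.crc32(raw))
    return "NVMeTLSkey-1:{:02x}:{}:".format(digest, base64.b64encode(payload).decode())

psk = format_interchange_psk("00112233445566778899aabbccddeeff0011223344556677", 2)
print(psk)
```

Note that the PSK material is the ASCII hex string itself, not its decoded bytes — which is why the base64 portion in the log begins with MDAx (base64 of "001"), and why the digest indicator 02 is carried verbatim into the prefix.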
00:19:28.358 [2024-11-28 08:18:10.438025] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:28.358 [2024-11-28 08:18:10.505397] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:28.358 [2024-11-28 08:18:10.546653] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:28.358 [2024-11-28 08:18:10.546691] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:28.358 [2024-11-28 08:18:10.546699] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:28.358 [2024-11-28 08:18:10.546706] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:28.358 [2024-11-28 08:18:10.546712] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:28.358 [2024-11-28 08:18:10.547305] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:28.621 08:18:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:28.621 08:18:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:28.621 08:18:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:28.621 08:18:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:28.621 08:18:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:28.621 08:18:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:28.621 08:18:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@166 -- # setup_nvmf_tgt /tmp/tmp.FsRSIrfvHn 00:19:28.621 08:18:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.FsRSIrfvHn 00:19:28.621 08:18:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:28.621 [2024-11-28 08:18:10.852353] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:28.621 08:18:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:28.878 08:18:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:19:29.136 [2024-11-28 08:18:11.217279] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:29.136 [2024-11-28 08:18:11.217481] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP 
Target Listening on 10.0.0.2 port 4420 *** 00:19:29.136 08:18:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:29.394 malloc0 00:19:29.394 08:18:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:29.394 08:18:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.FsRSIrfvHn 00:19:29.653 08:18:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:19:29.913 08:18:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@168 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.FsRSIrfvHn 00:19:29.913 08:18:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:29.913 08:18:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:29.913 08:18:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:29.913 08:18:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.FsRSIrfvHn 00:19:29.913 08:18:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:29.913 08:18:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:29.913 08:18:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1380466 00:19:29.913 08:18:11 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:29.913 08:18:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1380466 /var/tmp/bdevperf.sock 00:19:29.913 08:18:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1380466 ']' 00:19:29.913 08:18:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:29.913 08:18:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:29.913 08:18:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:29.913 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:29.913 08:18:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:29.913 08:18:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:29.913 [2024-11-28 08:18:11.989040] Starting SPDK v25.01-pre git sha1 27aaaa748 / DPDK 24.03.0 initialization... 
00:19:29.913 [2024-11-28 08:18:11.989090] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1380466 ] 00:19:29.913 [2024-11-28 08:18:12.048262] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:29.913 [2024-11-28 08:18:12.088788] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:29.913 08:18:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:29.913 08:18:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:29.913 08:18:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.FsRSIrfvHn 00:19:30.172 08:18:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:19:30.430 [2024-11-28 08:18:12.512852] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:30.430 TLSTESTn1 00:19:30.430 08:18:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:19:30.430 Running I/O for 10 seconds... 
00:19:32.745 5445.00 IOPS, 21.27 MiB/s [2024-11-28T07:18:15.956Z] 5456.50 IOPS, 21.31 MiB/s [2024-11-28T07:18:16.894Z] 5408.67 IOPS, 21.13 MiB/s [2024-11-28T07:18:17.831Z] 5445.75 IOPS, 21.27 MiB/s [2024-11-28T07:18:18.767Z] 5463.20 IOPS, 21.34 MiB/s [2024-11-28T07:18:19.704Z] 5461.83 IOPS, 21.34 MiB/s [2024-11-28T07:18:21.080Z] 5438.43 IOPS, 21.24 MiB/s [2024-11-28T07:18:22.016Z] 5444.38 IOPS, 21.27 MiB/s [2024-11-28T07:18:22.953Z] 5443.89 IOPS, 21.27 MiB/s [2024-11-28T07:18:22.953Z] 5444.70 IOPS, 21.27 MiB/s 00:19:40.684 Latency(us) 00:19:40.684 [2024-11-28T07:18:22.953Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:40.684 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:40.684 Verification LBA range: start 0x0 length 0x2000 00:19:40.684 TLSTESTn1 : 10.02 5447.96 21.28 0.00 0.00 23456.09 4900.95 22111.28 00:19:40.684 [2024-11-28T07:18:22.953Z] =================================================================================================================== 00:19:40.684 [2024-11-28T07:18:22.953Z] Total : 5447.96 21.28 0.00 0.00 23456.09 4900.95 22111.28 00:19:40.685 { 00:19:40.685 "results": [ 00:19:40.685 { 00:19:40.685 "job": "TLSTESTn1", 00:19:40.685 "core_mask": "0x4", 00:19:40.685 "workload": "verify", 00:19:40.685 "status": "finished", 00:19:40.685 "verify_range": { 00:19:40.685 "start": 0, 00:19:40.685 "length": 8192 00:19:40.685 }, 00:19:40.685 "queue_depth": 128, 00:19:40.685 "io_size": 4096, 00:19:40.685 "runtime": 10.017331, 00:19:40.685 "iops": 5447.9581437410825, 00:19:40.685 "mibps": 21.281086498988603, 00:19:40.685 "io_failed": 0, 00:19:40.685 "io_timeout": 0, 00:19:40.685 "avg_latency_us": 23456.085159838814, 00:19:40.685 "min_latency_us": 4900.953043478261, 00:19:40.685 "max_latency_us": 22111.27652173913 00:19:40.685 } 00:19:40.685 ], 00:19:40.685 "core_count": 1 00:19:40.685 } 00:19:40.685 08:18:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; 
exit 1' SIGINT SIGTERM EXIT 00:19:40.685 08:18:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 1380466 00:19:40.685 08:18:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1380466 ']' 00:19:40.685 08:18:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1380466 00:19:40.685 08:18:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:40.685 08:18:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:40.685 08:18:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1380466 00:19:40.685 08:18:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:19:40.685 08:18:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:19:40.685 08:18:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1380466' 00:19:40.685 killing process with pid 1380466 00:19:40.685 08:18:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1380466 00:19:40.685 Received shutdown signal, test time was about 10.000000 seconds 00:19:40.685 00:19:40.685 Latency(us) 00:19:40.685 [2024-11-28T07:18:22.954Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:40.685 [2024-11-28T07:18:22.954Z] =================================================================================================================== 00:19:40.685 [2024-11-28T07:18:22.954Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:40.685 08:18:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1380466 00:19:40.945 08:18:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # chmod 0666 /tmp/tmp.FsRSIrfvHn 00:19:40.945 08:18:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@172 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.FsRSIrfvHn 00:19:40.945 08:18:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:19:40.945 08:18:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.FsRSIrfvHn 00:19:40.945 08:18:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:19:40.945 08:18:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:40.945 08:18:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:19:40.945 08:18:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:40.945 08:18:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.FsRSIrfvHn 00:19:40.945 08:18:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:40.945 08:18:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:40.945 08:18:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:40.945 08:18:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.FsRSIrfvHn 00:19:40.945 08:18:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:40.945 08:18:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1382181 00:19:40.945 08:18:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:40.945 08:18:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:40.945 08:18:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1382181 /var/tmp/bdevperf.sock 00:19:40.945 08:18:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1382181 ']' 00:19:40.945 08:18:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:40.945 08:18:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:40.945 08:18:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:40.945 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:40.945 08:18:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:40.945 08:18:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:40.945 [2024-11-28 08:18:23.010929] Starting SPDK v25.01-pre git sha1 27aaaa748 / DPDK 24.03.0 initialization... 
00:19:40.945 [2024-11-28 08:18:23.010992] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1382181 ] 00:19:40.945 [2024-11-28 08:18:23.069732] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:40.945 [2024-11-28 08:18:23.107230] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:40.945 08:18:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:40.945 08:18:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:40.945 08:18:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.FsRSIrfvHn 00:19:41.204 [2024-11-28 08:18:23.374891] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.FsRSIrfvHn': 0100666 00:19:41.204 [2024-11-28 08:18:23.374921] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:19:41.204 request: 00:19:41.204 { 00:19:41.204 "name": "key0", 00:19:41.204 "path": "/tmp/tmp.FsRSIrfvHn", 00:19:41.204 "method": "keyring_file_add_key", 00:19:41.204 "req_id": 1 00:19:41.204 } 00:19:41.204 Got JSON-RPC error response 00:19:41.204 response: 00:19:41.204 { 00:19:41.204 "code": -1, 00:19:41.204 "message": "Operation not permitted" 00:19:41.204 } 00:19:41.204 08:18:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:19:41.463 [2024-11-28 08:18:23.563457] bdev_nvme_rpc.c: 
514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:41.463 [2024-11-28 08:18:23.563486] bdev_nvme.c:6722:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:19:41.463 request: 00:19:41.463 { 00:19:41.463 "name": "TLSTEST", 00:19:41.463 "trtype": "tcp", 00:19:41.463 "traddr": "10.0.0.2", 00:19:41.463 "adrfam": "ipv4", 00:19:41.463 "trsvcid": "4420", 00:19:41.463 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:41.463 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:41.463 "prchk_reftag": false, 00:19:41.463 "prchk_guard": false, 00:19:41.463 "hdgst": false, 00:19:41.463 "ddgst": false, 00:19:41.463 "psk": "key0", 00:19:41.463 "allow_unrecognized_csi": false, 00:19:41.463 "method": "bdev_nvme_attach_controller", 00:19:41.463 "req_id": 1 00:19:41.463 } 00:19:41.463 Got JSON-RPC error response 00:19:41.463 response: 00:19:41.463 { 00:19:41.463 "code": -126, 00:19:41.463 "message": "Required key not available" 00:19:41.463 } 00:19:41.463 08:18:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 1382181 00:19:41.463 08:18:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1382181 ']' 00:19:41.463 08:18:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1382181 00:19:41.463 08:18:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:41.463 08:18:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:41.463 08:18:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1382181 00:19:41.463 08:18:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:19:41.463 08:18:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:19:41.463 08:18:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 
'killing process with pid 1382181' 00:19:41.463 killing process with pid 1382181 00:19:41.463 08:18:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1382181 00:19:41.463 Received shutdown signal, test time was about 10.000000 seconds 00:19:41.463 00:19:41.463 Latency(us) 00:19:41.463 [2024-11-28T07:18:23.732Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:41.463 [2024-11-28T07:18:23.732Z] =================================================================================================================== 00:19:41.463 [2024-11-28T07:18:23.732Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:41.463 08:18:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1382181 00:19:41.723 08:18:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:19:41.723 08:18:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:19:41.723 08:18:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:41.723 08:18:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:41.723 08:18:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:41.723 08:18:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # killprocess 1380091 00:19:41.723 08:18:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1380091 ']' 00:19:41.723 08:18:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1380091 00:19:41.723 08:18:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:41.723 08:18:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:41.723 08:18:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1380091 00:19:41.723 
08:18:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:19:41.723 08:18:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:19:41.723 08:18:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1380091' 00:19:41.723 killing process with pid 1380091 00:19:41.723 08:18:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1380091 00:19:41.723 08:18:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1380091 00:19:41.982 08:18:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@176 -- # nvmfappstart -m 0x2 00:19:41.982 08:18:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:41.982 08:18:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:41.982 08:18:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:41.982 08:18:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=1382418 00:19:41.982 08:18:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:41.982 08:18:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 1382418 00:19:41.982 08:18:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1382418 ']' 00:19:41.982 08:18:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:41.982 08:18:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:41.982 08:18:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain 
socket /var/tmp/spdk.sock...' 00:19:41.982 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:41.982 08:18:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:41.982 08:18:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:41.982 [2024-11-28 08:18:24.065109] Starting SPDK v25.01-pre git sha1 27aaaa748 / DPDK 24.03.0 initialization... 00:19:41.982 [2024-11-28 08:18:24.065159] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:41.982 [2024-11-28 08:18:24.131071] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:41.982 [2024-11-28 08:18:24.169879] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:41.982 [2024-11-28 08:18:24.169912] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:41.982 [2024-11-28 08:18:24.169920] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:41.982 [2024-11-28 08:18:24.169927] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:41.982 [2024-11-28 08:18:24.169932] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
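The keyring_file_add_key failures in this run come from two validations in SPDK's keyring_file_check_path: the key path must be absolute (the empty-path attempt at tls.sh@33 fails with "Non-absolute paths are not allowed"), and the file must not be group/other accessible (the chmod 0666 attempts fail with "Invalid permissions for key file … 0100666"). A hypothetical re-implementation of that check for illustration — check_key_path is a made-up name, not SPDK's API:

```python
import os
import stat

def check_key_path(path: str) -> None:
    # Illustrative mirror of the two checks seen failing in the log:
    # 1) reject non-absolute paths outright,
    # 2) reject files with any group/other permission bits set
    #    (st_mode includes the file-type bits, hence 0100666 in the log).
    if not os.path.isabs(path):
        raise ValueError("Non-absolute paths are not allowed: %s" % path)
    mode = os.stat(path).st_mode
    if mode & (stat.S_IRWXG | stat.S_IRWXO):
        raise PermissionError(
            "Invalid permissions for key file '%s': 0%o" % (path, mode))
```

This is why the run chmods the temp key to 0600 before the successful bdevperf pass and to 0666 before the expected-failure pass: only owner-access modes survive the check, and the RPC surfaces the rejection as the "Operation not permitted" (-1) JSON-RPC error.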
00:19:41.982 [2024-11-28 08:18:24.170422] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:42.241 08:18:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:42.241 08:18:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:42.241 08:18:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:42.241 08:18:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:42.241 08:18:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:42.241 08:18:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:42.241 08:18:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@178 -- # NOT setup_nvmf_tgt /tmp/tmp.FsRSIrfvHn 00:19:42.241 08:18:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:19:42.241 08:18:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.FsRSIrfvHn 00:19:42.241 08:18:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=setup_nvmf_tgt 00:19:42.241 08:18:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:42.241 08:18:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t setup_nvmf_tgt 00:19:42.241 08:18:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:42.241 08:18:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # setup_nvmf_tgt /tmp/tmp.FsRSIrfvHn 00:19:42.241 08:18:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.FsRSIrfvHn 00:19:42.241 08:18:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:42.241 [2024-11-28 08:18:24.475002] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:42.241 08:18:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:42.500 08:18:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:19:42.759 [2024-11-28 08:18:24.855988] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:42.759 [2024-11-28 08:18:24.856197] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:42.759 08:18:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:43.016 malloc0 00:19:43.016 08:18:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:43.016 08:18:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.FsRSIrfvHn 00:19:43.275 [2024-11-28 08:18:25.413513] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.FsRSIrfvHn': 0100666 00:19:43.275 [2024-11-28 08:18:25.413538] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:19:43.275 request: 00:19:43.275 { 00:19:43.275 "name": "key0", 00:19:43.275 "path": "/tmp/tmp.FsRSIrfvHn", 00:19:43.275 "method": "keyring_file_add_key", 00:19:43.275 "req_id": 1 
00:19:43.275 } 00:19:43.275 Got JSON-RPC error response 00:19:43.275 response: 00:19:43.275 { 00:19:43.275 "code": -1, 00:19:43.275 "message": "Operation not permitted" 00:19:43.275 } 00:19:43.275 08:18:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:19:43.534 [2024-11-28 08:18:25.606037] tcp.c:3792:nvmf_tcp_subsystem_add_host: *ERROR*: Key 'key0' does not exist 00:19:43.534 [2024-11-28 08:18:25.606067] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:19:43.534 request: 00:19:43.534 { 00:19:43.534 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:43.534 "host": "nqn.2016-06.io.spdk:host1", 00:19:43.534 "psk": "key0", 00:19:43.534 "method": "nvmf_subsystem_add_host", 00:19:43.534 "req_id": 1 00:19:43.534 } 00:19:43.534 Got JSON-RPC error response 00:19:43.534 response: 00:19:43.534 { 00:19:43.534 "code": -32603, 00:19:43.534 "message": "Internal error" 00:19:43.534 } 00:19:43.534 08:18:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:19:43.534 08:18:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:43.534 08:18:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:43.534 08:18:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:43.534 08:18:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # killprocess 1382418 00:19:43.534 08:18:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1382418 ']' 00:19:43.534 08:18:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1382418 00:19:43.534 08:18:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:43.534 08:18:25 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:43.534 08:18:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1382418 00:19:43.534 08:18:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:19:43.534 08:18:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:19:43.534 08:18:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1382418' 00:19:43.534 killing process with pid 1382418 00:19:43.534 08:18:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1382418 00:19:43.534 08:18:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1382418 00:19:43.794 08:18:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@182 -- # chmod 0600 /tmp/tmp.FsRSIrfvHn 00:19:43.794 08:18:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # nvmfappstart -m 0x2 00:19:43.795 08:18:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:43.795 08:18:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:43.795 08:18:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:43.795 08:18:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:43.795 08:18:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=1382704 00:19:43.795 08:18:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 1382704 00:19:43.795 08:18:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1382704 ']' 00:19:43.795 08:18:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:43.795 08:18:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:43.795 08:18:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:43.795 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:43.795 08:18:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:43.795 08:18:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:43.795 [2024-11-28 08:18:25.900829] Starting SPDK v25.01-pre git sha1 27aaaa748 / DPDK 24.03.0 initialization... 00:19:43.795 [2024-11-28 08:18:25.900875] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:43.795 [2024-11-28 08:18:25.968187] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:43.795 [2024-11-28 08:18:26.009102] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:43.795 [2024-11-28 08:18:26.009139] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:43.795 [2024-11-28 08:18:26.009147] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:43.795 [2024-11-28 08:18:26.009153] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:43.795 [2024-11-28 08:18:26.009158] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
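The earlier `keyring_file_add_key` failure ("Invalid permissions for key file ... 0100666" / "Operation not permitted") comes from SPDK's keyring refusing PSK files that are group- or world-readable; the test then runs `chmod 0600` on the key and the retry below succeeds. A minimal standalone sketch of that permission gate (illustrative only — this is not SPDK's actual check, and the key path is a temp file, not the test's key):

```shell
#!/bin/sh
# Sketch of the owner-only (0600) permission gate applied to PSK key files.
key=$(mktemp /tmp/psk.XXXXXX)

chmod 666 "$key"                      # world-readable: would be rejected
mode=$(stat -c %a "$key")
[ "$mode" = "600" ] || echo "Invalid permissions for key file '$key': 0$mode"

chmod 600 "$key"                      # owner-only: would be accepted
stat -c %a "$key"                     # prints 600

rm -f "$key"
```

Running it prints the simulated rejection line for the 0666 mode, then `600` once the file is locked down, mirroring the error-then-retry sequence in the log.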
00:19:43.795 [2024-11-28 08:18:26.009740] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:44.054 08:18:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:44.054 08:18:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:44.055 08:18:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:44.055 08:18:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:44.055 08:18:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:44.055 08:18:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:44.055 08:18:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@186 -- # setup_nvmf_tgt /tmp/tmp.FsRSIrfvHn 00:19:44.055 08:18:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.FsRSIrfvHn 00:19:44.055 08:18:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:44.055 [2024-11-28 08:18:26.315788] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:44.314 08:18:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:44.314 08:18:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:19:44.573 [2024-11-28 08:18:26.692740] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:44.573 [2024-11-28 08:18:26.692937] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP 
Target Listening on 10.0.0.2 port 4420 *** 00:19:44.573 08:18:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:44.831 malloc0 00:19:44.831 08:18:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:44.831 08:18:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.FsRSIrfvHn 00:19:45.091 08:18:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:19:45.351 08:18:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@189 -- # bdevperf_pid=1383070 00:19:45.351 08:18:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:45.351 08:18:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:45.351 08:18:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # waitforlisten 1383070 /var/tmp/bdevperf.sock 00:19:45.351 08:18:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1383070 ']' 00:19:45.351 08:18:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:45.351 08:18:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:45.351 08:18:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/bdevperf.sock...' 00:19:45.351 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:45.351 08:18:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:45.351 08:18:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:45.351 [2024-11-28 08:18:27.497403] Starting SPDK v25.01-pre git sha1 27aaaa748 / DPDK 24.03.0 initialization... 00:19:45.351 [2024-11-28 08:18:27.497456] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1383070 ] 00:19:45.351 [2024-11-28 08:18:27.556297] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:45.351 [2024-11-28 08:18:27.597215] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:45.611 08:18:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:45.611 08:18:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:45.611 08:18:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@193 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.FsRSIrfvHn 00:19:45.611 08:18:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@194 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:19:45.870 [2024-11-28 08:18:28.025391] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:45.870 TLSTESTn1 00:19:45.870 08:18:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:19:46.130 08:18:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # tgtconf='{ 00:19:46.130 "subsystems": [ 00:19:46.130 { 00:19:46.130 "subsystem": "keyring", 00:19:46.130 "config": [ 00:19:46.130 { 00:19:46.130 "method": "keyring_file_add_key", 00:19:46.130 "params": { 00:19:46.130 "name": "key0", 00:19:46.130 "path": "/tmp/tmp.FsRSIrfvHn" 00:19:46.130 } 00:19:46.130 } 00:19:46.130 ] 00:19:46.130 }, 00:19:46.130 { 00:19:46.130 "subsystem": "iobuf", 00:19:46.130 "config": [ 00:19:46.130 { 00:19:46.130 "method": "iobuf_set_options", 00:19:46.130 "params": { 00:19:46.130 "small_pool_count": 8192, 00:19:46.130 "large_pool_count": 1024, 00:19:46.130 "small_bufsize": 8192, 00:19:46.130 "large_bufsize": 135168, 00:19:46.130 "enable_numa": false 00:19:46.130 } 00:19:46.130 } 00:19:46.130 ] 00:19:46.130 }, 00:19:46.130 { 00:19:46.130 "subsystem": "sock", 00:19:46.130 "config": [ 00:19:46.130 { 00:19:46.130 "method": "sock_set_default_impl", 00:19:46.130 "params": { 00:19:46.130 "impl_name": "posix" 00:19:46.130 } 00:19:46.130 }, 00:19:46.130 { 00:19:46.130 "method": "sock_impl_set_options", 00:19:46.130 "params": { 00:19:46.130 "impl_name": "ssl", 00:19:46.130 "recv_buf_size": 4096, 00:19:46.130 "send_buf_size": 4096, 00:19:46.130 "enable_recv_pipe": true, 00:19:46.130 "enable_quickack": false, 00:19:46.130 "enable_placement_id": 0, 00:19:46.130 "enable_zerocopy_send_server": true, 00:19:46.130 "enable_zerocopy_send_client": false, 00:19:46.130 "zerocopy_threshold": 0, 00:19:46.130 "tls_version": 0, 00:19:46.130 "enable_ktls": false 00:19:46.130 } 00:19:46.130 }, 00:19:46.130 { 00:19:46.130 "method": "sock_impl_set_options", 00:19:46.130 "params": { 00:19:46.130 "impl_name": "posix", 00:19:46.130 "recv_buf_size": 2097152, 00:19:46.130 "send_buf_size": 2097152, 00:19:46.130 "enable_recv_pipe": true, 00:19:46.130 "enable_quickack": false, 00:19:46.130 "enable_placement_id": 0, 
00:19:46.130 "enable_zerocopy_send_server": true, 00:19:46.130 "enable_zerocopy_send_client": false, 00:19:46.130 "zerocopy_threshold": 0, 00:19:46.130 "tls_version": 0, 00:19:46.130 "enable_ktls": false 00:19:46.130 } 00:19:46.130 } 00:19:46.130 ] 00:19:46.130 }, 00:19:46.130 { 00:19:46.130 "subsystem": "vmd", 00:19:46.130 "config": [] 00:19:46.130 }, 00:19:46.130 { 00:19:46.130 "subsystem": "accel", 00:19:46.130 "config": [ 00:19:46.130 { 00:19:46.130 "method": "accel_set_options", 00:19:46.130 "params": { 00:19:46.130 "small_cache_size": 128, 00:19:46.130 "large_cache_size": 16, 00:19:46.130 "task_count": 2048, 00:19:46.130 "sequence_count": 2048, 00:19:46.130 "buf_count": 2048 00:19:46.130 } 00:19:46.130 } 00:19:46.130 ] 00:19:46.130 }, 00:19:46.130 { 00:19:46.130 "subsystem": "bdev", 00:19:46.130 "config": [ 00:19:46.130 { 00:19:46.130 "method": "bdev_set_options", 00:19:46.130 "params": { 00:19:46.130 "bdev_io_pool_size": 65535, 00:19:46.130 "bdev_io_cache_size": 256, 00:19:46.130 "bdev_auto_examine": true, 00:19:46.130 "iobuf_small_cache_size": 128, 00:19:46.130 "iobuf_large_cache_size": 16 00:19:46.130 } 00:19:46.130 }, 00:19:46.130 { 00:19:46.130 "method": "bdev_raid_set_options", 00:19:46.130 "params": { 00:19:46.130 "process_window_size_kb": 1024, 00:19:46.130 "process_max_bandwidth_mb_sec": 0 00:19:46.130 } 00:19:46.130 }, 00:19:46.130 { 00:19:46.130 "method": "bdev_iscsi_set_options", 00:19:46.130 "params": { 00:19:46.130 "timeout_sec": 30 00:19:46.130 } 00:19:46.130 }, 00:19:46.130 { 00:19:46.130 "method": "bdev_nvme_set_options", 00:19:46.130 "params": { 00:19:46.130 "action_on_timeout": "none", 00:19:46.130 "timeout_us": 0, 00:19:46.130 "timeout_admin_us": 0, 00:19:46.130 "keep_alive_timeout_ms": 10000, 00:19:46.130 "arbitration_burst": 0, 00:19:46.130 "low_priority_weight": 0, 00:19:46.130 "medium_priority_weight": 0, 00:19:46.130 "high_priority_weight": 0, 00:19:46.130 "nvme_adminq_poll_period_us": 10000, 00:19:46.130 "nvme_ioq_poll_period_us": 0, 
00:19:46.130 "io_queue_requests": 0, 00:19:46.130 "delay_cmd_submit": true, 00:19:46.130 "transport_retry_count": 4, 00:19:46.130 "bdev_retry_count": 3, 00:19:46.130 "transport_ack_timeout": 0, 00:19:46.130 "ctrlr_loss_timeout_sec": 0, 00:19:46.130 "reconnect_delay_sec": 0, 00:19:46.130 "fast_io_fail_timeout_sec": 0, 00:19:46.130 "disable_auto_failback": false, 00:19:46.130 "generate_uuids": false, 00:19:46.130 "transport_tos": 0, 00:19:46.130 "nvme_error_stat": false, 00:19:46.130 "rdma_srq_size": 0, 00:19:46.130 "io_path_stat": false, 00:19:46.130 "allow_accel_sequence": false, 00:19:46.130 "rdma_max_cq_size": 0, 00:19:46.130 "rdma_cm_event_timeout_ms": 0, 00:19:46.130 "dhchap_digests": [ 00:19:46.130 "sha256", 00:19:46.131 "sha384", 00:19:46.131 "sha512" 00:19:46.131 ], 00:19:46.131 "dhchap_dhgroups": [ 00:19:46.131 "null", 00:19:46.131 "ffdhe2048", 00:19:46.131 "ffdhe3072", 00:19:46.131 "ffdhe4096", 00:19:46.131 "ffdhe6144", 00:19:46.131 "ffdhe8192" 00:19:46.131 ] 00:19:46.131 } 00:19:46.131 }, 00:19:46.131 { 00:19:46.131 "method": "bdev_nvme_set_hotplug", 00:19:46.131 "params": { 00:19:46.131 "period_us": 100000, 00:19:46.131 "enable": false 00:19:46.131 } 00:19:46.131 }, 00:19:46.131 { 00:19:46.131 "method": "bdev_malloc_create", 00:19:46.131 "params": { 00:19:46.131 "name": "malloc0", 00:19:46.131 "num_blocks": 8192, 00:19:46.131 "block_size": 4096, 00:19:46.131 "physical_block_size": 4096, 00:19:46.131 "uuid": "f9f097ae-12a5-416f-8fb3-5dc3b3dfdb8a", 00:19:46.131 "optimal_io_boundary": 0, 00:19:46.131 "md_size": 0, 00:19:46.131 "dif_type": 0, 00:19:46.131 "dif_is_head_of_md": false, 00:19:46.131 "dif_pi_format": 0 00:19:46.131 } 00:19:46.131 }, 00:19:46.131 { 00:19:46.131 "method": "bdev_wait_for_examine" 00:19:46.131 } 00:19:46.131 ] 00:19:46.131 }, 00:19:46.131 { 00:19:46.131 "subsystem": "nbd", 00:19:46.131 "config": [] 00:19:46.131 }, 00:19:46.131 { 00:19:46.131 "subsystem": "scheduler", 00:19:46.131 "config": [ 00:19:46.131 { 00:19:46.131 "method": 
"framework_set_scheduler", 00:19:46.131 "params": { 00:19:46.131 "name": "static" 00:19:46.131 } 00:19:46.131 } 00:19:46.131 ] 00:19:46.131 }, 00:19:46.131 { 00:19:46.131 "subsystem": "nvmf", 00:19:46.131 "config": [ 00:19:46.131 { 00:19:46.131 "method": "nvmf_set_config", 00:19:46.131 "params": { 00:19:46.131 "discovery_filter": "match_any", 00:19:46.131 "admin_cmd_passthru": { 00:19:46.131 "identify_ctrlr": false 00:19:46.131 }, 00:19:46.131 "dhchap_digests": [ 00:19:46.131 "sha256", 00:19:46.131 "sha384", 00:19:46.131 "sha512" 00:19:46.131 ], 00:19:46.131 "dhchap_dhgroups": [ 00:19:46.131 "null", 00:19:46.131 "ffdhe2048", 00:19:46.131 "ffdhe3072", 00:19:46.131 "ffdhe4096", 00:19:46.131 "ffdhe6144", 00:19:46.131 "ffdhe8192" 00:19:46.131 ] 00:19:46.131 } 00:19:46.131 }, 00:19:46.131 { 00:19:46.131 "method": "nvmf_set_max_subsystems", 00:19:46.131 "params": { 00:19:46.131 "max_subsystems": 1024 00:19:46.131 } 00:19:46.131 }, 00:19:46.131 { 00:19:46.131 "method": "nvmf_set_crdt", 00:19:46.131 "params": { 00:19:46.131 "crdt1": 0, 00:19:46.131 "crdt2": 0, 00:19:46.131 "crdt3": 0 00:19:46.131 } 00:19:46.131 }, 00:19:46.131 { 00:19:46.131 "method": "nvmf_create_transport", 00:19:46.131 "params": { 00:19:46.131 "trtype": "TCP", 00:19:46.131 "max_queue_depth": 128, 00:19:46.131 "max_io_qpairs_per_ctrlr": 127, 00:19:46.131 "in_capsule_data_size": 4096, 00:19:46.131 "max_io_size": 131072, 00:19:46.131 "io_unit_size": 131072, 00:19:46.131 "max_aq_depth": 128, 00:19:46.131 "num_shared_buffers": 511, 00:19:46.131 "buf_cache_size": 4294967295, 00:19:46.131 "dif_insert_or_strip": false, 00:19:46.131 "zcopy": false, 00:19:46.131 "c2h_success": false, 00:19:46.131 "sock_priority": 0, 00:19:46.131 "abort_timeout_sec": 1, 00:19:46.131 "ack_timeout": 0, 00:19:46.131 "data_wr_pool_size": 0 00:19:46.131 } 00:19:46.131 }, 00:19:46.131 { 00:19:46.131 "method": "nvmf_create_subsystem", 00:19:46.131 "params": { 00:19:46.131 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:46.131 
"allow_any_host": false, 00:19:46.131 "serial_number": "SPDK00000000000001", 00:19:46.131 "model_number": "SPDK bdev Controller", 00:19:46.131 "max_namespaces": 10, 00:19:46.131 "min_cntlid": 1, 00:19:46.131 "max_cntlid": 65519, 00:19:46.131 "ana_reporting": false 00:19:46.131 } 00:19:46.131 }, 00:19:46.131 { 00:19:46.131 "method": "nvmf_subsystem_add_host", 00:19:46.131 "params": { 00:19:46.131 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:46.131 "host": "nqn.2016-06.io.spdk:host1", 00:19:46.131 "psk": "key0" 00:19:46.131 } 00:19:46.131 }, 00:19:46.131 { 00:19:46.131 "method": "nvmf_subsystem_add_ns", 00:19:46.131 "params": { 00:19:46.131 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:46.131 "namespace": { 00:19:46.131 "nsid": 1, 00:19:46.131 "bdev_name": "malloc0", 00:19:46.131 "nguid": "F9F097AE12A5416F8FB35DC3B3DFDB8A", 00:19:46.131 "uuid": "f9f097ae-12a5-416f-8fb3-5dc3b3dfdb8a", 00:19:46.131 "no_auto_visible": false 00:19:46.131 } 00:19:46.131 } 00:19:46.131 }, 00:19:46.131 { 00:19:46.131 "method": "nvmf_subsystem_add_listener", 00:19:46.131 "params": { 00:19:46.131 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:46.131 "listen_address": { 00:19:46.131 "trtype": "TCP", 00:19:46.131 "adrfam": "IPv4", 00:19:46.131 "traddr": "10.0.0.2", 00:19:46.131 "trsvcid": "4420" 00:19:46.131 }, 00:19:46.131 "secure_channel": true 00:19:46.131 } 00:19:46.131 } 00:19:46.131 ] 00:19:46.131 } 00:19:46.131 ] 00:19:46.131 }' 00:19:46.131 08:18:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:19:46.390 08:18:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # bdevperfconf='{ 00:19:46.390 "subsystems": [ 00:19:46.390 { 00:19:46.390 "subsystem": "keyring", 00:19:46.390 "config": [ 00:19:46.390 { 00:19:46.390 "method": "keyring_file_add_key", 00:19:46.390 "params": { 00:19:46.390 "name": "key0", 00:19:46.390 "path": "/tmp/tmp.FsRSIrfvHn" 00:19:46.390 } 
00:19:46.390 } 00:19:46.390 ] 00:19:46.390 }, 00:19:46.390 { 00:19:46.390 "subsystem": "iobuf", 00:19:46.390 "config": [ 00:19:46.390 { 00:19:46.390 "method": "iobuf_set_options", 00:19:46.390 "params": { 00:19:46.390 "small_pool_count": 8192, 00:19:46.390 "large_pool_count": 1024, 00:19:46.390 "small_bufsize": 8192, 00:19:46.390 "large_bufsize": 135168, 00:19:46.390 "enable_numa": false 00:19:46.390 } 00:19:46.390 } 00:19:46.390 ] 00:19:46.390 }, 00:19:46.390 { 00:19:46.390 "subsystem": "sock", 00:19:46.390 "config": [ 00:19:46.390 { 00:19:46.390 "method": "sock_set_default_impl", 00:19:46.390 "params": { 00:19:46.390 "impl_name": "posix" 00:19:46.390 } 00:19:46.390 }, 00:19:46.390 { 00:19:46.390 "method": "sock_impl_set_options", 00:19:46.391 "params": { 00:19:46.391 "impl_name": "ssl", 00:19:46.391 "recv_buf_size": 4096, 00:19:46.391 "send_buf_size": 4096, 00:19:46.391 "enable_recv_pipe": true, 00:19:46.391 "enable_quickack": false, 00:19:46.391 "enable_placement_id": 0, 00:19:46.391 "enable_zerocopy_send_server": true, 00:19:46.391 "enable_zerocopy_send_client": false, 00:19:46.391 "zerocopy_threshold": 0, 00:19:46.391 "tls_version": 0, 00:19:46.391 "enable_ktls": false 00:19:46.391 } 00:19:46.391 }, 00:19:46.391 { 00:19:46.391 "method": "sock_impl_set_options", 00:19:46.391 "params": { 00:19:46.391 "impl_name": "posix", 00:19:46.391 "recv_buf_size": 2097152, 00:19:46.391 "send_buf_size": 2097152, 00:19:46.391 "enable_recv_pipe": true, 00:19:46.391 "enable_quickack": false, 00:19:46.391 "enable_placement_id": 0, 00:19:46.391 "enable_zerocopy_send_server": true, 00:19:46.391 "enable_zerocopy_send_client": false, 00:19:46.391 "zerocopy_threshold": 0, 00:19:46.391 "tls_version": 0, 00:19:46.391 "enable_ktls": false 00:19:46.391 } 00:19:46.391 } 00:19:46.391 ] 00:19:46.391 }, 00:19:46.391 { 00:19:46.391 "subsystem": "vmd", 00:19:46.391 "config": [] 00:19:46.391 }, 00:19:46.391 { 00:19:46.391 "subsystem": "accel", 00:19:46.391 "config": [ 00:19:46.391 { 00:19:46.391 
"method": "accel_set_options", 00:19:46.391 "params": { 00:19:46.391 "small_cache_size": 128, 00:19:46.391 "large_cache_size": 16, 00:19:46.391 "task_count": 2048, 00:19:46.391 "sequence_count": 2048, 00:19:46.391 "buf_count": 2048 00:19:46.391 } 00:19:46.391 } 00:19:46.391 ] 00:19:46.391 }, 00:19:46.391 { 00:19:46.391 "subsystem": "bdev", 00:19:46.391 "config": [ 00:19:46.391 { 00:19:46.391 "method": "bdev_set_options", 00:19:46.391 "params": { 00:19:46.391 "bdev_io_pool_size": 65535, 00:19:46.391 "bdev_io_cache_size": 256, 00:19:46.391 "bdev_auto_examine": true, 00:19:46.391 "iobuf_small_cache_size": 128, 00:19:46.391 "iobuf_large_cache_size": 16 00:19:46.391 } 00:19:46.391 }, 00:19:46.391 { 00:19:46.391 "method": "bdev_raid_set_options", 00:19:46.391 "params": { 00:19:46.391 "process_window_size_kb": 1024, 00:19:46.391 "process_max_bandwidth_mb_sec": 0 00:19:46.391 } 00:19:46.391 }, 00:19:46.391 { 00:19:46.391 "method": "bdev_iscsi_set_options", 00:19:46.391 "params": { 00:19:46.391 "timeout_sec": 30 00:19:46.391 } 00:19:46.391 }, 00:19:46.391 { 00:19:46.391 "method": "bdev_nvme_set_options", 00:19:46.391 "params": { 00:19:46.391 "action_on_timeout": "none", 00:19:46.391 "timeout_us": 0, 00:19:46.391 "timeout_admin_us": 0, 00:19:46.391 "keep_alive_timeout_ms": 10000, 00:19:46.391 "arbitration_burst": 0, 00:19:46.391 "low_priority_weight": 0, 00:19:46.391 "medium_priority_weight": 0, 00:19:46.391 "high_priority_weight": 0, 00:19:46.391 "nvme_adminq_poll_period_us": 10000, 00:19:46.391 "nvme_ioq_poll_period_us": 0, 00:19:46.391 "io_queue_requests": 512, 00:19:46.391 "delay_cmd_submit": true, 00:19:46.391 "transport_retry_count": 4, 00:19:46.391 "bdev_retry_count": 3, 00:19:46.391 "transport_ack_timeout": 0, 00:19:46.391 "ctrlr_loss_timeout_sec": 0, 00:19:46.391 "reconnect_delay_sec": 0, 00:19:46.391 "fast_io_fail_timeout_sec": 0, 00:19:46.391 "disable_auto_failback": false, 00:19:46.391 "generate_uuids": false, 00:19:46.391 "transport_tos": 0, 00:19:46.391 
"nvme_error_stat": false, 00:19:46.391 "rdma_srq_size": 0, 00:19:46.391 "io_path_stat": false, 00:19:46.391 "allow_accel_sequence": false, 00:19:46.391 "rdma_max_cq_size": 0, 00:19:46.391 "rdma_cm_event_timeout_ms": 0, 00:19:46.391 "dhchap_digests": [ 00:19:46.391 "sha256", 00:19:46.391 "sha384", 00:19:46.391 "sha512" 00:19:46.391 ], 00:19:46.391 "dhchap_dhgroups": [ 00:19:46.391 "null", 00:19:46.391 "ffdhe2048", 00:19:46.391 "ffdhe3072", 00:19:46.391 "ffdhe4096", 00:19:46.391 "ffdhe6144", 00:19:46.391 "ffdhe8192" 00:19:46.391 ] 00:19:46.391 } 00:19:46.391 }, 00:19:46.391 { 00:19:46.391 "method": "bdev_nvme_attach_controller", 00:19:46.391 "params": { 00:19:46.391 "name": "TLSTEST", 00:19:46.391 "trtype": "TCP", 00:19:46.391 "adrfam": "IPv4", 00:19:46.391 "traddr": "10.0.0.2", 00:19:46.391 "trsvcid": "4420", 00:19:46.391 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:46.391 "prchk_reftag": false, 00:19:46.391 "prchk_guard": false, 00:19:46.391 "ctrlr_loss_timeout_sec": 0, 00:19:46.391 "reconnect_delay_sec": 0, 00:19:46.391 "fast_io_fail_timeout_sec": 0, 00:19:46.391 "psk": "key0", 00:19:46.391 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:46.391 "hdgst": false, 00:19:46.391 "ddgst": false, 00:19:46.391 "multipath": "multipath" 00:19:46.391 } 00:19:46.391 }, 00:19:46.391 { 00:19:46.391 "method": "bdev_nvme_set_hotplug", 00:19:46.391 "params": { 00:19:46.391 "period_us": 100000, 00:19:46.391 "enable": false 00:19:46.391 } 00:19:46.391 }, 00:19:46.391 { 00:19:46.391 "method": "bdev_wait_for_examine" 00:19:46.391 } 00:19:46.391 ] 00:19:46.391 }, 00:19:46.391 { 00:19:46.391 "subsystem": "nbd", 00:19:46.391 "config": [] 00:19:46.391 } 00:19:46.391 ] 00:19:46.391 }' 00:19:46.391 08:18:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@201 -- # killprocess 1383070 00:19:46.391 08:18:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1383070 ']' 00:19:46.391 08:18:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- 
# kill -0 1383070 00:19:46.391 08:18:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:46.391 08:18:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:46.391 08:18:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1383070 00:19:46.651 08:18:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:19:46.651 08:18:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:19:46.651 08:18:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1383070' 00:19:46.651 killing process with pid 1383070 00:19:46.651 08:18:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1383070 00:19:46.651 Received shutdown signal, test time was about 10.000000 seconds 00:19:46.651 00:19:46.651 Latency(us) 00:19:46.651 [2024-11-28T07:18:28.920Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:46.651 [2024-11-28T07:18:28.920Z] =================================================================================================================== 00:19:46.651 [2024-11-28T07:18:28.920Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:46.651 08:18:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1383070 00:19:46.651 08:18:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@202 -- # killprocess 1382704 00:19:46.651 08:18:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1382704 ']' 00:19:46.651 08:18:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1382704 00:19:46.651 08:18:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:46.651 08:18:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:46.651 08:18:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1382704 00:19:46.651 08:18:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:19:46.651 08:18:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:19:46.651 08:18:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1382704' 00:19:46.651 killing process with pid 1382704 00:19:46.651 08:18:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1382704 00:19:46.651 08:18:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1382704 00:19:46.912 08:18:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:19:46.912 08:18:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:46.912 08:18:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:46.912 08:18:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # echo '{ 00:19:46.912 "subsystems": [ 00:19:46.912 { 00:19:46.912 "subsystem": "keyring", 00:19:46.912 "config": [ 00:19:46.912 { 00:19:46.912 "method": "keyring_file_add_key", 00:19:46.912 "params": { 00:19:46.912 "name": "key0", 00:19:46.912 "path": "/tmp/tmp.FsRSIrfvHn" 00:19:46.912 } 00:19:46.912 } 00:19:46.912 ] 00:19:46.912 }, 00:19:46.912 { 00:19:46.912 "subsystem": "iobuf", 00:19:46.912 "config": [ 00:19:46.912 { 00:19:46.912 "method": "iobuf_set_options", 00:19:46.912 "params": { 00:19:46.912 "small_pool_count": 8192, 00:19:46.912 "large_pool_count": 1024, 00:19:46.912 "small_bufsize": 8192, 00:19:46.912 "large_bufsize": 135168, 00:19:46.912 "enable_numa": false 00:19:46.912 } 00:19:46.912 } 00:19:46.912 ] 00:19:46.912 }, 
00:19:46.912 { 00:19:46.912 "subsystem": "sock", 00:19:46.912 "config": [ 00:19:46.912 { 00:19:46.912 "method": "sock_set_default_impl", 00:19:46.912 "params": { 00:19:46.912 "impl_name": "posix" 00:19:46.912 } 00:19:46.912 }, 00:19:46.912 { 00:19:46.912 "method": "sock_impl_set_options", 00:19:46.912 "params": { 00:19:46.912 "impl_name": "ssl", 00:19:46.912 "recv_buf_size": 4096, 00:19:46.912 "send_buf_size": 4096, 00:19:46.912 "enable_recv_pipe": true, 00:19:46.912 "enable_quickack": false, 00:19:46.912 "enable_placement_id": 0, 00:19:46.912 "enable_zerocopy_send_server": true, 00:19:46.912 "enable_zerocopy_send_client": false, 00:19:46.912 "zerocopy_threshold": 0, 00:19:46.912 "tls_version": 0, 00:19:46.912 "enable_ktls": false 00:19:46.912 } 00:19:46.912 }, 00:19:46.912 { 00:19:46.912 "method": "sock_impl_set_options", 00:19:46.912 "params": { 00:19:46.912 "impl_name": "posix", 00:19:46.912 "recv_buf_size": 2097152, 00:19:46.912 "send_buf_size": 2097152, 00:19:46.912 "enable_recv_pipe": true, 00:19:46.912 "enable_quickack": false, 00:19:46.912 "enable_placement_id": 0, 00:19:46.912 "enable_zerocopy_send_server": true, 00:19:46.912 "enable_zerocopy_send_client": false, 00:19:46.912 "zerocopy_threshold": 0, 00:19:46.912 "tls_version": 0, 00:19:46.912 "enable_ktls": false 00:19:46.912 } 00:19:46.912 } 00:19:46.912 ] 00:19:46.912 }, 00:19:46.912 { 00:19:46.912 "subsystem": "vmd", 00:19:46.912 "config": [] 00:19:46.912 }, 00:19:46.912 { 00:19:46.912 "subsystem": "accel", 00:19:46.912 "config": [ 00:19:46.912 { 00:19:46.912 "method": "accel_set_options", 00:19:46.912 "params": { 00:19:46.912 "small_cache_size": 128, 00:19:46.912 "large_cache_size": 16, 00:19:46.912 "task_count": 2048, 00:19:46.912 "sequence_count": 2048, 00:19:46.912 "buf_count": 2048 00:19:46.912 } 00:19:46.912 } 00:19:46.912 ] 00:19:46.912 }, 00:19:46.912 { 00:19:46.912 "subsystem": "bdev", 00:19:46.912 "config": [ 00:19:46.912 { 00:19:46.912 "method": "bdev_set_options", 00:19:46.912 "params": { 
00:19:46.912 "bdev_io_pool_size": 65535, 00:19:46.912 "bdev_io_cache_size": 256, 00:19:46.912 "bdev_auto_examine": true, 00:19:46.913 "iobuf_small_cache_size": 128, 00:19:46.913 "iobuf_large_cache_size": 16 00:19:46.913 } 00:19:46.913 }, 00:19:46.913 { 00:19:46.913 "method": "bdev_raid_set_options", 00:19:46.913 "params": { 00:19:46.913 "process_window_size_kb": 1024, 00:19:46.913 "process_max_bandwidth_mb_sec": 0 00:19:46.913 } 00:19:46.913 }, 00:19:46.913 { 00:19:46.913 "method": "bdev_iscsi_set_options", 00:19:46.913 "params": { 00:19:46.913 "timeout_sec": 30 00:19:46.913 } 00:19:46.913 }, 00:19:46.913 { 00:19:46.913 "method": "bdev_nvme_set_options", 00:19:46.913 "params": { 00:19:46.913 "action_on_timeout": "none", 00:19:46.913 "timeout_us": 0, 00:19:46.913 "timeout_admin_us": 0, 00:19:46.913 "keep_alive_timeout_ms": 10000, 00:19:46.913 "arbitration_burst": 0, 00:19:46.913 "low_priority_weight": 0, 00:19:46.913 "medium_priority_weight": 0, 00:19:46.913 "high_priority_weight": 0, 00:19:46.913 "nvme_adminq_poll_period_us": 10000, 00:19:46.913 "nvme_ioq_poll_period_us": 0, 00:19:46.913 "io_queue_requests": 0, 00:19:46.913 "delay_cmd_submit": true, 00:19:46.913 "transport_retry_count": 4, 00:19:46.913 "bdev_retry_count": 3, 00:19:46.913 "transport_ack_timeout": 0, 00:19:46.913 "ctrlr_loss_timeout_sec": 0, 00:19:46.913 "reconnect_delay_sec": 0, 00:19:46.913 "fast_io_fail_timeout_sec": 0, 00:19:46.913 "disable_auto_failback": false, 00:19:46.913 "generate_uuids": false, 00:19:46.913 "transport_tos": 0, 00:19:46.913 "nvme_error_stat": false, 00:19:46.913 "rdma_srq_size": 0, 00:19:46.913 "io_path_stat": false, 00:19:46.913 "allow_accel_sequence": false, 00:19:46.913 "rdma_max_cq_size": 0, 00:19:46.913 "rdma_cm_event_timeout_ms": 0, 00:19:46.913 "dhchap_digests": [ 00:19:46.913 "sha256", 00:19:46.913 "sha384", 00:19:46.913 "sha512" 00:19:46.913 ], 00:19:46.913 "dhchap_dhgroups": [ 00:19:46.913 "null", 00:19:46.913 "ffdhe2048", 00:19:46.913 "ffdhe3072", 00:19:46.913 
"ffdhe4096", 00:19:46.913 "ffdhe6144", 00:19:46.913 "ffdhe8192" 00:19:46.913 ] 00:19:46.913 } 00:19:46.913 }, 00:19:46.913 { 00:19:46.913 "method": "bdev_nvme_set_hotplug", 00:19:46.913 "params": { 00:19:46.913 "period_us": 100000, 00:19:46.913 "enable": false 00:19:46.913 } 00:19:46.913 }, 00:19:46.913 { 00:19:46.913 "method": "bdev_malloc_create", 00:19:46.913 "params": { 00:19:46.913 "name": "malloc0", 00:19:46.913 "num_blocks": 8192, 00:19:46.913 "block_size": 4096, 00:19:46.913 "physical_block_size": 4096, 00:19:46.913 "uuid": "f9f097ae-12a5-416f-8fb3-5dc3b3dfdb8a", 00:19:46.913 "optimal_io_boundary": 0, 00:19:46.913 "md_size": 0, 00:19:46.913 "dif_type": 0, 00:19:46.913 "dif_is_head_of_md": false, 00:19:46.913 "dif_pi_format": 0 00:19:46.913 } 00:19:46.913 }, 00:19:46.913 { 00:19:46.913 "method": "bdev_wait_for_examine" 00:19:46.913 } 00:19:46.913 ] 00:19:46.913 }, 00:19:46.913 { 00:19:46.913 "subsystem": "nbd", 00:19:46.913 "config": [] 00:19:46.913 }, 00:19:46.913 { 00:19:46.913 "subsystem": "scheduler", 00:19:46.913 "config": [ 00:19:46.913 { 00:19:46.913 "method": "framework_set_scheduler", 00:19:46.913 "params": { 00:19:46.913 "name": "static" 00:19:46.913 } 00:19:46.913 } 00:19:46.913 ] 00:19:46.913 }, 00:19:46.913 { 00:19:46.913 "subsystem": "nvmf", 00:19:46.913 "config": [ 00:19:46.913 { 00:19:46.913 "method": "nvmf_set_config", 00:19:46.913 "params": { 00:19:46.913 "discovery_filter": "match_any", 00:19:46.913 "admin_cmd_passthru": { 00:19:46.913 "identify_ctrlr": false 00:19:46.913 }, 00:19:46.913 "dhchap_digests": [ 00:19:46.913 "sha256", 00:19:46.913 "sha384", 00:19:46.913 "sha512" 00:19:46.913 ], 00:19:46.913 "dhchap_dhgroups": [ 00:19:46.913 "null", 00:19:46.913 "ffdhe2048", 00:19:46.913 "ffdhe3072", 00:19:46.913 "ffdhe4096", 00:19:46.913 "ffdhe6144", 00:19:46.913 "ffdhe8192" 00:19:46.913 ] 00:19:46.913 } 00:19:46.913 }, 00:19:46.913 { 00:19:46.913 "method": "nvmf_set_max_subsystems", 00:19:46.913 "params": { 00:19:46.913 "max_subsystems": 1024 
00:19:46.913 } 00:19:46.913 }, 00:19:46.913 { 00:19:46.913 "method": "nvmf_set_crdt", 00:19:46.913 "params": { 00:19:46.913 "crdt1": 0, 00:19:46.913 "crdt2": 0, 00:19:46.913 "crdt3": 0 00:19:46.913 } 00:19:46.913 }, 00:19:46.913 { 00:19:46.913 "method": "nvmf_create_transport", 00:19:46.913 "params": { 00:19:46.913 "trtype": "TCP", 00:19:46.913 "max_queue_depth": 128, 00:19:46.913 "max_io_qpairs_per_ctrlr": 127, 00:19:46.913 "in_capsule_data_size": 4096, 00:19:46.913 "max_io_size": 131072, 00:19:46.913 "io_unit_size": 131072, 00:19:46.913 "max_aq_depth": 128, 00:19:46.913 "num_shared_buffers": 511, 00:19:46.913 "buf_cache_size": 4294967295, 00:19:46.913 "dif_insert_or_strip": false, 00:19:46.913 "zcopy": false, 00:19:46.913 "c2h_success": false, 00:19:46.913 "sock_priority": 0, 00:19:46.913 "abort_timeout_sec": 1, 00:19:46.913 "ack_timeout": 0, 00:19:46.913 "data_wr_pool_size": 0 00:19:46.913 } 00:19:46.913 }, 00:19:46.913 { 00:19:46.913 "method": "nvmf_create_subsystem", 00:19:46.913 "params": { 00:19:46.913 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:46.913 "allow_any_host": false, 00:19:46.913 "serial_number": "SPDK00000000000001", 00:19:46.913 "model_number": "SPDK bdev Controller", 00:19:46.913 "max_namespaces": 10, 00:19:46.913 "min_cntlid": 1, 00:19:46.913 "max_cntlid": 65519, 00:19:46.913 "ana_reporting": false 00:19:46.913 } 00:19:46.913 }, 00:19:46.913 { 00:19:46.913 "method": "nvmf_subsystem_add_host", 00:19:46.913 "params": { 00:19:46.913 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:46.913 "host": "nqn.2016-06.io.spdk:host1", 00:19:46.913 "psk": "key0" 00:19:46.913 } 00:19:46.913 }, 00:19:46.913 { 00:19:46.913 "method": "nvmf_subsystem_add_ns", 00:19:46.913 "params": { 00:19:46.913 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:46.913 "namespace": { 00:19:46.913 "nsid": 1, 00:19:46.913 "bdev_name": "malloc0", 00:19:46.913 "nguid": "F9F097AE12A5416F8FB35DC3B3DFDB8A", 00:19:46.913 "uuid": "f9f097ae-12a5-416f-8fb3-5dc3b3dfdb8a", 00:19:46.913 "no_auto_visible": 
false 00:19:46.913 } 00:19:46.913 } 00:19:46.913 }, 00:19:46.913 { 00:19:46.913 "method": "nvmf_subsystem_add_listener", 00:19:46.913 "params": { 00:19:46.913 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:46.913 "listen_address": { 00:19:46.913 "trtype": "TCP", 00:19:46.913 "adrfam": "IPv4", 00:19:46.913 "traddr": "10.0.0.2", 00:19:46.913 "trsvcid": "4420" 00:19:46.913 }, 00:19:46.913 "secure_channel": true 00:19:46.913 } 00:19:46.913 } 00:19:46.913 ] 00:19:46.913 } 00:19:46.913 ] 00:19:46.913 }' 00:19:46.913 08:18:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:46.913 08:18:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:19:46.913 08:18:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=1383409 00:19:46.913 08:18:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 1383409 00:19:46.914 08:18:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1383409 ']' 00:19:46.914 08:18:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:46.914 08:18:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:46.914 08:18:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:46.914 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:19:46.914 08:18:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:46.914 08:18:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:46.914 [2024-11-28 08:18:29.127715] Starting SPDK v25.01-pre git sha1 27aaaa748 / DPDK 24.03.0 initialization... 00:19:46.914 [2024-11-28 08:18:29.127759] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:47.173 [2024-11-28 08:18:29.192694] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:47.173 [2024-11-28 08:18:29.233815] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:47.173 [2024-11-28 08:18:29.233848] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:47.173 [2024-11-28 08:18:29.233856] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:47.173 [2024-11-28 08:18:29.233862] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:47.173 [2024-11-28 08:18:29.233867] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:47.173 [2024-11-28 08:18:29.234473] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:47.433 [2024-11-28 08:18:29.448560] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:47.433 [2024-11-28 08:18:29.480588] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:47.433 [2024-11-28 08:18:29.480786] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:48.002 08:18:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:48.002 08:18:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:48.002 08:18:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:48.002 08:18:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:48.002 08:18:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:48.002 08:18:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:48.002 08:18:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@209 -- # bdevperf_pid=1383450 00:19:48.002 08:18:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@210 -- # waitforlisten 1383450 /var/tmp/bdevperf.sock 00:19:48.002 08:18:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1383450 ']' 00:19:48.002 08:18:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:48.002 08:18:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:19:48.002 08:18:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local 
max_retries=100 00:19:48.002 08:18:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:48.002 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:48.002 08:18:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # echo '{ 00:19:48.002 "subsystems": [ 00:19:48.002 { 00:19:48.002 "subsystem": "keyring", 00:19:48.002 "config": [ 00:19:48.002 { 00:19:48.002 "method": "keyring_file_add_key", 00:19:48.002 "params": { 00:19:48.002 "name": "key0", 00:19:48.003 "path": "/tmp/tmp.FsRSIrfvHn" 00:19:48.003 } 00:19:48.003 } 00:19:48.003 ] 00:19:48.003 }, 00:19:48.003 { 00:19:48.003 "subsystem": "iobuf", 00:19:48.003 "config": [ 00:19:48.003 { 00:19:48.003 "method": "iobuf_set_options", 00:19:48.003 "params": { 00:19:48.003 "small_pool_count": 8192, 00:19:48.003 "large_pool_count": 1024, 00:19:48.003 "small_bufsize": 8192, 00:19:48.003 "large_bufsize": 135168, 00:19:48.003 "enable_numa": false 00:19:48.003 } 00:19:48.003 } 00:19:48.003 ] 00:19:48.003 }, 00:19:48.003 { 00:19:48.003 "subsystem": "sock", 00:19:48.003 "config": [ 00:19:48.003 { 00:19:48.003 "method": "sock_set_default_impl", 00:19:48.003 "params": { 00:19:48.003 "impl_name": "posix" 00:19:48.003 } 00:19:48.003 }, 00:19:48.003 { 00:19:48.003 "method": "sock_impl_set_options", 00:19:48.003 "params": { 00:19:48.003 "impl_name": "ssl", 00:19:48.003 "recv_buf_size": 4096, 00:19:48.003 "send_buf_size": 4096, 00:19:48.003 "enable_recv_pipe": true, 00:19:48.003 "enable_quickack": false, 00:19:48.003 "enable_placement_id": 0, 00:19:48.003 "enable_zerocopy_send_server": true, 00:19:48.003 "enable_zerocopy_send_client": false, 00:19:48.003 "zerocopy_threshold": 0, 00:19:48.003 "tls_version": 0, 00:19:48.003 "enable_ktls": false 00:19:48.003 } 00:19:48.003 }, 00:19:48.003 { 00:19:48.003 "method": "sock_impl_set_options", 00:19:48.003 "params": { 
00:19:48.003 "impl_name": "posix", 00:19:48.003 "recv_buf_size": 2097152, 00:19:48.003 "send_buf_size": 2097152, 00:19:48.003 "enable_recv_pipe": true, 00:19:48.003 "enable_quickack": false, 00:19:48.003 "enable_placement_id": 0, 00:19:48.003 "enable_zerocopy_send_server": true, 00:19:48.003 "enable_zerocopy_send_client": false, 00:19:48.003 "zerocopy_threshold": 0, 00:19:48.003 "tls_version": 0, 00:19:48.003 "enable_ktls": false 00:19:48.003 } 00:19:48.003 } 00:19:48.003 ] 00:19:48.003 }, 00:19:48.003 { 00:19:48.003 "subsystem": "vmd", 00:19:48.003 "config": [] 00:19:48.003 }, 00:19:48.003 { 00:19:48.003 "subsystem": "accel", 00:19:48.003 "config": [ 00:19:48.003 { 00:19:48.003 "method": "accel_set_options", 00:19:48.003 "params": { 00:19:48.003 "small_cache_size": 128, 00:19:48.003 "large_cache_size": 16, 00:19:48.003 "task_count": 2048, 00:19:48.003 "sequence_count": 2048, 00:19:48.003 "buf_count": 2048 00:19:48.003 } 00:19:48.003 } 00:19:48.003 ] 00:19:48.003 }, 00:19:48.003 { 00:19:48.003 "subsystem": "bdev", 00:19:48.003 "config": [ 00:19:48.003 { 00:19:48.003 "method": "bdev_set_options", 00:19:48.003 "params": { 00:19:48.003 "bdev_io_pool_size": 65535, 00:19:48.003 "bdev_io_cache_size": 256, 00:19:48.003 "bdev_auto_examine": true, 00:19:48.003 "iobuf_small_cache_size": 128, 00:19:48.003 "iobuf_large_cache_size": 16 00:19:48.003 } 00:19:48.003 }, 00:19:48.003 { 00:19:48.003 "method": "bdev_raid_set_options", 00:19:48.003 "params": { 00:19:48.003 "process_window_size_kb": 1024, 00:19:48.003 "process_max_bandwidth_mb_sec": 0 00:19:48.003 } 00:19:48.003 }, 00:19:48.003 { 00:19:48.003 "method": "bdev_iscsi_set_options", 00:19:48.003 "params": { 00:19:48.003 "timeout_sec": 30 00:19:48.003 } 00:19:48.003 }, 00:19:48.003 { 00:19:48.003 "method": "bdev_nvme_set_options", 00:19:48.003 "params": { 00:19:48.003 "action_on_timeout": "none", 00:19:48.003 "timeout_us": 0, 00:19:48.003 "timeout_admin_us": 0, 00:19:48.003 "keep_alive_timeout_ms": 10000, 00:19:48.003 
"arbitration_burst": 0, 00:19:48.003 "low_priority_weight": 0, 00:19:48.003 "medium_priority_weight": 0, 00:19:48.003 "high_priority_weight": 0, 00:19:48.003 "nvme_adminq_poll_period_us": 10000, 00:19:48.003 "nvme_ioq_poll_period_us": 0, 00:19:48.003 "io_queue_requests": 512, 00:19:48.003 "delay_cmd_submit": true, 00:19:48.003 "transport_retry_count": 4, 00:19:48.003 "bdev_retry_count": 3, 00:19:48.003 "transport_ack_timeout": 0, 00:19:48.003 "ctrlr_loss_timeout_sec": 0, 00:19:48.003 "reconnect_delay_sec": 0, 00:19:48.003 "fast_io_fail_timeout_sec": 0, 00:19:48.003 "disable_auto_failback": false, 00:19:48.003 "generate_uuids": false, 00:19:48.003 "transport_tos": 0, 00:19:48.003 "nvme_error_stat": false, 00:19:48.003 "rdma_srq_size": 0, 00:19:48.003 "io_path_stat": false, 00:19:48.003 "allow_accel_sequence": false, 00:19:48.003 "rdma_max_cq_size": 0, 00:19:48.003 "rdma_cm_event_timeout_ms": 0, 00:19:48.003 "dhchap_digests": [ 00:19:48.003 "sha256", 00:19:48.003 "sha384", 00:19:48.003 "sha512" 00:19:48.003 ], 00:19:48.003 "dhchap_dhgroups": [ 00:19:48.003 "null", 00:19:48.003 "ffdhe2048", 00:19:48.003 "ffdhe3072", 00:19:48.003 "ffdhe4096", 00:19:48.003 "ffdhe6144", 00:19:48.003 "ffdhe8192" 00:19:48.003 ] 00:19:48.003 } 00:19:48.003 }, 00:19:48.003 { 00:19:48.003 "method": "bdev_nvme_attach_controller", 00:19:48.003 "params": { 00:19:48.003 "name": "TLSTEST", 00:19:48.003 "trtype": "TCP", 00:19:48.003 "adrfam": "IPv4", 00:19:48.003 "traddr": "10.0.0.2", 00:19:48.003 "trsvcid": "4420", 00:19:48.003 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:48.003 "prchk_reftag": false, 00:19:48.003 "prchk_guard": false, 00:19:48.003 "ctrlr_loss_timeout_sec": 0, 00:19:48.003 "reconnect_delay_sec": 0, 00:19:48.003 "fast_io_fail_timeout_sec": 0, 00:19:48.003 "psk": "key0", 00:19:48.003 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:48.003 "hdgst": false, 00:19:48.003 "ddgst": false, 00:19:48.003 "multipath": "multipath" 00:19:48.003 } 00:19:48.003 }, 00:19:48.003 { 00:19:48.003 
"method": "bdev_nvme_set_hotplug", 00:19:48.003 "params": { 00:19:48.003 "period_us": 100000, 00:19:48.003 "enable": false 00:19:48.003 } 00:19:48.003 }, 00:19:48.003 { 00:19:48.003 "method": "bdev_wait_for_examine" 00:19:48.003 } 00:19:48.003 ] 00:19:48.003 }, 00:19:48.003 { 00:19:48.003 "subsystem": "nbd", 00:19:48.003 "config": [] 00:19:48.003 } 00:19:48.003 ] 00:19:48.003 }' 00:19:48.003 08:18:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:48.003 08:18:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:48.003 [2024-11-28 08:18:30.056163] Starting SPDK v25.01-pre git sha1 27aaaa748 / DPDK 24.03.0 initialization... 00:19:48.003 [2024-11-28 08:18:30.056218] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1383450 ] 00:19:48.003 [2024-11-28 08:18:30.116497] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:48.003 [2024-11-28 08:18:30.157377] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:48.263 [2024-11-28 08:18:30.310921] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:48.830 08:18:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:48.830 08:18:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:48.830 08:18:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:19:48.830 Running I/O for 10 seconds... 
00:19:51.144 5399.00 IOPS, 21.09 MiB/s [2024-11-28T07:18:33.980Z] 5467.00 IOPS, 21.36 MiB/s [2024-11-28T07:18:35.358Z] 5452.33 IOPS, 21.30 MiB/s [2024-11-28T07:18:36.295Z] 5460.00 IOPS, 21.33 MiB/s [2024-11-28T07:18:37.231Z] 5471.60 IOPS, 21.37 MiB/s [2024-11-28T07:18:38.168Z] 5449.00 IOPS, 21.29 MiB/s [2024-11-28T07:18:39.106Z] 5469.43 IOPS, 21.36 MiB/s [2024-11-28T07:18:40.043Z] 5468.00 IOPS, 21.36 MiB/s [2024-11-28T07:18:41.424Z] 5488.33 IOPS, 21.44 MiB/s [2024-11-28T07:18:41.424Z] 5429.90 IOPS, 21.21 MiB/s 00:19:59.155 Latency(us) 00:19:59.155 [2024-11-28T07:18:41.424Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:59.155 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:59.155 Verification LBA range: start 0x0 length 0x2000 00:19:59.155 TLSTESTn1 : 10.01 5435.19 21.23 0.00 0.00 23514.85 5584.81 22795.13 00:19:59.155 [2024-11-28T07:18:41.424Z] =================================================================================================================== 00:19:59.155 [2024-11-28T07:18:41.424Z] Total : 5435.19 21.23 0.00 0.00 23514.85 5584.81 22795.13 00:19:59.155 { 00:19:59.155 "results": [ 00:19:59.155 { 00:19:59.155 "job": "TLSTESTn1", 00:19:59.155 "core_mask": "0x4", 00:19:59.155 "workload": "verify", 00:19:59.155 "status": "finished", 00:19:59.155 "verify_range": { 00:19:59.155 "start": 0, 00:19:59.155 "length": 8192 00:19:59.155 }, 00:19:59.155 "queue_depth": 128, 00:19:59.155 "io_size": 4096, 00:19:59.155 "runtime": 10.013447, 00:19:59.155 "iops": 5435.191298261228, 00:19:59.155 "mibps": 21.231216008832924, 00:19:59.155 "io_failed": 0, 00:19:59.155 "io_timeout": 0, 00:19:59.155 "avg_latency_us": 23514.852909636316, 00:19:59.155 "min_latency_us": 5584.806956521739, 00:19:59.155 "max_latency_us": 22795.130434782608 00:19:59.155 } 00:19:59.155 ], 00:19:59.155 "core_count": 1 00:19:59.155 } 00:19:59.155 08:18:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # trap 'nvmftestfini; 
exit 1' SIGINT SIGTERM EXIT 00:19:59.155 08:18:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@216 -- # killprocess 1383450 00:19:59.155 08:18:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1383450 ']' 00:19:59.155 08:18:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1383450 00:19:59.155 08:18:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:59.155 08:18:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:59.155 08:18:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1383450 00:19:59.155 08:18:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:19:59.155 08:18:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:19:59.155 08:18:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1383450' 00:19:59.155 killing process with pid 1383450 00:19:59.155 08:18:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1383450 00:19:59.155 Received shutdown signal, test time was about 10.000000 seconds 00:19:59.155 00:19:59.155 Latency(us) 00:19:59.155 [2024-11-28T07:18:41.424Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:59.155 [2024-11-28T07:18:41.424Z] =================================================================================================================== 00:19:59.155 [2024-11-28T07:18:41.424Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:59.155 08:18:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1383450 00:19:59.155 08:18:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@217 -- # killprocess 1383409 00:19:59.155 08:18:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@954 -- # '[' -z 1383409 ']' 00:19:59.155 08:18:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1383409 00:19:59.155 08:18:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:59.155 08:18:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:59.155 08:18:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1383409 00:19:59.155 08:18:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:19:59.155 08:18:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:19:59.155 08:18:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1383409' 00:19:59.155 killing process with pid 1383409 00:19:59.155 08:18:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1383409 00:19:59.155 08:18:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1383409 00:19:59.414 08:18:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # nvmfappstart 00:19:59.414 08:18:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:59.414 08:18:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:59.414 08:18:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:59.414 08:18:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=1385367 00:19:59.414 08:18:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:19:59.414 08:18:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 1385367 00:19:59.414 
08:18:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1385367 ']' 00:19:59.414 08:18:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:59.414 08:18:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:59.414 08:18:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:59.414 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:59.414 08:18:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:59.414 08:18:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:59.414 [2024-11-28 08:18:41.513058] Starting SPDK v25.01-pre git sha1 27aaaa748 / DPDK 24.03.0 initialization... 00:19:59.414 [2024-11-28 08:18:41.513109] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:59.414 [2024-11-28 08:18:41.580630] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:59.414 [2024-11-28 08:18:41.622682] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:59.414 [2024-11-28 08:18:41.622715] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:59.414 [2024-11-28 08:18:41.622726] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:59.414 [2024-11-28 08:18:41.622732] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:19:59.414 [2024-11-28 08:18:41.622738] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:59.414 [2024-11-28 08:18:41.623206] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:59.673 08:18:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:59.673 08:18:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:59.673 08:18:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:59.673 08:18:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:59.673 08:18:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:59.673 08:18:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:59.673 08:18:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@221 -- # setup_nvmf_tgt /tmp/tmp.FsRSIrfvHn 00:19:59.673 08:18:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.FsRSIrfvHn 00:19:59.673 08:18:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:59.673 [2024-11-28 08:18:41.921403] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:59.932 08:18:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:59.932 08:18:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:00.191 [2024-11-28 08:18:42.294355] tcp.c:1031:nvmf_tcp_listen: 
*NOTICE*: TLS support is considered experimental 00:20:00.191 [2024-11-28 08:18:42.294547] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:00.191 08:18:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:00.450 malloc0 00:20:00.450 08:18:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:20:00.450 08:18:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.FsRSIrfvHn 00:20:00.708 08:18:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:20:00.967 08:18:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:20:00.967 08:18:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # bdevperf_pid=1385756 00:20:00.967 08:18:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@226 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:00.967 08:18:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # waitforlisten 1385756 /var/tmp/bdevperf.sock 00:20:00.967 08:18:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1385756 ']' 00:20:00.967 08:18:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:00.967 08:18:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:00.967 
08:18:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:00.967 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:00.967 08:18:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:00.967 08:18:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:00.967 [2024-11-28 08:18:43.096127] Starting SPDK v25.01-pre git sha1 27aaaa748 / DPDK 24.03.0 initialization... 00:20:00.967 [2024-11-28 08:18:43.096177] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1385756 ] 00:20:00.967 [2024-11-28 08:18:43.161324] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:00.967 [2024-11-28 08:18:43.202212] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:01.226 08:18:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:01.226 08:18:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:01.226 08:18:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@229 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.FsRSIrfvHn 00:20:01.226 08:18:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@230 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:20:01.485 [2024-11-28 08:18:43.635219] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is 
considered experimental 00:20:01.485 nvme0n1 00:20:01.485 08:18:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:01.744 Running I/O for 1 seconds... 00:20:02.681 5168.00 IOPS, 20.19 MiB/s 00:20:02.681 Latency(us) 00:20:02.681 [2024-11-28T07:18:44.950Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:02.681 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:20:02.681 Verification LBA range: start 0x0 length 0x2000 00:20:02.681 nvme0n1 : 1.02 5203.25 20.33 0.00 0.00 24390.62 7180.47 32141.13 00:20:02.681 [2024-11-28T07:18:44.950Z] =================================================================================================================== 00:20:02.681 [2024-11-28T07:18:44.950Z] Total : 5203.25 20.33 0.00 0.00 24390.62 7180.47 32141.13 00:20:02.681 { 00:20:02.681 "results": [ 00:20:02.681 { 00:20:02.681 "job": "nvme0n1", 00:20:02.681 "core_mask": "0x2", 00:20:02.681 "workload": "verify", 00:20:02.681 "status": "finished", 00:20:02.681 "verify_range": { 00:20:02.681 "start": 0, 00:20:02.681 "length": 8192 00:20:02.681 }, 00:20:02.681 "queue_depth": 128, 00:20:02.681 "io_size": 4096, 00:20:02.681 "runtime": 1.017826, 00:20:02.681 "iops": 5203.246920397003, 00:20:02.681 "mibps": 20.325183282800793, 00:20:02.681 "io_failed": 0, 00:20:02.681 "io_timeout": 0, 00:20:02.681 "avg_latency_us": 24390.622776829106, 00:20:02.681 "min_latency_us": 7180.466086956521, 00:20:02.681 "max_latency_us": 32141.13391304348 00:20:02.681 } 00:20:02.681 ], 00:20:02.681 "core_count": 1 00:20:02.681 } 00:20:02.681 08:18:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@236 -- # killprocess 1385756 00:20:02.681 08:18:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1385756 ']' 00:20:02.681 08:18:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@958 -- # kill -0 1385756 00:20:02.681 08:18:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:02.682 08:18:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:02.682 08:18:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1385756 00:20:02.682 08:18:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:02.682 08:18:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:02.682 08:18:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1385756' 00:20:02.682 killing process with pid 1385756 00:20:02.682 08:18:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1385756 00:20:02.682 Received shutdown signal, test time was about 1.000000 seconds 00:20:02.682 00:20:02.682 Latency(us) 00:20:02.682 [2024-11-28T07:18:44.951Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:02.682 [2024-11-28T07:18:44.951Z] =================================================================================================================== 00:20:02.682 [2024-11-28T07:18:44.951Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:02.682 08:18:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1385756 00:20:02.941 08:18:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@237 -- # killprocess 1385367 00:20:02.941 08:18:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1385367 ']' 00:20:02.941 08:18:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1385367 00:20:02.941 08:18:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:02.941 08:18:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:02.941 08:18:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1385367 00:20:02.941 08:18:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:02.941 08:18:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:02.941 08:18:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1385367' 00:20:02.941 killing process with pid 1385367 00:20:02.941 08:18:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1385367 00:20:02.941 08:18:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1385367 00:20:03.200 08:18:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@242 -- # nvmfappstart 00:20:03.200 08:18:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:03.200 08:18:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:03.200 08:18:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:03.200 08:18:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:20:03.200 08:18:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=1386009 00:20:03.200 08:18:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 1386009 00:20:03.200 08:18:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1386009 ']' 00:20:03.200 08:18:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:03.200 08:18:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # 
local max_retries=100 00:20:03.200 08:18:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:03.200 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:03.200 08:18:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:03.200 08:18:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:03.200 [2024-11-28 08:18:45.301083] Starting SPDK v25.01-pre git sha1 27aaaa748 / DPDK 24.03.0 initialization... 00:20:03.200 [2024-11-28 08:18:45.301130] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:03.200 [2024-11-28 08:18:45.366400] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:03.200 [2024-11-28 08:18:45.402104] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:03.200 [2024-11-28 08:18:45.402144] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:03.200 [2024-11-28 08:18:45.402150] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:03.200 [2024-11-28 08:18:45.402156] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:03.200 [2024-11-28 08:18:45.402161] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:03.200 [2024-11-28 08:18:45.402719] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:03.459 08:18:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:03.459 08:18:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:03.459 08:18:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:03.459 08:18:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:03.459 08:18:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:03.459 08:18:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:03.459 08:18:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@243 -- # rpc_cmd 00:20:03.459 08:18:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:03.459 08:18:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:03.459 [2024-11-28 08:18:45.531117] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:03.459 malloc0 00:20:03.459 [2024-11-28 08:18:45.559222] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:03.460 [2024-11-28 08:18:45.559429] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:03.460 08:18:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:03.460 08:18:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # bdevperf_pid=1386081 00:20:03.460 08:18:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@258 -- # waitforlisten 1386081 /var/tmp/bdevperf.sock 00:20:03.460 08:18:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf 
-m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:20:03.460 08:18:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1386081 ']' 00:20:03.460 08:18:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:03.460 08:18:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:03.460 08:18:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:03.460 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:03.460 08:18:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:03.460 08:18:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:03.460 [2024-11-28 08:18:45.634521] Starting SPDK v25.01-pre git sha1 27aaaa748 / DPDK 24.03.0 initialization... 
00:20:03.460 [2024-11-28 08:18:45.634563] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1386081 ] 00:20:03.460 [2024-11-28 08:18:45.695689] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:03.719 [2024-11-28 08:18:45.739419] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:03.719 08:18:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:03.719 08:18:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:03.719 08:18:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@259 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.FsRSIrfvHn 00:20:03.978 08:18:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@260 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:20:03.978 [2024-11-28 08:18:46.181230] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:04.237 nvme0n1 00:20:04.237 08:18:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@264 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:04.237 Running I/O for 1 seconds... 
00:20:05.175 4793.00 IOPS, 18.72 MiB/s 00:20:05.175 Latency(us) 00:20:05.175 [2024-11-28T07:18:47.444Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:05.175 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:20:05.175 Verification LBA range: start 0x0 length 0x2000 00:20:05.175 nvme0n1 : 1.02 4832.85 18.88 0.00 0.00 26249.77 5584.81 44906.41 00:20:05.175 [2024-11-28T07:18:47.444Z] =================================================================================================================== 00:20:05.175 [2024-11-28T07:18:47.444Z] Total : 4832.85 18.88 0.00 0.00 26249.77 5584.81 44906.41 00:20:05.175 { 00:20:05.175 "results": [ 00:20:05.175 { 00:20:05.175 "job": "nvme0n1", 00:20:05.175 "core_mask": "0x2", 00:20:05.175 "workload": "verify", 00:20:05.175 "status": "finished", 00:20:05.175 "verify_range": { 00:20:05.175 "start": 0, 00:20:05.175 "length": 8192 00:20:05.175 }, 00:20:05.175 "queue_depth": 128, 00:20:05.175 "io_size": 4096, 00:20:05.175 "runtime": 1.018239, 00:20:05.175 "iops": 4832.8535834907125, 00:20:05.175 "mibps": 18.878334310510596, 00:20:05.175 "io_failed": 0, 00:20:05.175 "io_timeout": 0, 00:20:05.175 "avg_latency_us": 26249.772752445155, 00:20:05.175 "min_latency_us": 5584.806956521739, 00:20:05.175 "max_latency_us": 44906.406956521736 00:20:05.175 } 00:20:05.175 ], 00:20:05.175 "core_count": 1 00:20:05.175 } 00:20:05.175 08:18:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # rpc_cmd save_config 00:20:05.175 08:18:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:05.175 08:18:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:05.434 08:18:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:05.434 08:18:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # tgtcfg='{ 00:20:05.434 "subsystems": [ 00:20:05.434 { 00:20:05.434 "subsystem": 
"keyring", 00:20:05.434 "config": [ 00:20:05.434 { 00:20:05.434 "method": "keyring_file_add_key", 00:20:05.434 "params": { 00:20:05.434 "name": "key0", 00:20:05.434 "path": "/tmp/tmp.FsRSIrfvHn" 00:20:05.434 } 00:20:05.434 } 00:20:05.434 ] 00:20:05.434 }, 00:20:05.434 { 00:20:05.434 "subsystem": "iobuf", 00:20:05.434 "config": [ 00:20:05.434 { 00:20:05.434 "method": "iobuf_set_options", 00:20:05.434 "params": { 00:20:05.434 "small_pool_count": 8192, 00:20:05.434 "large_pool_count": 1024, 00:20:05.434 "small_bufsize": 8192, 00:20:05.434 "large_bufsize": 135168, 00:20:05.434 "enable_numa": false 00:20:05.434 } 00:20:05.434 } 00:20:05.434 ] 00:20:05.434 }, 00:20:05.434 { 00:20:05.434 "subsystem": "sock", 00:20:05.434 "config": [ 00:20:05.434 { 00:20:05.434 "method": "sock_set_default_impl", 00:20:05.434 "params": { 00:20:05.434 "impl_name": "posix" 00:20:05.434 } 00:20:05.434 }, 00:20:05.434 { 00:20:05.434 "method": "sock_impl_set_options", 00:20:05.434 "params": { 00:20:05.434 "impl_name": "ssl", 00:20:05.434 "recv_buf_size": 4096, 00:20:05.434 "send_buf_size": 4096, 00:20:05.434 "enable_recv_pipe": true, 00:20:05.434 "enable_quickack": false, 00:20:05.434 "enable_placement_id": 0, 00:20:05.434 "enable_zerocopy_send_server": true, 00:20:05.434 "enable_zerocopy_send_client": false, 00:20:05.434 "zerocopy_threshold": 0, 00:20:05.434 "tls_version": 0, 00:20:05.434 "enable_ktls": false 00:20:05.434 } 00:20:05.434 }, 00:20:05.434 { 00:20:05.434 "method": "sock_impl_set_options", 00:20:05.434 "params": { 00:20:05.434 "impl_name": "posix", 00:20:05.434 "recv_buf_size": 2097152, 00:20:05.434 "send_buf_size": 2097152, 00:20:05.434 "enable_recv_pipe": true, 00:20:05.434 "enable_quickack": false, 00:20:05.434 "enable_placement_id": 0, 00:20:05.434 "enable_zerocopy_send_server": true, 00:20:05.434 "enable_zerocopy_send_client": false, 00:20:05.434 "zerocopy_threshold": 0, 00:20:05.434 "tls_version": 0, 00:20:05.434 "enable_ktls": false 00:20:05.434 } 00:20:05.434 } 00:20:05.434 
] 00:20:05.434 }, 00:20:05.434 { 00:20:05.434 "subsystem": "vmd", 00:20:05.434 "config": [] 00:20:05.434 }, 00:20:05.434 { 00:20:05.434 "subsystem": "accel", 00:20:05.434 "config": [ 00:20:05.434 { 00:20:05.434 "method": "accel_set_options", 00:20:05.434 "params": { 00:20:05.434 "small_cache_size": 128, 00:20:05.434 "large_cache_size": 16, 00:20:05.434 "task_count": 2048, 00:20:05.434 "sequence_count": 2048, 00:20:05.434 "buf_count": 2048 00:20:05.435 } 00:20:05.435 } 00:20:05.435 ] 00:20:05.435 }, 00:20:05.435 { 00:20:05.435 "subsystem": "bdev", 00:20:05.435 "config": [ 00:20:05.435 { 00:20:05.435 "method": "bdev_set_options", 00:20:05.435 "params": { 00:20:05.435 "bdev_io_pool_size": 65535, 00:20:05.435 "bdev_io_cache_size": 256, 00:20:05.435 "bdev_auto_examine": true, 00:20:05.435 "iobuf_small_cache_size": 128, 00:20:05.435 "iobuf_large_cache_size": 16 00:20:05.435 } 00:20:05.435 }, 00:20:05.435 { 00:20:05.435 "method": "bdev_raid_set_options", 00:20:05.435 "params": { 00:20:05.435 "process_window_size_kb": 1024, 00:20:05.435 "process_max_bandwidth_mb_sec": 0 00:20:05.435 } 00:20:05.435 }, 00:20:05.435 { 00:20:05.435 "method": "bdev_iscsi_set_options", 00:20:05.435 "params": { 00:20:05.435 "timeout_sec": 30 00:20:05.435 } 00:20:05.435 }, 00:20:05.435 { 00:20:05.435 "method": "bdev_nvme_set_options", 00:20:05.435 "params": { 00:20:05.435 "action_on_timeout": "none", 00:20:05.435 "timeout_us": 0, 00:20:05.435 "timeout_admin_us": 0, 00:20:05.435 "keep_alive_timeout_ms": 10000, 00:20:05.435 "arbitration_burst": 0, 00:20:05.435 "low_priority_weight": 0, 00:20:05.435 "medium_priority_weight": 0, 00:20:05.435 "high_priority_weight": 0, 00:20:05.435 "nvme_adminq_poll_period_us": 10000, 00:20:05.435 "nvme_ioq_poll_period_us": 0, 00:20:05.435 "io_queue_requests": 0, 00:20:05.435 "delay_cmd_submit": true, 00:20:05.435 "transport_retry_count": 4, 00:20:05.435 "bdev_retry_count": 3, 00:20:05.435 "transport_ack_timeout": 0, 00:20:05.435 "ctrlr_loss_timeout_sec": 0, 
00:20:05.435 "reconnect_delay_sec": 0, 00:20:05.435 "fast_io_fail_timeout_sec": 0, 00:20:05.435 "disable_auto_failback": false, 00:20:05.435 "generate_uuids": false, 00:20:05.435 "transport_tos": 0, 00:20:05.435 "nvme_error_stat": false, 00:20:05.435 "rdma_srq_size": 0, 00:20:05.435 "io_path_stat": false, 00:20:05.435 "allow_accel_sequence": false, 00:20:05.435 "rdma_max_cq_size": 0, 00:20:05.435 "rdma_cm_event_timeout_ms": 0, 00:20:05.435 "dhchap_digests": [ 00:20:05.435 "sha256", 00:20:05.435 "sha384", 00:20:05.435 "sha512" 00:20:05.435 ], 00:20:05.435 "dhchap_dhgroups": [ 00:20:05.435 "null", 00:20:05.435 "ffdhe2048", 00:20:05.435 "ffdhe3072", 00:20:05.435 "ffdhe4096", 00:20:05.435 "ffdhe6144", 00:20:05.435 "ffdhe8192" 00:20:05.435 ] 00:20:05.435 } 00:20:05.435 }, 00:20:05.435 { 00:20:05.435 "method": "bdev_nvme_set_hotplug", 00:20:05.435 "params": { 00:20:05.435 "period_us": 100000, 00:20:05.435 "enable": false 00:20:05.435 } 00:20:05.435 }, 00:20:05.435 { 00:20:05.435 "method": "bdev_malloc_create", 00:20:05.435 "params": { 00:20:05.435 "name": "malloc0", 00:20:05.435 "num_blocks": 8192, 00:20:05.435 "block_size": 4096, 00:20:05.435 "physical_block_size": 4096, 00:20:05.435 "uuid": "8852c835-9fcb-40cc-9cd4-8dafeed9e0e7", 00:20:05.435 "optimal_io_boundary": 0, 00:20:05.435 "md_size": 0, 00:20:05.435 "dif_type": 0, 00:20:05.435 "dif_is_head_of_md": false, 00:20:05.435 "dif_pi_format": 0 00:20:05.435 } 00:20:05.435 }, 00:20:05.435 { 00:20:05.435 "method": "bdev_wait_for_examine" 00:20:05.435 } 00:20:05.435 ] 00:20:05.435 }, 00:20:05.435 { 00:20:05.435 "subsystem": "nbd", 00:20:05.435 "config": [] 00:20:05.435 }, 00:20:05.435 { 00:20:05.435 "subsystem": "scheduler", 00:20:05.435 "config": [ 00:20:05.435 { 00:20:05.435 "method": "framework_set_scheduler", 00:20:05.435 "params": { 00:20:05.435 "name": "static" 00:20:05.435 } 00:20:05.435 } 00:20:05.435 ] 00:20:05.435 }, 00:20:05.435 { 00:20:05.435 "subsystem": "nvmf", 00:20:05.435 "config": [ 00:20:05.435 { 
00:20:05.435 "method": "nvmf_set_config", 00:20:05.435 "params": { 00:20:05.435 "discovery_filter": "match_any", 00:20:05.435 "admin_cmd_passthru": { 00:20:05.435 "identify_ctrlr": false 00:20:05.435 }, 00:20:05.435 "dhchap_digests": [ 00:20:05.435 "sha256", 00:20:05.435 "sha384", 00:20:05.435 "sha512" 00:20:05.435 ], 00:20:05.435 "dhchap_dhgroups": [ 00:20:05.435 "null", 00:20:05.435 "ffdhe2048", 00:20:05.435 "ffdhe3072", 00:20:05.435 "ffdhe4096", 00:20:05.435 "ffdhe6144", 00:20:05.435 "ffdhe8192" 00:20:05.435 ] 00:20:05.435 } 00:20:05.435 }, 00:20:05.435 { 00:20:05.435 "method": "nvmf_set_max_subsystems", 00:20:05.435 "params": { 00:20:05.435 "max_subsystems": 1024 00:20:05.435 } 00:20:05.435 }, 00:20:05.435 { 00:20:05.435 "method": "nvmf_set_crdt", 00:20:05.435 "params": { 00:20:05.435 "crdt1": 0, 00:20:05.435 "crdt2": 0, 00:20:05.435 "crdt3": 0 00:20:05.435 } 00:20:05.435 }, 00:20:05.435 { 00:20:05.435 "method": "nvmf_create_transport", 00:20:05.435 "params": { 00:20:05.435 "trtype": "TCP", 00:20:05.435 "max_queue_depth": 128, 00:20:05.435 "max_io_qpairs_per_ctrlr": 127, 00:20:05.435 "in_capsule_data_size": 4096, 00:20:05.435 "max_io_size": 131072, 00:20:05.435 "io_unit_size": 131072, 00:20:05.435 "max_aq_depth": 128, 00:20:05.435 "num_shared_buffers": 511, 00:20:05.435 "buf_cache_size": 4294967295, 00:20:05.435 "dif_insert_or_strip": false, 00:20:05.435 "zcopy": false, 00:20:05.435 "c2h_success": false, 00:20:05.435 "sock_priority": 0, 00:20:05.435 "abort_timeout_sec": 1, 00:20:05.435 "ack_timeout": 0, 00:20:05.435 "data_wr_pool_size": 0 00:20:05.435 } 00:20:05.435 }, 00:20:05.435 { 00:20:05.435 "method": "nvmf_create_subsystem", 00:20:05.435 "params": { 00:20:05.435 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:05.435 "allow_any_host": false, 00:20:05.435 "serial_number": "00000000000000000000", 00:20:05.435 "model_number": "SPDK bdev Controller", 00:20:05.435 "max_namespaces": 32, 00:20:05.435 "min_cntlid": 1, 00:20:05.435 "max_cntlid": 65519, 00:20:05.435 
"ana_reporting": false 00:20:05.435 } 00:20:05.435 }, 00:20:05.435 { 00:20:05.435 "method": "nvmf_subsystem_add_host", 00:20:05.435 "params": { 00:20:05.435 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:05.435 "host": "nqn.2016-06.io.spdk:host1", 00:20:05.435 "psk": "key0" 00:20:05.435 } 00:20:05.435 }, 00:20:05.435 { 00:20:05.435 "method": "nvmf_subsystem_add_ns", 00:20:05.435 "params": { 00:20:05.435 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:05.435 "namespace": { 00:20:05.435 "nsid": 1, 00:20:05.435 "bdev_name": "malloc0", 00:20:05.435 "nguid": "8852C8359FCB40CC9CD48DAFEED9E0E7", 00:20:05.435 "uuid": "8852c835-9fcb-40cc-9cd4-8dafeed9e0e7", 00:20:05.435 "no_auto_visible": false 00:20:05.435 } 00:20:05.435 } 00:20:05.435 }, 00:20:05.435 { 00:20:05.435 "method": "nvmf_subsystem_add_listener", 00:20:05.435 "params": { 00:20:05.435 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:05.435 "listen_address": { 00:20:05.435 "trtype": "TCP", 00:20:05.435 "adrfam": "IPv4", 00:20:05.435 "traddr": "10.0.0.2", 00:20:05.435 "trsvcid": "4420" 00:20:05.435 }, 00:20:05.435 "secure_channel": false, 00:20:05.435 "sock_impl": "ssl" 00:20:05.435 } 00:20:05.435 } 00:20:05.435 ] 00:20:05.435 } 00:20:05.435 ] 00:20:05.435 }' 00:20:05.435 08:18:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:20:05.695 08:18:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # bperfcfg='{ 00:20:05.695 "subsystems": [ 00:20:05.695 { 00:20:05.695 "subsystem": "keyring", 00:20:05.695 "config": [ 00:20:05.695 { 00:20:05.695 "method": "keyring_file_add_key", 00:20:05.695 "params": { 00:20:05.695 "name": "key0", 00:20:05.695 "path": "/tmp/tmp.FsRSIrfvHn" 00:20:05.695 } 00:20:05.695 } 00:20:05.695 ] 00:20:05.695 }, 00:20:05.695 { 00:20:05.695 "subsystem": "iobuf", 00:20:05.695 "config": [ 00:20:05.695 { 00:20:05.695 "method": "iobuf_set_options", 00:20:05.695 "params": { 00:20:05.695 
"small_pool_count": 8192, 00:20:05.695 "large_pool_count": 1024, 00:20:05.695 "small_bufsize": 8192, 00:20:05.695 "large_bufsize": 135168, 00:20:05.695 "enable_numa": false 00:20:05.695 } 00:20:05.695 } 00:20:05.695 ] 00:20:05.695 }, 00:20:05.695 { 00:20:05.695 "subsystem": "sock", 00:20:05.695 "config": [ 00:20:05.695 { 00:20:05.695 "method": "sock_set_default_impl", 00:20:05.695 "params": { 00:20:05.695 "impl_name": "posix" 00:20:05.695 } 00:20:05.695 }, 00:20:05.695 { 00:20:05.695 "method": "sock_impl_set_options", 00:20:05.695 "params": { 00:20:05.695 "impl_name": "ssl", 00:20:05.695 "recv_buf_size": 4096, 00:20:05.695 "send_buf_size": 4096, 00:20:05.695 "enable_recv_pipe": true, 00:20:05.695 "enable_quickack": false, 00:20:05.695 "enable_placement_id": 0, 00:20:05.695 "enable_zerocopy_send_server": true, 00:20:05.695 "enable_zerocopy_send_client": false, 00:20:05.695 "zerocopy_threshold": 0, 00:20:05.695 "tls_version": 0, 00:20:05.695 "enable_ktls": false 00:20:05.695 } 00:20:05.695 }, 00:20:05.695 { 00:20:05.695 "method": "sock_impl_set_options", 00:20:05.695 "params": { 00:20:05.695 "impl_name": "posix", 00:20:05.695 "recv_buf_size": 2097152, 00:20:05.695 "send_buf_size": 2097152, 00:20:05.695 "enable_recv_pipe": true, 00:20:05.695 "enable_quickack": false, 00:20:05.695 "enable_placement_id": 0, 00:20:05.695 "enable_zerocopy_send_server": true, 00:20:05.695 "enable_zerocopy_send_client": false, 00:20:05.695 "zerocopy_threshold": 0, 00:20:05.695 "tls_version": 0, 00:20:05.695 "enable_ktls": false 00:20:05.695 } 00:20:05.695 } 00:20:05.695 ] 00:20:05.695 }, 00:20:05.695 { 00:20:05.695 "subsystem": "vmd", 00:20:05.695 "config": [] 00:20:05.695 }, 00:20:05.695 { 00:20:05.695 "subsystem": "accel", 00:20:05.695 "config": [ 00:20:05.695 { 00:20:05.695 "method": "accel_set_options", 00:20:05.695 "params": { 00:20:05.695 "small_cache_size": 128, 00:20:05.695 "large_cache_size": 16, 00:20:05.695 "task_count": 2048, 00:20:05.695 "sequence_count": 2048, 00:20:05.695 
"buf_count": 2048 00:20:05.695 } 00:20:05.695 } 00:20:05.695 ] 00:20:05.695 }, 00:20:05.695 { 00:20:05.695 "subsystem": "bdev", 00:20:05.695 "config": [ 00:20:05.695 { 00:20:05.695 "method": "bdev_set_options", 00:20:05.695 "params": { 00:20:05.695 "bdev_io_pool_size": 65535, 00:20:05.695 "bdev_io_cache_size": 256, 00:20:05.695 "bdev_auto_examine": true, 00:20:05.695 "iobuf_small_cache_size": 128, 00:20:05.695 "iobuf_large_cache_size": 16 00:20:05.695 } 00:20:05.695 }, 00:20:05.695 { 00:20:05.695 "method": "bdev_raid_set_options", 00:20:05.695 "params": { 00:20:05.695 "process_window_size_kb": 1024, 00:20:05.695 "process_max_bandwidth_mb_sec": 0 00:20:05.695 } 00:20:05.695 }, 00:20:05.695 { 00:20:05.695 "method": "bdev_iscsi_set_options", 00:20:05.695 "params": { 00:20:05.695 "timeout_sec": 30 00:20:05.695 } 00:20:05.695 }, 00:20:05.695 { 00:20:05.695 "method": "bdev_nvme_set_options", 00:20:05.695 "params": { 00:20:05.695 "action_on_timeout": "none", 00:20:05.695 "timeout_us": 0, 00:20:05.695 "timeout_admin_us": 0, 00:20:05.695 "keep_alive_timeout_ms": 10000, 00:20:05.695 "arbitration_burst": 0, 00:20:05.695 "low_priority_weight": 0, 00:20:05.695 "medium_priority_weight": 0, 00:20:05.695 "high_priority_weight": 0, 00:20:05.695 "nvme_adminq_poll_period_us": 10000, 00:20:05.695 "nvme_ioq_poll_period_us": 0, 00:20:05.695 "io_queue_requests": 512, 00:20:05.695 "delay_cmd_submit": true, 00:20:05.695 "transport_retry_count": 4, 00:20:05.695 "bdev_retry_count": 3, 00:20:05.695 "transport_ack_timeout": 0, 00:20:05.695 "ctrlr_loss_timeout_sec": 0, 00:20:05.695 "reconnect_delay_sec": 0, 00:20:05.695 "fast_io_fail_timeout_sec": 0, 00:20:05.695 "disable_auto_failback": false, 00:20:05.696 "generate_uuids": false, 00:20:05.696 "transport_tos": 0, 00:20:05.696 "nvme_error_stat": false, 00:20:05.696 "rdma_srq_size": 0, 00:20:05.696 "io_path_stat": false, 00:20:05.696 "allow_accel_sequence": false, 00:20:05.696 "rdma_max_cq_size": 0, 00:20:05.696 "rdma_cm_event_timeout_ms": 0, 
00:20:05.696 "dhchap_digests": [ 00:20:05.696 "sha256", 00:20:05.696 "sha384", 00:20:05.696 "sha512" 00:20:05.696 ], 00:20:05.696 "dhchap_dhgroups": [ 00:20:05.696 "null", 00:20:05.696 "ffdhe2048", 00:20:05.696 "ffdhe3072", 00:20:05.696 "ffdhe4096", 00:20:05.696 "ffdhe6144", 00:20:05.696 "ffdhe8192" 00:20:05.696 ] 00:20:05.696 } 00:20:05.696 }, 00:20:05.696 { 00:20:05.696 "method": "bdev_nvme_attach_controller", 00:20:05.696 "params": { 00:20:05.696 "name": "nvme0", 00:20:05.696 "trtype": "TCP", 00:20:05.696 "adrfam": "IPv4", 00:20:05.696 "traddr": "10.0.0.2", 00:20:05.696 "trsvcid": "4420", 00:20:05.696 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:05.696 "prchk_reftag": false, 00:20:05.696 "prchk_guard": false, 00:20:05.696 "ctrlr_loss_timeout_sec": 0, 00:20:05.696 "reconnect_delay_sec": 0, 00:20:05.696 "fast_io_fail_timeout_sec": 0, 00:20:05.696 "psk": "key0", 00:20:05.696 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:05.696 "hdgst": false, 00:20:05.696 "ddgst": false, 00:20:05.696 "multipath": "multipath" 00:20:05.696 } 00:20:05.696 }, 00:20:05.696 { 00:20:05.696 "method": "bdev_nvme_set_hotplug", 00:20:05.696 "params": { 00:20:05.696 "period_us": 100000, 00:20:05.696 "enable": false 00:20:05.696 } 00:20:05.696 }, 00:20:05.696 { 00:20:05.696 "method": "bdev_enable_histogram", 00:20:05.696 "params": { 00:20:05.696 "name": "nvme0n1", 00:20:05.696 "enable": true 00:20:05.696 } 00:20:05.696 }, 00:20:05.696 { 00:20:05.696 "method": "bdev_wait_for_examine" 00:20:05.696 } 00:20:05.696 ] 00:20:05.696 }, 00:20:05.696 { 00:20:05.696 "subsystem": "nbd", 00:20:05.696 "config": [] 00:20:05.696 } 00:20:05.696 ] 00:20:05.696 }' 00:20:05.696 08:18:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@270 -- # killprocess 1386081 00:20:05.696 08:18:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1386081 ']' 00:20:05.696 08:18:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1386081 00:20:05.696 08:18:47 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:05.696 08:18:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:05.696 08:18:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1386081 00:20:05.696 08:18:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:05.696 08:18:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:05.696 08:18:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1386081' 00:20:05.696 killing process with pid 1386081 00:20:05.696 08:18:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1386081 00:20:05.696 Received shutdown signal, test time was about 1.000000 seconds 00:20:05.696 00:20:05.696 Latency(us) 00:20:05.696 [2024-11-28T07:18:47.965Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:05.696 [2024-11-28T07:18:47.965Z] =================================================================================================================== 00:20:05.696 [2024-11-28T07:18:47.965Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:05.696 08:18:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1386081 00:20:05.956 08:18:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # killprocess 1386009 00:20:05.956 08:18:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1386009 ']' 00:20:05.956 08:18:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1386009 00:20:05.956 08:18:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:05.956 08:18:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:05.956 
08:18:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1386009 00:20:05.956 08:18:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:05.956 08:18:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:05.956 08:18:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1386009' 00:20:05.956 killing process with pid 1386009 00:20:05.956 08:18:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1386009 00:20:05.956 08:18:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1386009 00:20:05.956 08:18:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # echo '{ 00:20:05.956 "subsystems": [ 00:20:05.956 { 00:20:05.956 "subsystem": "keyring", 00:20:05.956 "config": [ 00:20:05.956 { 00:20:05.956 "method": "keyring_file_add_key", 00:20:05.956 "params": { 00:20:05.956 "name": "key0", 00:20:05.956 "path": "/tmp/tmp.FsRSIrfvHn" 00:20:05.956 } 00:20:05.956 } 00:20:05.956 ] 00:20:05.956 }, 00:20:05.956 { 00:20:05.956 "subsystem": "iobuf", 00:20:05.956 "config": [ 00:20:05.956 { 00:20:05.956 "method": "iobuf_set_options", 00:20:05.956 "params": { 00:20:05.956 "small_pool_count": 8192, 00:20:05.956 "large_pool_count": 1024, 00:20:05.956 "small_bufsize": 8192, 00:20:05.956 "large_bufsize": 135168, 00:20:05.956 "enable_numa": false 00:20:05.956 } 00:20:05.956 } 00:20:05.956 ] 00:20:05.956 }, 00:20:05.956 { 00:20:05.956 "subsystem": "sock", 00:20:05.956 "config": [ 00:20:05.956 { 00:20:05.956 "method": "sock_set_default_impl", 00:20:05.956 "params": { 00:20:05.956 "impl_name": "posix" 00:20:05.956 } 00:20:05.956 }, 00:20:05.956 { 00:20:05.956 "method": "sock_impl_set_options", 00:20:05.956 "params": { 00:20:05.956 "impl_name": "ssl", 00:20:05.956 "recv_buf_size": 4096, 00:20:05.956 "send_buf_size": 4096, 
00:20:05.956 "enable_recv_pipe": true, 00:20:05.956 "enable_quickack": false, 00:20:05.956 "enable_placement_id": 0, 00:20:05.956 "enable_zerocopy_send_server": true, 00:20:05.956 "enable_zerocopy_send_client": false, 00:20:05.956 "zerocopy_threshold": 0, 00:20:05.956 "tls_version": 0, 00:20:05.956 "enable_ktls": false 00:20:05.956 } 00:20:05.956 }, 00:20:05.956 { 00:20:05.956 "method": "sock_impl_set_options", 00:20:05.956 "params": { 00:20:05.956 "impl_name": "posix", 00:20:05.956 "recv_buf_size": 2097152, 00:20:05.956 "send_buf_size": 2097152, 00:20:05.956 "enable_recv_pipe": true, 00:20:05.956 "enable_quickack": false, 00:20:05.956 "enable_placement_id": 0, 00:20:05.956 "enable_zerocopy_send_server": true, 00:20:05.956 "enable_zerocopy_send_client": false, 00:20:05.956 "zerocopy_threshold": 0, 00:20:05.956 "tls_version": 0, 00:20:05.956 "enable_ktls": false 00:20:05.956 } 00:20:05.956 } 00:20:05.956 ] 00:20:05.956 }, 00:20:05.956 { 00:20:05.956 "subsystem": "vmd", 00:20:05.956 "config": [] 00:20:05.956 }, 00:20:05.956 { 00:20:05.956 "subsystem": "accel", 00:20:05.956 "config": [ 00:20:05.956 { 00:20:05.956 "method": "accel_set_options", 00:20:05.956 "params": { 00:20:05.956 "small_cache_size": 128, 00:20:05.956 "large_cache_size": 16, 00:20:05.956 "task_count": 2048, 00:20:05.956 "sequence_count": 2048, 00:20:05.956 "buf_count": 2048 00:20:05.956 } 00:20:05.956 } 00:20:05.956 ] 00:20:05.956 }, 00:20:05.956 { 00:20:05.956 "subsystem": "bdev", 00:20:05.956 "config": [ 00:20:05.956 { 00:20:05.956 "method": "bdev_set_options", 00:20:05.956 "params": { 00:20:05.956 "bdev_io_pool_size": 65535, 00:20:05.956 "bdev_io_cache_size": 256, 00:20:05.956 "bdev_auto_examine": true, 00:20:05.956 "iobuf_small_cache_size": 128, 00:20:05.956 "iobuf_large_cache_size": 16 00:20:05.956 } 00:20:05.956 }, 00:20:05.956 { 00:20:05.956 "method": "bdev_raid_set_options", 00:20:05.956 "params": { 00:20:05.956 "process_window_size_kb": 1024, 00:20:05.956 "process_max_bandwidth_mb_sec": 0 
00:20:05.956 } 00:20:05.956 }, 00:20:05.956 { 00:20:05.956 "method": "bdev_iscsi_set_options", 00:20:05.956 "params": { 00:20:05.956 "timeout_sec": 30 00:20:05.956 } 00:20:05.956 }, 00:20:05.956 { 00:20:05.956 "method": "bdev_nvme_set_options", 00:20:05.956 "params": { 00:20:05.956 "action_on_timeout": "none", 00:20:05.956 "timeout_us": 0, 00:20:05.956 "timeout_admin_us": 0, 00:20:05.956 "keep_alive_timeout_ms": 10000, 00:20:05.956 "arbitration_burst": 0, 00:20:05.956 "low_priority_weight": 0, 00:20:05.956 "medium_priority_weight": 0, 00:20:05.956 "high_priority_weight": 0, 00:20:05.956 "nvme_adminq_poll_period_us": 10000, 00:20:05.956 "nvme_ioq_poll_period_us": 0, 00:20:05.956 "io_queue_requests": 0, 00:20:05.956 "delay_cmd_submit": true, 00:20:05.956 "transport_retry_count": 4, 00:20:05.956 "bdev_retry_count": 3, 00:20:05.956 "transport_ack_timeout": 0, 00:20:05.956 "ctrlr_loss_timeout_sec": 0, 00:20:05.956 "reconnect_delay_sec": 0, 00:20:05.956 "fast_io_fail_timeout_sec": 0, 00:20:05.956 "disable_auto_failback": false, 00:20:05.956 "generate_uuids": false, 00:20:05.956 "transport_tos": 0, 00:20:05.956 "nvme_error_stat": false, 00:20:05.956 "rdma_srq_size": 0, 00:20:05.956 "io_path_stat": false, 00:20:05.957 "allow_accel_sequence": false, 00:20:05.957 "rdma_max_cq_size": 0, 00:20:05.957 "rdma_cm_event_timeout_ms": 0, 00:20:05.957 "dhchap_digests": [ 00:20:05.957 "sha256", 00:20:05.957 "sha384", 00:20:05.957 "sha512" 00:20:05.957 ], 00:20:05.957 "dhchap_dhgroups": [ 00:20:05.957 "null", 00:20:05.957 "ffdhe2048", 00:20:05.957 "ffdhe3072", 00:20:05.957 "ffdhe4096", 00:20:05.957 "ffdhe6144", 00:20:05.957 "ffdhe8192" 00:20:05.957 ] 00:20:05.957 } 00:20:05.957 }, 00:20:05.957 { 00:20:05.957 "method": "bdev_nvme_set_hotplug", 00:20:05.957 "params": { 00:20:05.957 "period_us": 100000, 00:20:05.957 "enable": false 00:20:05.957 } 00:20:05.957 }, 00:20:05.957 { 00:20:05.957 "method": "bdev_malloc_create", 00:20:05.957 "params": { 00:20:05.957 "name": "malloc0", 00:20:05.957 
"num_blocks": 8192, 00:20:05.957 "block_size": 4096, 00:20:05.957 "physical_block_size": 4096, 00:20:05.957 "uuid": "8852c835-9fcb-40cc-9cd4-8dafeed9e0e7", 00:20:05.957 "optimal_io_boundary": 0, 00:20:05.957 "md_size": 0, 00:20:05.957 "dif_type": 0, 00:20:05.957 "dif_is_head_of_md": false, 00:20:05.957 "dif_pi_format": 0 00:20:05.957 } 00:20:05.957 }, 00:20:05.957 { 00:20:05.957 "method": "bdev_wait_for_examine" 00:20:05.957 } 00:20:05.957 ] 00:20:05.957 }, 00:20:05.957 { 00:20:05.957 "subsystem": "nbd", 00:20:05.957 "config": [] 00:20:05.957 }, 00:20:05.957 { 00:20:05.957 "subsystem": "scheduler", 00:20:05.957 "config": [ 00:20:05.957 { 00:20:05.957 "method": "framework_set_scheduler", 00:20:05.957 "params": { 00:20:05.957 "name": "static" 00:20:05.957 } 00:20:05.957 } 00:20:05.957 ] 00:20:05.957 }, 00:20:05.957 { 00:20:05.957 "subsystem": "nvmf", 00:20:05.957 "config": [ 00:20:05.957 { 00:20:05.957 "method": "nvmf_set_config", 00:20:05.957 "params": { 00:20:05.957 "discovery_filter": "match_any", 00:20:05.957 "admin_cmd_passthru": { 00:20:05.957 "identify_ctrlr": false 00:20:05.957 }, 00:20:05.957 "dhchap_digests": [ 00:20:05.957 "sha256", 00:20:05.957 "sha384", 00:20:05.957 "sha512" 00:20:05.957 ], 00:20:05.957 "dhchap_dhgroups": [ 00:20:05.957 "null", 00:20:05.957 "ffdhe2048", 00:20:05.957 "ffdhe3072", 00:20:05.957 "ffdhe4096", 00:20:05.957 "ffdhe6144", 00:20:05.957 "ffdhe8192" 00:20:05.957 ] 00:20:05.957 } 00:20:05.957 }, 00:20:05.957 { 00:20:05.957 "method": "nvmf_set_max_subsystems", 00:20:05.957 "params": { 00:20:05.957 "max_subsystems": 1024 00:20:05.957 } 00:20:05.957 }, 00:20:05.957 { 00:20:05.957 "method": "nvmf_set_crdt", 00:20:05.957 "params": { 00:20:05.957 "crdt1": 0, 00:20:05.957 "crdt2": 0, 00:20:05.957 "crdt3": 0 00:20:05.957 } 00:20:05.957 }, 00:20:05.957 { 00:20:05.957 "method": "nvmf_create_transport", 00:20:05.957 "params": { 00:20:05.957 "trtype": "TCP", 00:20:05.957 "max_queue_depth": 128, 00:20:05.957 "max_io_qpairs_per_ctrlr": 127, 
00:20:05.957 "in_capsule_data_size": 4096, 00:20:05.957 "max_io_size": 131072, 00:20:05.957 "io_unit_size": 131072, 00:20:05.957 "max_aq_depth": 128, 00:20:05.957 "num_shared_buffers": 511, 00:20:05.957 "buf_cache_size": 4294967295, 00:20:05.957 "dif_insert_or_strip": false, 00:20:05.957 "zcopy": false, 00:20:05.957 "c2h_success": false, 00:20:05.957 "sock_priority": 0, 00:20:05.957 "abort_timeout_sec": 1, 00:20:05.957 "ack_timeout": 0, 00:20:05.957 "data_wr_pool_size": 0 00:20:05.957 } 00:20:05.957 }, 00:20:05.957 { 00:20:05.957 "method": "nvmf_create_subsystem", 00:20:05.957 "params": { 00:20:05.957 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:05.957 "allow_any_host": false, 00:20:05.957 "serial_number": "00000000000000000000", 00:20:05.957 "model_number": "SPDK bdev Controller", 00:20:05.957 "max_namespaces": 32, 00:20:05.957 "min_cntlid": 1, 00:20:05.957 "max_cntlid": 65519, 00:20:05.957 "ana_reporting": false 00:20:05.957 } 00:20:05.957 }, 00:20:05.957 { 00:20:05.957 "method": "nvmf_subsystem_add_host", 00:20:05.957 "params": { 00:20:05.957 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:05.957 "host": "nqn.2016-06.io.spdk:host1", 00:20:05.957 "psk": "key0" 00:20:05.957 } 00:20:05.957 }, 00:20:05.957 { 00:20:05.957 "method": "nvmf_subsystem_add_ns", 00:20:05.957 "params": { 00:20:05.957 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:05.957 "namespace": { 00:20:05.957 "nsid": 1, 00:20:05.957 "bdev_name": "malloc0", 00:20:05.957 "nguid": "8852C8359FCB40CC9CD48DAFEED9E0E7", 00:20:05.957 "uuid": "8852c835-9fcb-40cc-9cd4-8dafeed9e0e7", 00:20:05.957 "no_auto_visible": false 00:20:05.957 } 00:20:05.957 } 00:20:05.957 }, 00:20:05.957 { 00:20:05.957 "method": "nvmf_subsystem_add_listener", 00:20:05.957 "params": { 00:20:05.957 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:05.957 "listen_address": { 00:20:05.957 "trtype": "TCP", 00:20:05.957 "adrfam": "IPv4", 00:20:05.957 "traddr": "10.0.0.2", 00:20:05.957 "trsvcid": "4420" 00:20:05.957 }, 00:20:05.957 "secure_channel": false, 
00:20:05.957 "sock_impl": "ssl" 00:20:05.957 } 00:20:05.957 } 00:20:05.957 ] 00:20:05.957 } 00:20:05.957 ] 00:20:05.957 }' 00:20:05.957 08:18:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # nvmfappstart -c /dev/fd/62 00:20:05.957 08:18:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:05.957 08:18:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:05.957 08:18:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:05.957 08:18:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=1386508 00:20:05.957 08:18:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:20:05.957 08:18:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 1386508 00:20:05.957 08:18:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1386508 ']' 00:20:05.957 08:18:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:05.957 08:18:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:05.957 08:18:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:05.957 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:05.957 08:18:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:05.957 08:18:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:06.216 [2024-11-28 08:18:48.257291] Starting SPDK v25.01-pre git sha1 27aaaa748 / DPDK 24.03.0 initialization... 
00:20:06.216 [2024-11-28 08:18:48.257343] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:06.216 [2024-11-28 08:18:48.323590] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:06.216 [2024-11-28 08:18:48.366309] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:06.216 [2024-11-28 08:18:48.366343] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:06.216 [2024-11-28 08:18:48.366351] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:06.216 [2024-11-28 08:18:48.366358] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:06.216 [2024-11-28 08:18:48.366363] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:06.216 [2024-11-28 08:18:48.366959] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:06.476 [2024-11-28 08:18:48.581803] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:06.476 [2024-11-28 08:18:48.613839] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:06.476 [2024-11-28 08:18:48.614076] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:07.045 08:18:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:07.045 08:18:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:07.045 08:18:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:07.045 08:18:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:07.045 08:18:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:07.045 08:18:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:07.045 08:18:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@276 -- # bdevperf_pid=1386749 00:20:07.045 08:18:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # waitforlisten 1386749 /var/tmp/bdevperf.sock 00:20:07.045 08:18:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1386749 ']' 00:20:07.045 08:18:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:07.045 08:18:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:20:07.045 08:18:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local 
max_retries=100 00:20:07.045 08:18:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:07.045 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:07.045 08:18:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # echo '{ 00:20:07.045 "subsystems": [ 00:20:07.045 { 00:20:07.045 "subsystem": "keyring", 00:20:07.045 "config": [ 00:20:07.045 { 00:20:07.045 "method": "keyring_file_add_key", 00:20:07.045 "params": { 00:20:07.045 "name": "key0", 00:20:07.045 "path": "/tmp/tmp.FsRSIrfvHn" 00:20:07.045 } 00:20:07.045 } 00:20:07.045 ] 00:20:07.045 }, 00:20:07.045 { 00:20:07.045 "subsystem": "iobuf", 00:20:07.045 "config": [ 00:20:07.045 { 00:20:07.045 "method": "iobuf_set_options", 00:20:07.045 "params": { 00:20:07.045 "small_pool_count": 8192, 00:20:07.045 "large_pool_count": 1024, 00:20:07.045 "small_bufsize": 8192, 00:20:07.045 "large_bufsize": 135168, 00:20:07.045 "enable_numa": false 00:20:07.045 } 00:20:07.045 } 00:20:07.045 ] 00:20:07.045 }, 00:20:07.045 { 00:20:07.045 "subsystem": "sock", 00:20:07.045 "config": [ 00:20:07.045 { 00:20:07.045 "method": "sock_set_default_impl", 00:20:07.045 "params": { 00:20:07.045 "impl_name": "posix" 00:20:07.045 } 00:20:07.045 }, 00:20:07.045 { 00:20:07.045 "method": "sock_impl_set_options", 00:20:07.045 "params": { 00:20:07.045 "impl_name": "ssl", 00:20:07.045 "recv_buf_size": 4096, 00:20:07.045 "send_buf_size": 4096, 00:20:07.045 "enable_recv_pipe": true, 00:20:07.045 "enable_quickack": false, 00:20:07.045 "enable_placement_id": 0, 00:20:07.045 "enable_zerocopy_send_server": true, 00:20:07.045 "enable_zerocopy_send_client": false, 00:20:07.045 "zerocopy_threshold": 0, 00:20:07.045 "tls_version": 0, 00:20:07.045 "enable_ktls": false 00:20:07.045 } 00:20:07.045 }, 00:20:07.046 { 00:20:07.046 "method": "sock_impl_set_options", 00:20:07.046 "params": { 
00:20:07.046 "impl_name": "posix", 00:20:07.046 "recv_buf_size": 2097152, 00:20:07.046 "send_buf_size": 2097152, 00:20:07.046 "enable_recv_pipe": true, 00:20:07.046 "enable_quickack": false, 00:20:07.046 "enable_placement_id": 0, 00:20:07.046 "enable_zerocopy_send_server": true, 00:20:07.046 "enable_zerocopy_send_client": false, 00:20:07.046 "zerocopy_threshold": 0, 00:20:07.046 "tls_version": 0, 00:20:07.046 "enable_ktls": false 00:20:07.046 } 00:20:07.046 } 00:20:07.046 ] 00:20:07.046 }, 00:20:07.046 { 00:20:07.046 "subsystem": "vmd", 00:20:07.046 "config": [] 00:20:07.046 }, 00:20:07.046 { 00:20:07.046 "subsystem": "accel", 00:20:07.046 "config": [ 00:20:07.046 { 00:20:07.046 "method": "accel_set_options", 00:20:07.046 "params": { 00:20:07.046 "small_cache_size": 128, 00:20:07.046 "large_cache_size": 16, 00:20:07.046 "task_count": 2048, 00:20:07.046 "sequence_count": 2048, 00:20:07.046 "buf_count": 2048 00:20:07.046 } 00:20:07.046 } 00:20:07.046 ] 00:20:07.046 }, 00:20:07.046 { 00:20:07.046 "subsystem": "bdev", 00:20:07.046 "config": [ 00:20:07.046 { 00:20:07.046 "method": "bdev_set_options", 00:20:07.046 "params": { 00:20:07.046 "bdev_io_pool_size": 65535, 00:20:07.046 "bdev_io_cache_size": 256, 00:20:07.046 "bdev_auto_examine": true, 00:20:07.046 "iobuf_small_cache_size": 128, 00:20:07.046 "iobuf_large_cache_size": 16 00:20:07.046 } 00:20:07.046 }, 00:20:07.046 { 00:20:07.046 "method": "bdev_raid_set_options", 00:20:07.046 "params": { 00:20:07.046 "process_window_size_kb": 1024, 00:20:07.046 "process_max_bandwidth_mb_sec": 0 00:20:07.046 } 00:20:07.046 }, 00:20:07.046 { 00:20:07.046 "method": "bdev_iscsi_set_options", 00:20:07.046 "params": { 00:20:07.046 "timeout_sec": 30 00:20:07.046 } 00:20:07.046 }, 00:20:07.046 { 00:20:07.046 "method": "bdev_nvme_set_options", 00:20:07.046 "params": { 00:20:07.046 "action_on_timeout": "none", 00:20:07.046 "timeout_us": 0, 00:20:07.046 "timeout_admin_us": 0, 00:20:07.046 "keep_alive_timeout_ms": 10000, 00:20:07.046 
"arbitration_burst": 0, 00:20:07.046 "low_priority_weight": 0, 00:20:07.046 "medium_priority_weight": 0, 00:20:07.046 "high_priority_weight": 0, 00:20:07.046 "nvme_adminq_poll_period_us": 10000, 00:20:07.046 "nvme_ioq_poll_period_us": 0, 00:20:07.046 "io_queue_requests": 512, 00:20:07.046 "delay_cmd_submit": true, 00:20:07.046 "transport_retry_count": 4, 00:20:07.046 "bdev_retry_count": 3, 00:20:07.046 "transport_ack_timeout": 0, 00:20:07.046 "ctrlr_loss_timeout_sec": 0, 00:20:07.046 "reconnect_delay_sec": 0, 00:20:07.046 "fast_io_fail_timeout_sec": 0, 00:20:07.046 "disable_auto_failback": false, 00:20:07.046 "generate_uuids": false, 00:20:07.046 "transport_tos": 0, 00:20:07.046 "nvme_error_stat": false, 00:20:07.046 "rdma_srq_size": 0, 00:20:07.046 "io_path_stat": false, 00:20:07.046 "allow_accel_sequence": false, 00:20:07.046 "rdma_max_cq_size": 0, 00:20:07.046 "rdma_cm_event_timeout_ms": 0, 00:20:07.046 "dhchap_digests": [ 00:20:07.046 "sha256", 00:20:07.046 "sha384", 00:20:07.046 "sha512" 00:20:07.046 ], 00:20:07.046 "dhchap_dhgroups": [ 00:20:07.046 "null", 00:20:07.046 "ffdhe2048", 00:20:07.046 "ffdhe3072", 00:20:07.046 "ffdhe4096", 00:20:07.046 "ffdhe6144", 00:20:07.046 "ffdhe8192" 00:20:07.046 ] 00:20:07.046 } 00:20:07.046 }, 00:20:07.046 { 00:20:07.046 "method": "bdev_nvme_attach_controller", 00:20:07.046 "params": { 00:20:07.046 "name": "nvme0", 00:20:07.046 "trtype": "TCP", 00:20:07.046 "adrfam": "IPv4", 00:20:07.046 "traddr": "10.0.0.2", 00:20:07.046 "trsvcid": "4420", 00:20:07.046 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:07.046 "prchk_reftag": false, 00:20:07.046 "prchk_guard": false, 00:20:07.046 "ctrlr_loss_timeout_sec": 0, 00:20:07.046 "reconnect_delay_sec": 0, 00:20:07.046 "fast_io_fail_timeout_sec": 0, 00:20:07.046 "psk": "key0", 00:20:07.046 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:07.046 "hdgst": false, 00:20:07.046 "ddgst": false, 00:20:07.046 "multipath": "multipath" 00:20:07.046 } 00:20:07.046 }, 00:20:07.046 { 00:20:07.046 
"method": "bdev_nvme_set_hotplug", 00:20:07.046 "params": { 00:20:07.046 "period_us": 100000, 00:20:07.046 "enable": false 00:20:07.046 } 00:20:07.046 }, 00:20:07.046 { 00:20:07.046 "method": "bdev_enable_histogram", 00:20:07.046 "params": { 00:20:07.046 "name": "nvme0n1", 00:20:07.046 "enable": true 00:20:07.046 } 00:20:07.046 }, 00:20:07.046 { 00:20:07.046 "method": "bdev_wait_for_examine" 00:20:07.046 } 00:20:07.046 ] 00:20:07.046 }, 00:20:07.046 { 00:20:07.046 "subsystem": "nbd", 00:20:07.046 "config": [] 00:20:07.046 } 00:20:07.046 ] 00:20:07.046 }' 00:20:07.046 08:18:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:07.046 08:18:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:07.046 [2024-11-28 08:18:49.173403] Starting SPDK v25.01-pre git sha1 27aaaa748 / DPDK 24.03.0 initialization... 00:20:07.046 [2024-11-28 08:18:49.173449] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1386749 ] 00:20:07.046 [2024-11-28 08:18:49.235431] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:07.046 [2024-11-28 08:18:49.278486] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:07.306 [2024-11-28 08:18:49.433537] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:07.874 08:18:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:07.874 08:18:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:07.874 08:18:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # jq -r '.[].name' 00:20:07.874 08:18:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:20:08.132 08:18:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:08.132 08:18:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:08.132 Running I/O for 1 seconds... 00:20:09.070 5225.00 IOPS, 20.41 MiB/s 00:20:09.070 Latency(us) 00:20:09.070 [2024-11-28T07:18:51.339Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:09.070 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:20:09.070 Verification LBA range: start 0x0 length 0x2000 00:20:09.070 nvme0n1 : 1.02 5257.24 20.54 0.00 0.00 24139.22 6525.11 27354.16 00:20:09.070 [2024-11-28T07:18:51.339Z] =================================================================================================================== 00:20:09.070 [2024-11-28T07:18:51.339Z] Total : 5257.24 20.54 0.00 0.00 24139.22 6525.11 27354.16 00:20:09.070 { 00:20:09.070 "results": [ 00:20:09.070 { 00:20:09.070 "job": "nvme0n1", 00:20:09.070 "core_mask": "0x2", 00:20:09.070 "workload": "verify", 00:20:09.070 "status": "finished", 00:20:09.070 "verify_range": { 00:20:09.070 "start": 0, 00:20:09.070 "length": 8192 00:20:09.070 }, 00:20:09.070 "queue_depth": 128, 00:20:09.070 "io_size": 4096, 00:20:09.070 "runtime": 1.018215, 00:20:09.070 "iops": 5257.239384609341, 00:20:09.070 "mibps": 20.536091346130238, 00:20:09.070 "io_failed": 0, 00:20:09.070 "io_timeout": 0, 00:20:09.070 "avg_latency_us": 24139.222859185018, 00:20:09.070 "min_latency_us": 6525.106086956522, 00:20:09.070 "max_latency_us": 27354.15652173913 00:20:09.070 } 00:20:09.070 ], 00:20:09.070 "core_count": 1 00:20:09.070 } 00:20:09.329 08:18:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@282 -- # trap - SIGINT SIGTERM EXIT 
00:20:09.329 08:18:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@283 -- # cleanup 00:20:09.329 08:18:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:20:09.329 08:18:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@812 -- # type=--id 00:20:09.329 08:18:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@813 -- # id=0 00:20:09.329 08:18:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:20:09.329 08:18:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:20:09.329 08:18:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:20:09.329 08:18:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:20:09.329 08:18:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@824 -- # for n in $shm_files 00:20:09.329 08:18:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:20:09.329 nvmf_trace.0 00:20:09.329 08:18:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@827 -- # return 0 00:20:09.329 08:18:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 1386749 00:20:09.329 08:18:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1386749 ']' 00:20:09.329 08:18:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1386749 00:20:09.329 08:18:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:09.329 08:18:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:09.329 08:18:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- 
# ps --no-headers -o comm= 1386749 00:20:09.329 08:18:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:09.329 08:18:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:09.329 08:18:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1386749' 00:20:09.329 killing process with pid 1386749 00:20:09.329 08:18:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1386749 00:20:09.329 Received shutdown signal, test time was about 1.000000 seconds 00:20:09.329 00:20:09.329 Latency(us) 00:20:09.329 [2024-11-28T07:18:51.598Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:09.329 [2024-11-28T07:18:51.598Z] =================================================================================================================== 00:20:09.329 [2024-11-28T07:18:51.598Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:09.329 08:18:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1386749 00:20:09.589 08:18:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:20:09.589 08:18:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:09.589 08:18:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # sync 00:20:09.589 08:18:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:09.589 08:18:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set +e 00:20:09.589 08:18:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:09.589 08:18:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:09.589 rmmod nvme_tcp 00:20:09.589 rmmod nvme_fabrics 00:20:09.589 rmmod nvme_keyring 00:20:09.589 08:18:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:09.589 08:18:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@128 -- # set -e 00:20:09.589 08:18:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@129 -- # return 0 00:20:09.589 08:18:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@517 -- # '[' -n 1386508 ']' 00:20:09.589 08:18:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@518 -- # killprocess 1386508 00:20:09.590 08:18:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1386508 ']' 00:20:09.590 08:18:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1386508 00:20:09.590 08:18:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:09.590 08:18:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:09.590 08:18:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1386508 00:20:09.590 08:18:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:09.590 08:18:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:09.590 08:18:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1386508' 00:20:09.590 killing process with pid 1386508 00:20:09.590 08:18:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1386508 00:20:09.590 08:18:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1386508 00:20:09.850 08:18:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:09.850 08:18:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:09.850 08:18:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:09.850 08:18:51 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@297 -- # iptr 00:20:09.850 08:18:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:09.850 08:18:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-save 00:20:09.850 08:18:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-restore 00:20:09.850 08:18:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:09.850 08:18:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@302 -- # remove_spdk_ns 00:20:09.850 08:18:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:09.850 08:18:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:09.850 08:18:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:11.756 08:18:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:20:11.756 08:18:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.iFpordaGHp /tmp/tmp.Ene3nLod55 /tmp/tmp.FsRSIrfvHn 00:20:11.756 00:20:11.756 real 1m17.791s 00:20:11.756 user 1m59.792s 00:20:11.756 sys 0m29.592s 00:20:11.756 08:18:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:11.756 08:18:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:11.756 ************************************ 00:20:11.756 END TEST nvmf_tls 00:20:11.756 ************************************ 00:20:12.015 08:18:54 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:20:12.015 08:18:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:20:12.016 
08:18:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:12.016 08:18:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:12.016 ************************************ 00:20:12.016 START TEST nvmf_fips 00:20:12.016 ************************************ 00:20:12.016 08:18:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:20:12.016 * Looking for test storage... 00:20:12.016 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:20:12.016 08:18:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:20:12.016 08:18:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1693 -- # lcov --version 00:20:12.016 08:18:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:20:12.016 08:18:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:20:12.016 08:18:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:12.016 08:18:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:12.016 08:18:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:12.016 08:18:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:20:12.016 08:18:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:20:12.016 08:18:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:20:12.016 08:18:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:20:12.016 08:18:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=<' 00:20:12.016 08:18:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
scripts/common.sh@340 -- # ver1_l=2 00:20:12.016 08:18:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=1 00:20:12.016 08:18:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:12.016 08:18:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:20:12.016 08:18:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:20:12.016 08:18:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:12.016 08:18:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:12.016 08:18:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:20:12.016 08:18:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:20:12.016 08:18:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:12.016 08:18:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:20:12.016 08:18:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:20:12.016 08:18:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 2 00:20:12.016 08:18:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=2 00:20:12.016 08:18:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:12.016 08:18:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 2 00:20:12.016 08:18:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=2 00:20:12.016 08:18:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:12.016 08:18:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:12.016 08:18:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # 
return 0 00:20:12.016 08:18:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:12.016 08:18:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:20:12.016 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:12.016 --rc genhtml_branch_coverage=1 00:20:12.016 --rc genhtml_function_coverage=1 00:20:12.016 --rc genhtml_legend=1 00:20:12.016 --rc geninfo_all_blocks=1 00:20:12.016 --rc geninfo_unexecuted_blocks=1 00:20:12.016 00:20:12.016 ' 00:20:12.016 08:18:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:20:12.016 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:12.016 --rc genhtml_branch_coverage=1 00:20:12.016 --rc genhtml_function_coverage=1 00:20:12.016 --rc genhtml_legend=1 00:20:12.016 --rc geninfo_all_blocks=1 00:20:12.016 --rc geninfo_unexecuted_blocks=1 00:20:12.016 00:20:12.016 ' 00:20:12.016 08:18:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:20:12.016 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:12.016 --rc genhtml_branch_coverage=1 00:20:12.016 --rc genhtml_function_coverage=1 00:20:12.016 --rc genhtml_legend=1 00:20:12.016 --rc geninfo_all_blocks=1 00:20:12.016 --rc geninfo_unexecuted_blocks=1 00:20:12.016 00:20:12.016 ' 00:20:12.016 08:18:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:20:12.016 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:12.016 --rc genhtml_branch_coverage=1 00:20:12.016 --rc genhtml_function_coverage=1 00:20:12.016 --rc genhtml_legend=1 00:20:12.016 --rc geninfo_all_blocks=1 00:20:12.016 --rc geninfo_unexecuted_blocks=1 00:20:12.016 00:20:12.016 ' 00:20:12.016 08:18:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:12.016 08:18:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:20:12.016 08:18:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:12.016 08:18:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:12.016 08:18:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:12.016 08:18:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:12.016 08:18:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:12.016 08:18:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:12.016 08:18:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:12.016 08:18:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:12.016 08:18:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:12.016 08:18:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:12.016 08:18:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:20:12.016 08:18:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:20:12.016 08:18:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:12.016 08:18:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:12.016 08:18:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:12.016 08:18:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:12.016 08:18:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:12.016 08:18:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@15 -- # shopt -s extglob 00:20:12.016 08:18:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:12.016 08:18:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:12.016 08:18:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:12.016 08:18:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:12.016 08:18:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:12.016 08:18:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:12.016 08:18:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:20:12.016 08:18:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:12.016 08:18:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # : 0 00:20:12.016 08:18:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:12.016 08:18:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:12.016 08:18:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:12.016 08:18:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:12.016 08:18:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:12.016 08:18:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:12.016 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:12.016 08:18:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:12.017 08:18:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:12.017 08:18:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:12.277 08:18:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:20:12.277 08:18:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
fips/fips.sh@90 -- # check_openssl_version 00:20:12.277 08:18:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@84 -- # local target=3.0.0 00:20:12.277 08:18:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # openssl version 00:20:12.277 08:18:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # awk '{print $2}' 00:20:12.277 08:18:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # ge 3.1.1 3.0.0 00:20:12.277 08:18:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@376 -- # cmp_versions 3.1.1 '>=' 3.0.0 00:20:12.277 08:18:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:12.277 08:18:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:12.277 08:18:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:20:12.277 08:18:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:20:12.277 08:18:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:20:12.277 08:18:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:20:12.277 08:18:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=>=' 00:20:12.277 08:18:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=3 00:20:12.277 08:18:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=3 00:20:12.277 08:18:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:12.277 08:18:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:20:12.277 08:18:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@348 -- # : 1 00:20:12.277 08:18:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:12.277 08:18:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:12.277 08:18:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 3 00:20:12.277 08:18:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:20:12.277 08:18:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:20:12.277 08:18:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:20:12.277 08:18:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=3 00:20:12.277 08:18:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 3 00:20:12.277 08:18:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:20:12.277 08:18:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:20:12.277 08:18:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:20:12.277 08:18:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=3 00:20:12.277 08:18:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:12.277 08:18:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:12.277 08:18:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v++ )) 00:20:12.277 08:18:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:12.277 08:18:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:20:12.277 08:18:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:20:12.277 08:18:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:12.277 08:18:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:20:12.277 08:18:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:20:12.277 08:18:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 0 00:20:12.277 08:18:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=0 00:20:12.277 08:18:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 0 =~ ^[0-9]+$ ]] 00:20:12.277 08:18:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 0 00:20:12.277 08:18:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=0 00:20:12.277 08:18:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:12.277 08:18:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # return 0 00:20:12.277 08:18:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # openssl info -modulesdir 00:20:12.277 08:18:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # [[ ! 
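The long `scripts/common.sh` run above is the `ge`/`cmp_versions` helper comparing `3.1.1 >= 3.0.0`: both versions are split on separators into arrays and compared numerically field by field, with missing fields treated as zero. A compact sketch of that logic (assumes purely numeric fields, which is all the trace exercises):

```shell
# Field-by-field numeric version comparison, as traced in
# scripts/common.sh. ge VER1 VER2 succeeds when VER1 >= VER2.
ge() {
    local -a ver1 ver2
    local v n
    IFS=.- read -ra ver1 <<< "$1"
    IFS=.- read -ra ver2 <<< "$2"
    # Iterate over the longer of the two field lists.
    n=${#ver1[@]}
    (( ${#ver2[@]} > n )) && n=${#ver2[@]}
    for (( v = 0; v < n; v++ )); do
        # Absent fields default to 0, so 2.0 == 2.0.0.
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 0
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 1
    done
    return 0  # all fields equal
}

ge 3.1.1 3.0.0 && echo "3.1.1 >= 3.0.0"
```

This is why the FIPS test proceeds here: the installed OpenSSL reports 3.1.1, which clears the 3.0.0 floor checked by `check_openssl_version`.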
-f /usr/lib64/ossl-modules/fips.so ]] 00:20:12.277 08:18:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # openssl fipsinstall -help 00:20:12.277 08:18:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:20:12.277 08:18:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@102 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:20:12.277 08:18:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # export callback=build_openssl_config 00:20:12.277 08:18:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # callback=build_openssl_config 00:20:12.277 08:18:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # build_openssl_config 00:20:12.277 08:18:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@38 -- # cat 00:20:12.277 08:18:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # [[ ! 
-t 0 ]] 00:20:12.277 08:18:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@59 -- # cat - 00:20:12.277 08:18:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf 00:20:12.278 08:18:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf 00:20:12.278 08:18:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # mapfile -t providers 00:20:12.278 08:18:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # openssl list -providers 00:20:12.278 08:18:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # grep name 00:20:12.278 08:18:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # (( 2 != 2 )) 00:20:12.278 08:18:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: openssl base provider != *base* ]] 00:20:12.278 08:18:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:20:12.278 08:18:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62 00:20:12.278 08:18:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@652 -- # local es=0 00:20:12.278 08:18:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # : 00:20:12.278 08:18:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@654 -- # valid_exec_arg openssl md5 /dev/fd/62 00:20:12.278 08:18:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@640 -- # local arg=openssl 00:20:12.278 08:18:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:12.278 08:18:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # type -t openssl 00:20:12.278 08:18:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:12.278 08:18:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
common/autotest_common.sh@646 -- # type -P openssl 00:20:12.278 08:18:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:12.278 08:18:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # arg=/usr/bin/openssl 00:20:12.278 08:18:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # [[ -x /usr/bin/openssl ]] 00:20:12.278 08:18:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # openssl md5 /dev/fd/62 00:20:12.278 Error setting digest 00:20:12.278 40828B69E27F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:341:Global default library context, Algorithm (MD5 : 95), Properties () 00:20:12.278 40828B69E27F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:272: 00:20:12.278 08:18:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # es=1 00:20:12.278 08:18:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:12.278 08:18:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:12.278 08:18:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:12.278 08:18:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmftestinit 00:20:12.278 08:18:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:12.278 08:18:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:12.278 08:18:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:12.278 08:18:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:12.278 08:18:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:12.278 08:18:54 
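The `openssl md5` failure above is intentional: the test wraps it in SPDK's `NOT` helper, which runs a command that is *expected* to fail, captures the exit status (the `es` variable in the trace), and succeeds only when the command really did fail — here proving the FIPS provider rejects MD5. A minimal sketch of that wrapper (simplified from `common/autotest_common.sh`):

```shell
# Expected-failure wrapper: succeed exactly when the command fails,
# capturing the status with || so `set -e` cannot abort the test.
NOT() {
    local es=0
    "$@" || es=$?
    (( es != 0 ))
}

NOT false && echo "command failed, as expected"
```

The real helper also validates that the command is executable first (`valid_exec_arg` in the trace) and treats statuses above 128, i.e. deaths by signal, as a distinct case.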
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:12.278 08:18:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:12.278 08:18:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:12.278 08:18:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:20:12.278 08:18:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:20:12.278 08:18:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@309 -- # xtrace_disable 00:20:12.278 08:18:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:18.849 08:19:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:18.849 08:19:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # pci_devs=() 00:20:18.849 08:19:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:18.849 08:19:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:18.849 08:19:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:18.849 08:19:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:18.849 08:19:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:18.849 08:19:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # net_devs=() 00:20:18.849 08:19:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:18.849 08:19:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # e810=() 00:20:18.849 08:19:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # local -ga e810 00:20:18.849 08:19:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/common.sh@321 -- # x722=() 00:20:18.849 08:19:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # local -ga x722 00:20:18.849 08:19:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # mlx=() 00:20:18.849 08:19:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # local -ga mlx 00:20:18.849 08:19:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:18.849 08:19:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:18.849 08:19:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:18.849 08:19:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:18.849 08:19:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:18.849 08:19:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:18.849 08:19:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:18.849 08:19:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:18.849 08:19:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:18.849 08:19:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:18.849 08:19:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:18.849 08:19:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:18.849 08:19:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 
00:20:18.849 08:19:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:18.849 08:19:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:18.849 08:19:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:18.849 08:19:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:18.849 08:19:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:18.849 08:19:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:18.849 08:19:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:20:18.849 Found 0000:86:00.0 (0x8086 - 0x159b) 00:20:18.849 08:19:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:18.849 08:19:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:18.849 08:19:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:18.849 08:19:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:18.849 08:19:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:18.849 08:19:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:18.849 08:19:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:20:18.849 Found 0000:86:00.1 (0x8086 - 0x159b) 00:20:18.849 08:19:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:18.849 08:19:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:18.849 08:19:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 
00:20:18.849 08:19:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:18.849 08:19:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:18.849 08:19:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:18.849 08:19:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:18.849 08:19:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:18.849 08:19:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:18.849 08:19:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:18.849 08:19:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:18.849 08:19:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:18.849 08:19:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:18.849 08:19:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:18.849 08:19:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:18.849 08:19:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:20:18.849 Found net devices under 0000:86:00.0: cvl_0_0 00:20:18.849 08:19:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:18.849 08:19:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:18.849 08:19:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:18.849 08:19:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 
00:20:18.849 08:19:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:18.849 08:19:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:18.849 08:19:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:18.849 08:19:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:18.849 08:19:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:20:18.849 Found net devices under 0000:86:00.1: cvl_0_1 00:20:18.849 08:19:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:18.849 08:19:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:18.849 08:19:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # is_hw=yes 00:20:18.849 08:19:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:20:18.849 08:19:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:20:18.849 08:19:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:20:18.850 08:19:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:18.850 08:19:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:18.850 08:19:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:18.850 08:19:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:18.850 08:19:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:20:18.850 08:19:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:18.850 08:19:00 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:18.850 08:19:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:20:18.850 08:19:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:20:18.850 08:19:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:18.850 08:19:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:18.850 08:19:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:20:18.850 08:19:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:20:18.850 08:19:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:20:18.850 08:19:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:18.850 08:19:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:18.850 08:19:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:18.850 08:19:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:20:18.850 08:19:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:18.850 08:19:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:18.850 08:19:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:18.850 08:19:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 
-m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:20:18.850 08:19:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:20:18.850 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:18.850 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.480 ms 00:20:18.850 00:20:18.850 --- 10.0.0.2 ping statistics --- 00:20:18.850 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:18.850 rtt min/avg/max/mdev = 0.480/0.480/0.480/0.000 ms 00:20:18.850 08:19:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:18.850 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:18.850 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.237 ms 00:20:18.850 00:20:18.850 --- 10.0.0.1 ping statistics --- 00:20:18.850 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:18.850 rtt min/avg/max/mdev = 0.237/0.237/0.237/0.000 ms 00:20:18.850 08:19:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:18.850 08:19:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@450 -- # return 0 00:20:18.850 08:19:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:18.850 08:19:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:18.850 08:19:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:18.850 08:19:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:18.850 08:19:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:18.850 08:19:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:18.850 08:19:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:18.850 08:19:00 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@132 -- # nvmfappstart -m 0x2 00:20:18.850 08:19:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:18.850 08:19:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:18.850 08:19:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:18.850 08:19:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:18.850 08:19:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@509 -- # nvmfpid=1390771 00:20:18.850 08:19:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@510 -- # waitforlisten 1390771 00:20:18.850 08:19:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 1390771 ']' 00:20:18.850 08:19:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:18.850 08:19:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:18.850 08:19:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:18.850 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:18.850 08:19:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:18.850 08:19:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:18.850 [2024-11-28 08:19:00.427627] Starting SPDK v25.01-pre git sha1 27aaaa748 / DPDK 24.03.0 initialization... 
00:20:18.850 [2024-11-28 08:19:00.427681] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:18.850 [2024-11-28 08:19:00.492012] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:18.850 [2024-11-28 08:19:00.533754] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:18.850 [2024-11-28 08:19:00.533788] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:18.850 [2024-11-28 08:19:00.533796] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:18.850 [2024-11-28 08:19:00.533802] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:18.850 [2024-11-28 08:19:00.533807] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:18.850 [2024-11-28 08:19:00.534378] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:18.850 08:19:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:18.850 08:19:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:20:18.850 08:19:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:18.850 08:19:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:18.850 08:19:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:18.850 08:19:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:18.850 08:19:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@134 -- # trap cleanup EXIT 00:20:18.850 08:19:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:20:18.850 08:19:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # mktemp -t spdk-psk.XXX 00:20:18.850 08:19:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # key_path=/tmp/spdk-psk.oEz 00:20:18.850 08:19:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:20:18.850 08:19:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@140 -- # chmod 0600 /tmp/spdk-psk.oEz 00:20:18.850 08:19:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /tmp/spdk-psk.oEz 00:20:18.850 08:19:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/tmp/spdk-psk.oEz 00:20:18.850 08:19:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:20:18.850 [2024-11-28 08:19:00.848623] tcp.c: 
738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:18.850 [2024-11-28 08:19:00.864633] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:18.850 [2024-11-28 08:19:00.864836] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:18.850 malloc0 00:20:18.850 08:19:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:18.850 08:19:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # bdevperf_pid=1390804 00:20:18.850 08:19:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@146 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:18.850 08:19:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@149 -- # waitforlisten 1390804 /var/tmp/bdevperf.sock 00:20:18.850 08:19:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 1390804 ']' 00:20:18.850 08:19:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:18.850 08:19:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:18.850 08:19:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:18.850 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:18.850 08:19:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:18.850 08:19:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:18.850 [2024-11-28 08:19:00.984978] Starting SPDK v25.01-pre git sha1 27aaaa748 / DPDK 24.03.0 initialization... 
00:20:18.850 [2024-11-28 08:19:00.985031] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1390804 ] 00:20:18.850 [2024-11-28 08:19:01.044257] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:18.850 [2024-11-28 08:19:01.086579] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:19.110 08:19:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:19.110 08:19:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:20:19.110 08:19:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/spdk-psk.oEz 00:20:19.369 08:19:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@152 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:20:19.369 [2024-11-28 08:19:01.563812] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:19.628 TLSTESTn1 00:20:19.628 08:19:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@156 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:19.628 Running I/O for 10 seconds... 
00:20:21.501 5437.00 IOPS, 21.24 MiB/s [2024-11-28T07:19:05.149Z] 5482.00 IOPS, 21.41 MiB/s [2024-11-28T07:19:06.086Z] 5480.00 IOPS, 21.41 MiB/s [2024-11-28T07:19:07.022Z] 5383.25 IOPS, 21.03 MiB/s [2024-11-28T07:19:07.990Z] 5132.80 IOPS, 20.05 MiB/s [2024-11-28T07:19:08.928Z] 4961.67 IOPS, 19.38 MiB/s [2024-11-28T07:19:09.866Z] 4839.57 IOPS, 18.90 MiB/s [2024-11-28T07:19:10.803Z] 4759.12 IOPS, 18.59 MiB/s [2024-11-28T07:19:12.182Z] 4697.67 IOPS, 18.35 MiB/s [2024-11-28T07:19:12.182Z] 4629.40 IOPS, 18.08 MiB/s 00:20:29.913 Latency(us) 00:20:29.913 [2024-11-28T07:19:12.182Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:29.913 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:20:29.913 Verification LBA range: start 0x0 length 0x2000 00:20:29.913 TLSTESTn1 : 10.02 4631.86 18.09 0.00 0.00 27593.89 7038.00 50149.29 00:20:29.913 [2024-11-28T07:19:12.182Z] =================================================================================================================== 00:20:29.913 [2024-11-28T07:19:12.182Z] Total : 4631.86 18.09 0.00 0.00 27593.89 7038.00 50149.29 00:20:29.913 { 00:20:29.913 "results": [ 00:20:29.913 { 00:20:29.913 "job": "TLSTESTn1", 00:20:29.913 "core_mask": "0x4", 00:20:29.913 "workload": "verify", 00:20:29.913 "status": "finished", 00:20:29.913 "verify_range": { 00:20:29.913 "start": 0, 00:20:29.913 "length": 8192 00:20:29.913 }, 00:20:29.913 "queue_depth": 128, 00:20:29.913 "io_size": 4096, 00:20:29.913 "runtime": 10.022318, 00:20:29.913 "iops": 4631.862609029169, 00:20:29.913 "mibps": 18.093213316520192, 00:20:29.913 "io_failed": 0, 00:20:29.913 "io_timeout": 0, 00:20:29.913 "avg_latency_us": 27593.89251617955, 00:20:29.913 "min_latency_us": 7037.99652173913, 00:20:29.913 "max_latency_us": 50149.28695652174 00:20:29.913 } 00:20:29.913 ], 00:20:29.913 "core_count": 1 00:20:29.913 } 00:20:29.913 08:19:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:20:29.913 
08:19:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:20:29.913 08:19:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@812 -- # type=--id 00:20:29.913 08:19:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@813 -- # id=0 00:20:29.913 08:19:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:20:29.913 08:19:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:20:29.913 08:19:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:20:29.913 08:19:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:20:29.913 08:19:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@824 -- # for n in $shm_files 00:20:29.913 08:19:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:20:29.913 nvmf_trace.0 00:20:29.913 08:19:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@827 -- # return 0 00:20:29.913 08:19:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 1390804 00:20:29.913 08:19:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 1390804 ']' 00:20:29.913 08:19:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # kill -0 1390804 00:20:29.913 08:19:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:20:29.913 08:19:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:29.913 08:19:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1390804 00:20:29.913 08:19:11 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:20:29.913 08:19:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:20:29.913 08:19:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1390804' 00:20:29.913 killing process with pid 1390804 00:20:29.913 08:19:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 1390804 00:20:29.913 Received shutdown signal, test time was about 10.000000 seconds 00:20:29.913 00:20:29.913 Latency(us) 00:20:29.913 [2024-11-28T07:19:12.182Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:29.913 [2024-11-28T07:19:12.182Z] =================================================================================================================== 00:20:29.913 [2024-11-28T07:19:12.182Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:29.913 08:19:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 1390804 00:20:29.913 08:19:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:20:29.913 08:19:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:29.913 08:19:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # sync 00:20:29.913 08:19:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:29.913 08:19:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set +e 00:20:29.913 08:19:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:29.913 08:19:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:29.913 rmmod nvme_tcp 00:20:29.913 rmmod nvme_fabrics 00:20:29.913 rmmod nvme_keyring 00:20:29.913 08:19:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 
00:20:29.913 08:19:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@128 -- # set -e 00:20:29.913 08:19:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@129 -- # return 0 00:20:29.913 08:19:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@517 -- # '[' -n 1390771 ']' 00:20:29.913 08:19:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@518 -- # killprocess 1390771 00:20:29.913 08:19:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 1390771 ']' 00:20:29.913 08:19:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # kill -0 1390771 00:20:29.913 08:19:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:20:29.913 08:19:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:29.913 08:19:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1390771 00:20:30.173 08:19:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:30.173 08:19:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:30.173 08:19:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1390771' 00:20:30.173 killing process with pid 1390771 00:20:30.173 08:19:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 1390771 00:20:30.173 08:19:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 1390771 00:20:30.173 08:19:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:30.173 08:19:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:30.173 08:19:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:30.173 08:19:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/common.sh@297 -- # iptr 00:20:30.173 08:19:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-save 00:20:30.173 08:19:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-restore 00:20:30.173 08:19:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:30.173 08:19:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:30.173 08:19:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@302 -- # remove_spdk_ns 00:20:30.173 08:19:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:30.173 08:19:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:30.173 08:19:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:32.712 08:19:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:20:32.712 08:19:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /tmp/spdk-psk.oEz 00:20:32.712 00:20:32.712 real 0m20.384s 00:20:32.712 user 0m20.824s 00:20:32.712 sys 0m10.012s 00:20:32.712 08:19:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:32.712 08:19:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:32.712 ************************************ 00:20:32.712 END TEST nvmf_fips 00:20:32.712 ************************************ 00:20:32.712 08:19:14 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@43 -- # run_test nvmf_control_msg_list /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:20:32.712 08:19:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:20:32.712 08:19:14 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:20:32.712 08:19:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:32.712 ************************************ 00:20:32.712 START TEST nvmf_control_msg_list 00:20:32.712 ************************************ 00:20:32.713 08:19:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:20:32.713 * Looking for test storage... 00:20:32.713 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:32.713 08:19:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:20:32.713 08:19:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1693 -- # lcov --version 00:20:32.713 08:19:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:20:32.713 08:19:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:20:32.713 08:19:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:32.713 08:19:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:32.713 08:19:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:32.713 08:19:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # IFS=.-: 00:20:32.713 08:19:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # read -ra ver1 00:20:32.713 08:19:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # IFS=.-: 00:20:32.713 08:19:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # read -ra ver2 00:20:32.713 08:19:14 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@338 -- # local 'op=<' 00:20:32.713 08:19:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@340 -- # ver1_l=2 00:20:32.713 08:19:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@341 -- # ver2_l=1 00:20:32.713 08:19:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:32.713 08:19:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@344 -- # case "$op" in 00:20:32.713 08:19:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@345 -- # : 1 00:20:32.713 08:19:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:32.713 08:19:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:32.713 08:19:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # decimal 1 00:20:32.713 08:19:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=1 00:20:32.713 08:19:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:32.713 08:19:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 1 00:20:32.713 08:19:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # ver1[v]=1 00:20:32.713 08:19:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # decimal 2 00:20:32.713 08:19:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=2 00:20:32.713 08:19:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:32.713 08:19:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 2 00:20:32.713 08:19:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list 
-- scripts/common.sh@366 -- # ver2[v]=2 00:20:32.713 08:19:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:32.713 08:19:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:32.713 08:19:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # return 0 00:20:32.713 08:19:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:32.713 08:19:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:20:32.713 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:32.713 --rc genhtml_branch_coverage=1 00:20:32.713 --rc genhtml_function_coverage=1 00:20:32.713 --rc genhtml_legend=1 00:20:32.713 --rc geninfo_all_blocks=1 00:20:32.713 --rc geninfo_unexecuted_blocks=1 00:20:32.713 00:20:32.713 ' 00:20:32.713 08:19:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:20:32.713 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:32.713 --rc genhtml_branch_coverage=1 00:20:32.713 --rc genhtml_function_coverage=1 00:20:32.713 --rc genhtml_legend=1 00:20:32.713 --rc geninfo_all_blocks=1 00:20:32.713 --rc geninfo_unexecuted_blocks=1 00:20:32.713 00:20:32.713 ' 00:20:32.713 08:19:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:20:32.713 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:32.713 --rc genhtml_branch_coverage=1 00:20:32.713 --rc genhtml_function_coverage=1 00:20:32.713 --rc genhtml_legend=1 00:20:32.713 --rc geninfo_all_blocks=1 00:20:32.713 --rc geninfo_unexecuted_blocks=1 00:20:32.713 00:20:32.713 ' 00:20:32.713 08:19:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1707 -- # 
LCOV='lcov 00:20:32.713 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:32.713 --rc genhtml_branch_coverage=1 00:20:32.713 --rc genhtml_function_coverage=1 00:20:32.713 --rc genhtml_legend=1 00:20:32.713 --rc geninfo_all_blocks=1 00:20:32.713 --rc geninfo_unexecuted_blocks=1 00:20:32.713 00:20:32.713 ' 00:20:32.713 08:19:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:32.713 08:19:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # uname -s 00:20:32.713 08:19:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:32.713 08:19:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:32.713 08:19:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:32.713 08:19:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:32.713 08:19:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:32.713 08:19:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:32.713 08:19:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:32.713 08:19:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:32.713 08:19:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:32.713 08:19:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:32.713 08:19:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 
00:20:32.713 08:19:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:20:32.713 08:19:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:32.713 08:19:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:32.713 08:19:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:32.713 08:19:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:32.713 08:19:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:32.713 08:19:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@15 -- # shopt -s extglob 00:20:32.713 08:19:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:32.714 08:19:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:32.714 08:19:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:32.714 08:19:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:32.714 08:19:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:32.714 08:19:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:32.714 08:19:14 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@5 -- # export PATH 00:20:32.714 08:19:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:32.714 08:19:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@51 -- # : 0 00:20:32.714 08:19:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:32.714 08:19:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:32.714 08:19:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:32.714 08:19:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:32.714 08:19:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:32.714 08:19:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:32.714 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:32.714 08:19:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:32.714 08:19:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:32.714 08:19:14 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:32.714 08:19:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@12 -- # nvmftestinit 00:20:32.714 08:19:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:32.714 08:19:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:32.714 08:19:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:32.714 08:19:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:32.714 08:19:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:32.714 08:19:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:32.714 08:19:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:32.714 08:19:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:32.714 08:19:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:20:32.714 08:19:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:20:32.714 08:19:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@309 -- # xtrace_disable 00:20:32.714 08:19:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:37.990 08:19:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:37.990 08:19:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # pci_devs=() 00:20:37.990 08:19:20 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:37.990 08:19:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:37.990 08:19:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:37.990 08:19:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:37.990 08:19:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:37.990 08:19:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # net_devs=() 00:20:37.990 08:19:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:37.990 08:19:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # e810=() 00:20:37.990 08:19:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # local -ga e810 00:20:37.990 08:19:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # x722=() 00:20:37.990 08:19:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # local -ga x722 00:20:37.990 08:19:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # mlx=() 00:20:37.990 08:19:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # local -ga mlx 00:20:37.990 08:19:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:37.990 08:19:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:37.990 08:19:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:37.990 08:19:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@330 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:37.990 08:19:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:37.990 08:19:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:37.990 08:19:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:37.990 08:19:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:37.990 08:19:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:37.990 08:19:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:37.990 08:19:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:37.990 08:19:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:37.990 08:19:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:37.990 08:19:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:37.990 08:19:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:37.990 08:19:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:37.990 08:19:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:37.990 08:19:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:37.990 08:19:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in 
"${pci_devs[@]}" 00:20:37.990 08:19:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:20:37.990 Found 0000:86:00.0 (0x8086 - 0x159b) 00:20:37.990 08:19:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:37.990 08:19:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:37.990 08:19:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:37.990 08:19:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:37.990 08:19:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:37.990 08:19:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:37.990 08:19:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:20:37.990 Found 0000:86:00.1 (0x8086 - 0x159b) 00:20:37.990 08:19:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:37.990 08:19:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:37.990 08:19:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:37.990 08:19:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:37.990 08:19:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:37.990 08:19:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:37.990 08:19:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:37.990 08:19:20 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:37.990 08:19:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:37.990 08:19:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:37.990 08:19:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:37.990 08:19:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:37.990 08:19:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:37.990 08:19:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:37.990 08:19:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:37.990 08:19:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:20:37.990 Found net devices under 0000:86:00.0: cvl_0_0 00:20:37.990 08:19:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:37.990 08:19:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:37.990 08:19:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:37.990 08:19:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:37.990 08:19:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:37.990 08:19:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:37.990 08:19:20 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:37.990 08:19:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:37.990 08:19:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:20:37.990 Found net devices under 0000:86:00.1: cvl_0_1 00:20:37.990 08:19:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:37.990 08:19:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:37.990 08:19:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # is_hw=yes 00:20:37.990 08:19:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:20:37.990 08:19:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:20:37.990 08:19:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:20:37.990 08:19:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:37.990 08:19:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:37.990 08:19:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:37.990 08:19:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:37.990 08:19:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:20:37.990 08:19:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:37.991 08:19:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@259 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:37.991 08:19:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:20:37.991 08:19:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:20:37.991 08:19:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:37.991 08:19:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:37.991 08:19:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:20:37.991 08:19:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:20:37.991 08:19:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:20:37.991 08:19:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:37.991 08:19:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:37.991 08:19:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:37.991 08:19:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:20:37.991 08:19:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:38.250 08:19:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:38.250 08:19:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:38.250 08:19:20 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:20:38.250 08:19:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:20:38.250 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:38.250 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.455 ms 00:20:38.250 00:20:38.250 --- 10.0.0.2 ping statistics --- 00:20:38.250 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:38.250 rtt min/avg/max/mdev = 0.455/0.455/0.455/0.000 ms 00:20:38.250 08:19:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:38.250 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:38.250 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.206 ms 00:20:38.250 00:20:38.250 --- 10.0.0.1 ping statistics --- 00:20:38.250 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:38.250 rtt min/avg/max/mdev = 0.206/0.206/0.206/0.000 ms 00:20:38.250 08:19:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:38.250 08:19:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@450 -- # return 0 00:20:38.250 08:19:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:38.250 08:19:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:38.250 08:19:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:38.250 08:19:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:38.250 08:19:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t 
tcp -o' 00:20:38.250 08:19:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:38.250 08:19:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:38.250 08:19:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@13 -- # nvmfappstart 00:20:38.250 08:19:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:38.250 08:19:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:38.250 08:19:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:38.250 08:19:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:20:38.250 08:19:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@509 -- # nvmfpid=1396159 00:20:38.250 08:19:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@510 -- # waitforlisten 1396159 00:20:38.250 08:19:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@835 -- # '[' -z 1396159 ']' 00:20:38.250 08:19:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:38.250 08:19:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:38.250 08:19:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:38.250 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:20:38.250 08:19:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:38.250 08:19:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:38.250 [2024-11-28 08:19:20.393012] Starting SPDK v25.01-pre git sha1 27aaaa748 / DPDK 24.03.0 initialization... 00:20:38.251 [2024-11-28 08:19:20.393059] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:38.251 [2024-11-28 08:19:20.460371] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:38.251 [2024-11-28 08:19:20.502667] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:38.251 [2024-11-28 08:19:20.502702] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:38.251 [2024-11-28 08:19:20.502709] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:38.251 [2024-11-28 08:19:20.502716] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:38.251 [2024-11-28 08:19:20.502721] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:38.251 [2024-11-28 08:19:20.503297] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:38.510 08:19:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:38.510 08:19:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@868 -- # return 0 00:20:38.510 08:19:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:38.510 08:19:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:38.510 08:19:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:38.510 08:19:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:38.510 08:19:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:20:38.510 08:19:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:20:38.510 08:19:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@19 -- # rpc_cmd nvmf_create_transport '-t tcp -o' --in-capsule-data-size 768 --control-msg-num 1 00:20:38.510 08:19:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:38.510 08:19:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:38.510 [2024-11-28 08:19:20.641482] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:38.510 08:19:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:38.510 08:19:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@20 -- # 
rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a 00:20:38.510 08:19:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:38.510 08:19:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:38.511 08:19:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:38.511 08:19:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:20:38.511 08:19:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:38.511 08:19:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:38.511 Malloc0 00:20:38.511 08:19:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:38.511 08:19:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:20:38.511 08:19:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:38.511 08:19:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:38.511 08:19:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:38.511 08:19:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:20:38.511 08:19:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:38.511 08:19:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:38.511 [2024-11-28 08:19:20.681987] tcp.c:1081:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:38.511 08:19:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:38.511 08:19:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x2 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:38.511 08:19:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@27 -- # perf_pid1=1396192 00:20:38.511 08:19:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x4 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:38.511 08:19:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@29 -- # perf_pid2=1396194 00:20:38.511 08:19:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@31 -- # perf_pid3=1396195 00:20:38.511 08:19:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@33 -- # wait 1396192 00:20:38.511 08:19:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x8 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:38.511 [2024-11-28 08:19:20.740299] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
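The rpc_cmd calls traced above (control_msg_list.sh lines 19-31) amount to the following sketch, rewritten as explicit scripts/rpc.py invocations. All flags, NQNs, and paths are copied from the log; the rpc()/run() wrappers and the DRY_RUN guard are additions here, since executing for real requires a running nvmf_tgt listening on /var/tmp/spdk.sock. The `-t tcp -o` pair comes verbatim from the harness's NVMF_TRANSPORT_OPTS.

```shell
#!/usr/bin/env bash
# Hedged sketch of the target configuration and perf launch from the
# trace: a TCP transport with a deliberately tiny control message list,
# one malloc-backed namespace, and three concurrent single-depth readers.
set -euo pipefail

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
SUBNQN=nqn.2024-07.io.spdk:cnode0

run() { if [[ "${DRY_RUN:-1}" == 1 ]]; then echo "+ $*"; else "$@"; fi; }
rpc() { run "$SPDK/scripts/rpc.py" "$@"; }

# 768-byte in-capsule data size and a single control message -- the
# exhaustion condition this control_msg_list test exercises.
rpc nvmf_create_transport -t tcp -o --in-capsule-data-size 768 --control-msg-num 1
rpc nvmf_create_subsystem "$SUBNQN" -a          # -a: allow any host
rpc bdev_malloc_create -b Malloc0 32 512        # 32 MiB, 512-byte blocks
rpc nvmf_subsystem_add_ns "$SUBNQN" Malloc0
rpc nvmf_subsystem_add_listener "$SUBNQN" -t tcp -a 10.0.0.2 -s 4420

# Three perf instances on separate cores (perf_pid1..3 in the log);
# contending for the single control message forces the list-full path.
for core in 0x2 0x4 0x8; do
    run "$SPDK/build/bin/spdk_nvme_perf" -c "$core" -q 1 -o 4096 -w randread -t 1 \
        -r "trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420"
done
```

The latency tables that follow in the log reflect exactly this contention: the instance starved of a control message slot reports average latency around 41 ms versus roughly 165 us for the others.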
00:20:38.511 [2024-11-28 08:19:20.750586] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:20:38.511 [2024-11-28 08:19:20.750738] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:20:39.891 Initializing NVMe Controllers 00:20:39.891 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:20:39.891 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 2 00:20:39.891 Initialization complete. Launching workers. 00:20:39.891 ======================================================== 00:20:39.891 Latency(us) 00:20:39.891 Device Information : IOPS MiB/s Average min max 00:20:39.891 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 2: 6108.00 23.86 163.35 139.56 414.32 00:20:39.891 ======================================================== 00:20:39.891 Total : 6108.00 23.86 163.35 139.56 414.32 00:20:39.891 00:20:39.891 Initializing NVMe Controllers 00:20:39.891 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:20:39.891 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 1 00:20:39.891 Initialization complete. Launching workers. 
00:20:39.891 ======================================================== 00:20:39.891 Latency(us) 00:20:39.891 Device Information : IOPS MiB/s Average min max 00:20:39.891 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 1: 25.00 0.10 40967.29 40636.42 41900.36 00:20:39.891 ======================================================== 00:20:39.891 Total : 25.00 0.10 40967.29 40636.42 41900.36 00:20:39.891 00:20:39.891 08:19:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@34 -- # wait 1396194 00:20:39.891 08:19:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@35 -- # wait 1396195 00:20:39.891 Initializing NVMe Controllers 00:20:39.891 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:20:39.891 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 3 00:20:39.891 Initialization complete. Launching workers. 00:20:39.891 ======================================================== 00:20:39.891 Latency(us) 00:20:39.891 Device Information : IOPS MiB/s Average min max 00:20:39.891 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 3: 5986.00 23.38 166.68 151.40 427.15 00:20:39.891 ======================================================== 00:20:39.891 Total : 5986.00 23.38 166.68 151.40 427.15 00:20:39.891 00:20:39.891 08:19:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:20:39.891 08:19:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@38 -- # nvmftestfini 00:20:39.891 08:19:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:39.891 08:19:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@121 -- # sync 00:20:39.891 08:19:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:39.891 08:19:21 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@124 -- # set +e 00:20:39.891 08:19:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:39.891 08:19:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:39.891 rmmod nvme_tcp 00:20:39.891 rmmod nvme_fabrics 00:20:39.891 rmmod nvme_keyring 00:20:39.891 08:19:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:39.891 08:19:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@128 -- # set -e 00:20:39.891 08:19:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@129 -- # return 0 00:20:39.891 08:19:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@517 -- # '[' -n 1396159 ']' 00:20:39.891 08:19:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@518 -- # killprocess 1396159 00:20:39.891 08:19:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@954 -- # '[' -z 1396159 ']' 00:20:39.891 08:19:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@958 -- # kill -0 1396159 00:20:39.891 08:19:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # uname 00:20:39.891 08:19:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:39.891 08:19:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1396159 00:20:39.892 08:19:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:39.892 08:19:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:39.892 08:19:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@972 -- 
# echo 'killing process with pid 1396159' 00:20:39.892 killing process with pid 1396159 00:20:39.892 08:19:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@973 -- # kill 1396159 00:20:39.892 08:19:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@978 -- # wait 1396159 00:20:40.151 08:19:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:40.151 08:19:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:40.151 08:19:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:40.151 08:19:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@297 -- # iptr 00:20:40.151 08:19:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-save 00:20:40.151 08:19:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:40.151 08:19:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-restore 00:20:40.151 08:19:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:40.151 08:19:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@302 -- # remove_spdk_ns 00:20:40.151 08:19:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:40.151 08:19:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:40.151 08:19:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:42.686 08:19:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:20:42.686 00:20:42.686 real 0m9.803s 00:20:42.686 user 0m6.551s 
00:20:42.686 sys 0m5.189s 00:20:42.686 08:19:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:42.686 08:19:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:42.686 ************************************ 00:20:42.686 END TEST nvmf_control_msg_list 00:20:42.686 ************************************ 00:20:42.686 08:19:24 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@44 -- # run_test nvmf_wait_for_buf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:20:42.686 08:19:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:20:42.686 08:19:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:42.686 08:19:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:42.686 ************************************ 00:20:42.686 START TEST nvmf_wait_for_buf 00:20:42.686 ************************************ 00:20:42.686 08:19:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:20:42.686 * Looking for test storage... 
00:20:42.686 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:42.686 08:19:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:20:42.686 08:19:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:20:42.686 08:19:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1693 -- # lcov --version 00:20:42.686 08:19:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:20:42.686 08:19:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:42.686 08:19:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:42.686 08:19:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:42.686 08:19:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # IFS=.-: 00:20:42.686 08:19:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # read -ra ver1 00:20:42.686 08:19:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # IFS=.-: 00:20:42.686 08:19:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # read -ra ver2 00:20:42.686 08:19:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@338 -- # local 'op=<' 00:20:42.686 08:19:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@340 -- # ver1_l=2 00:20:42.686 08:19:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@341 -- # ver2_l=1 00:20:42.686 08:19:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:42.686 08:19:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@344 -- # case "$op" in 00:20:42.686 08:19:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
scripts/common.sh@345 -- # : 1 00:20:42.686 08:19:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:42.686 08:19:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:42.686 08:19:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # decimal 1 00:20:42.686 08:19:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=1 00:20:42.686 08:19:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:42.686 08:19:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 1 00:20:42.686 08:19:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # ver1[v]=1 00:20:42.686 08:19:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # decimal 2 00:20:42.686 08:19:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=2 00:20:42.686 08:19:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:42.686 08:19:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 2 00:20:42.686 08:19:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # ver2[v]=2 00:20:42.686 08:19:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:42.686 08:19:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:42.686 08:19:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # return 0 00:20:42.686 08:19:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:42.686 08:19:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1706 -- # 
export 'LCOV_OPTS= 00:20:42.686 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:42.686 --rc genhtml_branch_coverage=1 00:20:42.686 --rc genhtml_function_coverage=1 00:20:42.686 --rc genhtml_legend=1 00:20:42.686 --rc geninfo_all_blocks=1 00:20:42.686 --rc geninfo_unexecuted_blocks=1 00:20:42.686 00:20:42.686 ' 00:20:42.686 08:19:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:20:42.686 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:42.686 --rc genhtml_branch_coverage=1 00:20:42.686 --rc genhtml_function_coverage=1 00:20:42.686 --rc genhtml_legend=1 00:20:42.686 --rc geninfo_all_blocks=1 00:20:42.686 --rc geninfo_unexecuted_blocks=1 00:20:42.686 00:20:42.686 ' 00:20:42.686 08:19:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:20:42.686 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:42.686 --rc genhtml_branch_coverage=1 00:20:42.686 --rc genhtml_function_coverage=1 00:20:42.686 --rc genhtml_legend=1 00:20:42.686 --rc geninfo_all_blocks=1 00:20:42.686 --rc geninfo_unexecuted_blocks=1 00:20:42.686 00:20:42.686 ' 00:20:42.686 08:19:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:20:42.686 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:42.686 --rc genhtml_branch_coverage=1 00:20:42.686 --rc genhtml_function_coverage=1 00:20:42.686 --rc genhtml_legend=1 00:20:42.686 --rc geninfo_all_blocks=1 00:20:42.686 --rc geninfo_unexecuted_blocks=1 00:20:42.686 00:20:42.686 ' 00:20:42.686 08:19:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:42.686 08:19:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # uname -s 00:20:42.686 08:19:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # [[ Linux == 
FreeBSD ]] 00:20:42.686 08:19:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:42.686 08:19:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:42.686 08:19:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:42.686 08:19:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:42.686 08:19:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:42.686 08:19:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:42.687 08:19:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:42.687 08:19:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:42.687 08:19:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:42.687 08:19:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:20:42.687 08:19:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:20:42.687 08:19:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:42.687 08:19:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:42.687 08:19:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:42.687 08:19:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:42.687 08:19:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:42.687 08:19:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@15 -- # shopt -s extglob 00:20:42.687 08:19:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:42.687 08:19:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:42.687 08:19:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:42.687 08:19:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:42.687 08:19:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:42.687 08:19:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:42.687 08:19:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@5 -- # export PATH 00:20:42.687 08:19:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:42.687 08:19:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@51 -- # : 0 00:20:42.687 08:19:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:42.687 08:19:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:42.687 08:19:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:42.687 08:19:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 
0xFFFF) 00:20:42.687 08:19:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:42.687 08:19:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:42.687 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:42.687 08:19:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:42.687 08:19:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:42.687 08:19:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:42.687 08:19:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@12 -- # nvmftestinit 00:20:42.687 08:19:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:42.687 08:19:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:42.687 08:19:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:42.687 08:19:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:42.687 08:19:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:42.687 08:19:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:42.687 08:19:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:42.687 08:19:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:42.687 08:19:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:20:42.687 08:19:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:20:42.687 08:19:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@309 -- # xtrace_disable 00:20:42.687 08:19:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:47.963 08:19:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:47.963 08:19:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # pci_devs=() 00:20:47.963 08:19:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:47.963 08:19:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:47.963 08:19:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:47.963 08:19:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:47.963 08:19:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:47.963 08:19:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # net_devs=() 00:20:47.963 08:19:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:47.963 08:19:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # e810=() 00:20:47.963 08:19:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # local -ga e810 00:20:47.963 08:19:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # x722=() 00:20:47.963 08:19:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # local -ga x722 00:20:47.963 08:19:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # mlx=() 00:20:47.963 08:19:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # local -ga mlx 00:20:47.963 08:19:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@325 
-- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:47.963 08:19:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:47.963 08:19:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:47.963 08:19:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:47.963 08:19:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:47.963 08:19:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:47.963 08:19:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:47.963 08:19:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:47.963 08:19:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:47.963 08:19:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:47.963 08:19:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:47.963 08:19:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:47.963 08:19:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:47.963 08:19:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:47.963 08:19:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:47.963 08:19:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@355 -- # 
[[ e810 == e810 ]] 00:20:47.963 08:19:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:47.963 08:19:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:47.963 08:19:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:47.963 08:19:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:20:47.963 Found 0000:86:00.0 (0x8086 - 0x159b) 00:20:47.963 08:19:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:47.963 08:19:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:47.963 08:19:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:47.963 08:19:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:47.963 08:19:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:47.963 08:19:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:47.963 08:19:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:20:47.963 Found 0000:86:00.1 (0x8086 - 0x159b) 00:20:47.963 08:19:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:47.963 08:19:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:47.963 08:19:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:47.963 08:19:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:47.963 08:19:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:47.963 08:19:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:47.963 08:19:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:47.963 08:19:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:47.963 08:19:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:47.964 08:19:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:47.964 08:19:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:47.964 08:19:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:47.964 08:19:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:47.964 08:19:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:47.964 08:19:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:47.964 08:19:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:20:47.964 Found net devices under 0000:86:00.0: cvl_0_0 00:20:47.964 08:19:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:47.964 08:19:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:47.964 08:19:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:47.964 08:19:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:47.964 08:19:29 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:47.964 08:19:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:47.964 08:19:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:47.964 08:19:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:47.964 08:19:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:20:47.964 Found net devices under 0000:86:00.1: cvl_0_1 00:20:47.964 08:19:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:47.964 08:19:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:47.964 08:19:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # is_hw=yes 00:20:47.964 08:19:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:20:47.964 08:19:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:20:47.964 08:19:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:20:47.964 08:19:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:47.964 08:19:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:47.964 08:19:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:47.964 08:19:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:47.964 08:19:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:20:47.964 08:19:29 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:47.964 08:19:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:47.964 08:19:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:20:47.964 08:19:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:20:47.964 08:19:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:47.964 08:19:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:47.964 08:19:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:20:47.964 08:19:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:20:47.964 08:19:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:20:47.964 08:19:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:47.964 08:19:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:47.964 08:19:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:47.964 08:19:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:20:47.964 08:19:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:47.964 08:19:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:47.964 08:19:29 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:47.964 08:19:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:20:47.964 08:19:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:20:47.964 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:47.964 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.422 ms 00:20:47.964 00:20:47.964 --- 10.0.0.2 ping statistics --- 00:20:47.964 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:47.964 rtt min/avg/max/mdev = 0.422/0.422/0.422/0.000 ms 00:20:47.964 08:19:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:47.964 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:47.964 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.200 ms 00:20:47.964 00:20:47.964 --- 10.0.0.1 ping statistics --- 00:20:47.964 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:47.964 rtt min/avg/max/mdev = 0.200/0.200/0.200/0.000 ms 00:20:47.964 08:19:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:47.964 08:19:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@450 -- # return 0 00:20:47.964 08:19:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:47.964 08:19:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:47.964 08:19:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:47.964 08:19:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:47.964 08:19:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:47.964 08:19:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:47.964 08:19:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:47.964 08:19:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@13 -- # nvmfappstart --wait-for-rpc 00:20:47.964 08:19:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:47.964 08:19:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:47.964 08:19:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:47.964 08:19:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@509 -- # nvmfpid=1399928 00:20:47.964 08:19:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@510 -- # waitforlisten 1399928 00:20:47.964 08:19:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@835 -- # '[' -z 1399928 ']' 00:20:47.964 08:19:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:47.964 08:19:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:47.964 08:19:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:47.964 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:47.964 08:19:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:20:47.964 08:19:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:47.965 08:19:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:47.965 [2024-11-28 08:19:29.834638] Starting SPDK v25.01-pre git sha1 27aaaa748 / DPDK 24.03.0 initialization... 00:20:47.965 [2024-11-28 08:19:29.834683] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:47.965 [2024-11-28 08:19:29.900534] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:47.965 [2024-11-28 08:19:29.941792] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:47.965 [2024-11-28 08:19:29.941827] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:20:47.965 [2024-11-28 08:19:29.941835] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:47.965 [2024-11-28 08:19:29.941841] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:47.965 [2024-11-28 08:19:29.941847] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:47.965 [2024-11-28 08:19:29.942394] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:47.965 08:19:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:47.965 08:19:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@868 -- # return 0 00:20:47.965 08:19:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:47.965 08:19:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:47.965 08:19:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:47.965 08:19:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:47.965 08:19:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:20:47.965 08:19:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:20:47.965 08:19:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@19 -- # rpc_cmd accel_set_options --small-cache-size 0 --large-cache-size 0 00:20:47.965 08:19:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:47.965 08:19:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:47.965 
08:19:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:47.965 08:19:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@20 -- # rpc_cmd iobuf_set_options --small-pool-count 154 --small_bufsize=8192 00:20:47.965 08:19:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:47.965 08:19:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:47.965 08:19:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:47.965 08:19:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@21 -- # rpc_cmd framework_start_init 00:20:47.965 08:19:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:47.965 08:19:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:47.965 08:19:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:47.965 08:19:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@22 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:20:47.965 08:19:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:47.965 08:19:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:47.965 Malloc0 00:20:47.965 08:19:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:47.965 08:19:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@23 -- # rpc_cmd nvmf_create_transport '-t tcp -o' -u 8192 -n 24 -b 24 00:20:47.965 08:19:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:47.965 08:19:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
common/autotest_common.sh@10 -- # set +x 00:20:47.965 [2024-11-28 08:19:30.111395] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:47.965 08:19:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:47.965 08:19:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001 00:20:47.965 08:19:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:47.965 08:19:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:47.965 08:19:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:47.965 08:19:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:20:47.965 08:19:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:47.965 08:19:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:47.965 08:19:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:47.965 08:19:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:20:47.965 08:19:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:47.965 08:19:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:47.965 [2024-11-28 08:19:30.135580] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:47.965 08:19:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:20:47.965 08:19:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:47.965 [2024-11-28 08:19:30.222046] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:20:49.343 Initializing NVMe Controllers 00:20:49.343 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:20:49.343 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 0 00:20:49.343 Initialization complete. Launching workers. 00:20:49.343 ======================================================== 00:20:49.343 Latency(us) 00:20:49.343 Device Information : IOPS MiB/s Average min max 00:20:49.343 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 0: 34.91 4.36 119043.54 7263.51 194520.16 00:20:49.343 ======================================================== 00:20:49.343 Total : 34.91 4.36 119043.54 7263.51 194520.16 00:20:49.343 00:20:49.603 08:19:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # rpc_cmd iobuf_get_stats 00:20:49.603 08:19:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry' 00:20:49.603 08:19:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:49.603 08:19:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:49.603 08:19:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:49.603 08:19:31 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # retry_count=534 00:20:49.603 08:19:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@33 -- # [[ 534 -eq 0 ]] 00:20:49.603 08:19:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:20:49.603 08:19:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@38 -- # nvmftestfini 00:20:49.603 08:19:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:49.603 08:19:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@121 -- # sync 00:20:49.603 08:19:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:49.603 08:19:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@124 -- # set +e 00:20:49.603 08:19:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:49.603 08:19:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:49.603 rmmod nvme_tcp 00:20:49.603 rmmod nvme_fabrics 00:20:49.603 rmmod nvme_keyring 00:20:49.603 08:19:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:49.603 08:19:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@128 -- # set -e 00:20:49.603 08:19:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@129 -- # return 0 00:20:49.603 08:19:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@517 -- # '[' -n 1399928 ']' 00:20:49.603 08:19:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@518 -- # killprocess 1399928 00:20:49.603 08:19:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@954 -- # '[' -z 1399928 ']' 00:20:49.603 08:19:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@958 -- # kill -0 1399928 
00:20:49.603 08:19:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@959 -- # uname 00:20:49.603 08:19:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:49.603 08:19:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1399928 00:20:49.603 08:19:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:49.603 08:19:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:49.603 08:19:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1399928' 00:20:49.603 killing process with pid 1399928 00:20:49.603 08:19:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@973 -- # kill 1399928 00:20:49.603 08:19:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@978 -- # wait 1399928 00:20:49.863 08:19:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:49.863 08:19:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:49.863 08:19:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:49.863 08:19:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@297 -- # iptr 00:20:49.863 08:19:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-save 00:20:49.863 08:19:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-restore 00:20:49.863 08:19:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:49.863 08:19:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:49.863 08:19:31 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:20:49.863 08:19:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:49.863 08:19:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:49.863 08:19:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:51.770 08:19:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:20:51.770 00:20:51.770 real 0m9.635s 00:20:51.770 user 0m3.646s 00:20:51.770 sys 0m4.400s 00:20:51.770 08:19:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:51.770 08:19:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:51.770 ************************************ 00:20:51.770 END TEST nvmf_wait_for_buf 00:20:51.770 ************************************ 00:20:52.028 08:19:34 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # '[' 0 -eq 1 ']' 00:20:52.028 08:19:34 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # [[ phy == phy ]] 00:20:52.029 08:19:34 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@54 -- # '[' tcp = tcp ']' 00:20:52.029 08:19:34 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@55 -- # gather_supported_nvmf_pci_devs 00:20:52.029 08:19:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@309 -- # xtrace_disable 00:20:52.029 08:19:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:57.322 08:19:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:57.322 08:19:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # pci_devs=() 00:20:57.322 08:19:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:57.322 
08:19:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:57.322 08:19:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:57.322 08:19:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:57.322 08:19:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:57.322 08:19:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # net_devs=() 00:20:57.322 08:19:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:57.322 08:19:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # e810=() 00:20:57.322 08:19:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # local -ga e810 00:20:57.322 08:19:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # x722=() 00:20:57.322 08:19:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # local -ga x722 00:20:57.322 08:19:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # mlx=() 00:20:57.322 08:19:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # local -ga mlx 00:20:57.322 08:19:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:57.322 08:19:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:57.322 08:19:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:57.322 08:19:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:57.322 08:19:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:57.322 08:19:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:57.322 08:19:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:57.322 08:19:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@338 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:57.322 08:19:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:57.322 08:19:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:57.322 08:19:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:57.322 08:19:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:57.322 08:19:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:57.322 08:19:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:57.322 08:19:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:57.322 08:19:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:57.322 08:19:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:57.322 08:19:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:57.322 08:19:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:57.322 08:19:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:20:57.322 Found 0000:86:00.0 (0x8086 - 0x159b) 00:20:57.322 08:19:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:57.322 08:19:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:57.322 08:19:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:57.322 08:19:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:57.322 08:19:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:57.322 08:19:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:57.322 08:19:38 
nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:20:57.322 Found 0000:86:00.1 (0x8086 - 0x159b) 00:20:57.322 08:19:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:57.322 08:19:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:57.322 08:19:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:57.322 08:19:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:57.322 08:19:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:57.322 08:19:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:57.322 08:19:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:57.322 08:19:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:57.322 08:19:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:57.322 08:19:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:57.322 08:19:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:57.322 08:19:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:57.322 08:19:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:57.322 08:19:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:57.322 08:19:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:57.322 08:19:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:20:57.322 Found net devices under 0000:86:00.0: cvl_0_0 00:20:57.322 08:19:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:57.322 08:19:38 nvmf_tcp.nvmf_target_extra 
-- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:57.322 08:19:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:57.322 08:19:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:57.322 08:19:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:57.322 08:19:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:57.322 08:19:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:57.322 08:19:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:57.322 08:19:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:20:57.322 Found net devices under 0000:86:00.1: cvl_0_1 00:20:57.322 08:19:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:57.322 08:19:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:57.322 08:19:38 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@56 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:57.322 08:19:38 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@57 -- # (( 2 > 0 )) 00:20:57.322 08:19:38 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@58 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:20:57.322 08:19:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:20:57.322 08:19:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:57.322 08:19:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:57.322 ************************************ 00:20:57.322 START TEST nvmf_perf_adq 00:20:57.322 ************************************ 00:20:57.322 08:19:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:20:57.322 * Looking for test storage... 00:20:57.322 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:57.322 08:19:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:20:57.322 08:19:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1693 -- # lcov --version 00:20:57.322 08:19:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:20:57.322 08:19:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:20:57.322 08:19:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:57.322 08:19:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:57.322 08:19:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:57.322 08:19:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # IFS=.-: 00:20:57.322 08:19:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # read -ra ver1 00:20:57.322 08:19:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # IFS=.-: 00:20:57.322 08:19:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # read -ra ver2 00:20:57.322 08:19:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@338 -- # local 'op=<' 00:20:57.322 08:19:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@340 -- # ver1_l=2 00:20:57.322 08:19:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@341 -- # ver2_l=1 00:20:57.322 08:19:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:57.322 08:19:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
scripts/common.sh@344 -- # case "$op" in 00:20:57.322 08:19:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@345 -- # : 1 00:20:57.322 08:19:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:57.322 08:19:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:57.322 08:19:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # decimal 1 00:20:57.322 08:19:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=1 00:20:57.322 08:19:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:57.322 08:19:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 1 00:20:57.322 08:19:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # ver1[v]=1 00:20:57.322 08:19:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # decimal 2 00:20:57.322 08:19:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=2 00:20:57.322 08:19:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:57.322 08:19:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 2 00:20:57.322 08:19:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # ver2[v]=2 00:20:57.322 08:19:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:57.322 08:19:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:57.322 08:19:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # return 0 00:20:57.322 08:19:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:57.322 08:19:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq 
-- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:20:57.322 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:57.322 --rc genhtml_branch_coverage=1 00:20:57.322 --rc genhtml_function_coverage=1 00:20:57.322 --rc genhtml_legend=1 00:20:57.322 --rc geninfo_all_blocks=1 00:20:57.322 --rc geninfo_unexecuted_blocks=1 00:20:57.322 00:20:57.322 ' 00:20:57.322 08:19:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:20:57.322 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:57.322 --rc genhtml_branch_coverage=1 00:20:57.322 --rc genhtml_function_coverage=1 00:20:57.322 --rc genhtml_legend=1 00:20:57.322 --rc geninfo_all_blocks=1 00:20:57.322 --rc geninfo_unexecuted_blocks=1 00:20:57.322 00:20:57.322 ' 00:20:57.322 08:19:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:20:57.322 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:57.322 --rc genhtml_branch_coverage=1 00:20:57.322 --rc genhtml_function_coverage=1 00:20:57.322 --rc genhtml_legend=1 00:20:57.322 --rc geninfo_all_blocks=1 00:20:57.322 --rc geninfo_unexecuted_blocks=1 00:20:57.322 00:20:57.322 ' 00:20:57.322 08:19:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:20:57.322 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:57.322 --rc genhtml_branch_coverage=1 00:20:57.322 --rc genhtml_function_coverage=1 00:20:57.322 --rc genhtml_legend=1 00:20:57.322 --rc geninfo_all_blocks=1 00:20:57.322 --rc geninfo_unexecuted_blocks=1 00:20:57.322 00:20:57.322 ' 00:20:57.322 08:19:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:57.322 08:19:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:20:57.322 08:19:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ 
Linux == FreeBSD ]] 00:20:57.322 08:19:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:57.322 08:19:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:57.322 08:19:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:57.322 08:19:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:57.322 08:19:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:57.322 08:19:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:57.322 08:19:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:57.322 08:19:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:57.322 08:19:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:57.322 08:19:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:20:57.322 08:19:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:20:57.322 08:19:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:57.322 08:19:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:57.322 08:19:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:57.322 08:19:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:57.322 08:19:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 
00:20:57.322 08:19:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@15 -- # shopt -s extglob 00:20:57.322 08:19:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:57.322 08:19:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:57.322 08:19:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:57.323 08:19:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:57.323 08:19:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:57.323 08:19:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:57.323 08:19:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:20:57.323 08:19:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:57.323 08:19:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@51 -- # : 0 00:20:57.323 08:19:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:57.323 08:19:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:57.323 08:19:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:57.323 08:19:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:57.323 08:19:39 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:57.323 08:19:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:57.323 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:57.323 08:19:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:57.323 08:19:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:57.323 08:19:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:57.323 08:19:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:20:57.323 08:19:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:20:57.323 08:19:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:02.599 08:19:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:02.599 08:19:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:21:02.599 08:19:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:02.599 08:19:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:02.599 08:19:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:02.599 08:19:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:02.599 08:19:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:02.599 08:19:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:21:02.599 08:19:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:02.599 08:19:44 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:21:02.599 08:19:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:21:02.599 08:19:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:21:02.599 08:19:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:21:02.599 08:19:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:21:02.599 08:19:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:21:02.599 08:19:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:02.599 08:19:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:02.599 08:19:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:02.599 08:19:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:02.599 08:19:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:02.599 08:19:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:02.599 08:19:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:02.599 08:19:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:02.599 08:19:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:02.599 08:19:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:02.599 08:19:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:02.599 08:19:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:02.599 08:19:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:02.599 08:19:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:02.599 08:19:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:02.599 08:19:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:02.599 08:19:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:02.599 08:19:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:02.599 08:19:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:02.599 08:19:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:21:02.599 Found 0000:86:00.0 (0x8086 - 0x159b) 00:21:02.599 08:19:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:02.599 08:19:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:02.599 08:19:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:02.599 08:19:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:02.599 08:19:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:02.599 08:19:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:02.599 08:19:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:21:02.599 
Found 0000:86:00.1 (0x8086 - 0x159b) 00:21:02.599 08:19:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:02.599 08:19:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:02.599 08:19:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:02.599 08:19:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:02.599 08:19:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:02.599 08:19:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:02.599 08:19:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:02.599 08:19:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:02.599 08:19:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:02.599 08:19:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:02.599 08:19:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:02.599 08:19:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:02.599 08:19:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:02.599 08:19:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:02.599 08:19:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:02.600 08:19:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:21:02.600 Found net devices under 0000:86:00.0: cvl_0_0 00:21:02.600 08:19:44 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:02.600 08:19:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:02.600 08:19:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:02.600 08:19:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:02.600 08:19:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:02.600 08:19:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:02.600 08:19:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:02.600 08:19:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:02.600 08:19:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:21:02.600 Found net devices under 0000:86:00.1: cvl_0_1 00:21:02.600 08:19:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:02.600 08:19:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:02.600 08:19:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:02.600 08:19:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:21:02.600 08:19:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:21:02.600 08:19:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@68 -- # adq_reload_driver 00:21:02.600 08:19:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 
00:21:02.600 08:19:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:21:03.536 08:19:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:21:05.443 08:19:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:21:10.718 08:19:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@76 -- # nvmftestinit 00:21:10.718 08:19:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:10.718 08:19:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:10.718 08:19:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:10.718 08:19:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:10.718 08:19:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:10.718 08:19:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:10.718 08:19:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:10.718 08:19:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:10.718 08:19:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:10.718 08:19:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:10.718 08:19:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:21:10.718 08:19:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:10.718 08:19:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:10.718 08:19:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
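
The trace above shows `adq_reload_driver` cycling the Intel `ice` driver before the ADQ run: load `sch_mqprio`, `rmmod ice`, `modprobe ice`, then sleep 5 seconds so the ports re-enumerate. A minimal dry-run sketch of that sequence (the function name and the `run` prefix parameter are illustrative, not the actual SPDK helpers; by default it only echoes the commands rather than touching the host):

```shell
# Dry-run sketch of the driver-reload step seen in the log.
# Pass "sudo" as $1 to execute for real; default prefix is "echo".
reload_ice_driver() {
    local run=${1:-echo}
    $run modprobe -a sch_mqprio   # qdisc module needed for ADQ traffic classes
    $run rmmod ice                # unload the E810 driver...
    $run modprobe ice             # ...and reload it fresh
    $run sleep 5                  # give the ports time to come back up
}

reload_ice_driver   # dry-run: prints the four commands instead of running them
```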
nvmf/common.sh@315 -- # pci_devs=() 00:21:10.718 08:19:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:10.718 08:19:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:10.718 08:19:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:10.718 08:19:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:10.718 08:19:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:10.718 08:19:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:21:10.718 08:19:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:10.718 08:19:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:21:10.718 08:19:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:21:10.718 08:19:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:21:10.718 08:19:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:21:10.718 08:19:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:21:10.718 08:19:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:21:10.718 08:19:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:10.718 08:19:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:10.718 08:19:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:10.718 08:19:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:10.718 08:19:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:10.718 08:19:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:10.718 08:19:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:10.718 08:19:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:10.718 08:19:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:10.718 08:19:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:10.718 08:19:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:10.718 08:19:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:10.718 08:19:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:10.718 08:19:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:10.718 08:19:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:10.718 08:19:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:10.718 08:19:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:10.718 08:19:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:10.718 08:19:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:10.718 08:19:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:21:10.718 Found 0000:86:00.0 (0x8086 - 0x159b) 00:21:10.718 08:19:52 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:10.718 08:19:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:10.718 08:19:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:10.718 08:19:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:10.718 08:19:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:10.718 08:19:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:10.719 08:19:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:21:10.719 Found 0000:86:00.1 (0x8086 - 0x159b) 00:21:10.719 08:19:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:10.719 08:19:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:10.719 08:19:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:10.719 08:19:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:10.719 08:19:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:10.719 08:19:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:10.719 08:19:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:10.719 08:19:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:10.719 08:19:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:10.719 08:19:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:21:10.719 08:19:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:10.719 08:19:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:10.719 08:19:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:10.719 08:19:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:10.719 08:19:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:10.719 08:19:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:21:10.719 Found net devices under 0000:86:00.0: cvl_0_0 00:21:10.719 08:19:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:10.719 08:19:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:10.719 08:19:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:10.719 08:19:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:10.719 08:19:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:10.719 08:19:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:10.719 08:19:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:10.719 08:19:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:10.719 08:19:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:21:10.719 Found net devices under 0000:86:00.1: cvl_0_1 00:21:10.719 08:19:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:10.719 08:19:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:10.719 08:19:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # is_hw=yes 00:21:10.719 08:19:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:10.719 08:19:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:10.719 08:19:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:10.719 08:19:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:10.719 08:19:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:10.719 08:19:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:10.719 08:19:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:10.719 08:19:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:10.719 08:19:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:10.719 08:19:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:10.719 08:19:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:10.719 08:19:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:10.719 08:19:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:10.719 08:19:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:10.719 08:19:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:10.719 08:19:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:10.719 08:19:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:10.719 08:19:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:10.719 08:19:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:10.719 08:19:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:10.719 08:19:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:10.719 08:19:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:10.719 08:19:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:10.719 08:19:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:10.719 08:19:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:10.719 08:19:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:10.719 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:21:10.719 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.435 ms 00:21:10.719 00:21:10.719 --- 10.0.0.2 ping statistics --- 00:21:10.719 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:10.719 rtt min/avg/max/mdev = 0.435/0.435/0.435/0.000 ms 00:21:10.719 08:19:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:10.719 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:10.719 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.184 ms 00:21:10.719 00:21:10.719 --- 10.0.0.1 ping statistics --- 00:21:10.719 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:10.719 rtt min/avg/max/mdev = 0.184/0.184/0.184/0.000 ms 00:21:10.719 08:19:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:10.719 08:19:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # return 0 00:21:10.719 08:19:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:10.719 08:19:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:10.719 08:19:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:10.719 08:19:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:10.719 08:19:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:10.719 08:19:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:10.719 08:19:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:10.719 08:19:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmfappstart -m 0xF --wait-for-rpc 00:21:10.719 08:19:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_enter 
start_nvmf_tgt 00:21:10.719 08:19:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:10.719 08:19:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:10.719 08:19:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # nvmfpid=1408047 00:21:10.719 08:19:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # waitforlisten 1408047 00:21:10.719 08:19:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:21:10.719 08:19:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # '[' -z 1408047 ']' 00:21:10.719 08:19:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:10.719 08:19:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:10.719 08:19:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:10.719 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:10.719 08:19:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:10.719 08:19:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:10.719 [2024-11-28 08:19:52.911822] Starting SPDK v25.01-pre git sha1 27aaaa748 / DPDK 24.03.0 initialization... 
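
The `nvmf_tcp_init` commands logged just above build the test topology: the target NIC (`cvl_0_0`, 10.0.0.2) is moved into the `cvl_0_0_ns_spdk` namespace while the initiator NIC (`cvl_0_1`, 10.0.0.1) stays on the host, with an iptables rule admitting port 4420, then both directions are verified with `ping`. A dry-run sketch reconstructed from those log lines (the function name and `RUN` variable are illustrative; the real logic lives in `nvmf/common.sh`):

```shell
# Dry-run sketch of the netns topology set up by nvmf_tcp_init in this log.
# RUN=echo (the default) prints the commands; set RUN= to execute as root.
RUN=${RUN:-echo}
setup_nvmf_netns() {
    local target_if=$1 initiator_if=$2 ns=cvl_0_0_ns_spdk
    $RUN ip -4 addr flush "$target_if"
    $RUN ip -4 addr flush "$initiator_if"
    $RUN ip netns add "$ns"
    $RUN ip link set "$target_if" netns "$ns"          # target NIC moves into the namespace
    $RUN ip addr add 10.0.0.1/24 dev "$initiator_if"   # initiator side stays on the host
    $RUN ip netns exec "$ns" ip addr add 10.0.0.2/24 dev "$target_if"
    $RUN ip link set "$initiator_if" up
    $RUN ip netns exec "$ns" ip link set "$target_if" up
    $RUN ip netns exec "$ns" ip link set lo up
    $RUN iptables -I INPUT 1 -i "$initiator_if" -p tcp --dport 4420 -j ACCEPT
}

setup_nvmf_netns cvl_0_0 cvl_0_1
```

With this split, the target app is then launched under `ip netns exec cvl_0_0_ns_spdk` (as `NVMF_TARGET_NS_CMD` does above), so initiator and target exercise a real TCP path between two network stacks on one machine.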
00:21:10.719 [2024-11-28 08:19:52.911864] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:10.719 [2024-11-28 08:19:52.978480] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:10.979 [2024-11-28 08:19:53.023356] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:10.979 [2024-11-28 08:19:53.023395] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:10.979 [2024-11-28 08:19:53.023402] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:10.979 [2024-11-28 08:19:53.023408] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:10.979 [2024-11-28 08:19:53.023414] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
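For reference, the `adq_configure_nvmf_target` steps exercised in the log below (perf_adq.sh lines 42-49) amount to a short RPC sequence against the freshly started target. This is a sketch, not the test script itself: the `./scripts/rpc.py` path is an assumption about a standard SPDK checkout, while the option values, `Malloc1`, and `cnode1` names are taken directly from the log.

```shell
# Sketch of the adq_configure_nvmf_target RPC sequence (values mirror the log).
# Assumes a standard SPDK checkout providing scripts/rpc.py (assumed path) and
# an nvmf_tgt already running with --wait-for-rpc.
RPC=./scripts/rpc.py

# Disable placement-id handling and enable zero-copy sends on the posix impl.
$RPC sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix
# Finish subsystem init now that sock options are pinned (--wait-for-rpc mode).
$RPC framework_start_init
# TCP transport with a fixed socket priority, matching NVMF_TRANSPORT_OPTS='-t tcp -o'.
$RPC nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0
# 64 MiB malloc bdev (512-byte blocks), exported as namespace 1 of cnode1.
$RPC bdev_malloc_create 64 512 -b Malloc1
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
```

This is a config fragment: it only does something useful against a live SPDK target reachable over /var/tmp/spdk.sock.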
00:21:10.979 [2024-11-28 08:19:53.024991] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:10.979 [2024-11-28 08:19:53.025011] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:10.979 [2024-11-28 08:19:53.025079] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:21:10.979 [2024-11-28 08:19:53.025080] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:10.979 08:19:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:10.979 08:19:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@868 -- # return 0 00:21:10.979 08:19:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:10.979 08:19:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:10.979 08:19:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:10.979 08:19:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:10.979 08:19:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # adq_configure_nvmf_target 0 00:21:10.979 08:19:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:21:10.979 08:19:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:21:10.979 08:19:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:10.979 08:19:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:10.979 08:19:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:10.979 08:19:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:21:10.979 08:19:53 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:21:10.979 08:19:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:10.979 08:19:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:10.979 08:19:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:10.979 08:19:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:21:10.979 08:19:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:10.979 08:19:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:10.979 08:19:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:10.979 08:19:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:21:11.238 08:19:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:11.238 08:19:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:11.238 [2024-11-28 08:19:53.252419] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:11.238 08:19:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:11.238 08:19:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:21:11.238 08:19:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:11.238 08:19:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:11.238 Malloc1 00:21:11.238 08:19:53 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:11.238 08:19:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:11.238 08:19:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:11.238 08:19:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:11.238 08:19:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:11.238 08:19:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:21:11.238 08:19:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:11.238 08:19:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:11.238 08:19:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:11.238 08:19:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:11.238 08:19:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:11.238 08:19:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:11.238 [2024-11-28 08:19:53.313687] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:11.238 08:19:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:11.238 08:19:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@82 -- # perfpid=1408244 00:21:11.238 08:19:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@83 -- # sleep 2 00:21:11.238 08:19:53 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:21:13.143 08:19:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # rpc_cmd nvmf_get_stats 00:21:13.143 08:19:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:13.143 08:19:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:13.143 08:19:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:13.143 08:19:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # nvmf_stats='{ 00:21:13.143 "tick_rate": 2300000000, 00:21:13.143 "poll_groups": [ 00:21:13.143 { 00:21:13.143 "name": "nvmf_tgt_poll_group_000", 00:21:13.143 "admin_qpairs": 1, 00:21:13.143 "io_qpairs": 1, 00:21:13.143 "current_admin_qpairs": 1, 00:21:13.143 "current_io_qpairs": 1, 00:21:13.143 "pending_bdev_io": 0, 00:21:13.143 "completed_nvme_io": 20050, 00:21:13.143 "transports": [ 00:21:13.143 { 00:21:13.143 "trtype": "TCP" 00:21:13.143 } 00:21:13.143 ] 00:21:13.143 }, 00:21:13.143 { 00:21:13.143 "name": "nvmf_tgt_poll_group_001", 00:21:13.143 "admin_qpairs": 0, 00:21:13.143 "io_qpairs": 1, 00:21:13.143 "current_admin_qpairs": 0, 00:21:13.143 "current_io_qpairs": 1, 00:21:13.143 "pending_bdev_io": 0, 00:21:13.143 "completed_nvme_io": 20258, 00:21:13.143 "transports": [ 00:21:13.143 { 00:21:13.143 "trtype": "TCP" 00:21:13.143 } 00:21:13.143 ] 00:21:13.143 }, 00:21:13.143 { 00:21:13.143 "name": "nvmf_tgt_poll_group_002", 00:21:13.143 "admin_qpairs": 0, 00:21:13.143 "io_qpairs": 1, 00:21:13.143 "current_admin_qpairs": 0, 00:21:13.143 "current_io_qpairs": 1, 00:21:13.143 "pending_bdev_io": 0, 00:21:13.143 "completed_nvme_io": 20103, 00:21:13.143 
"transports": [ 00:21:13.143 { 00:21:13.143 "trtype": "TCP" 00:21:13.143 } 00:21:13.143 ] 00:21:13.143 }, 00:21:13.143 { 00:21:13.143 "name": "nvmf_tgt_poll_group_003", 00:21:13.143 "admin_qpairs": 0, 00:21:13.143 "io_qpairs": 1, 00:21:13.143 "current_admin_qpairs": 0, 00:21:13.143 "current_io_qpairs": 1, 00:21:13.143 "pending_bdev_io": 0, 00:21:13.143 "completed_nvme_io": 19948, 00:21:13.143 "transports": [ 00:21:13.143 { 00:21:13.143 "trtype": "TCP" 00:21:13.143 } 00:21:13.143 ] 00:21:13.143 } 00:21:13.143 ] 00:21:13.143 }' 00:21:13.143 08:19:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:21:13.143 08:19:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # wc -l 00:21:13.143 08:19:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # count=4 00:21:13.143 08:19:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@87 -- # [[ 4 -ne 4 ]] 00:21:13.143 08:19:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@91 -- # wait 1408244 00:21:21.265 Initializing NVMe Controllers 00:21:21.265 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:21.265 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:21:21.265 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:21:21.265 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:21:21.265 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:21:21.265 Initialization complete. Launching workers. 
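The pass/fail gate above counts poll groups that own exactly one active IO queue pair and requires the count to be 4 (one per core in the 0xF mask), which is what confirms ADQ spread the connections evenly. A self-contained replay of that check against a trimmed copy of the stats, using `grep -c` as a portable stand-in (an assumption for illustration) for the log's `jq -r '.poll_groups[] | select(.current_io_qpairs == 1)' | wc -l` pipeline:

```shell
# Minimal replay of the perf_adq.sh distribution check: with ADQ working, each
# of the four poll groups should hold exactly one io qpair. grep stands in for
# the jq select() filter used in the log; the stats JSON is trimmed by hand.
stats='{
  "poll_groups": [
    {"name": "nvmf_tgt_poll_group_000", "current_io_qpairs": 1},
    {"name": "nvmf_tgt_poll_group_001", "current_io_qpairs": 1},
    {"name": "nvmf_tgt_poll_group_002", "current_io_qpairs": 1},
    {"name": "nvmf_tgt_poll_group_003", "current_io_qpairs": 1}
  ]
}'
# Count entries with exactly one active io qpair, one JSON object per line.
count=$(printf '%s\n' "$stats" | grep -c '"current_io_qpairs": 1')
if [ "$count" -ne 4 ]; then
  echo "ADQ distribution check failed: $count poll groups active" >&2
  exit 1
fi
echo "count=$count"
```

In the real test the same comparison (`[[ 4 -ne 4 ]]`) falls through, so the run proceeds to wait on the perf process.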
00:21:21.265 ======================================================== 00:21:21.265 Latency(us) 00:21:21.265 Device Information : IOPS MiB/s Average min max 00:21:21.265 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 10677.90 41.71 5992.63 1504.28 10654.16 00:21:21.265 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 10745.60 41.98 5956.35 2118.58 10357.17 00:21:21.265 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 10507.10 41.04 6090.43 2080.48 9415.74 00:21:21.265 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 10614.30 41.46 6030.74 2361.61 10666.32 00:21:21.265 ======================================================== 00:21:21.265 Total : 42544.90 166.19 6017.13 1504.28 10666.32 00:21:21.265 00:21:21.265 08:20:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@92 -- # nvmftestfini 00:21:21.265 08:20:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:21.265 08:20:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:21:21.265 08:20:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:21.265 08:20:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:21:21.265 08:20:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:21.265 08:20:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:21.265 rmmod nvme_tcp 00:21:21.265 rmmod nvme_fabrics 00:21:21.265 rmmod nvme_keyring 00:21:21.525 08:20:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:21.525 08:20:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:21:21.525 08:20:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:21:21.525 08:20:03 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@517 -- # '[' -n 1408047 ']' 00:21:21.525 08:20:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # killprocess 1408047 00:21:21.525 08:20:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # '[' -z 1408047 ']' 00:21:21.525 08:20:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # kill -0 1408047 00:21:21.525 08:20:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # uname 00:21:21.525 08:20:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:21.525 08:20:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1408047 00:21:21.525 08:20:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:21.525 08:20:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:21.525 08:20:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1408047' 00:21:21.525 killing process with pid 1408047 00:21:21.525 08:20:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@973 -- # kill 1408047 00:21:21.525 08:20:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@978 -- # wait 1408047 00:21:21.525 08:20:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:21.525 08:20:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:21.525 08:20:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:21.525 08:20:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:21:21.525 08:20:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-save 00:21:21.525 
08:20:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-restore 00:21:21.525 08:20:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:21.785 08:20:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:21.785 08:20:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:21.785 08:20:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:21.785 08:20:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:21.785 08:20:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:23.691 08:20:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:23.691 08:20:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@94 -- # adq_reload_driver 00:21:23.691 08:20:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:21:23.691 08:20:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:21:25.070 08:20:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:21:26.977 08:20:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:21:32.652 08:20:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@97 -- # nvmftestinit 00:21:32.652 08:20:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:32.652 08:20:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:32.652 08:20:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:32.652 08:20:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq 
-- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:32.652 08:20:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:32.652 08:20:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:32.652 08:20:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:32.652 08:20:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:32.652 08:20:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:32.652 08:20:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:32.652 08:20:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:21:32.652 08:20:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:32.652 08:20:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:32.652 08:20:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:21:32.652 08:20:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:32.652 08:20:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:32.652 08:20:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:32.653 08:20:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:32.653 08:20:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:32.653 08:20:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:21:32.653 08:20:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:32.653 08:20:13 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:21:32.653 08:20:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:21:32.653 08:20:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:21:32.653 08:20:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:21:32.653 08:20:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:21:32.653 08:20:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:21:32.653 08:20:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:32.653 08:20:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:32.653 08:20:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:32.653 08:20:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:32.653 08:20:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:32.653 08:20:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:32.653 08:20:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:32.653 08:20:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:32.653 08:20:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:32.653 08:20:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:32.653 08:20:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:32.653 08:20:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:32.653 08:20:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:32.653 08:20:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:32.653 08:20:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:32.653 08:20:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:32.653 08:20:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:32.653 08:20:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:32.653 08:20:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:32.653 08:20:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:21:32.653 Found 0000:86:00.0 (0x8086 - 0x159b) 00:21:32.653 08:20:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:32.653 08:20:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:32.653 08:20:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:32.653 08:20:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:32.653 08:20:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:32.653 08:20:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:32.653 08:20:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:21:32.653 
Found 0000:86:00.1 (0x8086 - 0x159b) 00:21:32.653 08:20:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:32.653 08:20:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:32.653 08:20:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:32.653 08:20:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:32.653 08:20:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:32.653 08:20:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:32.653 08:20:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:32.653 08:20:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:32.653 08:20:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:32.653 08:20:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:32.653 08:20:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:32.653 08:20:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:32.653 08:20:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:32.653 08:20:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:32.653 08:20:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:32.653 08:20:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:21:32.653 Found net devices under 0000:86:00.0: cvl_0_0 00:21:32.653 08:20:13 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:32.653 08:20:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:32.653 08:20:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:32.653 08:20:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:32.653 08:20:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:32.653 08:20:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:32.653 08:20:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:32.653 08:20:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:32.653 08:20:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:21:32.653 Found net devices under 0000:86:00.1: cvl_0_1 00:21:32.653 08:20:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:32.653 08:20:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:32.653 08:20:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # is_hw=yes 00:21:32.653 08:20:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:32.653 08:20:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:32.653 08:20:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:32.653 08:20:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:32.653 08:20:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # 
NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:32.653 08:20:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:32.653 08:20:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:32.653 08:20:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:32.653 08:20:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:32.653 08:20:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:32.653 08:20:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:32.653 08:20:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:32.653 08:20:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:32.653 08:20:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:32.653 08:20:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:32.653 08:20:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:32.654 08:20:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:32.654 08:20:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:32.654 08:20:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:32.654 08:20:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:32.654 08:20:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 
up 00:21:32.654 08:20:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:32.654 08:20:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:32.654 08:20:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:32.654 08:20:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:32.654 08:20:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:32.654 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:32.654 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.261 ms 00:21:32.654 00:21:32.654 --- 10.0.0.2 ping statistics --- 00:21:32.654 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:32.654 rtt min/avg/max/mdev = 0.261/0.261/0.261/0.000 ms 00:21:32.654 08:20:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:32.654 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:32.654 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.157 ms 00:21:32.654 00:21:32.654 --- 10.0.0.1 ping statistics --- 00:21:32.654 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:32.654 rtt min/avg/max/mdev = 0.157/0.157/0.157/0.000 ms 00:21:32.654 08:20:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:32.654 08:20:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # return 0 00:21:32.654 08:20:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:32.654 08:20:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:32.654 08:20:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:32.654 08:20:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:32.654 08:20:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:32.654 08:20:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:32.654 08:20:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:32.654 08:20:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@98 -- # adq_configure_driver 00:21:32.654 08:20:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:21:32.654 08:20:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:21:32.654 08:20:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:21:32.654 net.core.busy_poll = 1 00:21:32.654 08:20:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:21:32.654 net.core.busy_read = 1 00:21:32.654 08:20:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:21:32.654 08:20:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:21:32.654 08:20:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress 00:21:32.654 08:20:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:21:32.654 08:20:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:21:32.654 08:20:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmfappstart -m 0xF --wait-for-rpc 00:21:32.654 08:20:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:32.654 08:20:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:32.654 08:20:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:32.654 08:20:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # nvmfpid=1412053 00:21:32.654 08:20:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # waitforlisten 1412053 00:21:32.654 08:20:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 
--wait-for-rpc 00:21:32.654 08:20:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # '[' -z 1412053 ']' 00:21:32.654 08:20:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:32.654 08:20:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:32.654 08:20:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:32.654 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:32.654 08:20:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:32.654 08:20:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:32.654 [2024-11-28 08:20:14.433222] Starting SPDK v25.01-pre git sha1 27aaaa748 / DPDK 24.03.0 initialization... 00:21:32.654 [2024-11-28 08:20:14.433272] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:32.654 [2024-11-28 08:20:14.499528] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:32.654 [2024-11-28 08:20:14.542453] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:32.654 [2024-11-28 08:20:14.542490] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:32.654 [2024-11-28 08:20:14.542497] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:32.654 [2024-11-28 08:20:14.542503] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:21:32.654 [2024-11-28 08:20:14.542508] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:32.654 [2024-11-28 08:20:14.544098] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:32.654 [2024-11-28 08:20:14.544198] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:32.654 [2024-11-28 08:20:14.544211] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:21:32.654 [2024-11-28 08:20:14.544212] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:32.654 08:20:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:32.654 08:20:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@868 -- # return 0 00:21:32.654 08:20:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:32.654 08:20:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:32.654 08:20:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:32.654 08:20:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:32.654 08:20:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # adq_configure_nvmf_target 1 00:21:32.654 08:20:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:21:32.654 08:20:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:21:32.654 08:20:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:32.654 08:20:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:32.654 08:20:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
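For readers following the trace, the ADQ driver configuration performed a few lines above (perf_adq.sh@22 through @38) reduces to roughly the sequence below. This is a consolidated sketch, not a replacement for the script: it assumes an E810-class NIC exposed as cvl_0_0 inside the cvl_0_0_ns_spdk network namespace and the 10.0.0.2:4420 listener, exactly as in this run, and it requires root.

```shell
# Sketch of the ADQ driver setup traced in the log above.
# Assumes: NIC cvl_0_0, namespace cvl_0_0_ns_spdk, NVMe/TCP listener 10.0.0.2:4420.
NS="ip netns exec cvl_0_0_ns_spdk"
DEV=cvl_0_0

# Enable hardware TC offload; disable the driver's packet-inspect optimization.
$NS ethtool --offload $DEV hw-tc-offload on
$NS ethtool --set-priv-flags $DEV channel-pkt-inspect-optimize off

# Busy polling so application threads poll NIC queues directly.
sysctl -w net.core.busy_poll=1
sysctl -w net.core.busy_read=1

# Two traffic classes: TC0 = queues 0-1 (default), TC1 = queues 2-3 (ADQ).
$NS tc qdisc add dev $DEV root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel
$NS tc qdisc add dev $DEV ingress

# Steer NVMe/TCP traffic for 10.0.0.2:4420 into TC1, offloaded in hardware.
$NS tc filter add dev $DEV protocol ip parent ffff: prio 1 flower \
    dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1
```

The script then runs set_xps_rxqs on the interface (perf_adq.sh@38) to align transmit-queue selection (XPS) with the receive queues, which is what lets the poll-group/queue affinity checked later in the test hold.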
00:21:32.654 08:20:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:21:32.655 08:20:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:21:32.655 08:20:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:32.655 08:20:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:32.655 08:20:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:32.655 08:20:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:21:32.655 08:20:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:32.655 08:20:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:32.655 08:20:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:32.655 08:20:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:21:32.655 08:20:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:32.655 08:20:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:32.655 [2024-11-28 08:20:14.755630] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:32.655 08:20:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:32.655 08:20:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:21:32.655 08:20:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:32.655 08:20:14 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:32.655 Malloc1 00:21:32.655 08:20:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:32.655 08:20:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:32.655 08:20:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:32.655 08:20:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:32.655 08:20:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:32.655 08:20:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:21:32.655 08:20:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:32.655 08:20:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:32.655 08:20:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:32.655 08:20:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:32.655 08:20:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:32.655 08:20:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:32.655 [2024-11-28 08:20:14.816182] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:32.655 08:20:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:32.655 08:20:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@101 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:21:32.655 08:20:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@104 -- # perfpid=1412128 00:21:32.655 08:20:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@105 -- # sleep 2 00:21:34.671 08:20:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # rpc_cmd nvmf_get_stats 00:21:34.671 08:20:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:34.671 08:20:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:34.671 08:20:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:34.671 08:20:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmf_stats='{ 00:21:34.671 "tick_rate": 2300000000, 00:21:34.671 "poll_groups": [ 00:21:34.671 { 00:21:34.671 "name": "nvmf_tgt_poll_group_000", 00:21:34.671 "admin_qpairs": 1, 00:21:34.671 "io_qpairs": 1, 00:21:34.671 "current_admin_qpairs": 1, 00:21:34.671 "current_io_qpairs": 1, 00:21:34.671 "pending_bdev_io": 0, 00:21:34.671 "completed_nvme_io": 27399, 00:21:34.671 "transports": [ 00:21:34.671 { 00:21:34.671 "trtype": "TCP" 00:21:34.671 } 00:21:34.671 ] 00:21:34.671 }, 00:21:34.671 { 00:21:34.671 "name": "nvmf_tgt_poll_group_001", 00:21:34.671 "admin_qpairs": 0, 00:21:34.671 "io_qpairs": 3, 00:21:34.671 "current_admin_qpairs": 0, 00:21:34.671 "current_io_qpairs": 3, 00:21:34.671 "pending_bdev_io": 0, 00:21:34.671 "completed_nvme_io": 29509, 00:21:34.671 "transports": [ 00:21:34.671 { 00:21:34.671 "trtype": "TCP" 00:21:34.671 } 00:21:34.671 ] 00:21:34.671 }, 00:21:34.671 { 00:21:34.671 "name": "nvmf_tgt_poll_group_002", 00:21:34.671 "admin_qpairs": 0, 00:21:34.671 "io_qpairs": 0, 00:21:34.671 
"current_admin_qpairs": 0, 00:21:34.671 "current_io_qpairs": 0, 00:21:34.671 "pending_bdev_io": 0, 00:21:34.671 "completed_nvme_io": 0, 00:21:34.671 "transports": [ 00:21:34.671 { 00:21:34.671 "trtype": "TCP" 00:21:34.671 } 00:21:34.671 ] 00:21:34.671 }, 00:21:34.671 { 00:21:34.671 "name": "nvmf_tgt_poll_group_003", 00:21:34.671 "admin_qpairs": 0, 00:21:34.671 "io_qpairs": 0, 00:21:34.671 "current_admin_qpairs": 0, 00:21:34.671 "current_io_qpairs": 0, 00:21:34.671 "pending_bdev_io": 0, 00:21:34.671 "completed_nvme_io": 0, 00:21:34.671 "transports": [ 00:21:34.671 { 00:21:34.671 "trtype": "TCP" 00:21:34.671 } 00:21:34.671 ] 00:21:34.671 } 00:21:34.671 ] 00:21:34.671 }' 00:21:34.671 08:20:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:21:34.671 08:20:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # wc -l 00:21:34.671 08:20:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # count=2 00:21:34.671 08:20:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@109 -- # [[ 2 -lt 2 ]] 00:21:34.671 08:20:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@114 -- # wait 1412128 00:21:42.783 Initializing NVMe Controllers 00:21:42.783 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:42.783 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:21:42.783 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:21:42.783 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:21:42.783 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:21:42.783 Initialization complete. Launching workers. 
00:21:42.783 ======================================================== 00:21:42.783 Latency(us) 00:21:42.783 Device Information : IOPS MiB/s Average min max 00:21:42.783 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 5166.70 20.18 12385.34 1814.88 59947.14 00:21:42.783 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 5304.50 20.72 12063.64 1362.48 58728.95 00:21:42.783 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 15024.00 58.69 4259.03 1640.94 44979.87 00:21:42.783 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 4829.50 18.87 13296.22 1822.03 59954.68 00:21:42.783 ======================================================== 00:21:42.783 Total : 30324.70 118.46 8448.05 1362.48 59954.68 00:21:42.783 00:21:42.783 08:20:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@115 -- # nvmftestfini 00:21:42.783 08:20:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:42.783 08:20:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:21:42.783 08:20:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:42.783 08:20:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:21:42.783 08:20:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:42.783 08:20:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:42.783 rmmod nvme_tcp 00:21:42.783 rmmod nvme_fabrics 00:21:42.783 rmmod nvme_keyring 00:21:43.040 08:20:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:43.040 08:20:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:21:43.040 08:20:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:21:43.040 08:20:25 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@517 -- # '[' -n 1412053 ']' 00:21:43.040 08:20:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # killprocess 1412053 00:21:43.040 08:20:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # '[' -z 1412053 ']' 00:21:43.040 08:20:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # kill -0 1412053 00:21:43.040 08:20:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # uname 00:21:43.040 08:20:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:43.040 08:20:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1412053 00:21:43.040 08:20:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:43.040 08:20:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:43.040 08:20:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1412053' 00:21:43.040 killing process with pid 1412053 00:21:43.040 08:20:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@973 -- # kill 1412053 00:21:43.040 08:20:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@978 -- # wait 1412053 00:21:43.040 08:20:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:43.040 08:20:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:43.040 08:20:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:43.040 08:20:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:21:43.040 08:20:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-save 00:21:43.298 
08:20:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:43.298 08:20:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-restore 00:21:43.298 08:20:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:43.298 08:20:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:43.298 08:20:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:43.298 08:20:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:43.298 08:20:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:46.587 08:20:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:46.587 08:20:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@117 -- # trap - SIGINT SIGTERM EXIT 00:21:46.587 00:21:46.587 real 0m49.497s 00:21:46.587 user 2m43.753s 00:21:46.587 sys 0m10.095s 00:21:46.587 08:20:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:46.587 08:20:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:46.587 ************************************ 00:21:46.587 END TEST nvmf_perf_adq 00:21:46.587 ************************************ 00:21:46.587 08:20:28 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@65 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:21:46.587 08:20:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:46.587 08:20:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:46.587 08:20:28 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@10 -- # set +x 00:21:46.587 ************************************ 00:21:46.587 START TEST nvmf_shutdown 00:21:46.587 ************************************ 00:21:46.587 08:20:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:21:46.587 * Looking for test storage... 00:21:46.587 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:46.587 08:20:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:21:46.587 08:20:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1693 -- # lcov --version 00:21:46.587 08:20:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:21:46.587 08:20:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:21:46.587 08:20:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:46.587 08:20:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:46.587 08:20:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:46.587 08:20:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:21:46.587 08:20:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:21:46.587 08:20:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:21:46.587 08:20:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:21:46.587 08:20:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:21:46.587 08:20:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:21:46.587 08:20:28 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:21:46.587 08:20:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:46.587 08:20:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:21:46.587 08:20:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@345 -- # : 1 00:21:46.587 08:20:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:46.587 08:20:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:46.587 08:20:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # decimal 1 00:21:46.587 08:20:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=1 00:21:46.587 08:20:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:46.587 08:20:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 1 00:21:46.587 08:20:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:21:46.587 08:20:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # decimal 2 00:21:46.587 08:20:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=2 00:21:46.587 08:20:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:46.587 08:20:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 2 00:21:46.587 08:20:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:21:46.587 08:20:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:46.587 08:20:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:46.587 08:20:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- 
scripts/common.sh@368 -- # return 0 00:21:46.587 08:20:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:46.587 08:20:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:21:46.587 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:46.587 --rc genhtml_branch_coverage=1 00:21:46.587 --rc genhtml_function_coverage=1 00:21:46.587 --rc genhtml_legend=1 00:21:46.587 --rc geninfo_all_blocks=1 00:21:46.587 --rc geninfo_unexecuted_blocks=1 00:21:46.587 00:21:46.587 ' 00:21:46.587 08:20:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:21:46.587 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:46.587 --rc genhtml_branch_coverage=1 00:21:46.587 --rc genhtml_function_coverage=1 00:21:46.587 --rc genhtml_legend=1 00:21:46.587 --rc geninfo_all_blocks=1 00:21:46.587 --rc geninfo_unexecuted_blocks=1 00:21:46.587 00:21:46.587 ' 00:21:46.587 08:20:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:21:46.587 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:46.587 --rc genhtml_branch_coverage=1 00:21:46.587 --rc genhtml_function_coverage=1 00:21:46.587 --rc genhtml_legend=1 00:21:46.587 --rc geninfo_all_blocks=1 00:21:46.587 --rc geninfo_unexecuted_blocks=1 00:21:46.587 00:21:46.587 ' 00:21:46.587 08:20:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:21:46.587 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:46.587 --rc genhtml_branch_coverage=1 00:21:46.588 --rc genhtml_function_coverage=1 00:21:46.588 --rc genhtml_legend=1 00:21:46.588 --rc geninfo_all_blocks=1 00:21:46.588 --rc geninfo_unexecuted_blocks=1 00:21:46.588 00:21:46.588 ' 00:21:46.588 08:20:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- 
target/shutdown.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:46.588 08:20:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 00:21:46.588 08:20:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:46.588 08:20:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:46.588 08:20:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:46.588 08:20:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:46.588 08:20:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:46.588 08:20:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:46.588 08:20:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:46.588 08:20:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:46.588 08:20:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:46.588 08:20:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:46.588 08:20:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:46.588 08:20:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:21:46.588 08:20:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:46.588 08:20:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:46.588 08:20:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 
00:21:46.588 08:20:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:46.588 08:20:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:46.588 08:20:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@15 -- # shopt -s extglob 00:21:46.588 08:20:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:46.588 08:20:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:46.588 08:20:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:46.588 08:20:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:46.588 08:20:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:46.588 08:20:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:46.588 08:20:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:21:46.588 08:20:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:46.588 08:20:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@51 -- # : 0 00:21:46.588 08:20:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:46.588 08:20:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:46.588 08:20:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:46.588 08:20:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:46.588 08:20:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:46.588 08:20:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:46.588 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:46.588 08:20:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:46.588 08:20:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:46.588 08:20:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:46.588 08:20:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BDEV_SIZE=64 00:21:46.588 08:20:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- 
target/shutdown.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:21:46.588 08:20:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@162 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:21:46.588 08:20:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:21:46.588 08:20:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:46.588 08:20:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:21:46.588 ************************************ 00:21:46.588 START TEST nvmf_shutdown_tc1 00:21:46.588 ************************************ 00:21:46.588 08:20:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc1 00:21:46.588 08:20:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@75 -- # starttarget 00:21:46.588 08:20:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@16 -- # nvmftestinit 00:21:46.588 08:20:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:46.588 08:20:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:46.588 08:20:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:46.588 08:20:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:46.588 08:20:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:46.588 08:20:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:46.588 08:20:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> 
/dev/null' 00:21:46.588 08:20:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:46.588 08:20:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:46.588 08:20:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:46.588 08:20:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@309 -- # xtrace_disable 00:21:46.588 08:20:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:51.855 08:20:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:51.855 08:20:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # pci_devs=() 00:21:51.855 08:20:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:51.855 08:20:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:51.855 08:20:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:51.855 08:20:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:51.855 08:20:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:51.855 08:20:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # net_devs=() 00:21:51.855 08:20:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:51.855 08:20:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # e810=() 00:21:51.855 08:20:33 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # local -ga e810 00:21:51.855 08:20:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # x722=() 00:21:51.855 08:20:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # local -ga x722 00:21:51.855 08:20:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # mlx=() 00:21:51.855 08:20:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # local -ga mlx 00:21:51.855 08:20:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:51.855 08:20:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:51.855 08:20:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:51.855 08:20:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:51.855 08:20:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:51.855 08:20:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:51.855 08:20:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:51.855 08:20:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:51.855 08:20:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:51.855 08:20:33 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:51.855 08:20:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:51.855 08:20:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:51.855 08:20:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:51.855 08:20:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:51.855 08:20:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:51.855 08:20:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:51.855 08:20:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:51.855 08:20:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:51.855 08:20:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:51.855 08:20:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:21:51.855 Found 0000:86:00.0 (0x8086 - 0x159b) 00:21:51.855 08:20:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:51.855 08:20:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:51.855 08:20:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:51.855 08:20:33 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:51.855 08:20:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:51.855 08:20:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:51.855 08:20:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:21:51.855 Found 0000:86:00.1 (0x8086 - 0x159b) 00:21:51.855 08:20:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:51.855 08:20:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:51.855 08:20:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:51.855 08:20:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:51.855 08:20:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:51.855 08:20:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:51.855 08:20:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:51.855 08:20:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:51.855 08:20:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:51.855 08:20:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:51.855 08:20:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:51.855 08:20:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:51.855 08:20:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:51.855 08:20:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:51.856 08:20:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:51.856 08:20:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:21:51.856 Found net devices under 0000:86:00.0: cvl_0_0 00:21:51.856 08:20:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:51.856 08:20:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:51.856 08:20:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:51.856 08:20:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:51.856 08:20:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:51.856 08:20:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:51.856 08:20:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:51.856 08:20:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:51.856 08:20:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- 
# echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:21:51.856 Found net devices under 0000:86:00.1: cvl_0_1 00:21:51.856 08:20:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:51.856 08:20:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:51.856 08:20:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # is_hw=yes 00:21:51.856 08:20:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:51.856 08:20:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:51.856 08:20:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:51.856 08:20:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:51.856 08:20:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:51.856 08:20:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:51.856 08:20:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:51.856 08:20:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:51.856 08:20:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:51.856 08:20:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:51.856 08:20:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:51.856 08:20:33 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:51.856 08:20:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:51.856 08:20:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:51.856 08:20:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:51.856 08:20:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:51.856 08:20:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:51.856 08:20:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:51.856 08:20:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:51.856 08:20:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:51.856 08:20:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:51.856 08:20:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:51.856 08:20:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:51.856 08:20:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:51.856 08:20:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:51.856 08:20:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:51.856 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:51.856 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.429 ms 00:21:51.856 00:21:51.856 --- 10.0.0.2 ping statistics --- 00:21:51.856 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:51.856 rtt min/avg/max/mdev = 0.429/0.429/0.429/0.000 ms 00:21:51.856 08:20:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:52.115 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:52.115 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.223 ms 00:21:52.115 00:21:52.115 --- 10.0.0.1 ping statistics --- 00:21:52.115 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:52.115 rtt min/avg/max/mdev = 0.223/0.223/0.223/0.000 ms 00:21:52.115 08:20:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:52.115 08:20:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # return 0 00:21:52.115 08:20:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:52.115 08:20:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:52.115 08:20:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:52.115 08:20:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:52.115 08:20:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:52.115 08:20:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:52.115 08:20:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:52.115 08:20:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:21:52.115 08:20:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:52.115 08:20:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:52.115 08:20:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:52.115 08:20:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@509 -- # nvmfpid=1417527 00:21:52.115 08:20:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@510 -- # waitforlisten 1417527 00:21:52.115 08:20:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:21:52.115 08:20:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # '[' -z 1417527 ']' 00:21:52.115 08:20:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:52.115 08:20:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:52.115 08:20:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:21:52.115 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:52.115 08:20:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:52.115 08:20:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:52.115 [2024-11-28 08:20:34.230675] Starting SPDK v25.01-pre git sha1 27aaaa748 / DPDK 24.03.0 initialization... 00:21:52.115 [2024-11-28 08:20:34.230721] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:52.115 [2024-11-28 08:20:34.296771] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:52.115 [2024-11-28 08:20:34.342147] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:52.115 [2024-11-28 08:20:34.342181] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:52.115 [2024-11-28 08:20:34.342189] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:52.115 [2024-11-28 08:20:34.342196] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:52.115 [2024-11-28 08:20:34.342201] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:52.115 [2024-11-28 08:20:34.343857] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:52.115 [2024-11-28 08:20:34.343941] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:21:52.116 [2024-11-28 08:20:34.344051] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:52.116 [2024-11-28 08:20:34.344051] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:21:52.374 08:20:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:52.374 08:20:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@868 -- # return 0 00:21:52.374 08:20:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:52.374 08:20:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:52.374 08:20:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:52.374 08:20:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:52.374 08:20:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:52.374 08:20:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:52.374 08:20:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:52.374 [2024-11-28 08:20:34.481536] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:52.374 08:20:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:52.374 08:20:34 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:21:52.374 08:20:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:21:52.374 08:20:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:52.374 08:20:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:52.374 08:20:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:21:52.374 08:20:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:52.374 08:20:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:21:52.374 08:20:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:52.374 08:20:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:21:52.374 08:20:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:52.374 08:20:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:21:52.374 08:20:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:52.374 08:20:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:21:52.374 08:20:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:52.374 08:20:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 
00:21:52.374 08:20:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:52.374 08:20:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:21:52.374 08:20:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:52.374 08:20:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:21:52.374 08:20:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:52.374 08:20:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:21:52.375 08:20:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:52.375 08:20:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:21:52.375 08:20:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:52.375 08:20:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:21:52.375 08:20:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # rpc_cmd 00:21:52.375 08:20:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:52.375 08:20:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:52.375 Malloc1 00:21:52.375 [2024-11-28 08:20:34.584775] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:52.375 Malloc2 00:21:52.633 Malloc3 00:21:52.633 Malloc4 00:21:52.633 Malloc5 00:21:52.633 Malloc6 00:21:52.633 Malloc7 00:21:52.633 Malloc8 00:21:52.892 Malloc9 
00:21:52.892 Malloc10 00:21:52.892 08:20:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:52.892 08:20:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:21:52.892 08:20:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:52.892 08:20:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:52.892 08:20:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # perfpid=1417642 00:21:52.892 08:20:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # waitforlisten 1417642 /var/tmp/bdevperf.sock 00:21:52.892 08:20:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # '[' -z 1417642 ']' 00:21:52.892 08:20:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:52.892 08:20:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:21:52.892 08:20:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:21:52.892 08:20:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:52.892 08:20:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:52.892 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:21:52.892 08:20:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:21:52.892 08:20:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:52.892 08:20:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 00:21:52.892 08:20:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:52.892 08:20:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:52.892 08:20:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:52.892 { 00:21:52.892 "params": { 00:21:52.892 "name": "Nvme$subsystem", 00:21:52.892 "trtype": "$TEST_TRANSPORT", 00:21:52.892 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:52.892 "adrfam": "ipv4", 00:21:52.892 "trsvcid": "$NVMF_PORT", 00:21:52.892 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:52.892 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:52.892 "hdgst": ${hdgst:-false}, 00:21:52.892 "ddgst": ${ddgst:-false} 00:21:52.892 }, 00:21:52.892 "method": "bdev_nvme_attach_controller" 00:21:52.892 } 00:21:52.892 EOF 00:21:52.892 )") 00:21:52.892 08:20:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:52.892 08:20:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:52.892 08:20:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:52.892 { 00:21:52.892 "params": { 00:21:52.892 "name": "Nvme$subsystem", 00:21:52.892 "trtype": "$TEST_TRANSPORT", 00:21:52.892 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:52.892 "adrfam": "ipv4", 00:21:52.892 "trsvcid": "$NVMF_PORT", 00:21:52.892 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:21:52.892 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:52.892 "hdgst": ${hdgst:-false}, 00:21:52.892 "ddgst": ${ddgst:-false} 00:21:52.892 }, 00:21:52.892 "method": "bdev_nvme_attach_controller" 00:21:52.892 } 00:21:52.892 EOF 00:21:52.892 )") 00:21:52.892 08:20:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:52.892 08:20:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:52.892 08:20:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:52.892 { 00:21:52.892 "params": { 00:21:52.892 "name": "Nvme$subsystem", 00:21:52.892 "trtype": "$TEST_TRANSPORT", 00:21:52.892 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:52.892 "adrfam": "ipv4", 00:21:52.892 "trsvcid": "$NVMF_PORT", 00:21:52.892 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:52.892 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:52.892 "hdgst": ${hdgst:-false}, 00:21:52.892 "ddgst": ${ddgst:-false} 00:21:52.892 }, 00:21:52.892 "method": "bdev_nvme_attach_controller" 00:21:52.892 } 00:21:52.892 EOF 00:21:52.892 )") 00:21:52.892 08:20:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:52.892 08:20:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:52.892 08:20:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:52.892 { 00:21:52.892 "params": { 00:21:52.892 "name": "Nvme$subsystem", 00:21:52.892 "trtype": "$TEST_TRANSPORT", 00:21:52.892 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:52.892 "adrfam": "ipv4", 00:21:52.892 "trsvcid": "$NVMF_PORT", 00:21:52.892 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:52.892 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:52.892 "hdgst": 
${hdgst:-false}, 00:21:52.892 "ddgst": ${ddgst:-false} 00:21:52.892 }, 00:21:52.892 "method": "bdev_nvme_attach_controller" 00:21:52.892 } 00:21:52.892 EOF 00:21:52.892 )") 00:21:52.892 08:20:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:52.892 08:20:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:52.892 08:20:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:52.892 { 00:21:52.892 "params": { 00:21:52.892 "name": "Nvme$subsystem", 00:21:52.892 "trtype": "$TEST_TRANSPORT", 00:21:52.892 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:52.892 "adrfam": "ipv4", 00:21:52.892 "trsvcid": "$NVMF_PORT", 00:21:52.892 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:52.892 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:52.892 "hdgst": ${hdgst:-false}, 00:21:52.892 "ddgst": ${ddgst:-false} 00:21:52.892 }, 00:21:52.892 "method": "bdev_nvme_attach_controller" 00:21:52.892 } 00:21:52.892 EOF 00:21:52.892 )") 00:21:52.892 08:20:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:52.892 08:20:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:52.892 08:20:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:52.892 { 00:21:52.892 "params": { 00:21:52.892 "name": "Nvme$subsystem", 00:21:52.892 "trtype": "$TEST_TRANSPORT", 00:21:52.892 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:52.892 "adrfam": "ipv4", 00:21:52.892 "trsvcid": "$NVMF_PORT", 00:21:52.892 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:52.892 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:52.892 "hdgst": ${hdgst:-false}, 00:21:52.892 "ddgst": ${ddgst:-false} 00:21:52.892 }, 00:21:52.892 "method": "bdev_nvme_attach_controller" 
00:21:52.892 } 00:21:52.892 EOF 00:21:52.892 )") 00:21:52.892 08:20:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:52.892 08:20:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:52.893 [2024-11-28 08:20:35.058548] Starting SPDK v25.01-pre git sha1 27aaaa748 / DPDK 24.03.0 initialization... 00:21:52.893 [2024-11-28 08:20:35.058594] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:21:52.893 08:20:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:52.893 { 00:21:52.893 "params": { 00:21:52.893 "name": "Nvme$subsystem", 00:21:52.893 "trtype": "$TEST_TRANSPORT", 00:21:52.893 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:52.893 "adrfam": "ipv4", 00:21:52.893 "trsvcid": "$NVMF_PORT", 00:21:52.893 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:52.893 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:52.893 "hdgst": ${hdgst:-false}, 00:21:52.893 "ddgst": ${ddgst:-false} 00:21:52.893 }, 00:21:52.893 "method": "bdev_nvme_attach_controller" 00:21:52.893 } 00:21:52.893 EOF 00:21:52.893 )") 00:21:52.893 08:20:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:52.893 08:20:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:52.893 08:20:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:52.893 { 00:21:52.893 "params": { 00:21:52.893 "name": "Nvme$subsystem", 00:21:52.893 "trtype": "$TEST_TRANSPORT", 00:21:52.893 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:52.893 "adrfam": "ipv4", 00:21:52.893 "trsvcid": "$NVMF_PORT", 
00:21:52.893 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:52.893 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:52.893 "hdgst": ${hdgst:-false}, 00:21:52.893 "ddgst": ${ddgst:-false} 00:21:52.893 }, 00:21:52.893 "method": "bdev_nvme_attach_controller" 00:21:52.893 } 00:21:52.893 EOF 00:21:52.893 )") 00:21:52.893 08:20:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:52.893 08:20:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:52.893 08:20:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:52.893 { 00:21:52.893 "params": { 00:21:52.893 "name": "Nvme$subsystem", 00:21:52.893 "trtype": "$TEST_TRANSPORT", 00:21:52.893 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:52.893 "adrfam": "ipv4", 00:21:52.893 "trsvcid": "$NVMF_PORT", 00:21:52.893 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:52.893 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:52.893 "hdgst": ${hdgst:-false}, 00:21:52.893 "ddgst": ${ddgst:-false} 00:21:52.893 }, 00:21:52.893 "method": "bdev_nvme_attach_controller" 00:21:52.893 } 00:21:52.893 EOF 00:21:52.893 )") 00:21:52.893 08:20:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:52.893 08:20:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:52.893 08:20:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:52.893 { 00:21:52.893 "params": { 00:21:52.893 "name": "Nvme$subsystem", 00:21:52.893 "trtype": "$TEST_TRANSPORT", 00:21:52.893 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:52.893 "adrfam": "ipv4", 00:21:52.893 "trsvcid": "$NVMF_PORT", 00:21:52.893 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:52.893 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 
00:21:52.893 "hdgst": ${hdgst:-false}, 00:21:52.893 "ddgst": ${ddgst:-false} 00:21:52.893 }, 00:21:52.893 "method": "bdev_nvme_attach_controller" 00:21:52.893 } 00:21:52.893 EOF 00:21:52.893 )") 00:21:52.893 08:20:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:52.893 08:20:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # jq . 00:21:52.893 08:20:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:21:52.893 08:20:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:21:52.893 "params": { 00:21:52.893 "name": "Nvme1", 00:21:52.893 "trtype": "tcp", 00:21:52.893 "traddr": "10.0.0.2", 00:21:52.893 "adrfam": "ipv4", 00:21:52.893 "trsvcid": "4420", 00:21:52.893 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:52.893 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:52.893 "hdgst": false, 00:21:52.893 "ddgst": false 00:21:52.893 }, 00:21:52.893 "method": "bdev_nvme_attach_controller" 00:21:52.893 },{ 00:21:52.893 "params": { 00:21:52.893 "name": "Nvme2", 00:21:52.893 "trtype": "tcp", 00:21:52.893 "traddr": "10.0.0.2", 00:21:52.893 "adrfam": "ipv4", 00:21:52.893 "trsvcid": "4420", 00:21:52.893 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:21:52.893 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:21:52.893 "hdgst": false, 00:21:52.893 "ddgst": false 00:21:52.893 }, 00:21:52.893 "method": "bdev_nvme_attach_controller" 00:21:52.893 },{ 00:21:52.893 "params": { 00:21:52.893 "name": "Nvme3", 00:21:52.893 "trtype": "tcp", 00:21:52.893 "traddr": "10.0.0.2", 00:21:52.893 "adrfam": "ipv4", 00:21:52.893 "trsvcid": "4420", 00:21:52.893 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:21:52.893 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:21:52.893 "hdgst": false, 00:21:52.893 "ddgst": false 00:21:52.893 }, 00:21:52.893 "method": "bdev_nvme_attach_controller" 00:21:52.893 },{ 00:21:52.893 "params": { 00:21:52.893 
"name": "Nvme4", 00:21:52.893 "trtype": "tcp", 00:21:52.893 "traddr": "10.0.0.2", 00:21:52.893 "adrfam": "ipv4", 00:21:52.893 "trsvcid": "4420", 00:21:52.893 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:21:52.893 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:21:52.893 "hdgst": false, 00:21:52.893 "ddgst": false 00:21:52.893 }, 00:21:52.893 "method": "bdev_nvme_attach_controller" 00:21:52.893 },{ 00:21:52.893 "params": { 00:21:52.893 "name": "Nvme5", 00:21:52.893 "trtype": "tcp", 00:21:52.893 "traddr": "10.0.0.2", 00:21:52.893 "adrfam": "ipv4", 00:21:52.893 "trsvcid": "4420", 00:21:52.893 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:21:52.893 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:21:52.893 "hdgst": false, 00:21:52.893 "ddgst": false 00:21:52.893 }, 00:21:52.893 "method": "bdev_nvme_attach_controller" 00:21:52.893 },{ 00:21:52.893 "params": { 00:21:52.893 "name": "Nvme6", 00:21:52.893 "trtype": "tcp", 00:21:52.893 "traddr": "10.0.0.2", 00:21:52.893 "adrfam": "ipv4", 00:21:52.893 "trsvcid": "4420", 00:21:52.893 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:21:52.893 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:21:52.893 "hdgst": false, 00:21:52.893 "ddgst": false 00:21:52.893 }, 00:21:52.893 "method": "bdev_nvme_attach_controller" 00:21:52.893 },{ 00:21:52.893 "params": { 00:21:52.893 "name": "Nvme7", 00:21:52.893 "trtype": "tcp", 00:21:52.893 "traddr": "10.0.0.2", 00:21:52.893 "adrfam": "ipv4", 00:21:52.893 "trsvcid": "4420", 00:21:52.893 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:21:52.893 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:21:52.893 "hdgst": false, 00:21:52.893 "ddgst": false 00:21:52.893 }, 00:21:52.893 "method": "bdev_nvme_attach_controller" 00:21:52.893 },{ 00:21:52.893 "params": { 00:21:52.893 "name": "Nvme8", 00:21:52.893 "trtype": "tcp", 00:21:52.893 "traddr": "10.0.0.2", 00:21:52.893 "adrfam": "ipv4", 00:21:52.893 "trsvcid": "4420", 00:21:52.893 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:21:52.893 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:21:52.893 
"hdgst": false, 00:21:52.893 "ddgst": false 00:21:52.893 }, 00:21:52.893 "method": "bdev_nvme_attach_controller" 00:21:52.893 },{ 00:21:52.893 "params": { 00:21:52.893 "name": "Nvme9", 00:21:52.893 "trtype": "tcp", 00:21:52.893 "traddr": "10.0.0.2", 00:21:52.893 "adrfam": "ipv4", 00:21:52.893 "trsvcid": "4420", 00:21:52.893 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:21:52.893 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:21:52.893 "hdgst": false, 00:21:52.893 "ddgst": false 00:21:52.893 }, 00:21:52.893 "method": "bdev_nvme_attach_controller" 00:21:52.893 },{ 00:21:52.894 "params": { 00:21:52.894 "name": "Nvme10", 00:21:52.894 "trtype": "tcp", 00:21:52.894 "traddr": "10.0.0.2", 00:21:52.894 "adrfam": "ipv4", 00:21:52.894 "trsvcid": "4420", 00:21:52.894 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:21:52.894 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:21:52.894 "hdgst": false, 00:21:52.894 "ddgst": false 00:21:52.894 }, 00:21:52.894 "method": "bdev_nvme_attach_controller" 00:21:52.894 }' 00:21:52.894 [2024-11-28 08:20:35.124132] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:53.152 [2024-11-28 08:20:35.165902] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:55.053 08:20:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:55.053 08:20:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@868 -- # return 0 00:21:55.053 08:20:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@81 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:21:55.053 08:20:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:55.053 08:20:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:55.053 08:20:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:55.053 08:20:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # kill -9 1417642 00:21:55.053 08:20:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@85 -- # rm -f /var/run/spdk_bdev1 00:21:55.053 08:20:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # sleep 1 00:21:55.987 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 74: 1417642 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:21:55.987 08:20:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@89 -- # kill -0 1417527 00:21:55.988 08:20:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:21:55.988 08:20:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:21:55.988 08:20:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:21:55.988 08:20:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 00:21:55.988 08:20:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:55.988 08:20:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:55.988 { 00:21:55.988 "params": { 00:21:55.988 "name": "Nvme$subsystem", 00:21:55.988 "trtype": "$TEST_TRANSPORT", 00:21:55.988 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:55.988 "adrfam": "ipv4", 00:21:55.988 "trsvcid": "$NVMF_PORT", 00:21:55.988 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:21:55.988 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:55.988 "hdgst": ${hdgst:-false}, 00:21:55.988 "ddgst": ${ddgst:-false} 00:21:55.988 }, 00:21:55.988 "method": "bdev_nvme_attach_controller" 00:21:55.988 } 00:21:55.988 EOF 00:21:55.988 )") 00:21:55.988 08:20:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:55.988 08:20:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:55.988 08:20:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:55.988 { 00:21:55.988 "params": { 00:21:55.988 "name": "Nvme$subsystem", 00:21:55.988 "trtype": "$TEST_TRANSPORT", 00:21:55.988 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:55.988 "adrfam": "ipv4", 00:21:55.988 "trsvcid": "$NVMF_PORT", 00:21:55.988 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:55.988 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:55.988 "hdgst": ${hdgst:-false}, 00:21:55.988 "ddgst": ${ddgst:-false} 00:21:55.988 }, 00:21:55.988 "method": "bdev_nvme_attach_controller" 00:21:55.988 } 00:21:55.988 EOF 00:21:55.988 )") 00:21:55.988 08:20:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:55.988 08:20:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:55.988 08:20:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:55.988 { 00:21:55.988 "params": { 00:21:55.988 "name": "Nvme$subsystem", 00:21:55.988 "trtype": "$TEST_TRANSPORT", 00:21:55.988 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:55.988 "adrfam": "ipv4", 00:21:55.988 "trsvcid": "$NVMF_PORT", 00:21:55.988 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:55.988 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:55.988 "hdgst": 
${hdgst:-false}, 00:21:55.988 "ddgst": ${ddgst:-false} 00:21:55.988 }, 00:21:55.988 "method": "bdev_nvme_attach_controller" 00:21:55.988 } 00:21:55.988 EOF 00:21:55.988 )") 00:21:55.988 08:20:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:55.988 08:20:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:55.988 08:20:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:55.988 { 00:21:55.988 "params": { 00:21:55.988 "name": "Nvme$subsystem", 00:21:55.988 "trtype": "$TEST_TRANSPORT", 00:21:55.988 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:55.988 "adrfam": "ipv4", 00:21:55.988 "trsvcid": "$NVMF_PORT", 00:21:55.988 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:55.988 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:55.988 "hdgst": ${hdgst:-false}, 00:21:55.988 "ddgst": ${ddgst:-false} 00:21:55.988 }, 00:21:55.988 "method": "bdev_nvme_attach_controller" 00:21:55.988 } 00:21:55.988 EOF 00:21:55.988 )") 00:21:55.988 08:20:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:55.988 08:20:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:55.988 08:20:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:55.988 { 00:21:55.988 "params": { 00:21:55.988 "name": "Nvme$subsystem", 00:21:55.988 "trtype": "$TEST_TRANSPORT", 00:21:55.988 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:55.988 "adrfam": "ipv4", 00:21:55.988 "trsvcid": "$NVMF_PORT", 00:21:55.988 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:55.988 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:55.988 "hdgst": ${hdgst:-false}, 00:21:55.988 "ddgst": ${ddgst:-false} 00:21:55.988 }, 00:21:55.988 "method": "bdev_nvme_attach_controller" 
00:21:55.988 } 00:21:55.988 EOF 00:21:55.988 )") 00:21:55.988 08:20:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:55.988 08:20:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:55.988 08:20:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:55.988 { 00:21:55.988 "params": { 00:21:55.988 "name": "Nvme$subsystem", 00:21:55.988 "trtype": "$TEST_TRANSPORT", 00:21:55.988 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:55.988 "adrfam": "ipv4", 00:21:55.988 "trsvcid": "$NVMF_PORT", 00:21:55.988 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:55.988 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:55.988 "hdgst": ${hdgst:-false}, 00:21:55.988 "ddgst": ${ddgst:-false} 00:21:55.988 }, 00:21:55.988 "method": "bdev_nvme_attach_controller" 00:21:55.988 } 00:21:55.988 EOF 00:21:55.988 )") 00:21:55.988 08:20:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:55.988 08:20:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:55.988 08:20:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:55.988 { 00:21:55.988 "params": { 00:21:55.988 "name": "Nvme$subsystem", 00:21:55.988 "trtype": "$TEST_TRANSPORT", 00:21:55.988 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:55.988 "adrfam": "ipv4", 00:21:55.988 "trsvcid": "$NVMF_PORT", 00:21:55.988 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:55.988 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:55.988 "hdgst": ${hdgst:-false}, 00:21:55.988 "ddgst": ${ddgst:-false} 00:21:55.988 }, 00:21:55.988 "method": "bdev_nvme_attach_controller" 00:21:55.988 } 00:21:55.988 EOF 00:21:55.988 )") 00:21:55.988 08:20:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 
-- nvmf/common.sh@582 -- # cat 00:21:55.988 [2024-11-28 08:20:37.995088] Starting SPDK v25.01-pre git sha1 27aaaa748 / DPDK 24.03.0 initialization... 00:21:55.988 [2024-11-28 08:20:37.995136] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1418134 ] 00:21:55.988 08:20:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:55.988 08:20:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:55.988 { 00:21:55.988 "params": { 00:21:55.988 "name": "Nvme$subsystem", 00:21:55.988 "trtype": "$TEST_TRANSPORT", 00:21:55.988 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:55.988 "adrfam": "ipv4", 00:21:55.988 "trsvcid": "$NVMF_PORT", 00:21:55.988 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:55.988 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:55.988 "hdgst": ${hdgst:-false}, 00:21:55.988 "ddgst": ${ddgst:-false} 00:21:55.988 }, 00:21:55.988 "method": "bdev_nvme_attach_controller" 00:21:55.988 } 00:21:55.988 EOF 00:21:55.988 )") 00:21:55.988 08:20:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:55.988 08:20:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:55.988 08:20:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:55.988 { 00:21:55.988 "params": { 00:21:55.988 "name": "Nvme$subsystem", 00:21:55.988 "trtype": "$TEST_TRANSPORT", 00:21:55.988 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:55.988 "adrfam": "ipv4", 00:21:55.988 "trsvcid": "$NVMF_PORT", 00:21:55.988 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:55.988 "hostnqn": 
"nqn.2016-06.io.spdk:host$subsystem", 00:21:55.988 "hdgst": ${hdgst:-false}, 00:21:55.988 "ddgst": ${ddgst:-false} 00:21:55.988 }, 00:21:55.988 "method": "bdev_nvme_attach_controller" 00:21:55.988 } 00:21:55.988 EOF 00:21:55.988 )") 00:21:55.988 08:20:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:55.988 08:20:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:55.988 08:20:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:55.988 { 00:21:55.988 "params": { 00:21:55.988 "name": "Nvme$subsystem", 00:21:55.988 "trtype": "$TEST_TRANSPORT", 00:21:55.988 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:55.988 "adrfam": "ipv4", 00:21:55.988 "trsvcid": "$NVMF_PORT", 00:21:55.988 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:55.988 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:55.988 "hdgst": ${hdgst:-false}, 00:21:55.988 "ddgst": ${ddgst:-false} 00:21:55.988 }, 00:21:55.988 "method": "bdev_nvme_attach_controller" 00:21:55.988 } 00:21:55.988 EOF 00:21:55.988 )") 00:21:55.988 08:20:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:55.989 08:20:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # jq . 
00:21:55.989 08:20:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:21:55.989 08:20:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:21:55.989 "params": { 00:21:55.989 "name": "Nvme1", 00:21:55.989 "trtype": "tcp", 00:21:55.989 "traddr": "10.0.0.2", 00:21:55.989 "adrfam": "ipv4", 00:21:55.989 "trsvcid": "4420", 00:21:55.989 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:55.989 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:55.989 "hdgst": false, 00:21:55.989 "ddgst": false 00:21:55.989 }, 00:21:55.989 "method": "bdev_nvme_attach_controller" 00:21:55.989 },{ 00:21:55.989 "params": { 00:21:55.989 "name": "Nvme2", 00:21:55.989 "trtype": "tcp", 00:21:55.989 "traddr": "10.0.0.2", 00:21:55.989 "adrfam": "ipv4", 00:21:55.989 "trsvcid": "4420", 00:21:55.989 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:21:55.989 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:21:55.989 "hdgst": false, 00:21:55.989 "ddgst": false 00:21:55.989 }, 00:21:55.989 "method": "bdev_nvme_attach_controller" 00:21:55.989 },{ 00:21:55.989 "params": { 00:21:55.989 "name": "Nvme3", 00:21:55.989 "trtype": "tcp", 00:21:55.989 "traddr": "10.0.0.2", 00:21:55.989 "adrfam": "ipv4", 00:21:55.989 "trsvcid": "4420", 00:21:55.989 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:21:55.989 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:21:55.989 "hdgst": false, 00:21:55.989 "ddgst": false 00:21:55.989 }, 00:21:55.989 "method": "bdev_nvme_attach_controller" 00:21:55.989 },{ 00:21:55.989 "params": { 00:21:55.989 "name": "Nvme4", 00:21:55.989 "trtype": "tcp", 00:21:55.989 "traddr": "10.0.0.2", 00:21:55.989 "adrfam": "ipv4", 00:21:55.989 "trsvcid": "4420", 00:21:55.989 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:21:55.989 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:21:55.989 "hdgst": false, 00:21:55.989 "ddgst": false 00:21:55.989 }, 00:21:55.989 "method": "bdev_nvme_attach_controller" 00:21:55.989 },{ 00:21:55.989 "params": { 
00:21:55.989 "name": "Nvme5", 00:21:55.989 "trtype": "tcp", 00:21:55.989 "traddr": "10.0.0.2", 00:21:55.989 "adrfam": "ipv4", 00:21:55.989 "trsvcid": "4420", 00:21:55.989 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:21:55.989 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:21:55.989 "hdgst": false, 00:21:55.989 "ddgst": false 00:21:55.989 }, 00:21:55.989 "method": "bdev_nvme_attach_controller" 00:21:55.989 },{ 00:21:55.989 "params": { 00:21:55.989 "name": "Nvme6", 00:21:55.989 "trtype": "tcp", 00:21:55.989 "traddr": "10.0.0.2", 00:21:55.989 "adrfam": "ipv4", 00:21:55.989 "trsvcid": "4420", 00:21:55.989 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:21:55.989 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:21:55.989 "hdgst": false, 00:21:55.989 "ddgst": false 00:21:55.989 }, 00:21:55.989 "method": "bdev_nvme_attach_controller" 00:21:55.989 },{ 00:21:55.989 "params": { 00:21:55.989 "name": "Nvme7", 00:21:55.989 "trtype": "tcp", 00:21:55.989 "traddr": "10.0.0.2", 00:21:55.989 "adrfam": "ipv4", 00:21:55.989 "trsvcid": "4420", 00:21:55.989 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:21:55.989 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:21:55.989 "hdgst": false, 00:21:55.989 "ddgst": false 00:21:55.989 }, 00:21:55.989 "method": "bdev_nvme_attach_controller" 00:21:55.989 },{ 00:21:55.989 "params": { 00:21:55.989 "name": "Nvme8", 00:21:55.989 "trtype": "tcp", 00:21:55.989 "traddr": "10.0.0.2", 00:21:55.989 "adrfam": "ipv4", 00:21:55.989 "trsvcid": "4420", 00:21:55.989 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:21:55.989 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:21:55.989 "hdgst": false, 00:21:55.989 "ddgst": false 00:21:55.989 }, 00:21:55.989 "method": "bdev_nvme_attach_controller" 00:21:55.989 },{ 00:21:55.989 "params": { 00:21:55.989 "name": "Nvme9", 00:21:55.989 "trtype": "tcp", 00:21:55.989 "traddr": "10.0.0.2", 00:21:55.989 "adrfam": "ipv4", 00:21:55.989 "trsvcid": "4420", 00:21:55.989 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:21:55.989 "hostnqn": "nqn.2016-06.io.spdk:host9", 
00:21:55.989 "hdgst": false, 00:21:55.989 "ddgst": false 00:21:55.989 }, 00:21:55.989 "method": "bdev_nvme_attach_controller" 00:21:55.989 },{ 00:21:55.989 "params": { 00:21:55.989 "name": "Nvme10", 00:21:55.989 "trtype": "tcp", 00:21:55.989 "traddr": "10.0.0.2", 00:21:55.989 "adrfam": "ipv4", 00:21:55.989 "trsvcid": "4420", 00:21:55.989 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:21:55.989 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:21:55.989 "hdgst": false, 00:21:55.989 "ddgst": false 00:21:55.989 }, 00:21:55.989 "method": "bdev_nvme_attach_controller" 00:21:55.989 }' 00:21:55.989 [2024-11-28 08:20:38.060042] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:55.989 [2024-11-28 08:20:38.101482] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:57.364 Running I/O for 1 seconds... 00:21:58.558 2189.00 IOPS, 136.81 MiB/s 00:21:58.558 Latency(us) 00:21:58.558 [2024-11-28T07:20:40.827Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:58.558 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:58.558 Verification LBA range: start 0x0 length 0x400 00:21:58.558 Nvme1n1 : 1.14 287.31 17.96 0.00 0.00 218063.05 12252.38 210627.01 00:21:58.558 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:58.558 Verification LBA range: start 0x0 length 0x400 00:21:58.558 Nvme2n1 : 1.05 244.92 15.31 0.00 0.00 254875.38 15272.74 206067.98 00:21:58.558 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:58.558 Verification LBA range: start 0x0 length 0x400 00:21:58.558 Nvme3n1 : 1.04 246.44 15.40 0.00 0.00 248928.17 16526.47 226127.69 00:21:58.558 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:58.558 Verification LBA range: start 0x0 length 0x400 00:21:58.558 Nvme4n1 : 1.13 286.66 17.92 0.00 0.00 207390.45 14132.98 220656.86 00:21:58.558 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 
00:21:58.558 Verification LBA range: start 0x0 length 0x400 00:21:58.558 Nvme5n1 : 1.16 275.32 17.21 0.00 0.00 217700.80 17096.35 225215.89 00:21:58.558 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:58.558 Verification LBA range: start 0x0 length 0x400 00:21:58.558 Nvme6n1 : 1.20 266.60 16.66 0.00 0.00 214362.11 14360.93 217009.64 00:21:58.558 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:58.558 Verification LBA range: start 0x0 length 0x400 00:21:58.558 Nvme7n1 : 1.14 279.65 17.48 0.00 0.00 207684.56 14531.90 232510.33 00:21:58.558 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:58.558 Verification LBA range: start 0x0 length 0x400 00:21:58.558 Nvme8n1 : 1.15 277.90 17.37 0.00 0.00 206019.54 15044.79 224304.08 00:21:58.558 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:58.558 Verification LBA range: start 0x0 length 0x400 00:21:58.558 Nvme9n1 : 1.16 275.81 17.24 0.00 0.00 204414.80 22681.15 229774.91 00:21:58.558 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:58.558 Verification LBA range: start 0x0 length 0x400 00:21:58.558 Nvme10n1 : 1.17 274.08 17.13 0.00 0.00 202817.80 14246.96 244363.80 00:21:58.558 [2024-11-28T07:20:40.827Z] =================================================================================================================== 00:21:58.558 [2024-11-28T07:20:40.827Z] Total : 2714.69 169.67 0.00 0.00 216810.42 12252.38 244363.80 00:21:58.817 08:20:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@95 -- # stoptarget 00:21:58.817 08:20:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:21:58.817 08:20:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:21:58.817 08:20:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:21:58.817 08:20:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@46 -- # nvmftestfini 00:21:58.817 08:20:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:58.817 08:20:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # sync 00:21:58.817 08:20:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:58.817 08:20:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set +e 00:21:58.817 08:20:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:58.817 08:20:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:58.817 rmmod nvme_tcp 00:21:58.817 rmmod nvme_fabrics 00:21:58.817 rmmod nvme_keyring 00:21:58.817 08:20:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:58.817 08:20:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@128 -- # set -e 00:21:58.817 08:20:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@129 -- # return 0 00:21:58.817 08:20:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@517 -- # '[' -n 1417527 ']' 00:21:58.817 08:20:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@518 -- # killprocess 1417527 00:21:58.817 08:20:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # '[' -z 1417527 ']' 00:21:58.817 
08:20:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@958 -- # kill -0 1417527 00:21:58.817 08:20:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@959 -- # uname 00:21:58.817 08:20:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:58.817 08:20:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1417527 00:21:58.817 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:58.817 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:58.817 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1417527' 00:21:58.817 killing process with pid 1417527 00:21:58.817 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@973 -- # kill 1417527 00:21:58.817 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@978 -- # wait 1417527 00:21:59.383 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:59.383 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:59.383 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:59.383 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # iptr 00:21:59.383 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # iptables-save 00:21:59.383 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:59.383 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # iptables-restore 00:21:59.383 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:59.383 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:59.383 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:59.383 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:59.383 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:01.286 08:20:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:01.286 00:22:01.286 real 0m14.788s 00:22:01.286 user 0m34.059s 00:22:01.286 sys 0m5.402s 00:22:01.286 08:20:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:01.286 08:20:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:01.286 ************************************ 00:22:01.286 END TEST nvmf_shutdown_tc1 00:22:01.286 ************************************ 00:22:01.286 08:20:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@163 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:22:01.286 08:20:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:22:01.286 08:20:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:01.286 08:20:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set 
+x 00:22:01.546 ************************************ 00:22:01.546 START TEST nvmf_shutdown_tc2 00:22:01.546 ************************************ 00:22:01.546 08:20:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc2 00:22:01.546 08:20:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@100 -- # starttarget 00:22:01.546 08:20:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@16 -- # nvmftestinit 00:22:01.546 08:20:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:01.546 08:20:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:01.546 08:20:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:01.546 08:20:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:01.546 08:20:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:01.546 08:20:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:01.546 08:20:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:01.546 08:20:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:01.546 08:20:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:01.546 08:20:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:01.546 08:20:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@309 -- # xtrace_disable 
00:22:01.546 08:20:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:01.546 08:20:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:01.546 08:20:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # pci_devs=() 00:22:01.546 08:20:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:01.546 08:20:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:01.546 08:20:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:01.546 08:20:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:01.546 08:20:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:01.546 08:20:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # net_devs=() 00:22:01.546 08:20:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:01.546 08:20:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # e810=() 00:22:01.546 08:20:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # local -ga e810 00:22:01.546 08:20:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # x722=() 00:22:01.546 08:20:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # local -ga x722 00:22:01.546 08:20:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # mlx=() 00:22:01.546 08:20:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # local -ga mlx 
00:22:01.546 08:20:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:01.546 08:20:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:01.546 08:20:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:01.546 08:20:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:01.546 08:20:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:01.546 08:20:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:01.546 08:20:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:01.546 08:20:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:01.546 08:20:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:01.546 08:20:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:01.546 08:20:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:01.546 08:20:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:01.546 08:20:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:01.546 08:20:43 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:01.546 08:20:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:01.546 08:20:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:01.546 08:20:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:01.546 08:20:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:01.546 08:20:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:01.546 08:20:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:22:01.546 Found 0000:86:00.0 (0x8086 - 0x159b) 00:22:01.546 08:20:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:01.546 08:20:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:01.546 08:20:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:01.546 08:20:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:01.546 08:20:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:01.546 08:20:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:01.546 08:20:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:22:01.546 Found 0000:86:00.1 (0x8086 - 0x159b) 00:22:01.546 08:20:43 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:01.546 08:20:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:01.546 08:20:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:01.546 08:20:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:01.546 08:20:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:01.546 08:20:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:01.546 08:20:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:01.546 08:20:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:01.546 08:20:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:01.546 08:20:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:01.546 08:20:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:01.546 08:20:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:01.546 08:20:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:01.546 08:20:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:01.546 08:20:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:01.546 08:20:43 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:22:01.546 Found net devices under 0000:86:00.0: cvl_0_0 00:22:01.546 08:20:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:01.546 08:20:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:01.546 08:20:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:01.546 08:20:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:01.546 08:20:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:01.546 08:20:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:01.546 08:20:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:01.546 08:20:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:01.546 08:20:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:22:01.546 Found net devices under 0000:86:00.1: cvl_0_1 00:22:01.546 08:20:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:01.547 08:20:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:01.547 08:20:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # is_hw=yes 00:22:01.547 08:20:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@444 -- 
# [[ yes == yes ]] 00:22:01.547 08:20:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:01.547 08:20:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:01.547 08:20:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:01.547 08:20:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:01.547 08:20:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:01.547 08:20:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:01.547 08:20:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:01.547 08:20:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:01.547 08:20:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:01.547 08:20:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:01.547 08:20:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:01.547 08:20:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:01.547 08:20:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:01.547 08:20:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:01.547 08:20:43 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:01.547 08:20:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:01.547 08:20:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:01.547 08:20:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:01.547 08:20:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:01.547 08:20:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:01.547 08:20:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:01.547 08:20:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:01.547 08:20:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:01.547 08:20:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:01.547 08:20:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:01.806 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:22:01.806 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.413 ms 00:22:01.806 00:22:01.806 --- 10.0.0.2 ping statistics --- 00:22:01.806 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:01.806 rtt min/avg/max/mdev = 0.413/0.413/0.413/0.000 ms 00:22:01.806 08:20:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:01.806 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:01.806 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.216 ms 00:22:01.806 00:22:01.806 --- 10.0.0.1 ping statistics --- 00:22:01.806 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:01.806 rtt min/avg/max/mdev = 0.216/0.216/0.216/0.000 ms 00:22:01.806 08:20:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:01.806 08:20:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # return 0 00:22:01.806 08:20:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:01.806 08:20:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:01.806 08:20:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:01.806 08:20:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:01.806 08:20:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:01.806 08:20:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:01.806 08:20:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:01.806 08:20:43 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:22:01.806 08:20:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:01.806 08:20:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:01.806 08:20:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:01.806 08:20:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@509 -- # nvmfpid=1419203 00:22:01.806 08:20:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@510 -- # waitforlisten 1419203 00:22:01.806 08:20:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:22:01.806 08:20:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # '[' -z 1419203 ']' 00:22:01.806 08:20:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:01.806 08:20:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:01.807 08:20:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:01.807 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:22:01.807 08:20:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:01.807 08:20:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:01.807 [2024-11-28 08:20:43.910574] Starting SPDK v25.01-pre git sha1 27aaaa748 / DPDK 24.03.0 initialization... 00:22:01.807 [2024-11-28 08:20:43.910620] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:01.807 [2024-11-28 08:20:43.975517] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:01.807 [2024-11-28 08:20:44.018103] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:01.807 [2024-11-28 08:20:44.018141] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:01.807 [2024-11-28 08:20:44.018149] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:01.807 [2024-11-28 08:20:44.018155] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:01.807 [2024-11-28 08:20:44.018161] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:01.807 [2024-11-28 08:20:44.019809] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:01.807 [2024-11-28 08:20:44.019901] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:01.807 [2024-11-28 08:20:44.020009] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:01.807 [2024-11-28 08:20:44.020009] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:22:02.065 08:20:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:02.065 08:20:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@868 -- # return 0 00:22:02.065 08:20:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:02.065 08:20:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:02.065 08:20:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:02.065 08:20:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:02.065 08:20:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:02.065 08:20:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:02.065 08:20:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:02.065 [2024-11-28 08:20:44.157787] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:02.065 08:20:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:02.065 08:20:44 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:22:02.066 08:20:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:22:02.066 08:20:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:02.066 08:20:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:02.066 08:20:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:02.066 08:20:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:02.066 08:20:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:02.066 08:20:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:02.066 08:20:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:02.066 08:20:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:02.066 08:20:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:02.066 08:20:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:02.066 08:20:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:02.066 08:20:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:02.066 08:20:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 
00:22:02.066 08:20:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:02.066 08:20:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:02.066 08:20:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:02.066 08:20:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:02.066 08:20:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:02.066 08:20:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:02.066 08:20:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:02.066 08:20:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:02.066 08:20:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:02.066 08:20:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:02.066 08:20:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # rpc_cmd 00:22:02.066 08:20:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:02.066 08:20:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:02.066 Malloc1 00:22:02.066 [2024-11-28 08:20:44.262735] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:02.066 Malloc2 00:22:02.066 Malloc3 00:22:02.324 Malloc4 00:22:02.324 Malloc5 00:22:02.324 Malloc6 00:22:02.324 Malloc7 00:22:02.324 Malloc8 00:22:02.324 Malloc9 
00:22:02.583 Malloc10 00:22:02.583 08:20:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:02.583 08:20:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:22:02.583 08:20:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:02.583 08:20:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:02.583 08:20:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # perfpid=1419427 00:22:02.583 08:20:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # waitforlisten 1419427 /var/tmp/bdevperf.sock 00:22:02.583 08:20:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # '[' -z 1419427 ']' 00:22:02.583 08:20:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:02.583 08:20:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:22:02.583 08:20:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:02.583 08:20:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:22:02.583 08:20:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:22:02.583 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:02.583 08:20:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # config=() 00:22:02.583 08:20:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:02.583 08:20:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # local subsystem config 00:22:02.583 08:20:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:02.583 08:20:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:02.583 08:20:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:02.584 { 00:22:02.584 "params": { 00:22:02.584 "name": "Nvme$subsystem", 00:22:02.584 "trtype": "$TEST_TRANSPORT", 00:22:02.584 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:02.584 "adrfam": "ipv4", 00:22:02.584 "trsvcid": "$NVMF_PORT", 00:22:02.584 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:02.584 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:02.584 "hdgst": ${hdgst:-false}, 00:22:02.584 "ddgst": ${ddgst:-false} 00:22:02.584 }, 00:22:02.584 "method": "bdev_nvme_attach_controller" 00:22:02.584 } 00:22:02.584 EOF 00:22:02.584 )") 00:22:02.584 08:20:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:22:02.584 08:20:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:02.584 08:20:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:02.584 { 00:22:02.584 "params": { 00:22:02.584 "name": "Nvme$subsystem", 00:22:02.584 "trtype": "$TEST_TRANSPORT", 00:22:02.584 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:02.584 
"adrfam": "ipv4", 00:22:02.584 "trsvcid": "$NVMF_PORT", 00:22:02.584 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:02.584 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:02.584 "hdgst": ${hdgst:-false}, 00:22:02.584 "ddgst": ${ddgst:-false} 00:22:02.584 }, 00:22:02.584 "method": "bdev_nvme_attach_controller" 00:22:02.584 } 00:22:02.584 EOF 00:22:02.584 )") 00:22:02.584 08:20:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:22:02.584 08:20:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:02.584 08:20:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:02.584 { 00:22:02.584 "params": { 00:22:02.584 "name": "Nvme$subsystem", 00:22:02.584 "trtype": "$TEST_TRANSPORT", 00:22:02.584 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:02.584 "adrfam": "ipv4", 00:22:02.584 "trsvcid": "$NVMF_PORT", 00:22:02.584 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:02.584 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:02.584 "hdgst": ${hdgst:-false}, 00:22:02.584 "ddgst": ${ddgst:-false} 00:22:02.584 }, 00:22:02.584 "method": "bdev_nvme_attach_controller" 00:22:02.584 } 00:22:02.584 EOF 00:22:02.584 )") 00:22:02.584 08:20:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:22:02.584 08:20:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:02.584 08:20:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:02.584 { 00:22:02.584 "params": { 00:22:02.584 "name": "Nvme$subsystem", 00:22:02.584 "trtype": "$TEST_TRANSPORT", 00:22:02.584 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:02.584 "adrfam": "ipv4", 00:22:02.584 "trsvcid": "$NVMF_PORT", 00:22:02.584 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 
00:22:02.584 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:02.584 "hdgst": ${hdgst:-false}, 00:22:02.584 "ddgst": ${ddgst:-false} 00:22:02.584 }, 00:22:02.584 "method": "bdev_nvme_attach_controller" 00:22:02.584 } 00:22:02.584 EOF 00:22:02.584 )") 00:22:02.584 08:20:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:22:02.584 08:20:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:02.584 08:20:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:02.584 { 00:22:02.584 "params": { 00:22:02.584 "name": "Nvme$subsystem", 00:22:02.584 "trtype": "$TEST_TRANSPORT", 00:22:02.584 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:02.584 "adrfam": "ipv4", 00:22:02.584 "trsvcid": "$NVMF_PORT", 00:22:02.584 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:02.584 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:02.584 "hdgst": ${hdgst:-false}, 00:22:02.584 "ddgst": ${ddgst:-false} 00:22:02.584 }, 00:22:02.584 "method": "bdev_nvme_attach_controller" 00:22:02.584 } 00:22:02.584 EOF 00:22:02.584 )") 00:22:02.584 08:20:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:22:02.584 08:20:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:02.584 08:20:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:02.584 { 00:22:02.584 "params": { 00:22:02.584 "name": "Nvme$subsystem", 00:22:02.584 "trtype": "$TEST_TRANSPORT", 00:22:02.584 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:02.584 "adrfam": "ipv4", 00:22:02.584 "trsvcid": "$NVMF_PORT", 00:22:02.584 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:02.584 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:02.584 "hdgst": ${hdgst:-false}, 00:22:02.584 "ddgst": 
${ddgst:-false} 00:22:02.584 }, 00:22:02.584 "method": "bdev_nvme_attach_controller" 00:22:02.584 } 00:22:02.584 EOF 00:22:02.584 )") 00:22:02.584 08:20:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:22:02.584 08:20:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:02.584 08:20:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:02.584 { 00:22:02.584 "params": { 00:22:02.584 "name": "Nvme$subsystem", 00:22:02.584 "trtype": "$TEST_TRANSPORT", 00:22:02.584 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:02.584 "adrfam": "ipv4", 00:22:02.584 "trsvcid": "$NVMF_PORT", 00:22:02.584 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:02.584 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:02.584 "hdgst": ${hdgst:-false}, 00:22:02.584 "ddgst": ${ddgst:-false} 00:22:02.584 }, 00:22:02.584 "method": "bdev_nvme_attach_controller" 00:22:02.584 } 00:22:02.584 EOF 00:22:02.584 )") 00:22:02.584 [2024-11-28 08:20:44.739016] Starting SPDK v25.01-pre git sha1 27aaaa748 / DPDK 24.03.0 initialization... 
00:22:02.584 [2024-11-28 08:20:44.739065] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1419427 ] 00:22:02.584 08:20:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:22:02.584 08:20:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:02.584 08:20:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:02.584 { 00:22:02.584 "params": { 00:22:02.584 "name": "Nvme$subsystem", 00:22:02.584 "trtype": "$TEST_TRANSPORT", 00:22:02.584 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:02.584 "adrfam": "ipv4", 00:22:02.584 "trsvcid": "$NVMF_PORT", 00:22:02.584 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:02.584 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:02.584 "hdgst": ${hdgst:-false}, 00:22:02.584 "ddgst": ${ddgst:-false} 00:22:02.584 }, 00:22:02.584 "method": "bdev_nvme_attach_controller" 00:22:02.584 } 00:22:02.584 EOF 00:22:02.584 )") 00:22:02.584 08:20:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:22:02.584 08:20:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:02.584 08:20:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:02.584 { 00:22:02.584 "params": { 00:22:02.584 "name": "Nvme$subsystem", 00:22:02.584 "trtype": "$TEST_TRANSPORT", 00:22:02.584 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:02.584 "adrfam": "ipv4", 00:22:02.584 "trsvcid": "$NVMF_PORT", 00:22:02.584 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:02.584 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:02.584 "hdgst": 
${hdgst:-false}, 00:22:02.584 "ddgst": ${ddgst:-false} 00:22:02.584 }, 00:22:02.584 "method": "bdev_nvme_attach_controller" 00:22:02.584 } 00:22:02.584 EOF 00:22:02.584 )") 00:22:02.584 08:20:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:22:02.584 08:20:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:02.584 08:20:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:02.584 { 00:22:02.584 "params": { 00:22:02.584 "name": "Nvme$subsystem", 00:22:02.584 "trtype": "$TEST_TRANSPORT", 00:22:02.584 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:02.584 "adrfam": "ipv4", 00:22:02.584 "trsvcid": "$NVMF_PORT", 00:22:02.584 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:02.584 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:02.584 "hdgst": ${hdgst:-false}, 00:22:02.584 "ddgst": ${ddgst:-false} 00:22:02.584 }, 00:22:02.584 "method": "bdev_nvme_attach_controller" 00:22:02.584 } 00:22:02.584 EOF 00:22:02.584 )") 00:22:02.584 08:20:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:22:02.584 08:20:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@584 -- # jq . 
00:22:02.584 08:20:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@585 -- # IFS=, 00:22:02.584 08:20:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:22:02.584 "params": { 00:22:02.584 "name": "Nvme1", 00:22:02.584 "trtype": "tcp", 00:22:02.584 "traddr": "10.0.0.2", 00:22:02.584 "adrfam": "ipv4", 00:22:02.584 "trsvcid": "4420", 00:22:02.584 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:02.584 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:02.584 "hdgst": false, 00:22:02.584 "ddgst": false 00:22:02.584 }, 00:22:02.584 "method": "bdev_nvme_attach_controller" 00:22:02.585 },{ 00:22:02.585 "params": { 00:22:02.585 "name": "Nvme2", 00:22:02.585 "trtype": "tcp", 00:22:02.585 "traddr": "10.0.0.2", 00:22:02.585 "adrfam": "ipv4", 00:22:02.585 "trsvcid": "4420", 00:22:02.585 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:02.585 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:22:02.585 "hdgst": false, 00:22:02.585 "ddgst": false 00:22:02.585 }, 00:22:02.585 "method": "bdev_nvme_attach_controller" 00:22:02.585 },{ 00:22:02.585 "params": { 00:22:02.585 "name": "Nvme3", 00:22:02.585 "trtype": "tcp", 00:22:02.585 "traddr": "10.0.0.2", 00:22:02.585 "adrfam": "ipv4", 00:22:02.585 "trsvcid": "4420", 00:22:02.585 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:22:02.585 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:22:02.585 "hdgst": false, 00:22:02.585 "ddgst": false 00:22:02.585 }, 00:22:02.585 "method": "bdev_nvme_attach_controller" 00:22:02.585 },{ 00:22:02.585 "params": { 00:22:02.585 "name": "Nvme4", 00:22:02.585 "trtype": "tcp", 00:22:02.585 "traddr": "10.0.0.2", 00:22:02.585 "adrfam": "ipv4", 00:22:02.585 "trsvcid": "4420", 00:22:02.585 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:22:02.585 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:22:02.585 "hdgst": false, 00:22:02.585 "ddgst": false 00:22:02.585 }, 00:22:02.585 "method": "bdev_nvme_attach_controller" 00:22:02.585 },{ 00:22:02.585 "params": { 
00:22:02.585 "name": "Nvme5", 00:22:02.585 "trtype": "tcp", 00:22:02.585 "traddr": "10.0.0.2", 00:22:02.585 "adrfam": "ipv4", 00:22:02.585 "trsvcid": "4420", 00:22:02.585 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:22:02.585 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:22:02.585 "hdgst": false, 00:22:02.585 "ddgst": false 00:22:02.585 }, 00:22:02.585 "method": "bdev_nvme_attach_controller" 00:22:02.585 },{ 00:22:02.585 "params": { 00:22:02.585 "name": "Nvme6", 00:22:02.585 "trtype": "tcp", 00:22:02.585 "traddr": "10.0.0.2", 00:22:02.585 "adrfam": "ipv4", 00:22:02.585 "trsvcid": "4420", 00:22:02.585 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:22:02.585 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:22:02.585 "hdgst": false, 00:22:02.585 "ddgst": false 00:22:02.585 }, 00:22:02.585 "method": "bdev_nvme_attach_controller" 00:22:02.585 },{ 00:22:02.585 "params": { 00:22:02.585 "name": "Nvme7", 00:22:02.585 "trtype": "tcp", 00:22:02.585 "traddr": "10.0.0.2", 00:22:02.585 "adrfam": "ipv4", 00:22:02.585 "trsvcid": "4420", 00:22:02.585 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:22:02.585 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:22:02.585 "hdgst": false, 00:22:02.585 "ddgst": false 00:22:02.585 }, 00:22:02.585 "method": "bdev_nvme_attach_controller" 00:22:02.585 },{ 00:22:02.585 "params": { 00:22:02.585 "name": "Nvme8", 00:22:02.585 "trtype": "tcp", 00:22:02.585 "traddr": "10.0.0.2", 00:22:02.585 "adrfam": "ipv4", 00:22:02.585 "trsvcid": "4420", 00:22:02.585 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:22:02.585 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:22:02.585 "hdgst": false, 00:22:02.585 "ddgst": false 00:22:02.585 }, 00:22:02.585 "method": "bdev_nvme_attach_controller" 00:22:02.585 },{ 00:22:02.585 "params": { 00:22:02.585 "name": "Nvme9", 00:22:02.585 "trtype": "tcp", 00:22:02.585 "traddr": "10.0.0.2", 00:22:02.585 "adrfam": "ipv4", 00:22:02.585 "trsvcid": "4420", 00:22:02.585 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:22:02.585 "hostnqn": "nqn.2016-06.io.spdk:host9", 
00:22:02.585 "hdgst": false, 00:22:02.585 "ddgst": false 00:22:02.585 }, 00:22:02.585 "method": "bdev_nvme_attach_controller" 00:22:02.585 },{ 00:22:02.585 "params": { 00:22:02.585 "name": "Nvme10", 00:22:02.585 "trtype": "tcp", 00:22:02.585 "traddr": "10.0.0.2", 00:22:02.585 "adrfam": "ipv4", 00:22:02.585 "trsvcid": "4420", 00:22:02.585 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:22:02.585 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:22:02.585 "hdgst": false, 00:22:02.585 "ddgst": false 00:22:02.585 }, 00:22:02.585 "method": "bdev_nvme_attach_controller" 00:22:02.585 }' 00:22:02.585 [2024-11-28 08:20:44.803789] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:02.585 [2024-11-28 08:20:44.844976] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:04.487 Running I/O for 10 seconds... 00:22:04.487 08:20:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:04.487 08:20:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@868 -- # return 0 00:22:04.487 08:20:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@106 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:22:04.487 08:20:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:04.487 08:20:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:04.487 08:20:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:04.487 08:20:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@108 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:22:04.487 08:20:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:22:04.487 08:20:46 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:22:04.487 08:20:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local ret=1 00:22:04.487 08:20:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # local i 00:22:04.487 08:20:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:22:04.487 08:20:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:22:04.487 08:20:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:22:04.487 08:20:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:22:04.487 08:20:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:04.487 08:20:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:04.487 08:20:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:04.487 08:20:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=67 00:22:04.487 08:20:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 67 -ge 100 ']' 00:22:04.487 08:20:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@68 -- # sleep 0.25 00:22:04.747 08:20:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i-- )) 00:22:04.747 08:20:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:22:04.747 08:20:46 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:22:04.747 08:20:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:22:04.747 08:20:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:04.747 08:20:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:04.747 08:20:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:04.747 08:20:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=131 00:22:04.747 08:20:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 131 -ge 100 ']' 00:22:04.747 08:20:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # ret=0 00:22:04.747 08:20:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@66 -- # break 00:22:04.747 08:20:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@70 -- # return 0 00:22:04.747 08:20:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@111 -- # killprocess 1419427 00:22:04.747 08:20:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # '[' -z 1419427 ']' 00:22:04.747 08:20:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # kill -0 1419427 00:22:04.747 08:20:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # uname 00:22:04.747 08:20:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:04.747 08:20:46 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1419427 00:22:05.005 08:20:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:05.005 08:20:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:05.005 08:20:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1419427' 00:22:05.005 killing process with pid 1419427 00:22:05.006 08:20:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@973 -- # kill 1419427 00:22:05.006 08:20:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@978 -- # wait 1419427 00:22:05.006 Received shutdown signal, test time was about 0.761134 seconds 00:22:05.006 00:22:05.006 Latency(us) 00:22:05.006 [2024-11-28T07:20:47.275Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:05.006 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:05.006 Verification LBA range: start 0x0 length 0x400 00:22:05.006 Nvme1n1 : 0.74 258.53 16.16 0.00 0.00 244421.97 19375.86 222480.47 00:22:05.006 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:05.006 Verification LBA range: start 0x0 length 0x400 00:22:05.006 Nvme2n1 : 0.73 261.60 16.35 0.00 0.00 236120.75 18578.03 206067.98 00:22:05.006 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:05.006 Verification LBA range: start 0x0 length 0x400 00:22:05.006 Nvme3n1 : 0.76 345.95 21.62 0.00 0.00 173954.80 4103.12 206979.78 00:22:05.006 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:05.006 Verification LBA range: start 0x0 length 0x400 00:22:05.006 Nvme4n1 : 0.75 288.42 18.03 0.00 0.00 200090.49 
10656.72 219745.06 00:22:05.006 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:05.006 Verification LBA range: start 0x0 length 0x400 00:22:05.006 Nvme5n1 : 0.76 253.96 15.87 0.00 0.00 227569.31 18350.08 232510.33 00:22:05.006 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:05.006 Verification LBA range: start 0x0 length 0x400 00:22:05.006 Nvme6n1 : 0.75 255.39 15.96 0.00 0.00 220931.12 17552.25 217921.45 00:22:05.006 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:05.006 Verification LBA range: start 0x0 length 0x400 00:22:05.006 Nvme7n1 : 0.74 260.25 16.27 0.00 0.00 210943.41 21769.35 203332.56 00:22:05.006 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:05.006 Verification LBA range: start 0x0 length 0x400 00:22:05.006 Nvme8n1 : 0.75 263.96 16.50 0.00 0.00 201507.72 5670.29 217921.45 00:22:05.006 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:05.006 Verification LBA range: start 0x0 length 0x400 00:22:05.006 Nvme9n1 : 0.73 268.39 16.77 0.00 0.00 192680.13 2664.18 186920.07 00:22:05.006 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:05.006 Verification LBA range: start 0x0 length 0x400 00:22:05.006 Nvme10n1 : 0.76 252.48 15.78 0.00 0.00 202662.96 18805.98 242540.19 00:22:05.006 [2024-11-28T07:20:47.275Z] =================================================================================================================== 00:22:05.006 [2024-11-28T07:20:47.275Z] Total : 2708.93 169.31 0.00 0.00 209617.53 2664.18 242540.19 00:22:05.264 08:20:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # sleep 1 00:22:06.199 08:20:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@115 -- # kill -0 1419203 00:22:06.199 08:20:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@117 -- # 
stoptarget 00:22:06.199 08:20:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:22:06.199 08:20:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:22:06.199 08:20:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:06.199 08:20:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@46 -- # nvmftestfini 00:22:06.199 08:20:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:06.199 08:20:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # sync 00:22:06.199 08:20:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:06.199 08:20:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set +e 00:22:06.199 08:20:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:06.199 08:20:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:06.199 rmmod nvme_tcp 00:22:06.199 rmmod nvme_fabrics 00:22:06.199 rmmod nvme_keyring 00:22:06.199 08:20:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:06.199 08:20:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@128 -- # set -e 00:22:06.199 08:20:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@129 -- # return 0 00:22:06.199 08:20:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@517 -- # '[' -n 1419203 ']' 
00:22:06.199 08:20:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@518 -- # killprocess 1419203 00:22:06.199 08:20:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # '[' -z 1419203 ']' 00:22:06.199 08:20:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # kill -0 1419203 00:22:06.199 08:20:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # uname 00:22:06.199 08:20:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:06.199 08:20:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1419203 00:22:06.199 08:20:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:22:06.199 08:20:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:22:06.199 08:20:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1419203' 00:22:06.199 killing process with pid 1419203 00:22:06.199 08:20:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@973 -- # kill 1419203 00:22:06.199 08:20:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@978 -- # wait 1419203 00:22:06.768 08:20:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:06.768 08:20:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:06.768 08:20:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:06.768 08:20:48 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # iptr 00:22:06.768 08:20:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # iptables-save 00:22:06.768 08:20:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # iptables-restore 00:22:06.768 08:20:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:06.768 08:20:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:06.768 08:20:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:06.768 08:20:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:06.768 08:20:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:06.768 08:20:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:08.674 08:20:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:08.674 00:22:08.674 real 0m7.311s 00:22:08.674 user 0m21.445s 00:22:08.674 sys 0m1.343s 00:22:08.674 08:20:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:08.674 08:20:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:08.674 ************************************ 00:22:08.674 END TEST nvmf_shutdown_tc2 00:22:08.674 ************************************ 00:22:08.674 08:20:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@164 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:22:08.674 08:20:50 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:22:08.674 08:20:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:08.674 08:20:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:22:08.934 ************************************ 00:22:08.934 START TEST nvmf_shutdown_tc3 00:22:08.934 ************************************ 00:22:08.934 08:20:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc3 00:22:08.934 08:20:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@122 -- # starttarget 00:22:08.934 08:20:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@16 -- # nvmftestinit 00:22:08.934 08:20:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:08.934 08:20:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:08.934 08:20:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:08.934 08:20:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:08.934 08:20:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:08.934 08:20:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:08.934 08:20:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:08.934 08:20:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:08.934 08:20:50 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:08.934 08:20:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:08.934 08:20:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@309 -- # xtrace_disable 00:22:08.934 08:20:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:08.934 08:20:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:08.934 08:20:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # pci_devs=() 00:22:08.934 08:20:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:08.934 08:20:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:08.934 08:20:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:08.934 08:20:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:08.934 08:20:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:08.934 08:20:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # net_devs=() 00:22:08.934 08:20:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:08.934 08:20:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # e810=() 00:22:08.934 08:20:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # local -ga e810 00:22:08.934 08:20:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # x722=() 
00:22:08.934 08:20:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # local -ga x722 00:22:08.934 08:20:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # mlx=() 00:22:08.934 08:20:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # local -ga mlx 00:22:08.934 08:20:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:08.934 08:20:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:08.934 08:20:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:08.934 08:20:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:08.934 08:20:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:08.934 08:20:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:08.934 08:20:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:08.934 08:20:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:08.934 08:20:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:08.934 08:20:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:08.934 08:20:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@343 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:08.934 08:20:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:08.934 08:20:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:08.934 08:20:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:08.934 08:20:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:08.934 08:20:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:08.934 08:20:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:08.934 08:20:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:08.935 08:20:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:08.935 08:20:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:22:08.935 Found 0000:86:00.0 (0x8086 - 0x159b) 00:22:08.935 08:20:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:08.935 08:20:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:08.935 08:20:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:08.935 08:20:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:08.935 08:20:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:08.935 08:20:50 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:08.935 08:20:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:22:08.935 Found 0000:86:00.1 (0x8086 - 0x159b) 00:22:08.935 08:20:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:08.935 08:20:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:08.935 08:20:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:08.935 08:20:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:08.935 08:20:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:08.935 08:20:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:08.935 08:20:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:08.935 08:20:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:08.935 08:20:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:08.935 08:20:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:08.935 08:20:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:08.935 08:20:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:08.935 08:20:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@418 -- # [[ up == up ]] 00:22:08.935 08:20:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:08.935 08:20:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:08.935 08:20:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:22:08.935 Found net devices under 0000:86:00.0: cvl_0_0 00:22:08.935 08:20:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:08.935 08:20:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:08.935 08:20:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:08.935 08:20:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:08.935 08:20:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:08.935 08:20:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:08.935 08:20:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:08.935 08:20:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:08.935 08:20:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:22:08.935 Found net devices under 0000:86:00.1: cvl_0_1 00:22:08.935 08:20:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:08.935 
08:20:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:08.935 08:20:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # is_hw=yes 00:22:08.935 08:20:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:08.935 08:20:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:08.935 08:20:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:08.935 08:20:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:08.935 08:20:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:08.935 08:20:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:08.935 08:20:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:08.935 08:20:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:08.935 08:20:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:08.935 08:20:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:08.935 08:20:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:08.935 08:20:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:08.935 08:20:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:08.935 08:20:50 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:08.935 08:20:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:08.935 08:20:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:08.935 08:20:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:08.935 08:20:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:08.935 08:20:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:08.935 08:20:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:08.935 08:20:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:08.935 08:20:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:08.935 08:20:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:09.194 08:20:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:09.194 08:20:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:09.194 08:20:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@290 -- # ping -c 
1 10.0.0.2 00:22:09.194 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:09.194 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.278 ms 00:22:09.194 00:22:09.194 --- 10.0.0.2 ping statistics --- 00:22:09.194 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:09.194 rtt min/avg/max/mdev = 0.278/0.278/0.278/0.000 ms 00:22:09.194 08:20:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:09.194 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:09.194 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.142 ms 00:22:09.194 00:22:09.194 --- 10.0.0.1 ping statistics --- 00:22:09.194 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:09.194 rtt min/avg/max/mdev = 0.142/0.142/0.142/0.000 ms 00:22:09.194 08:20:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:09.194 08:20:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # return 0 00:22:09.194 08:20:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:09.194 08:20:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:09.194 08:20:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:09.194 08:20:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:09.194 08:20:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:09.195 08:20:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:09.195 08:20:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@502 -- # 
modprobe nvme-tcp 00:22:09.195 08:20:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:22:09.195 08:20:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:09.195 08:20:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:09.195 08:20:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:09.195 08:20:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@509 -- # nvmfpid=1420693 00:22:09.195 08:20:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:22:09.195 08:20:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@510 -- # waitforlisten 1420693 00:22:09.195 08:20:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # '[' -z 1420693 ']' 00:22:09.195 08:20:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:09.195 08:20:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:09.195 08:20:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:09.195 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:22:09.195 08:20:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:09.195 08:20:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:09.195 [2024-11-28 08:20:51.314759] Starting SPDK v25.01-pre git sha1 27aaaa748 / DPDK 24.03.0 initialization... 00:22:09.195 [2024-11-28 08:20:51.314802] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:09.195 [2024-11-28 08:20:51.379269] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:09.195 [2024-11-28 08:20:51.421586] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:09.195 [2024-11-28 08:20:51.421623] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:09.195 [2024-11-28 08:20:51.421631] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:09.195 [2024-11-28 08:20:51.421637] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:09.195 [2024-11-28 08:20:51.421642] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:09.195 [2024-11-28 08:20:51.423126] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:09.195 [2024-11-28 08:20:51.423216] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:09.195 [2024-11-28 08:20:51.423325] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:09.195 [2024-11-28 08:20:51.423325] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:22:09.453 08:20:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:09.454 08:20:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@868 -- # return 0 00:22:09.454 08:20:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:09.454 08:20:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:09.454 08:20:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:09.454 08:20:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:09.454 08:20:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:09.454 08:20:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:09.454 08:20:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:09.454 [2024-11-28 08:20:51.560826] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:09.454 08:20:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:09.454 08:20:51 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:22:09.454 08:20:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:22:09.454 08:20:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:09.454 08:20:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:09.454 08:20:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:09.454 08:20:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:09.454 08:20:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:09.454 08:20:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:09.454 08:20:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:09.454 08:20:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:09.454 08:20:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:09.454 08:20:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:09.454 08:20:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:09.454 08:20:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:09.454 08:20:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 
00:22:09.454 08:20:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:09.454 08:20:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:09.454 08:20:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:09.454 08:20:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:09.454 08:20:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:09.454 08:20:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:09.454 08:20:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:09.454 08:20:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:09.454 08:20:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:09.454 08:20:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:09.454 08:20:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # rpc_cmd 00:22:09.454 08:20:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:09.454 08:20:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:09.454 Malloc1 00:22:09.454 [2024-11-28 08:20:51.670837] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:09.454 Malloc2 00:22:09.712 Malloc3 00:22:09.712 Malloc4 00:22:09.712 Malloc5 00:22:09.712 Malloc6 00:22:09.712 Malloc7 00:22:09.712 Malloc8 00:22:09.971 Malloc9 
00:22:09.971 Malloc10 00:22:09.971 08:20:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:09.971 08:20:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:22:09.971 08:20:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:09.971 08:20:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:09.971 08:20:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # perfpid=1420753 00:22:09.971 08:20:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # waitforlisten 1420753 /var/tmp/bdevperf.sock 00:22:09.971 08:20:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # '[' -z 1420753 ']' 00:22:09.971 08:20:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:09.971 08:20:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:22:09.971 08:20:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:09.971 08:20:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:22:09.971 08:20:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:22:09.971 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:09.971 08:20:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # config=() 00:22:09.971 08:20:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:09.971 08:20:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # local subsystem config 00:22:09.971 08:20:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:09.971 08:20:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:09.971 08:20:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:09.971 { 00:22:09.971 "params": { 00:22:09.971 "name": "Nvme$subsystem", 00:22:09.971 "trtype": "$TEST_TRANSPORT", 00:22:09.971 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:09.971 "adrfam": "ipv4", 00:22:09.971 "trsvcid": "$NVMF_PORT", 00:22:09.971 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:09.971 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:09.971 "hdgst": ${hdgst:-false}, 00:22:09.971 "ddgst": ${ddgst:-false} 00:22:09.971 }, 00:22:09.971 "method": "bdev_nvme_attach_controller" 00:22:09.971 } 00:22:09.971 EOF 00:22:09.971 )") 00:22:09.971 08:20:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:22:09.971 08:20:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:09.971 08:20:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:09.971 { 00:22:09.971 "params": { 00:22:09.971 "name": "Nvme$subsystem", 00:22:09.971 "trtype": "$TEST_TRANSPORT", 00:22:09.971 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:09.971 
"adrfam": "ipv4", 00:22:09.971 "trsvcid": "$NVMF_PORT", 00:22:09.971 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:09.971 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:09.971 "hdgst": ${hdgst:-false}, 00:22:09.971 "ddgst": ${ddgst:-false} 00:22:09.971 }, 00:22:09.971 "method": "bdev_nvme_attach_controller" 00:22:09.971 } 00:22:09.971 EOF 00:22:09.971 )") 00:22:09.971 08:20:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:22:09.971 08:20:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:09.971 08:20:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:09.971 { 00:22:09.971 "params": { 00:22:09.971 "name": "Nvme$subsystem", 00:22:09.971 "trtype": "$TEST_TRANSPORT", 00:22:09.971 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:09.971 "adrfam": "ipv4", 00:22:09.971 "trsvcid": "$NVMF_PORT", 00:22:09.971 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:09.971 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:09.971 "hdgst": ${hdgst:-false}, 00:22:09.971 "ddgst": ${ddgst:-false} 00:22:09.971 }, 00:22:09.971 "method": "bdev_nvme_attach_controller" 00:22:09.971 } 00:22:09.971 EOF 00:22:09.971 )") 00:22:09.971 08:20:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:22:09.971 08:20:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:09.971 08:20:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:09.971 { 00:22:09.971 "params": { 00:22:09.971 "name": "Nvme$subsystem", 00:22:09.971 "trtype": "$TEST_TRANSPORT", 00:22:09.971 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:09.971 "adrfam": "ipv4", 00:22:09.971 "trsvcid": "$NVMF_PORT", 00:22:09.971 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 
00:22:09.971 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:09.971 "hdgst": ${hdgst:-false}, 00:22:09.971 "ddgst": ${ddgst:-false} 00:22:09.971 }, 00:22:09.971 "method": "bdev_nvme_attach_controller" 00:22:09.971 } 00:22:09.971 EOF 00:22:09.971 )") 00:22:09.971 08:20:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:22:09.971 08:20:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:09.971 08:20:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:09.971 { 00:22:09.971 "params": { 00:22:09.971 "name": "Nvme$subsystem", 00:22:09.971 "trtype": "$TEST_TRANSPORT", 00:22:09.971 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:09.971 "adrfam": "ipv4", 00:22:09.971 "trsvcid": "$NVMF_PORT", 00:22:09.971 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:09.971 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:09.971 "hdgst": ${hdgst:-false}, 00:22:09.971 "ddgst": ${ddgst:-false} 00:22:09.971 }, 00:22:09.971 "method": "bdev_nvme_attach_controller" 00:22:09.971 } 00:22:09.971 EOF 00:22:09.971 )") 00:22:09.971 08:20:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:22:09.972 [2024-11-28 08:20:52.155682] Starting SPDK v25.01-pre git sha1 27aaaa748 / DPDK 24.03.0 initialization...
00:22:09.972 [2024-11-28 08:20:52.155729] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1420753 ] 00:22:09.972 08:20:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@584 -- # jq .
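The loop entries above show nvmf/common.sh assembling one `bdev_nvme_attach_controller` params block per subsystem with a heredoc, then joining and pretty-printing the result with jq. A minimal standalone sketch of that pattern follows; the variable values and the two-subsystem loop are illustrative, not the exact common.sh code:

```shell
#!/usr/bin/env bash
# Sketch of the config-generation pattern visible in the log:
# one JSON "params" block per subsystem, built with a heredoc,
# joined with commas and pretty-printed with jq.
# Values here are illustrative placeholders.
TEST_TRANSPORT=tcp
NVMF_FIRST_TARGET_IP=10.0.0.2
NVMF_PORT=4420

config=()
for subsystem in 1 2; do
  config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
)")
done

# Join the per-subsystem blocks with commas into one JSON array;
# jq validates the result and pretty-prints it, as common.sh does.
IFS=,
printf '[%s]\n' "${config[*]}" | jq .
```

Unset `hdgst`/`ddgst` fall back to `false` via `${hdgst:-false}`, which is why the expanded config printed later in the log shows `"hdgst": false` for every controller.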
00:22:09.972 08:20:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@585 -- # IFS=, 00:22:09.972 08:20:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:22:09.972 "params": { 00:22:09.972 "name": "Nvme1", 00:22:09.972 "trtype": "tcp", 00:22:09.972 "traddr": "10.0.0.2", 00:22:09.972 "adrfam": "ipv4", 00:22:09.972 "trsvcid": "4420", 00:22:09.972 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:09.972 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:09.972 "hdgst": false, 00:22:09.972 "ddgst": false 00:22:09.972 }, 00:22:09.972 "method": "bdev_nvme_attach_controller" 00:22:09.972 },{ 00:22:09.972 "params": { 00:22:09.972 "name": "Nvme2", 00:22:09.972 "trtype": "tcp", 00:22:09.972 "traddr": "10.0.0.2", 00:22:09.972 "adrfam": "ipv4", 00:22:09.972 "trsvcid": "4420", 00:22:09.972 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:09.972 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:22:09.972 "hdgst": false, 00:22:09.972 "ddgst": false 00:22:09.972 }, 00:22:09.972 "method": "bdev_nvme_attach_controller" 00:22:09.972 },{ 00:22:09.972 "params": { 00:22:09.972 "name": "Nvme3", 00:22:09.972 "trtype": "tcp", 00:22:09.972 "traddr": "10.0.0.2", 00:22:09.972 "adrfam": "ipv4", 00:22:09.972 "trsvcid": "4420", 00:22:09.972 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:22:09.972 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:22:09.972 "hdgst": false, 00:22:09.972 "ddgst": false 00:22:09.972 }, 00:22:09.972 "method": "bdev_nvme_attach_controller" 00:22:09.972 },{ 00:22:09.972 "params": { 00:22:09.972 "name": "Nvme4", 00:22:09.972 "trtype": "tcp", 00:22:09.972 "traddr": "10.0.0.2", 00:22:09.972 "adrfam": "ipv4", 00:22:09.972 "trsvcid": "4420", 00:22:09.972 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:22:09.972 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:22:09.972 "hdgst": false, 00:22:09.972 "ddgst": false 00:22:09.972 }, 00:22:09.972 "method": "bdev_nvme_attach_controller" 00:22:09.972 },{ 00:22:09.972 "params": { 
00:22:09.972 "name": "Nvme5", 00:22:09.972 "trtype": "tcp", 00:22:09.972 "traddr": "10.0.0.2", 00:22:09.972 "adrfam": "ipv4", 00:22:09.972 "trsvcid": "4420", 00:22:09.972 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:22:09.972 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:22:09.972 "hdgst": false, 00:22:09.972 "ddgst": false 00:22:09.972 }, 00:22:09.972 "method": "bdev_nvme_attach_controller" 00:22:09.972 },{ 00:22:09.972 "params": { 00:22:09.972 "name": "Nvme6", 00:22:09.972 "trtype": "tcp", 00:22:09.972 "traddr": "10.0.0.2", 00:22:09.972 "adrfam": "ipv4", 00:22:09.972 "trsvcid": "4420", 00:22:09.972 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:22:09.972 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:22:09.972 "hdgst": false, 00:22:09.972 "ddgst": false 00:22:09.972 }, 00:22:09.972 "method": "bdev_nvme_attach_controller" 00:22:09.972 },{ 00:22:09.972 "params": { 00:22:09.972 "name": "Nvme7", 00:22:09.972 "trtype": "tcp", 00:22:09.972 "traddr": "10.0.0.2", 00:22:09.972 "adrfam": "ipv4", 00:22:09.972 "trsvcid": "4420", 00:22:09.972 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:22:09.972 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:22:09.972 "hdgst": false, 00:22:09.972 "ddgst": false 00:22:09.972 }, 00:22:09.972 "method": "bdev_nvme_attach_controller" 00:22:09.972 },{ 00:22:09.972 "params": { 00:22:09.972 "name": "Nvme8", 00:22:09.972 "trtype": "tcp", 00:22:09.972 "traddr": "10.0.0.2", 00:22:09.972 "adrfam": "ipv4", 00:22:09.972 "trsvcid": "4420", 00:22:09.972 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:22:09.972 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:22:09.972 "hdgst": false, 00:22:09.972 "ddgst": false 00:22:09.972 }, 00:22:09.972 "method": "bdev_nvme_attach_controller" 00:22:09.972 },{ 00:22:09.972 "params": { 00:22:09.972 "name": "Nvme9", 00:22:09.972 "trtype": "tcp", 00:22:09.972 "traddr": "10.0.0.2", 00:22:09.972 "adrfam": "ipv4", 00:22:09.972 "trsvcid": "4420", 00:22:09.972 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:22:09.972 "hostnqn": "nqn.2016-06.io.spdk:host9", 
00:22:09.972 "hdgst": false, 00:22:09.972 "ddgst": false 00:22:09.972 }, 00:22:09.972 "method": "bdev_nvme_attach_controller" 00:22:09.972 },{ 00:22:09.972 "params": { 00:22:09.972 "name": "Nvme10", 00:22:09.972 "trtype": "tcp", 00:22:09.972 "traddr": "10.0.0.2", 00:22:09.972 "adrfam": "ipv4", 00:22:09.972 "trsvcid": "4420", 00:22:09.972 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:22:09.972 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:22:09.972 "hdgst": false, 00:22:09.972 "ddgst": false 00:22:09.972 }, 00:22:09.972 "method": "bdev_nvme_attach_controller" 00:22:09.972 }' 00:22:09.972 [2024-11-28 08:20:52.219807] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:10.231 [2024-11-28 08:20:52.261953] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:11.606 Running I/O for 10 seconds... 00:22:11.865 08:20:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:11.865 08:20:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@868 -- # return 0 00:22:11.865 08:20:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@128 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:22:11.865 08:20:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:11.865 08:20:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:11.865 08:20:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:11.865 08:20:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@131 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:11.865 08:20:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@133 -- # waitforio 
/var/tmp/bdevperf.sock Nvme1n1 00:22:11.865 08:20:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:22:11.865 08:20:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:22:11.865 08:20:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local ret=1 00:22:11.865 08:20:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # local i 00:22:11.865 08:20:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:22:11.865 08:20:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:22:11.866 08:20:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:22:11.866 08:20:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:22:11.866 08:20:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:11.866 08:20:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:11.866 08:20:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:11.866 08:20:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=67 00:22:11.866 08:20:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 67 -ge 100 ']' 00:22:11.866 08:20:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@68 -- # sleep 0.25 00:22:12.125 08:20:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i-- 
)) 00:22:12.125 08:20:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:22:12.125 08:20:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:22:12.125 08:20:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:22:12.125 08:20:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:12.125 08:20:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:12.401 08:20:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:12.401 08:20:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=193 00:22:12.401 08:20:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 193 -ge 100 ']' 00:22:12.401 08:20:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # ret=0 00:22:12.401 08:20:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@66 -- # break 00:22:12.401 08:20:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@70 -- # return 0 00:22:12.401 08:20:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # killprocess 1420693 00:22:12.401 08:20:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # '[' -z 1420693 ']' 00:22:12.401 08:20:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # kill -0 1420693 00:22:12.401 08:20:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@959 -- # uname 00:22:12.401 08:20:54 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:12.401 08:20:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1420693 00:22:12.401 08:20:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:22:12.401 08:20:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:22:12.401 08:20:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1420693' 00:22:12.401 killing process with pid 1420693 00:22:12.401 08:20:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@973 -- # kill 1420693 00:22:12.401 08:20:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@978 -- # wait 1420693 00:22:12.401 [2024-11-28 08:20:54.482576] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x955850 is same with the state(6) to be set 00:22:12.402 [2024-11-28 08:20:54.484688] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x958400 is same with the state(6) to be set
00:22:12.402 [2024-11-28 08:20:54.486372] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x955d20 is same with the state(6) to be set 00:22:12.402 [2024-11-28 08:20:54.486390] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x955d20 is same with the state(6) to be set 00:22:12.402 [2024-11-28 08:20:54.486395] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x955d20 is same with the state(6) to be set 00:22:12.402 [2024-11-28 08:20:54.486401] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x955d20 is same with the state(6) to be set 00:22:12.402 [2024-11-28 08:20:54.486407] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x955d20 is same with the state(6) to be set 00:22:12.402 [2024-11-28 08:20:54.486414] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x955d20 is same with the state(6) to be set 00:22:12.402 [2024-11-28 08:20:54.486420] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x955d20 is same with the state(6) to be set 00:22:12.402 [2024-11-28 08:20:54.486425] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x955d20 is same with the state(6) to be set 00:22:12.403 [2024-11-28 08:20:54.486432] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x955d20 is same with the state(6) to be set 00:22:12.403 [2024-11-28 08:20:54.486438] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x955d20 is same with the state(6) to be set 00:22:12.403 [2024-11-28 08:20:54.486443] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x955d20 is same with the state(6) to be set 00:22:12.403 [2024-11-28 08:20:54.486452] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x955d20 is same with the state(6) to be set 00:22:12.403 [2024-11-28 08:20:54.486459] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x955d20 is same with the state(6) to be set 00:22:12.403 [2024-11-28 08:20:54.486465] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x955d20 is same with the state(6) to be set 00:22:12.403 [2024-11-28 08:20:54.486471] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x955d20 is same with the state(6) to be set 00:22:12.403 [2024-11-28 08:20:54.486477] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x955d20 is same with the state(6) to be set 00:22:12.403 [2024-11-28 08:20:54.486483] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x955d20 is same with the state(6) to be set 00:22:12.403 [2024-11-28 08:20:54.486489] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x955d20 is same with the state(6) to be set 00:22:12.403 [2024-11-28 08:20:54.486495] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x955d20 is same with the state(6) to be set 00:22:12.403 [2024-11-28 08:20:54.486500] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x955d20 is same with the state(6) to be set 00:22:12.403 [2024-11-28 08:20:54.486507] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x955d20 is same with the state(6) to be set 00:22:12.403 [2024-11-28 08:20:54.486512] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x955d20 is same with the state(6) to be set 00:22:12.403 [2024-11-28 08:20:54.486518] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x955d20 is same with the state(6) to be set 00:22:12.403 [2024-11-28 08:20:54.486524] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x955d20 is same with the state(6) to be set 00:22:12.403 [2024-11-28 08:20:54.486530] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x955d20 is same with the state(6) to be set 00:22:12.403 [2024-11-28 08:20:54.486536] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x955d20 
is same with the state(6) to be set 00:22:12.403 [2024-11-28 08:20:54.486542] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x955d20 is same with the state(6) to be set 00:22:12.403 [2024-11-28 08:20:54.486548] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x955d20 is same with the state(6) to be set 00:22:12.403 [2024-11-28 08:20:54.486554] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x955d20 is same with the state(6) to be set 00:22:12.403 [2024-11-28 08:20:54.486559] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x955d20 is same with the state(6) to be set 00:22:12.403 [2024-11-28 08:20:54.486565] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x955d20 is same with the state(6) to be set 00:22:12.403 [2024-11-28 08:20:54.486571] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x955d20 is same with the state(6) to be set 00:22:12.403 [2024-11-28 08:20:54.486577] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x955d20 is same with the state(6) to be set 00:22:12.403 [2024-11-28 08:20:54.486583] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x955d20 is same with the state(6) to be set 00:22:12.403 [2024-11-28 08:20:54.486589] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x955d20 is same with the state(6) to be set 00:22:12.403 [2024-11-28 08:20:54.486595] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x955d20 is same with the state(6) to be set 00:22:12.403 [2024-11-28 08:20:54.486601] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x955d20 is same with the state(6) to be set 00:22:12.403 [2024-11-28 08:20:54.486607] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x955d20 is same with the state(6) to be set 
00:22:12.403 [2024-11-28 08:20:54.486614] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x955d20 is same with the state(6) to be set 00:22:12.403 [2024-11-28 08:20:54.486621] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x955d20 is same with the state(6) to be set 00:22:12.403 [2024-11-28 08:20:54.486627] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x955d20 is same with the state(6) to be set 00:22:12.403 [2024-11-28 08:20:54.486633] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x955d20 is same with the state(6) to be set 00:22:12.403 [2024-11-28 08:20:54.486638] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x955d20 is same with the state(6) to be set 00:22:12.403 [2024-11-28 08:20:54.486645] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x955d20 is same with the state(6) to be set 00:22:12.403 [2024-11-28 08:20:54.486652] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x955d20 is same with the state(6) to be set 00:22:12.403 [2024-11-28 08:20:54.486658] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x955d20 is same with the state(6) to be set 00:22:12.403 [2024-11-28 08:20:54.486664] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x955d20 is same with the state(6) to be set 00:22:12.403 [2024-11-28 08:20:54.486670] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x955d20 is same with the state(6) to be set 00:22:12.403 [2024-11-28 08:20:54.486676] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x955d20 is same with the state(6) to be set 00:22:12.403 [2024-11-28 08:20:54.486682] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x955d20 is same with the state(6) to be set 00:22:12.403 [2024-11-28 08:20:54.486688] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x955d20 is same with the state(6) to be set 00:22:12.403 [2024-11-28 08:20:54.486694] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x955d20 is same with the state(6) to be set 00:22:12.403 [2024-11-28 08:20:54.486700] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x955d20 is same with the state(6) to be set 00:22:12.403 [2024-11-28 08:20:54.486706] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x955d20 is same with the state(6) to be set 00:22:12.403 [2024-11-28 08:20:54.486713] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x955d20 is same with the state(6) to be set 00:22:12.403 [2024-11-28 08:20:54.486719] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x955d20 is same with the state(6) to be set 00:22:12.403 [2024-11-28 08:20:54.486724] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x955d20 is same with the state(6) to be set 00:22:12.403 [2024-11-28 08:20:54.486730] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x955d20 is same with the state(6) to be set 00:22:12.403 [2024-11-28 08:20:54.486736] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x955d20 is same with the state(6) to be set 00:22:12.403 [2024-11-28 08:20:54.486742] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x955d20 is same with the state(6) to be set 00:22:12.403 [2024-11-28 08:20:54.486749] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x955d20 is same with the state(6) to be set 00:22:12.403 [2024-11-28 08:20:54.488139] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9561f0 is same with the state(6) to be set 00:22:12.403 [2024-11-28 08:20:54.488167] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x9561f0 is same with the state(6) to be set 00:22:12.403 [2024-11-28 08:20:54.488176] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9561f0 is same with the state(6) to be set 00:22:12.403 [2024-11-28 08:20:54.488183] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9561f0 is same with the state(6) to be set 00:22:12.403 [2024-11-28 08:20:54.488196] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9561f0 is same with the state(6) to be set 00:22:12.403 [2024-11-28 08:20:54.488203] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9561f0 is same with the state(6) to be set 00:22:12.403 [2024-11-28 08:20:54.488209] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9561f0 is same with the state(6) to be set 00:22:12.403 [2024-11-28 08:20:54.488216] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9561f0 is same with the state(6) to be set 00:22:12.403 [2024-11-28 08:20:54.488223] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9561f0 is same with the state(6) to be set 00:22:12.403 [2024-11-28 08:20:54.488229] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9561f0 is same with the state(6) to be set 00:22:12.403 [2024-11-28 08:20:54.488237] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9561f0 is same with the state(6) to be set 00:22:12.403 [2024-11-28 08:20:54.488243] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9561f0 is same with the state(6) to be set 00:22:12.403 [2024-11-28 08:20:54.488250] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9561f0 is same with the state(6) to be set 00:22:12.403 [2024-11-28 08:20:54.488257] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9561f0 
is same with the state(6) to be set 00:22:12.403 [2024-11-28 08:20:54.488263] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9561f0 is same with the state(6) to be set 00:22:12.403 [2024-11-28 08:20:54.488269] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9561f0 is same with the state(6) to be set 00:22:12.403 [2024-11-28 08:20:54.488275] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9561f0 is same with the state(6) to be set 00:22:12.403 [2024-11-28 08:20:54.488282] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9561f0 is same with the state(6) to be set 00:22:12.403 [2024-11-28 08:20:54.488289] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9561f0 is same with the state(6) to be set 00:22:12.403 [2024-11-28 08:20:54.488295] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9561f0 is same with the state(6) to be set 00:22:12.403 [2024-11-28 08:20:54.488301] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9561f0 is same with the state(6) to be set 00:22:12.403 [2024-11-28 08:20:54.488308] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9561f0 is same with the state(6) to be set 00:22:12.403 [2024-11-28 08:20:54.488314] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9561f0 is same with the state(6) to be set 00:22:12.403 [2024-11-28 08:20:54.488321] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9561f0 is same with the state(6) to be set 00:22:12.403 [2024-11-28 08:20:54.488328] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9561f0 is same with the state(6) to be set 00:22:12.403 [2024-11-28 08:20:54.488335] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9561f0 is same with the state(6) to be set 
00:22:12.403 [2024-11-28 08:20:54.488341] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9561f0 is same with the state(6) to be set 00:22:12.403 [2024-11-28 08:20:54.488347] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9561f0 is same with the state(6) to be set 00:22:12.403 [2024-11-28 08:20:54.488353] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9561f0 is same with the state(6) to be set 00:22:12.403 [2024-11-28 08:20:54.488360] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9561f0 is same with the state(6) to be set 00:22:12.403 [2024-11-28 08:20:54.488367] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9561f0 is same with the state(6) to be set 00:22:12.403 [2024-11-28 08:20:54.488373] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9561f0 is same with the state(6) to be set 00:22:12.404 [2024-11-28 08:20:54.488381] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9561f0 is same with the state(6) to be set 00:22:12.404 [2024-11-28 08:20:54.488387] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9561f0 is same with the state(6) to be set 00:22:12.404 [2024-11-28 08:20:54.488394] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9561f0 is same with the state(6) to be set 00:22:12.404 [2024-11-28 08:20:54.488400] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9561f0 is same with the state(6) to be set 00:22:12.404 [2024-11-28 08:20:54.488407] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9561f0 is same with the state(6) to be set 00:22:12.404 [2024-11-28 08:20:54.488413] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9561f0 is same with the state(6) to be set 00:22:12.404 [2024-11-28 08:20:54.488419] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9561f0 is same with the state(6) to be set 00:22:12.404 [2024-11-28 08:20:54.488426] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9561f0 is same with the state(6) to be set 00:22:12.404 [2024-11-28 08:20:54.488432] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9561f0 is same with the state(6) to be set 00:22:12.404 [2024-11-28 08:20:54.488438] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9561f0 is same with the state(6) to be set 00:22:12.404 [2024-11-28 08:20:54.488444] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9561f0 is same with the state(6) to be set 00:22:12.404 [2024-11-28 08:20:54.488451] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9561f0 is same with the state(6) to be set 00:22:12.404 [2024-11-28 08:20:54.488457] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9561f0 is same with the state(6) to be set 00:22:12.404 [2024-11-28 08:20:54.488463] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9561f0 is same with the state(6) to be set 00:22:12.404 [2024-11-28 08:20:54.488470] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9561f0 is same with the state(6) to be set 00:22:12.404 [2024-11-28 08:20:54.488476] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9561f0 is same with the state(6) to be set 00:22:12.404 [2024-11-28 08:20:54.488483] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9561f0 is same with the state(6) to be set 00:22:12.404 [2024-11-28 08:20:54.488489] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9561f0 is same with the state(6) to be set 00:22:12.404 [2024-11-28 08:20:54.488495] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x9561f0 is same with the state(6) to be set 00:22:12.404 [2024-11-28 08:20:54.488502] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9561f0 is same with the state(6) to be set 00:22:12.404 [2024-11-28 08:20:54.488508] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9561f0 is same with the state(6) to be set 00:22:12.404 [2024-11-28 08:20:54.488514] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9561f0 is same with the state(6) to be set 00:22:12.404 [2024-11-28 08:20:54.488520] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9561f0 is same with the state(6) to be set 00:22:12.404 [2024-11-28 08:20:54.488526] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9561f0 is same with the state(6) to be set 00:22:12.404 [2024-11-28 08:20:54.488532] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9561f0 is same with the state(6) to be set 00:22:12.404 [2024-11-28 08:20:54.488538] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9561f0 is same with the state(6) to be set 00:22:12.404 [2024-11-28 08:20:54.488544] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9561f0 is same with the state(6) to be set 00:22:12.404 [2024-11-28 08:20:54.488552] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9561f0 is same with the state(6) to be set 00:22:12.404 [2024-11-28 08:20:54.488558] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9561f0 is same with the state(6) to be set 00:22:12.404 [2024-11-28 08:20:54.488564] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9561f0 is same with the state(6) to be set 00:22:12.404 [2024-11-28 08:20:54.488570] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9561f0 
is same with the state(6) to be set 00:22:12.404 [2024-11-28 08:20:54.489761] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9566e0 is same with the state(6) to be set 00:22:12.404 [2024-11-28 08:20:54.489787] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9566e0 is same with the state(6) to be set 00:22:12.404 [2024-11-28 08:20:54.489798] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9566e0 is same with the state(6) to be set 00:22:12.404 [2024-11-28 08:20:54.489805] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9566e0 is same with the state(6) to be set 00:22:12.404 [2024-11-28 08:20:54.489811] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9566e0 is same with the state(6) to be set 00:22:12.404 [2024-11-28 08:20:54.489817] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9566e0 is same with the state(6) to be set 00:22:12.404 [2024-11-28 08:20:54.489824] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9566e0 is same with the state(6) to be set 00:22:12.404 [2024-11-28 08:20:54.489830] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9566e0 is same with the state(6) to be set 00:22:12.404 [2024-11-28 08:20:54.489837] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9566e0 is same with the state(6) to be set 00:22:12.404 [2024-11-28 08:20:54.489843] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9566e0 is same with the state(6) to be set 00:22:12.404 [2024-11-28 08:20:54.489849] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9566e0 is same with the state(6) to be set 00:22:12.404 [2024-11-28 08:20:54.489855] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9566e0 is same with the state(6) to be set 
00:22:12.404 [2024-11-28 08:20:54.489862] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9566e0 is same with the state(6) to be set 00:22:12.404 [2024-11-28 08:20:54.489868] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9566e0 is same with the state(6) to be set 00:22:12.404 [2024-11-28 08:20:54.489874] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9566e0 is same with the state(6) to be set 00:22:12.404 [2024-11-28 08:20:54.489880] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9566e0 is same with the state(6) to be set 00:22:12.404 [2024-11-28 08:20:54.489887] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9566e0 is same with the state(6) to be set 00:22:12.404 [2024-11-28 08:20:54.489893] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9566e0 is same with the state(6) to be set 00:22:12.404 [2024-11-28 08:20:54.489899] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9566e0 is same with the state(6) to be set 00:22:12.404 [2024-11-28 08:20:54.489905] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9566e0 is same with the state(6) to be set 00:22:12.404 [2024-11-28 08:20:54.489911] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9566e0 is same with the state(6) to be set 00:22:12.404 [2024-11-28 08:20:54.489917] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9566e0 is same with the state(6) to be set 00:22:12.404 [2024-11-28 08:20:54.489923] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9566e0 is same with the state(6) to be set 00:22:12.404 [2024-11-28 08:20:54.489933] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9566e0 is same with the state(6) to be set 00:22:12.404 [2024-11-28 08:20:54.489939] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9566e0 is same with the state(6) to be set 00:22:12.404 [2024-11-28 08:20:54.489945] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9566e0 is same with the state(6) to be set 00:22:12.404 [2024-11-28 08:20:54.489955] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9566e0 is same with the state(6) to be set 00:22:12.404 [2024-11-28 08:20:54.489962] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9566e0 is same with the state(6) to be set 00:22:12.404 [2024-11-28 08:20:54.489969] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9566e0 is same with the state(6) to be set 00:22:12.404 [2024-11-28 08:20:54.489975] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9566e0 is same with the state(6) to be set 00:22:12.404 [2024-11-28 08:20:54.489981] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9566e0 is same with the state(6) to be set 00:22:12.404 [2024-11-28 08:20:54.489987] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9566e0 is same with the state(6) to be set 00:22:12.404 [2024-11-28 08:20:54.489994] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9566e0 is same with the state(6) to be set 00:22:12.404 [2024-11-28 08:20:54.490000] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9566e0 is same with the state(6) to be set 00:22:12.404 [2024-11-28 08:20:54.490006] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9566e0 is same with the state(6) to be set 00:22:12.404 [2024-11-28 08:20:54.490012] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9566e0 is same with the state(6) to be set 00:22:12.404 [2024-11-28 08:20:54.490018] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x9566e0 is same with the state(6) to be set 00:22:12.404 [2024-11-28 08:20:54.490024] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9566e0 is same with the state(6) to be set 00:22:12.404 [2024-11-28 08:20:54.490030] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9566e0 is same with the state(6) to be set 00:22:12.404 [2024-11-28 08:20:54.490037] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9566e0 is same with the state(6) to be set 00:22:12.404 [2024-11-28 08:20:54.490043] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9566e0 is same with the state(6) to be set 00:22:12.404 [2024-11-28 08:20:54.490049] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9566e0 is same with the state(6) to be set 00:22:12.404 [2024-11-28 08:20:54.490055] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9566e0 is same with the state(6) to be set 00:22:12.404 [2024-11-28 08:20:54.490061] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9566e0 is same with the state(6) to be set 00:22:12.404 [2024-11-28 08:20:54.490068] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9566e0 is same with the state(6) to be set 00:22:12.404 [2024-11-28 08:20:54.490075] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9566e0 is same with the state(6) to be set 00:22:12.404 [2024-11-28 08:20:54.490081] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9566e0 is same with the state(6) to be set 00:22:12.404 [2024-11-28 08:20:54.490087] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9566e0 is same with the state(6) to be set 00:22:12.404 [2024-11-28 08:20:54.490093] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9566e0 
is same with the state(6) to be set 00:22:12.404 [2024-11-28 08:20:54.490099] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9566e0 is same with the state(6) to be set
[log collapsed: previous message repeated verbatim for tqpair=0x9566e0 through 08:20:54.490181]
00:22:12.405 [2024-11-28 08:20:54.491521] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9570a0 is same with the state(6) to be set
[log collapsed: previous message repeated verbatim for tqpair=0x9570a0 through 08:20:54.491925]
00:22:12.406 [2024-11-28 08:20:54.493067] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x957570 is same with the state(6) to be set
[log collapsed: previous message repeated verbatim for tqpair=0x957570 through 08:20:54.493400]
00:22:12.406 [2024-11-28 08:20:54.494200] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x957a40 is same with the state(6) to be set
[log collapsed: previous message repeated verbatim for tqpair=0x957a40 through 08:20:54.494586]
00:22:12.407 [2024-11-28 08:20:54.495097] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:22:12.407 [2024-11-28 08:20:54.495128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ
DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:12.407 [2024-11-28 08:20:54.495139] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:22:12.407 [2024-11-28 08:20:54.495146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:12.407 [2024-11-28 08:20:54.495154] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:22:12.407 [2024-11-28 08:20:54.495161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:12.407 [2024-11-28 08:20:54.495168] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:22:12.407 [2024-11-28 08:20:54.495174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:12.407 [2024-11-28 08:20:54.495181] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e6760 is same with the state(6) to be set
00:22:12.407 [2024-11-28 08:20:54.495193] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x957f10 is same with the state(6) to be set
[log de-interleaved and collapsed: the previous tcp.c:1773 message for tqpair=0x957f10 repeated verbatim through 08:20:54.495411, byte-interleaved in the raw capture with a second ASYNC EVENT REQUEST (0c) / ABORTED - SQ DELETION (00/08) batch for cid:0 through cid:3 (08:20:54.495214-495276), an nvme_tcp.c: 326 recv-state error for tqpair=0x11d6cc0 (08:20:54.495283), a third batch for cid:0 through cid:3 (08:20:54.495314-495373), an nvme_tcp.c: 326 recv-state error for tqpair=0x11a3e50 (08:20:54.495381), and a further cid:0 pair (08:20:54.495408/495417)]
00:22:12.407 [2024-11-28 08:20:54.495421] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x957f10 is same with 
the state(6) to be set 00:22:12.407 [2024-11-28 08:20:54.495426] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:12.407 [2024-11-28 08:20:54.495432] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x957f10 is same with t[2024-11-28 08:20:54.495434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 che state(6) to be set 00:22:12.407 dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.408 [2024-11-28 08:20:54.495449] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:12.408 [2024-11-28 08:20:54.495449] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x957f10 is same with the state(6) to be set 00:22:12.408 [2024-11-28 08:20:54.495456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.408 [2024-11-28 08:20:54.495461] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x957f10 is same with the state(6) to be set 00:22:12.408 [2024-11-28 08:20:54.495465] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:12.408 [2024-11-28 08:20:54.495473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.408 [2024-11-28 08:20:54.495473] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x957f10 is same with the state(6) to be set 00:22:12.408 [2024-11-28 08:20:54.495480] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a4a80 is same with the state(6) to be set 00:22:12.408 [2024-11-28 08:20:54.495486] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x957f10 is same with the state(6) to be set 00:22:12.408 [2024-11-28 08:20:54.495497] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x957f10 is same with the state(6) to be set 00:22:12.408 [2024-11-28 08:20:54.495506] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:12.408 [2024-11-28 08:20:54.495507] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x957f10 is same with the state(6) to be set 00:22:12.408 [2024-11-28 08:20:54.495515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.408 [2024-11-28 08:20:54.495518] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x957f10 is same with the state(6) to be set 00:22:12.408 [2024-11-28 08:20:54.495523] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:12.408 [2024-11-28 08:20:54.495529] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x957f10 is same with t[2024-11-28 08:20:54.495531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 che state(6) to be set 00:22:12.408 dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.408 [2024-11-28 08:20:54.495542] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:12.408 [2024-11-28 08:20:54.495542] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x957f10 is same with the state(6) to be set 00:22:12.408 [2024-11-28 08:20:54.495549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.408 [2024-11-28 08:20:54.495553] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x957f10 is same with the state(6) to be set 00:22:12.408 [2024-11-28 08:20:54.495558] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:12.408 [2024-11-28 08:20:54.495565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 c[2024-11-28 08:20:54.495564] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x957f10 is same with tdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.408 he state(6) to be set 00:22:12.408 [2024-11-28 08:20:54.495574] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc8d610 is same with the state(6) to be set 00:22:12.408 [2024-11-28 08:20:54.495579] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x957f10 is same with the state(6) to be set 00:22:12.408 [2024-11-28 08:20:54.495595] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x957f10 is same with the state(6) to be set 00:22:12.408 [2024-11-28 08:20:54.495605] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x957f10 is same with t[2024-11-28 08:20:54.495607] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nshe state(6) to be set 00:22:12.408 id:0 cdw10:00000000 cdw11:00000000 00:22:12.408 [2024-11-28 08:20:54.495618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.408 [2024-11-28 08:20:54.495617] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x957f10 is same with the state(6) to be set 00:22:12.408 [2024-11-28 08:20:54.495627] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:12.408 [2024-11-28 08:20:54.495629] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x957f10 is same with the state(6) to be set 00:22:12.408 [2024-11-28 08:20:54.495635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.408 [2024-11-28 08:20:54.495643] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:12.408 [2024-11-28 08:20:54.495642] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x957f10 is same with the state(6) to be set 00:22:12.408 [2024-11-28 08:20:54.495651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.408 [2024-11-28 08:20:54.495654] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x957f10 is same with the state(6) to be set 00:22:12.408 [2024-11-28 08:20:54.495660] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:12.408 [2024-11-28 08:20:54.495665] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x957f10 is same with t[2024-11-28 08:20:54.495667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 che state(6) to be set 00:22:12.408 dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.408 [2024-11-28 08:20:54.495677] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd78d30 is same with the state(6) to be set 00:22:12.408 [2024-11-28 08:20:54.495676] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x957f10 is same with the state(6) to be set 00:22:12.408 [2024-11-28 08:20:54.495691] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x957f10 is same with the state(6) to be set 00:22:12.408 [2024-11-28 
08:20:54.495702] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 ns[2024-11-28 08:20:54.495702] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x957f10 is same with tid:0 cdw10:00000000 cdw11:00000000 00:22:12.408 he state(6) to be set 00:22:12.408 [2024-11-28 08:20:54.495713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.408 [2024-11-28 08:20:54.495715] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x957f10 is same with the state(6) to be set 00:22:12.408 [2024-11-28 08:20:54.495722] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:12.408 [2024-11-28 08:20:54.495726] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x957f10 is same with the state(6) to be set 00:22:12.408 [2024-11-28 08:20:54.495730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.408 [2024-11-28 08:20:54.495736] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x957f10 is same with t[2024-11-28 08:20:54.495738] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nshe state(6) to be set 00:22:12.408 id:0 cdw10:00000000 cdw11:00000000 00:22:12.408 [2024-11-28 08:20:54.495751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.408 [2024-11-28 08:20:54.495751] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x957f10 is same with the state(6) to be set 00:22:12.408 [2024-11-28 08:20:54.495758] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 
cdw11:00000000 00:22:12.408 [2024-11-28 08:20:54.495762] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x957f10 is same with the state(6) to be set 00:22:12.408 [2024-11-28 08:20:54.495766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.408 [2024-11-28 08:20:54.495774] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd6d200 is same with the state(6) to be set 00:22:12.408 [2024-11-28 08:20:54.495773] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x957f10 is same with the state(6) to be set 00:22:12.408 [2024-11-28 08:20:54.495785] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x957f10 is same with the state(6) to be set 00:22:12.408 [2024-11-28 08:20:54.495795] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x957f10 is same with the state(6) to be set 00:22:12.408 [2024-11-28 08:20:54.495796] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:12.408 [2024-11-28 08:20:54.495804] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x957f10 is same with t[2024-11-28 08:20:54.495806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 che state(6) to be set 00:22:12.408 dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.408 [2024-11-28 08:20:54.495818] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 ns[2024-11-28 08:20:54.495816] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x957f10 is same with tid:0 cdw10:00000000 cdw11:00000000 00:22:12.408 he state(6) to be set 00:22:12.408 [2024-11-28 08:20:54.495827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.408 [2024-11-28 08:20:54.495829] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x957f10 is same with the state(6) to be set 00:22:12.408 [2024-11-28 08:20:54.495835] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:12.408 [2024-11-28 08:20:54.495840] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x957f10 is same with the state(6) to be set 00:22:12.408 [2024-11-28 08:20:54.495843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.408 [2024-11-28 08:20:54.495852] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:12.408 [2024-11-28 08:20:54.495851] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x957f10 is same with the state(6) to be set 00:22:12.408 [2024-11-28 08:20:54.495860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.408 [2024-11-28 08:20:54.495863] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x957f10 is same with the state(6) to be set 00:22:12.408 [2024-11-28 08:20:54.495867] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a4e70 is same with the state(6) to be set 00:22:12.408 [2024-11-28 08:20:54.495873] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x957f10 is same with the state(6) to be set 00:22:12.408 [2024-11-28 08:20:54.495888] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x957f10 is same with the state(6) to be set 00:22:12.408 [2024-11-28 08:20:54.495891] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) 
qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:12.408 [2024-11-28 08:20:54.495897] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x957f10 is same with the state(6) to be set 00:22:12.408 [2024-11-28 08:20:54.495900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.409 [2024-11-28 08:20:54.495907] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x957f10 is same with t[2024-11-28 08:20:54.495909] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nshe state(6) to be set 00:22:12.409 id:0 cdw10:00000000 cdw11:00000000 00:22:12.409 [2024-11-28 08:20:54.495919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.409 [2024-11-28 08:20:54.495920] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x957f10 is same with the state(6) to be set 00:22:12.409 [2024-11-28 08:20:54.495927] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:12.409 [2024-11-28 08:20:54.495930] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x957f10 is same with the state(6) to be set 00:22:12.409 [2024-11-28 08:20:54.495935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.409 [2024-11-28 08:20:54.495943] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:12.409 [2024-11-28 08:20:54.495956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.409 [2024-11-28 08:20:54.495962] nvme_tcp.c: 
326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd791c0 is same with the state(6) to be set 00:22:12.409 [2024-11-28 08:20:54.496203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.409 [2024-11-28 08:20:54.496219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.409 [2024-11-28 08:20:54.496234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.409 [2024-11-28 08:20:54.496241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.409 [2024-11-28 08:20:54.496251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.409 [2024-11-28 08:20:54.496258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.409 [2024-11-28 08:20:54.496266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.409 [2024-11-28 08:20:54.496273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.409 [2024-11-28 08:20:54.496282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.409 [2024-11-28 08:20:54.496288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.409 [2024-11-28 08:20:54.496297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 
nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.409 [2024-11-28 08:20:54.496304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.409 [2024-11-28 08:20:54.496316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.409 [2024-11-28 08:20:54.496323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.409 [2024-11-28 08:20:54.496331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.409 [2024-11-28 08:20:54.496338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.409 [2024-11-28 08:20:54.496347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.409 [2024-11-28 08:20:54.496354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.409 [2024-11-28 08:20:54.496362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.409 [2024-11-28 08:20:54.496368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.409 [2024-11-28 08:20:54.496376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.409 [2024-11-28 08:20:54.496383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:22:12.409 [2024-11-28 08:20:54.496391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.409 [2024-11-28 08:20:54.496397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.409 [2024-11-28 08:20:54.496406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.409 [2024-11-28 08:20:54.496413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.409 [2024-11-28 08:20:54.496421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.409 [2024-11-28 08:20:54.496428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.409 [2024-11-28 08:20:54.496436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.409 [2024-11-28 08:20:54.496442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.409 [2024-11-28 08:20:54.496451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.409 [2024-11-28 08:20:54.496457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.409 [2024-11-28 08:20:54.496466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.409 [2024-11-28 
08:20:54.496472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.409 [2024-11-28 08:20:54.496480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.409 [2024-11-28 08:20:54.496487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.409 [2024-11-28 08:20:54.496495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.409 [2024-11-28 08:20:54.496503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.409 [2024-11-28 08:20:54.496511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.409 [2024-11-28 08:20:54.496517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.409 [2024-11-28 08:20:54.496526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.409 [2024-11-28 08:20:54.496532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.409 [2024-11-28 08:20:54.496541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.409 [2024-11-28 08:20:54.496547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.409 [2024-11-28 08:20:54.496556] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.409 [2024-11-28 08:20:54.496562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.409 [2024-11-28 08:20:54.496571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.409 [2024-11-28 08:20:54.496578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.409 [2024-11-28 08:20:54.496587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.409 [2024-11-28 08:20:54.496593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.409 [2024-11-28 08:20:54.496601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.409 [2024-11-28 08:20:54.496608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.409 [2024-11-28 08:20:54.496616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.409 [2024-11-28 08:20:54.496623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.409 [2024-11-28 08:20:54.496632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.409 [2024-11-28 08:20:54.496639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.409 [2024-11-28 08:20:54.496648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.409 [2024-11-28 08:20:54.496655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.409 [2024-11-28 08:20:54.496663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.409 [2024-11-28 08:20:54.496670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.409 [2024-11-28 08:20:54.496678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.409 [2024-11-28 08:20:54.496685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.409 [2024-11-28 08:20:54.496694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.409 [2024-11-28 08:20:54.496701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.409 [2024-11-28 08:20:54.496709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.409 [2024-11-28 08:20:54.496715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.409 [2024-11-28 08:20:54.496723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.409 [2024-11-28 08:20:54.496730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.409 [2024-11-28 08:20:54.496738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.409 [2024-11-28 08:20:54.496744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.409 [2024-11-28 08:20:54.496753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.410 [2024-11-28 08:20:54.496759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.410 [2024-11-28 08:20:54.496768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.410 [2024-11-28 08:20:54.496774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.410 [2024-11-28 08:20:54.496783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.410 [2024-11-28 08:20:54.496789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.410 [2024-11-28 08:20:54.496797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.410 [2024-11-28 08:20:54.496804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.410 
[2024-11-28 08:20:54.496812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.410 [2024-11-28 08:20:54.496818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.410 [2024-11-28 08:20:54.496826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.410 [2024-11-28 08:20:54.496833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.410 [2024-11-28 08:20:54.496841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.410 [2024-11-28 08:20:54.496847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.410 [2024-11-28 08:20:54.496855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.410 [2024-11-28 08:20:54.496861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.410 [2024-11-28 08:20:54.496870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.410 [2024-11-28 08:20:54.496878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.410 [2024-11-28 08:20:54.496887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.410 [2024-11-28 08:20:54.496893] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.410 [2024-11-28 08:20:54.496901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.410 [2024-11-28 08:20:54.496908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.410 [2024-11-28 08:20:54.496916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.410 [2024-11-28 08:20:54.496922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.410 [2024-11-28 08:20:54.496930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.410 [2024-11-28 08:20:54.496937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.410 [2024-11-28 08:20:54.496945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.410 [2024-11-28 08:20:54.496957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.410 [2024-11-28 08:20:54.496965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.410 [2024-11-28 08:20:54.496971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.410 [2024-11-28 08:20:54.496979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.410 [2024-11-28 08:20:54.496986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.410 [2024-11-28 08:20:54.496994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.410 [2024-11-28 08:20:54.497000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.410 [2024-11-28 08:20:54.497009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.410 [2024-11-28 08:20:54.497015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.410 [2024-11-28 08:20:54.497023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.410 [2024-11-28 08:20:54.497030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.410 [2024-11-28 08:20:54.497038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.410 [2024-11-28 08:20:54.497044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.410 [2024-11-28 08:20:54.497053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.410 [2024-11-28 08:20:54.497059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:22:12.410 [2024-11-28 08:20:54.497070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.410 [2024-11-28 08:20:54.497076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.410 [2024-11-28 08:20:54.497084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.410 [2024-11-28 08:20:54.497091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.410 [2024-11-28 08:20:54.497099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.410 [2024-11-28 08:20:54.497105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.410 [2024-11-28 08:20:54.497113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.410 [2024-11-28 08:20:54.497120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.410 [2024-11-28 08:20:54.497128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.410 [2024-11-28 08:20:54.497135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.410 [2024-11-28 08:20:54.497143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.410 [2024-11-28 
08:20:54.497149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.410 [2024-11-28 08:20:54.497157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.410 [2024-11-28 08:20:54.497164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.410 [2024-11-28 08:20:54.497172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.410 [2024-11-28 08:20:54.497178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.410 [2024-11-28 08:20:54.497186] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1180030 is same with the state(6) to be set 00:22:12.410 [2024-11-28 08:20:54.498927] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] resetting controller 00:22:12.410 [2024-11-28 08:20:54.498967] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11a3e50 (9): Bad file descriptor 00:22:12.410 [2024-11-28 08:20:54.500004] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:22:12.410 [2024-11-28 08:20:54.500041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.410 [2024-11-28 08:20:54.500052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.410 [2024-11-28 08:20:54.500064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.410 
[2024-11-28 08:20:54.500072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.410 [2024-11-28 08:20:54.500081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.410 [2024-11-28 08:20:54.500088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.410 [2024-11-28 08:20:54.500103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.410 [2024-11-28 08:20:54.500110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.410 [2024-11-28 08:20:54.500119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.411 [2024-11-28 08:20:54.500125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.411 [2024-11-28 08:20:54.500134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.411 [2024-11-28 08:20:54.500140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.411 [2024-11-28 08:20:54.500150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.411 [2024-11-28 08:20:54.500157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.411 [2024-11-28 08:20:54.500165] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.411 [2024-11-28 08:20:54.500172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.411 [2024-11-28 08:20:54.500180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.411 [2024-11-28 08:20:54.500187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.411 [2024-11-28 08:20:54.500195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.411 [2024-11-28 08:20:54.500201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.411 [2024-11-28 08:20:54.500210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.411 [2024-11-28 08:20:54.500217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.411 [2024-11-28 08:20:54.500225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.411 [2024-11-28 08:20:54.500232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.411 [2024-11-28 08:20:54.500240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.411 [2024-11-28 08:20:54.500246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.411 [2024-11-28 08:20:54.500255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.411 [2024-11-28 08:20:54.500261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.411 [2024-11-28 08:20:54.500270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.411 [2024-11-28 08:20:54.500276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.411 [2024-11-28 08:20:54.500284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.411 [2024-11-28 08:20:54.500292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.411 [2024-11-28 08:20:54.500301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.411 [2024-11-28 08:20:54.500308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.411 [2024-11-28 08:20:54.500316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.411 [2024-11-28 08:20:54.500323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.411 [2024-11-28 08:20:54.500331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:22:12.411 [2024-11-28 08:20:54.500337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.411 [2024-11-28 08:20:54.500346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.411 [2024-11-28 08:20:54.500353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.411 [2024-11-28 08:20:54.500360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.411 [2024-11-28 08:20:54.500367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.411 [2024-11-28 08:20:54.500375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.411 [2024-11-28 08:20:54.500383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.411 [2024-11-28 08:20:54.500391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.411 [2024-11-28 08:20:54.500398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.411 [2024-11-28 08:20:54.500406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.411 [2024-11-28 08:20:54.500413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.411 [2024-11-28 08:20:54.500421] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.411 [2024-11-28 08:20:54.500427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.411 [2024-11-28 08:20:54.500436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.411 [2024-11-28 08:20:54.500442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.411 [2024-11-28 08:20:54.500451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.411 [2024-11-28 08:20:54.500457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.411 [2024-11-28 08:20:54.500466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.411 [2024-11-28 08:20:54.500473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.411 [2024-11-28 08:20:54.500483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.411 [2024-11-28 08:20:54.500489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.411 [2024-11-28 08:20:54.500498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.411 [2024-11-28 08:20:54.500504] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.411 [2024-11-28 08:20:54.500513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.411 [2024-11-28 08:20:54.500519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.411 [2024-11-28 08:20:54.500528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.411 [2024-11-28 08:20:54.500534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.411 [2024-11-28 08:20:54.500542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.411 [2024-11-28 08:20:54.500549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.411 [2024-11-28 08:20:54.500557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.411 [2024-11-28 08:20:54.500563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.411 [2024-11-28 08:20:54.500572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.411 [2024-11-28 08:20:54.500578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.411 [2024-11-28 08:20:54.500586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.411 [2024-11-28 08:20:54.500593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.411 [2024-11-28 08:20:54.500601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.411 [2024-11-28 08:20:54.500608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.411 [2024-11-28 08:20:54.500616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.411 [2024-11-28 08:20:54.500622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.411 [2024-11-28 08:20:54.500631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.411 [2024-11-28 08:20:54.500637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.411 [2024-11-28 08:20:54.500645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.411 [2024-11-28 08:20:54.500652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.411 [2024-11-28 08:20:54.500660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.411 [2024-11-28 08:20:54.500669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.411 [2024-11-28 
08:20:54.500677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.411 [2024-11-28 08:20:54.500683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.411 [2024-11-28 08:20:54.500691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.411 [2024-11-28 08:20:54.500698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.411 [2024-11-28 08:20:54.500706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.411 [2024-11-28 08:20:54.500712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.412 [2024-11-28 08:20:54.500720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.412 [2024-11-28 08:20:54.500727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.412 [2024-11-28 08:20:54.500735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.412 [2024-11-28 08:20:54.500742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.412 [2024-11-28 08:20:54.500750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.412 [2024-11-28 08:20:54.500756] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.412 [2024-11-28 08:20:54.500764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.412 [2024-11-28 08:20:54.500771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.412 [2024-11-28 08:20:54.500779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.412 [2024-11-28 08:20:54.500785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.412 [2024-11-28 08:20:54.500793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.412 [2024-11-28 08:20:54.500800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.412 [2024-11-28 08:20:54.500808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.412 [2024-11-28 08:20:54.500814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.412 [2024-11-28 08:20:54.500822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.412 [2024-11-28 08:20:54.500829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.412 [2024-11-28 08:20:54.500837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 
nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.412 [2024-11-28 08:20:54.500845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.412 [2024-11-28 08:20:54.500856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.412 [2024-11-28 08:20:54.500862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.412 [2024-11-28 08:20:54.500871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.412 [2024-11-28 08:20:54.500877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.412 [2024-11-28 08:20:54.500885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.412 [2024-11-28 08:20:54.500892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.412 [2024-11-28 08:20:54.500900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.412 [2024-11-28 08:20:54.500907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.412 [2024-11-28 08:20:54.500916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.412 [2024-11-28 08:20:54.500922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:22:12.412 [2024-11-28 08:20:54.500931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.412 [2024-11-28 08:20:54.500938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.412 [2024-11-28 08:20:54.500951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.412 [2024-11-28 08:20:54.500958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.412 [2024-11-28 08:20:54.501078] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:22:12.412 [2024-11-28 08:20:54.501171] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:22:12.412 [2024-11-28 08:20:54.501359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:12.412 [2024-11-28 08:20:54.501373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11a3e50 with addr=10.0.0.2, port=4420 00:22:12.412 [2024-11-28 08:20:54.501381] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a3e50 is same with the state(6) to be set 00:22:12.412 [2024-11-28 08:20:54.501424] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:22:12.412 [2024-11-28 08:20:54.501466] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:22:12.412 [2024-11-28 08:20:54.501510] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:22:12.412 [2024-11-28 08:20:54.501552] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:22:12.412 [2024-11-28 08:20:54.502544] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:22:12.412 [2024-11-28 
08:20:54.502570] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] resetting controller
00:22:12.412 [2024-11-28 08:20:54.502590] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc8d610 (9): Bad file descriptor
00:22:12.412 [2024-11-28 08:20:54.502602] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11a3e50 (9): Bad file descriptor
00:22:12.412 [2024-11-28 08:20:54.502628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:12.412 [2024-11-28 08:20:54.502641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:12.412 [2024-11-28 08:20:54.502652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:12.412 [2024-11-28 08:20:54.502659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:12.412 [2024-11-28 08:20:54.502668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:12.412 [2024-11-28 08:20:54.502675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:12.412 [2024-11-28 08:20:54.502684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:12.412 [2024-11-28 08:20:54.502691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:12.412 [2024-11-28 08:20:54.502700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:12.412 [2024-11-28 08:20:54.502706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:12.412 [2024-11-28 08:20:54.502714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:12.412 [2024-11-28 08:20:54.502721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:12.412 [2024-11-28 08:20:54.502729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:12.412 [2024-11-28 08:20:54.502736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:12.412 [2024-11-28 08:20:54.502745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:12.412 [2024-11-28 08:20:54.502752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:12.412 [2024-11-28 08:20:54.502760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:12.412 [2024-11-28 08:20:54.502766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:12.412 [2024-11-28 08:20:54.502775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:12.412 [2024-11-28 08:20:54.502781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:12.412 [2024-11-28 08:20:54.502789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:12.412 [2024-11-28 08:20:54.502796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:12.412 [2024-11-28 08:20:54.502804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:12.412 [2024-11-28 08:20:54.502811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:12.412 [2024-11-28 08:20:54.502819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:12.412 [2024-11-28 08:20:54.502826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:12.412 [2024-11-28 08:20:54.502836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:12.412 [2024-11-28 08:20:54.502843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:12.412 [2024-11-28 08:20:54.502851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:12.412 [2024-11-28 08:20:54.502858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:12.412 [2024-11-28 08:20:54.502866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:12.412 [2024-11-28 08:20:54.502873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:12.412 [2024-11-28 08:20:54.502881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:12.412 [2024-11-28 08:20:54.502887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:12.412 [2024-11-28 08:20:54.502895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:12.412 [2024-11-28 08:20:54.502902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:12.412 [2024-11-28 08:20:54.502910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:12.413 [2024-11-28 08:20:54.502916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:12.413 [2024-11-28 08:20:54.502925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:12.413 [2024-11-28 08:20:54.502931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:12.413 [2024-11-28 08:20:54.502939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:12.413 [2024-11-28 08:20:54.502946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:12.413 [2024-11-28 08:20:54.502959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:12.413 [2024-11-28 08:20:54.502966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:12.413 [2024-11-28 08:20:54.502974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:12.413 [2024-11-28 08:20:54.502981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:12.413 [2024-11-28 08:20:54.502989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:12.413 [2024-11-28 08:20:54.502996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:12.413 [2024-11-28 08:20:54.503004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:12.413 [2024-11-28 08:20:54.503011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:12.413 [2024-11-28 08:20:54.503019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:12.413 [2024-11-28 08:20:54.503026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:12.413 [2024-11-28 08:20:54.503036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:12.413 [2024-11-28 08:20:54.503043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:12.413 [2024-11-28 08:20:54.503051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:12.413 [2024-11-28 08:20:54.503057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:12.413 [2024-11-28 08:20:54.503066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:12.413 [2024-11-28 08:20:54.503072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:12.413 [2024-11-28 08:20:54.503081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:12.413 [2024-11-28 08:20:54.503087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:12.413 [2024-11-28 08:20:54.503095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:12.413 [2024-11-28 08:20:54.503102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:12.413 [2024-11-28 08:20:54.503110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:12.413 [2024-11-28 08:20:54.503116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:12.413 [2024-11-28 08:20:54.503124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:12.413 [2024-11-28 08:20:54.503131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:12.413 [2024-11-28 08:20:54.503140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:12.413 [2024-11-28 08:20:54.503147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:12.413 [2024-11-28 08:20:54.503155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:12.413 [2024-11-28 08:20:54.503162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:12.413 [2024-11-28 08:20:54.503171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:12.413 [2024-11-28 08:20:54.503177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:12.413 [2024-11-28 08:20:54.503186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:12.413 [2024-11-28 08:20:54.503192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:12.413 [2024-11-28 08:20:54.503200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:12.413 [2024-11-28 08:20:54.503207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:12.413 [2024-11-28 08:20:54.503216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:12.413 [2024-11-28 08:20:54.503224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:12.413 [2024-11-28 08:20:54.503232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:12.413 [2024-11-28 08:20:54.503239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:12.413 [2024-11-28 08:20:54.503247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:12.413 [2024-11-28 08:20:54.503254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:12.413 [2024-11-28 08:20:54.503262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:12.413 [2024-11-28 08:20:54.503268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:12.413 [2024-11-28 08:20:54.503276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:12.413 [2024-11-28 08:20:54.503283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:12.413 [2024-11-28 08:20:54.503291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:12.413 [2024-11-28 08:20:54.503298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:12.413 [2024-11-28 08:20:54.503306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:12.413 [2024-11-28 08:20:54.503313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:12.413 [2024-11-28 08:20:54.503321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:12.413 [2024-11-28 08:20:54.503327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:12.413 [2024-11-28 08:20:54.503336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:12.413 [2024-11-28 08:20:54.503342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:12.413 [2024-11-28 08:20:54.503350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:12.413 [2024-11-28 08:20:54.503357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:12.413 [2024-11-28 08:20:54.503366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:12.413 [2024-11-28 08:20:54.503373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:12.413 [2024-11-28 08:20:54.503381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:12.413 [2024-11-28 08:20:54.503387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:12.413 [2024-11-28 08:20:54.503395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:12.413 [2024-11-28 08:20:54.503402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:12.413 [2024-11-28 08:20:54.503411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:12.413 [2024-11-28 08:20:54.503418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:12.413 [2024-11-28 08:20:54.503426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:12.413 [2024-11-28 08:20:54.503433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:12.413 [2024-11-28 08:20:54.503441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:12.413 [2024-11-28 08:20:54.503448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:12.413 [2024-11-28 08:20:54.503456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:12.413 [2024-11-28 08:20:54.503463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:12.413 [2024-11-28 08:20:54.503471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:12.413 [2024-11-28 08:20:54.503477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:12.413 [2024-11-28 08:20:54.503486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:12.413 [2024-11-28 08:20:54.503492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:12.413 [2024-11-28 08:20:54.503500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:12.413 [2024-11-28 08:20:54.503507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:12.413 [2024-11-28 08:20:54.503515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:12.414 [2024-11-28 08:20:54.503522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:12.414 [2024-11-28 08:20:54.503530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:12.414 [2024-11-28 08:20:54.503537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:12.414 [2024-11-28 08:20:54.503545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:12.414 [2024-11-28 08:20:54.503552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:12.414 [2024-11-28 08:20:54.503560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:12.414 [2024-11-28 08:20:54.503566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:12.414 [2024-11-28 08:20:54.503574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:12.414 [2024-11-28 08:20:54.503581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:12.414 [2024-11-28 08:20:54.503589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:12.414 [2024-11-28 08:20:54.503598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:12.414 [2024-11-28 08:20:54.503606] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf7e210 is same with the state(6) to be set
00:22:12.414 [2024-11-28 08:20:54.503754] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Ctrlr is in error state
00:22:12.414 [2024-11-28 08:20:54.503764] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] controller reinitialization failed
00:22:12.414 [2024-11-28 08:20:54.503773] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] in failed state.
00:22:12.414 [2024-11-28 08:20:54.503781] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Resetting controller failed.
00:22:12.414 [2024-11-28 08:20:54.504983] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] resetting controller
00:22:12.414 [2024-11-28 08:20:54.505004] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd6d200 (9): Bad file descriptor
00:22:12.414 [2024-11-28 08:20:54.505248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:12.414 [2024-11-28 08:20:54.505262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc8d610 with addr=10.0.0.2, port=4420
00:22:12.414 [2024-11-28 08:20:54.505269] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc8d610 is same with the state(6) to be set
00:22:12.414 [2024-11-28 08:20:54.505330] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc8d610 (9): Bad file descriptor
00:22:12.414 [2024-11-28 08:20:54.505342] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11e6760 (9): Bad file descriptor
00:22:12.414 [2024-11-28 08:20:54.505360] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11d6cc0 (9): Bad file descriptor
00:22:12.414 [2024-11-28 08:20:54.505377] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11a4a80 (9): Bad file descriptor
00:22:12.414 [2024-11-28 08:20:54.505411] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:22:12.414 [2024-11-28 08:20:54.505420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:12.414 [2024-11-28 08:20:54.505428] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:22:12.414 [2024-11-28 08:20:54.505435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:12.414 [2024-11-28 08:20:54.505443] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:22:12.414 [2024-11-28 08:20:54.505449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:12.414 [2024-11-28 08:20:54.505457] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:22:12.414 [2024-11-28 08:20:54.505464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:12.414 [2024-11-28 08:20:54.505471] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11d6ef0 is same with the state(6) to be set
00:22:12.414 [2024-11-28 08:20:54.505486] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd78d30 (9): Bad file descriptor
00:22:12.414 [2024-11-28 08:20:54.505501] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11a4e70 (9): Bad file descriptor
00:22:12.414 [2024-11-28 08:20:54.505515] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd791c0 (9): Bad file descriptor
00:22:12.414 [2024-11-28 08:20:54.506033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:12.414 [2024-11-28 08:20:54.506047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd6d200 with addr=10.0.0.2, port=4420
00:22:12.414 [2024-11-28 08:20:54.506055] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd6d200 is same with the state(6) to be set
00:22:12.414 [2024-11-28 08:20:54.506063] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Ctrlr is in error state
00:22:12.414 [2024-11-28 08:20:54.506069] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] controller reinitialization failed
00:22:12.414 [2024-11-28 08:20:54.506076] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] in failed state.
00:22:12.414 [2024-11-28 08:20:54.506083] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Resetting controller failed.
00:22:12.414 [2024-11-28 08:20:54.506126] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd6d200 (9): Bad file descriptor
00:22:12.414 [2024-11-28 08:20:54.506166] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Ctrlr is in error state
00:22:12.414 [2024-11-28 08:20:54.506174] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] controller reinitialization failed
00:22:12.414 [2024-11-28 08:20:54.506180] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state.
00:22:12.414 [2024-11-28 08:20:54.506186] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Resetting controller failed.
00:22:12.414 [2024-11-28 08:20:54.509620] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] resetting controller
00:22:12.414 [2024-11-28 08:20:54.509887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:12.414 [2024-11-28 08:20:54.509901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11a3e50 with addr=10.0.0.2, port=4420
00:22:12.414 [2024-11-28 08:20:54.509909] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a3e50 is same with the state(6) to be set
00:22:12.414 [2024-11-28 08:20:54.509943] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11a3e50 (9): Bad file descriptor
00:22:12.414 [2024-11-28 08:20:54.509982] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Ctrlr is in error state
00:22:12.414 [2024-11-28 08:20:54.509989] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] controller reinitialization failed
00:22:12.414 [2024-11-28 08:20:54.509996] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] in failed state.
00:22:12.414 [2024-11-28 08:20:54.510003] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Resetting controller failed.
00:22:12.414 [2024-11-28 08:20:54.513853] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] resetting controller
00:22:12.414 [2024-11-28 08:20:54.514133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:12.414 [2024-11-28 08:20:54.514147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc8d610 with addr=10.0.0.2, port=4420
00:22:12.414 [2024-11-28 08:20:54.514155] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc8d610 is same with the state(6) to be set
00:22:12.414 [2024-11-28 08:20:54.514189] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc8d610 (9): Bad file descriptor
00:22:12.414 [2024-11-28 08:20:54.514223] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Ctrlr is in error state
00:22:12.414 [2024-11-28 08:20:54.514230] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] controller reinitialization failed
00:22:12.414 [2024-11-28 08:20:54.514236] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] in failed state.
00:22:12.414 [2024-11-28 08:20:54.514247] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Resetting controller failed.
00:22:12.414 [2024-11-28 08:20:54.515385] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11d6ef0 (9): Bad file descriptor
00:22:12.414 [2024-11-28 08:20:54.515520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:12.414 [2024-11-28 08:20:54.515532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:12.414 [2024-11-28 08:20:54.515544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:12.414 [2024-11-28 08:20:54.515551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:12.414 [2024-11-28 08:20:54.515560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:12.414 [2024-11-28 08:20:54.515566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:12.414 [2024-11-28 08:20:54.515575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:12.414 [2024-11-28 08:20:54.515582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:12.415 [2024-11-28 08:20:54.515590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:12.415 [2024-11-28 08:20:54.515597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:12.415 [2024-11-28 08:20:54.515606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:12.415 [2024-11-28 08:20:54.515612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:12.415 [2024-11-28 08:20:54.515621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:12.415 [2024-11-28 08:20:54.515627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:12.415 [2024-11-28 08:20:54.515636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:12.415 [2024-11-28 08:20:54.515642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:12.415 [2024-11-28 08:20:54.515651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:12.415 [2024-11-28 08:20:54.515658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:12.415 [2024-11-28 08:20:54.515666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:12.415 [2024-11-28 08:20:54.515672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:12.415 [2024-11-28 08:20:54.515680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:12.415 [2024-11-28 08:20:54.515687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:12.415 [2024-11-28 08:20:54.515695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:12.415 [2024-11-28 08:20:54.515705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:12.415 [2024-11-28 08:20:54.515713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:12.415 [2024-11-28 08:20:54.515720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:12.415 [2024-11-28 08:20:54.515728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:12.415 [2024-11-28 08:20:54.515735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:12.415 [2024-11-28 08:20:54.515744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:12.415 [2024-11-28 08:20:54.515750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:12.415 [2024-11-28 08:20:54.515758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:12.415 [2024-11-28 08:20:54.515765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:12.415 [2024-11-28 08:20:54.515773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:12.415 [2024-11-28 08:20:54.515780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:12.415 [2024-11-28 08:20:54.515788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:12.415 [2024-11-28 08:20:54.515795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:12.415 [2024-11-28 08:20:54.515803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:12.415 [2024-11-28 08:20:54.515810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:12.415 [2024-11-28 08:20:54.515818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:12.415 [2024-11-28 08:20:54.515825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:12.415 [2024-11-28 08:20:54.515833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:12.415 [2024-11-28 08:20:54.515840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:12.415 [2024-11-28 08:20:54.515848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:12.415 [2024-11-28 08:20:54.515855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:12.415 [2024-11-28 08:20:54.515863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:12.415 [2024-11-28 08:20:54.515870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:12.415 [2024-11-28 08:20:54.515878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:12.415 [2024-11-28 08:20:54.515884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:12.415 [2024-11-28 08:20:54.515893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:12.415 [2024-11-28 08:20:54.515901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:12.415 [2024-11-28 08:20:54.515910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:12.415 [2024-11-28 08:20:54.515916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:12.415 [2024-11-28 08:20:54.515924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:12.415 [2024-11-28 08:20:54.515931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:12.415 [2024-11-28 08:20:54.515939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:12.415 [2024-11-28 08:20:54.515946] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.415 [2024-11-28 08:20:54.515959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.415 [2024-11-28 08:20:54.515965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.415 [2024-11-28 08:20:54.515973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.415 [2024-11-28 08:20:54.515980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.415 [2024-11-28 08:20:54.515988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.415 [2024-11-28 08:20:54.515995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.415 [2024-11-28 08:20:54.516004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.415 [2024-11-28 08:20:54.516011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.415 [2024-11-28 08:20:54.516019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.415 [2024-11-28 08:20:54.516026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.415 [2024-11-28 08:20:54.516034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.415 [2024-11-28 08:20:54.516041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.415 [2024-11-28 08:20:54.516049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.415 [2024-11-28 08:20:54.516055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.415 [2024-11-28 08:20:54.516064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.415 [2024-11-28 08:20:54.516070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.415 [2024-11-28 08:20:54.516079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.415 [2024-11-28 08:20:54.516085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.415 [2024-11-28 08:20:54.516098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.415 [2024-11-28 08:20:54.516105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.415 [2024-11-28 08:20:54.516114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.415 [2024-11-28 08:20:54.516120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.415 [2024-11-28 
08:20:54.516128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.415 [2024-11-28 08:20:54.516135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.415 [2024-11-28 08:20:54.516143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.415 [2024-11-28 08:20:54.516150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.415 [2024-11-28 08:20:54.516158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.415 [2024-11-28 08:20:54.516166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.415 [2024-11-28 08:20:54.516174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.415 [2024-11-28 08:20:54.516181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.415 [2024-11-28 08:20:54.516189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.415 [2024-11-28 08:20:54.516196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.416 [2024-11-28 08:20:54.516204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.416 [2024-11-28 08:20:54.516210] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.416 [2024-11-28 08:20:54.516219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.416 [2024-11-28 08:20:54.516225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.416 [2024-11-28 08:20:54.516233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.416 [2024-11-28 08:20:54.516240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.416 [2024-11-28 08:20:54.516248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.416 [2024-11-28 08:20:54.516255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.416 [2024-11-28 08:20:54.516263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.416 [2024-11-28 08:20:54.516270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.416 [2024-11-28 08:20:54.516278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.416 [2024-11-28 08:20:54.516286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.416 [2024-11-28 08:20:54.516295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 
nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.416 [2024-11-28 08:20:54.516301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.416 [2024-11-28 08:20:54.516309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.416 [2024-11-28 08:20:54.516316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.416 [2024-11-28 08:20:54.516324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.416 [2024-11-28 08:20:54.516331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.416 [2024-11-28 08:20:54.516339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.416 [2024-11-28 08:20:54.516346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.416 [2024-11-28 08:20:54.516353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.416 [2024-11-28 08:20:54.516360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.416 [2024-11-28 08:20:54.516368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.416 [2024-11-28 08:20:54.516375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:22:12.416 [2024-11-28 08:20:54.516383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.416 [2024-11-28 08:20:54.516389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.416 [2024-11-28 08:20:54.516397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.416 [2024-11-28 08:20:54.516405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.416 [2024-11-28 08:20:54.516413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.416 [2024-11-28 08:20:54.516420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.416 [2024-11-28 08:20:54.516428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.416 [2024-11-28 08:20:54.516434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.416 [2024-11-28 08:20:54.516442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.416 [2024-11-28 08:20:54.516449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.416 [2024-11-28 08:20:54.516458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.416 [2024-11-28 08:20:54.516464] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.416 [2024-11-28 08:20:54.516474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.416 [2024-11-28 08:20:54.516481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.416 [2024-11-28 08:20:54.516489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.416 [2024-11-28 08:20:54.516496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.416 [2024-11-28 08:20:54.516503] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf7d1a0 is same with the state(6) to be set 00:22:12.416 [2024-11-28 08:20:54.517527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.416 [2024-11-28 08:20:54.517542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.416 [2024-11-28 08:20:54.517553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.416 [2024-11-28 08:20:54.517560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.416 [2024-11-28 08:20:54.517569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.416 [2024-11-28 08:20:54.517575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED 
- SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.416 [2024-11-28 08:20:54.517584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.416 [2024-11-28 08:20:54.517590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.416 [2024-11-28 08:20:54.517599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.416 [2024-11-28 08:20:54.517605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.416 [2024-11-28 08:20:54.517615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.416 [2024-11-28 08:20:54.517621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.416 [2024-11-28 08:20:54.517629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.416 [2024-11-28 08:20:54.517636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.416 [2024-11-28 08:20:54.517644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.416 [2024-11-28 08:20:54.517651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.416 [2024-11-28 08:20:54.517660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:22:12.416 [2024-11-28 08:20:54.517666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.416 [2024-11-28 08:20:54.517675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.416 [2024-11-28 08:20:54.517683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.416 [2024-11-28 08:20:54.517693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.416 [2024-11-28 08:20:54.517700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.416 [2024-11-28 08:20:54.517708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.416 [2024-11-28 08:20:54.517715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.416 [2024-11-28 08:20:54.517723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.416 [2024-11-28 08:20:54.517730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.416 [2024-11-28 08:20:54.517738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.416 [2024-11-28 08:20:54.517745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.416 [2024-11-28 08:20:54.517753] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.416 [2024-11-28 08:20:54.517760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.416 [2024-11-28 08:20:54.517769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.416 [2024-11-28 08:20:54.517775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.416 [2024-11-28 08:20:54.517783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.416 [2024-11-28 08:20:54.517790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.416 [2024-11-28 08:20:54.517798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.416 [2024-11-28 08:20:54.517805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.416 [2024-11-28 08:20:54.517813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.416 [2024-11-28 08:20:54.517819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.416 [2024-11-28 08:20:54.517828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.417 [2024-11-28 08:20:54.517834] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.417 [2024-11-28 08:20:54.517842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.417 [2024-11-28 08:20:54.517849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.417 [2024-11-28 08:20:54.517857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.417 [2024-11-28 08:20:54.517863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.417 [2024-11-28 08:20:54.517871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.417 [2024-11-28 08:20:54.517880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.417 [2024-11-28 08:20:54.517888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.417 [2024-11-28 08:20:54.517895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.417 [2024-11-28 08:20:54.517903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.417 [2024-11-28 08:20:54.517910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.417 [2024-11-28 08:20:54.517919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.417 [2024-11-28 08:20:54.517926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.417 [2024-11-28 08:20:54.517934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.417 [2024-11-28 08:20:54.517941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.417 [2024-11-28 08:20:54.517953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.417 [2024-11-28 08:20:54.517961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.417 [2024-11-28 08:20:54.517969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.417 [2024-11-28 08:20:54.517975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.417 [2024-11-28 08:20:54.517984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.417 [2024-11-28 08:20:54.517991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.417 [2024-11-28 08:20:54.517999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.417 [2024-11-28 08:20:54.518006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.417 [2024-11-28 
08:20:54.518015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.417 [2024-11-28 08:20:54.518021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.417 [2024-11-28 08:20:54.518030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.417 [2024-11-28 08:20:54.518036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.417 [2024-11-28 08:20:54.518045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.417 [2024-11-28 08:20:54.518051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.417 [2024-11-28 08:20:54.518059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.417 [2024-11-28 08:20:54.518067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.417 [2024-11-28 08:20:54.518077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.417 [2024-11-28 08:20:54.518083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.417 [2024-11-28 08:20:54.518092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.417 [2024-11-28 08:20:54.518099] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.417 [2024-11-28 08:20:54.518107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.417 [2024-11-28 08:20:54.518114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.417 [2024-11-28 08:20:54.518123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.417 [2024-11-28 08:20:54.518130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.417 [2024-11-28 08:20:54.518138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.417 [2024-11-28 08:20:54.518144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.417 [2024-11-28 08:20:54.518153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.417 [2024-11-28 08:20:54.518159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.417 [2024-11-28 08:20:54.518168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.417 [2024-11-28 08:20:54.518175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.417 [2024-11-28 08:20:54.518183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 
nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:12.417 [2024-11-28 08:20:54.518190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:12.417 [... 08:20:54.518198 through 08:20:54.518515: repeated nvme_io_qpair_print_command / spdk_nvme_print_completion pairs for READ sqid:1 cid:39 through cid:59, lba:29568 through lba:32128 (step 128), all ABORTED - SQ DELETION (00/08) ...]
00:22:12.418 [2024-11-28 08:20:54.518523] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1171d50 is same with the state(6) to be set
00:22:12.418 [2024-11-28 08:20:54.519538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:12.418 [2024-11-28 08:20:54.519551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:12.418 [... 08:20:54.519562 through 08:20:54.520518: repeated nvme_io_qpair_print_command / spdk_nvme_print_completion pairs for READ sqid:1 cid:1 through cid:63, lba:24704 through lba:32640 (step 128), all ABORTED - SQ DELETION (00/08) ...]
00:22:12.419 [2024-11-28 08:20:54.520525] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x117ee30 is same with the state(6) to be set
00:22:12.419 [2024-11-28 08:20:54.521527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:12.419 [2024-11-28 08:20:54.521540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:12.420 [... 08:20:54.521551 through 08:20:54.521843: repeated nvme_io_qpair_print_command / spdk_nvme_print_completion pairs for READ sqid:1 cid:1 through cid:20, lba:16512 through lba:18944 (step 128), all ABORTED - SQ DELETION (00/08) ...]
00:22:12.420 [2024-11-28 08:20:54.521850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.420 [2024-11-28 08:20:54.521859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.420 [2024-11-28 08:20:54.521865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.420 [2024-11-28 08:20:54.521874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.420 [2024-11-28 08:20:54.521881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.420 [2024-11-28 08:20:54.521889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.420 [2024-11-28 08:20:54.521897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.420 [2024-11-28 08:20:54.521906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.420 [2024-11-28 08:20:54.521912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.420 [2024-11-28 08:20:54.521921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.420 [2024-11-28 08:20:54.521928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.420 [2024-11-28 08:20:54.521936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:22:12.420 [2024-11-28 08:20:54.521943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.420 [2024-11-28 08:20:54.521955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.420 [2024-11-28 08:20:54.521962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.420 [2024-11-28 08:20:54.521970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.420 [2024-11-28 08:20:54.521977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.420 [2024-11-28 08:20:54.521986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.420 [2024-11-28 08:20:54.521992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.420 [2024-11-28 08:20:54.522001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.420 [2024-11-28 08:20:54.522008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.420 [2024-11-28 08:20:54.522016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.420 [2024-11-28 08:20:54.522027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.420 [2024-11-28 08:20:54.522036] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.420 [2024-11-28 08:20:54.522042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.420 [2024-11-28 08:20:54.522051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.420 [2024-11-28 08:20:54.522057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.420 [2024-11-28 08:20:54.522066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.420 [2024-11-28 08:20:54.522072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.420 [2024-11-28 08:20:54.522081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.420 [2024-11-28 08:20:54.522087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.420 [2024-11-28 08:20:54.522096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.420 [2024-11-28 08:20:54.522102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.420 [2024-11-28 08:20:54.522111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.420 [2024-11-28 08:20:54.522117] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.420 [2024-11-28 08:20:54.522126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.420 [2024-11-28 08:20:54.522133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.420 [2024-11-28 08:20:54.522141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.420 [2024-11-28 08:20:54.522148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.420 [2024-11-28 08:20:54.522156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.421 [2024-11-28 08:20:54.522163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.421 [2024-11-28 08:20:54.522171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.421 [2024-11-28 08:20:54.522178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.421 [2024-11-28 08:20:54.522186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.421 [2024-11-28 08:20:54.522193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.421 [2024-11-28 08:20:54.522201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.421 [2024-11-28 08:20:54.522208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.421 [2024-11-28 08:20:54.522217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.421 [2024-11-28 08:20:54.522224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.421 [2024-11-28 08:20:54.522232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.421 [2024-11-28 08:20:54.522239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.421 [2024-11-28 08:20:54.522247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.421 [2024-11-28 08:20:54.522254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.421 [2024-11-28 08:20:54.522262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.421 [2024-11-28 08:20:54.522269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.421 [2024-11-28 08:20:54.522277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.421 [2024-11-28 08:20:54.522283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.421 [2024-11-28 
08:20:54.522292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.421 [2024-11-28 08:20:54.522298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.421 [2024-11-28 08:20:54.522306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.421 [2024-11-28 08:20:54.522313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.421 [2024-11-28 08:20:54.522321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.421 [2024-11-28 08:20:54.522328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.421 [2024-11-28 08:20:54.522336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.421 [2024-11-28 08:20:54.522345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.421 [2024-11-28 08:20:54.522353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.421 [2024-11-28 08:20:54.522360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.421 [2024-11-28 08:20:54.522368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.421 [2024-11-28 08:20:54.522375] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.421 [2024-11-28 08:20:54.522383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.421 [2024-11-28 08:20:54.522391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.421 [2024-11-28 08:20:54.522399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.421 [2024-11-28 08:20:54.522408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.421 [2024-11-28 08:20:54.522416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.421 [2024-11-28 08:20:54.522423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.421 [2024-11-28 08:20:54.522431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.421 [2024-11-28 08:20:54.522438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.421 [2024-11-28 08:20:54.522447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.421 [2024-11-28 08:20:54.522453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.421 [2024-11-28 08:20:54.522462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 
nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.421 [2024-11-28 08:20:54.522468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.421 [2024-11-28 08:20:54.522477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.421 [2024-11-28 08:20:54.522483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.421 [2024-11-28 08:20:54.522492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.421 [2024-11-28 08:20:54.522498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.421 [2024-11-28 08:20:54.522508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.421 [2024-11-28 08:20:54.522514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.421 [2024-11-28 08:20:54.522522] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c2c770 is same with the state(6) to be set 00:22:12.421 [2024-11-28 08:20:54.523533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.421 [2024-11-28 08:20:54.523547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.421 [2024-11-28 08:20:54.523558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:22:12.421 [2024-11-28 08:20:54.523565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.421 [2024-11-28 08:20:54.523573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.421 [2024-11-28 08:20:54.523580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.421 [2024-11-28 08:20:54.523589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.421 [2024-11-28 08:20:54.523596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.421 [2024-11-28 08:20:54.523605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.421 [2024-11-28 08:20:54.523614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.421 [2024-11-28 08:20:54.523623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.421 [2024-11-28 08:20:54.523630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.421 [2024-11-28 08:20:54.523638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.421 [2024-11-28 08:20:54.523645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.421 [2024-11-28 08:20:54.523654] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.421 [2024-11-28 08:20:54.523661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.421 [2024-11-28 08:20:54.523669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.421 [2024-11-28 08:20:54.523676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.421 [2024-11-28 08:20:54.523684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.421 [2024-11-28 08:20:54.523691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.421 [2024-11-28 08:20:54.523699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.421 [2024-11-28 08:20:54.523706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.421 [2024-11-28 08:20:54.523714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.421 [2024-11-28 08:20:54.523721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.421 [2024-11-28 08:20:54.523730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.421 [2024-11-28 08:20:54.523736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.421 [2024-11-28 08:20:54.523745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.421 [2024-11-28 08:20:54.523751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.421 [2024-11-28 08:20:54.523760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.421 [2024-11-28 08:20:54.523766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.421 [2024-11-28 08:20:54.523775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.422 [2024-11-28 08:20:54.523781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.422 [2024-11-28 08:20:54.523789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.422 [2024-11-28 08:20:54.523796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.422 [2024-11-28 08:20:54.523806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.422 [2024-11-28 08:20:54.523813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.422 [2024-11-28 08:20:54.523822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:22:12.422 [2024-11-28 08:20:54.523828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.422 [2024-11-28 08:20:54.523837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.422 [2024-11-28 08:20:54.523844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.422 [2024-11-28 08:20:54.523852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.422 [2024-11-28 08:20:54.523858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.422 [2024-11-28 08:20:54.523867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.422 [2024-11-28 08:20:54.523873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.422 [2024-11-28 08:20:54.523882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.422 [2024-11-28 08:20:54.523888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.422 [2024-11-28 08:20:54.523897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.422 [2024-11-28 08:20:54.523903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.422 [2024-11-28 08:20:54.523912] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.422 [2024-11-28 08:20:54.523919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.422 [2024-11-28 08:20:54.523927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.422 [2024-11-28 08:20:54.523934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.422 [2024-11-28 08:20:54.523943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.422 [2024-11-28 08:20:54.523956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.422 [2024-11-28 08:20:54.523964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.422 [2024-11-28 08:20:54.523970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.422 [2024-11-28 08:20:54.523979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.422 [2024-11-28 08:20:54.523985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.422 [2024-11-28 08:20:54.523993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.422 [2024-11-28 08:20:54.524002] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.422 [2024-11-28 08:20:54.524011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.422 [2024-11-28 08:20:54.524017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.422 [2024-11-28 08:20:54.524025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.422 [2024-11-28 08:20:54.524032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.422 [2024-11-28 08:20:54.524040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.422 [2024-11-28 08:20:54.524046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.422 [2024-11-28 08:20:54.524054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.422 [2024-11-28 08:20:54.524061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.422 [2024-11-28 08:20:54.524069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.422 [2024-11-28 08:20:54.524075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.422 [2024-11-28 08:20:54.524083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.422 [2024-11-28 08:20:54.524090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... repeated nvme_io_qpair_print_command/spdk_nvme_print_completion pairs elided: READ cid:46-63 (lba:22272-24448) and WRITE cid:0-9 (lba:24576-25728), each ABORTED - SQ DELETION (00/08) ...]
00:22:12.423 [2024-11-28 08:20:54.524514] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c79d0 is same with the state(6) to be set
00:22:12.423 [2024-11-28 08:20:54.525531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:12.423 [2024-11-28 08:20:54.525546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical pairs elided: READ cid:1-63 (lba:16512-24448), each ABORTED - SQ DELETION (00/08) ...]
00:22:12.424 [2024-11-28 08:20:54.526520] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1027770 is same with the state(6) to be set
00:22:12.424 [2024-11-28 08:20:54.527493] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] resetting controller
00:22:12.424 [2024-11-28 08:20:54.527513] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller
00:22:12.425 [2024-11-28 08:20:54.527527] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] resetting controller
00:22:12.425 [2024-11-28 08:20:54.527538] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] resetting controller
00:22:12.425 [2024-11-28 08:20:54.527602] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] Unable to perform failover, already in progress.
00:22:12.425 [2024-11-28 08:20:54.527615] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] Unable to perform failover, already in progress.
00:22:12.425 [2024-11-28 08:20:54.527626] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] Unable to perform failover, already in progress.
00:22:12.425 [2024-11-28 08:20:54.527700] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] resetting controller
00:22:12.425 [2024-11-28 08:20:54.527713] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] resetting controller
00:22:12.425 [2024-11-28 08:20:54.527723] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] resetting controller
00:22:12.425 [2024-11-28 08:20:54.527978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:12.425 [2024-11-28 08:20:54.528003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd6d200 with addr=10.0.0.2, port=4420
00:22:12.425 [2024-11-28 08:20:54.528012] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd6d200 is same with the state(6) to be set
00:22:12.425 [2024-11-28 08:20:54.528262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:12.425 [2024-11-28 08:20:54.528272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd791c0 with addr=10.0.0.2, port=4420
00:22:12.425 [2024-11-28 08:20:54.528280] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd791c0 is same with the state(6) to be set
00:22:12.425 [2024-11-28 08:20:54.528430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:12.425 [2024-11-28 08:20:54.528441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd78d30 with addr=10.0.0.2, port=4420
00:22:12.425 [2024-11-28 08:20:54.528448] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd78d30 is same with the state(6) to be set
00:22:12.425 [2024-11-28 08:20:54.528648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:12.425 [2024-11-28 08:20:54.528658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11a4e70 with addr=10.0.0.2, port=4420
00:22:12.425 [2024-11-28 08:20:54.528665] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a4e70 is same with the state(6) to be set
00:22:12.425 [2024-11-28 08:20:54.529832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:12.425 [2024-11-28 08:20:54.529845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical pairs elided: READ cid:1-14 (lba:16512-18176), each ABORTED - SQ DELETION (00/08) ...]
00:22:12.425 [2024-11-28 08:20:54.530080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.425 [2024-11-28 08:20:54.530087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.425 [2024-11-28 08:20:54.530095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.425 [2024-11-28 08:20:54.530107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.425 [2024-11-28 08:20:54.530115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.425 [2024-11-28 08:20:54.530122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.425 [2024-11-28 08:20:54.530131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.425 [2024-11-28 08:20:54.530138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.425 [2024-11-28 08:20:54.530146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.425 [2024-11-28 08:20:54.530153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.425 [2024-11-28 08:20:54.530161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.425 [2024-11-28 08:20:54.530168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:22:12.425 [2024-11-28 08:20:54.530176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.425 [2024-11-28 08:20:54.530183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.425 [2024-11-28 08:20:54.530191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.425 [2024-11-28 08:20:54.530197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.425 [2024-11-28 08:20:54.530206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.425 [2024-11-28 08:20:54.530213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.425 [2024-11-28 08:20:54.530221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.425 [2024-11-28 08:20:54.530227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.425 [2024-11-28 08:20:54.530235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.425 [2024-11-28 08:20:54.530242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.425 [2024-11-28 08:20:54.530250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.425 [2024-11-28 
08:20:54.530257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.425 [2024-11-28 08:20:54.530266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.425 [2024-11-28 08:20:54.530272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.425 [2024-11-28 08:20:54.530280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.425 [2024-11-28 08:20:54.530287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.425 [2024-11-28 08:20:54.530297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.425 [2024-11-28 08:20:54.530304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.425 [2024-11-28 08:20:54.530312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.426 [2024-11-28 08:20:54.530319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.426 [2024-11-28 08:20:54.530327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.426 [2024-11-28 08:20:54.530334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.426 [2024-11-28 08:20:54.530342] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.426 [2024-11-28 08:20:54.530348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.426 [2024-11-28 08:20:54.530356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.426 [2024-11-28 08:20:54.530363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.426 [2024-11-28 08:20:54.530371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.426 [2024-11-28 08:20:54.530378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.426 [2024-11-28 08:20:54.530386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.426 [2024-11-28 08:20:54.530392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.426 [2024-11-28 08:20:54.530401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.426 [2024-11-28 08:20:54.530407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.426 [2024-11-28 08:20:54.530416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.426 [2024-11-28 08:20:54.530422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.426 [2024-11-28 08:20:54.530431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.426 [2024-11-28 08:20:54.530437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.426 [2024-11-28 08:20:54.530445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.426 [2024-11-28 08:20:54.530452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.426 [2024-11-28 08:20:54.530460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.426 [2024-11-28 08:20:54.530467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.426 [2024-11-28 08:20:54.530475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.426 [2024-11-28 08:20:54.530483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.426 [2024-11-28 08:20:54.530491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.426 [2024-11-28 08:20:54.530498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.426 [2024-11-28 08:20:54.530506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.426 
[2024-11-28 08:20:54.530513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.426 [2024-11-28 08:20:54.530521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.426 [2024-11-28 08:20:54.530527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.426 [2024-11-28 08:20:54.530535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.426 [2024-11-28 08:20:54.530542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.426 [2024-11-28 08:20:54.530550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.426 [2024-11-28 08:20:54.530557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.426 [2024-11-28 08:20:54.530565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.426 [2024-11-28 08:20:54.530571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.426 [2024-11-28 08:20:54.530580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.426 [2024-11-28 08:20:54.530587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.426 [2024-11-28 08:20:54.530595] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.426 [2024-11-28 08:20:54.530601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.426 [2024-11-28 08:20:54.530610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.426 [2024-11-28 08:20:54.530616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.426 [2024-11-28 08:20:54.530624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.426 [2024-11-28 08:20:54.530631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.426 [2024-11-28 08:20:54.530639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.426 [2024-11-28 08:20:54.530646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.426 [2024-11-28 08:20:54.530654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.426 [2024-11-28 08:20:54.530660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.426 [2024-11-28 08:20:54.530670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.426 [2024-11-28 08:20:54.530677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.426 [2024-11-28 08:20:54.530685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.426 [2024-11-28 08:20:54.530691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.426 [2024-11-28 08:20:54.530699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.426 [2024-11-28 08:20:54.530707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.426 [2024-11-28 08:20:54.530715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.426 [2024-11-28 08:20:54.530722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.426 [2024-11-28 08:20:54.530730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.426 [2024-11-28 08:20:54.530737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.426 [2024-11-28 08:20:54.530746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.426 [2024-11-28 08:20:54.530753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.426 [2024-11-28 08:20:54.530761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:22:12.426 [2024-11-28 08:20:54.530768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.426 [2024-11-28 08:20:54.530777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.426 [2024-11-28 08:20:54.530783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.426 [2024-11-28 08:20:54.530792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.426 [2024-11-28 08:20:54.530799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.426 [2024-11-28 08:20:54.530807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.426 [2024-11-28 08:20:54.530814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.426 [2024-11-28 08:20:54.530821] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1026500 is same with the state(6) to be set 00:22:12.426 [2024-11-28 08:20:54.532027] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] resetting controller 00:22:12.426 [2024-11-28 08:20:54.532043] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] resetting controller 00:22:12.426 task offset: 24576 on job bdev=Nvme5n1 fails 00:22:12.426 00:22:12.426 Latency(us) 00:22:12.426 [2024-11-28T07:20:54.695Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:12.426 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 
00:22:12.426 Job: Nvme1n1 ended in about 0.85 seconds with error 00:22:12.426 Verification LBA range: start 0x0 length 0x400 00:22:12.426 Nvme1n1 : 0.85 226.23 14.14 75.41 0.00 209933.13 22681.15 214274.23 00:22:12.426 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:12.426 Job: Nvme2n1 ended in about 0.84 seconds with error 00:22:12.426 Verification LBA range: start 0x0 length 0x400 00:22:12.426 Nvme2n1 : 0.84 229.66 14.35 76.55 0.00 202766.47 4929.45 208803.39 00:22:12.426 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:12.426 Job: Nvme3n1 ended in about 0.85 seconds with error 00:22:12.426 Verification LBA range: start 0x0 length 0x400 00:22:12.426 Nvme3n1 : 0.85 225.70 14.11 75.23 0.00 202440.79 14132.98 220656.86 00:22:12.427 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:12.427 Job: Nvme4n1 ended in about 0.85 seconds with error 00:22:12.427 Verification LBA range: start 0x0 length 0x400 00:22:12.427 Nvme4n1 : 0.85 225.17 14.07 75.06 0.00 198959.86 18122.13 216097.84 00:22:12.427 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:12.427 Job: Nvme5n1 ended in about 0.83 seconds with error 00:22:12.427 Verification LBA range: start 0x0 length 0x400 00:22:12.427 Nvme5n1 : 0.83 231.36 14.46 77.12 0.00 189228.19 3162.82 225215.89 00:22:12.427 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:12.427 Job: Nvme6n1 ended in about 0.85 seconds with error 00:22:12.427 Verification LBA range: start 0x0 length 0x400 00:22:12.427 Nvme6n1 : 0.85 149.76 9.36 74.88 0.00 255192.97 18919.96 238892.97 00:22:12.427 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:12.427 Job: Nvme7n1 ended in about 0.83 seconds with error 00:22:12.427 Verification LBA range: start 0x0 length 0x400 00:22:12.427 Nvme7n1 : 0.83 235.07 14.69 71.96 0.00 182100.12 2450.48 222480.47 00:22:12.427 Job: Nvme8n1 (Core Mask 
0x1, workload: verify, depth: 64, IO size: 65536) 00:22:12.427 Job: Nvme8n1 ended in about 0.86 seconds with error 00:22:12.427 Verification LBA range: start 0x0 length 0x400 00:22:12.427 Nvme8n1 : 0.86 161.09 10.07 74.71 0.00 233138.18 14531.90 222480.47 00:22:12.427 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:12.427 Job: Nvme9n1 ended in about 0.86 seconds with error 00:22:12.427 Verification LBA range: start 0x0 length 0x400 00:22:12.427 Nvme9n1 : 0.86 148.32 9.27 74.16 0.00 242222.60 33280.89 223392.28 00:22:12.427 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:12.427 Job: Nvme10n1 ended in about 0.86 seconds with error 00:22:12.427 Verification LBA range: start 0x0 length 0x400 00:22:12.427 Nvme10n1 : 0.86 149.07 9.32 74.53 0.00 235624.18 17894.18 240716.58 00:22:12.427 [2024-11-28T07:20:54.696Z] =================================================================================================================== 00:22:12.427 [2024-11-28T07:20:54.696Z] Total : 1981.43 123.84 749.62 0.00 212319.47 2450.48 240716.58 00:22:12.427 [2024-11-28 08:20:54.562643] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:22:12.427 [2024-11-28 08:20:54.562691] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] resetting controller 00:22:12.427 [2024-11-28 08:20:54.563006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:12.427 [2024-11-28 08:20:54.563025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11a4a80 with addr=10.0.0.2, port=4420 00:22:12.427 [2024-11-28 08:20:54.563036] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a4a80 is same with the state(6) to be set 00:22:12.427 [2024-11-28 08:20:54.563191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:12.427 [2024-11-28 08:20:54.563202] 
nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11e6760 with addr=10.0.0.2, port=4420 00:22:12.427 [2024-11-28 08:20:54.563210] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e6760 is same with the state(6) to be set 00:22:12.427 [2024-11-28 08:20:54.563428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:12.427 [2024-11-28 08:20:54.563445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11d6cc0 with addr=10.0.0.2, port=4420 00:22:12.427 [2024-11-28 08:20:54.563453] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11d6cc0 is same with the state(6) to be set 00:22:12.427 [2024-11-28 08:20:54.563467] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd6d200 (9): Bad file descriptor 00:22:12.427 [2024-11-28 08:20:54.563479] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd791c0 (9): Bad file descriptor 00:22:12.427 [2024-11-28 08:20:54.563488] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd78d30 (9): Bad file descriptor 00:22:12.427 [2024-11-28 08:20:54.563497] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11a4e70 (9): Bad file descriptor 00:22:12.427 [2024-11-28 08:20:54.563845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:12.427 [2024-11-28 08:20:54.563861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11a3e50 with addr=10.0.0.2, port=4420 00:22:12.427 [2024-11-28 08:20:54.563869] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a3e50 is same with the state(6) to be set 00:22:12.427 [2024-11-28 08:20:54.564103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:12.427 [2024-11-28 08:20:54.564115] 
nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc8d610 with addr=10.0.0.2, port=4420 00:22:12.427 [2024-11-28 08:20:54.564123] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc8d610 is same with the state(6) to be set 00:22:12.427 [2024-11-28 08:20:54.564274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:12.427 [2024-11-28 08:20:54.564285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11d6ef0 with addr=10.0.0.2, port=4420 00:22:12.427 [2024-11-28 08:20:54.564293] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11d6ef0 is same with the state(6) to be set 00:22:12.427 [2024-11-28 08:20:54.564302] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11a4a80 (9): Bad file descriptor 00:22:12.427 [2024-11-28 08:20:54.564311] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11e6760 (9): Bad file descriptor 00:22:12.427 [2024-11-28 08:20:54.564320] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11d6cc0 (9): Bad file descriptor 00:22:12.427 [2024-11-28 08:20:54.564328] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Ctrlr is in error state 00:22:12.427 [2024-11-28 08:20:54.564334] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] controller reinitialization failed 00:22:12.427 [2024-11-28 08:20:54.564342] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state. 00:22:12.427 [2024-11-28 08:20:54.564352] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Resetting controller failed. 
00:22:12.427 [2024-11-28 08:20:54.564360] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:22:12.427 [2024-11-28 08:20:54.564366] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:22:12.427 [2024-11-28 08:20:54.564372] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:22:12.427 [2024-11-28 08:20:54.564378] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 00:22:12.427 [2024-11-28 08:20:54.564385] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Ctrlr is in error state 00:22:12.427 [2024-11-28 08:20:54.564391] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] controller reinitialization failed 00:22:12.427 [2024-11-28 08:20:54.564400] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] in failed state. 00:22:12.427 [2024-11-28 08:20:54.564407] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Resetting controller failed. 00:22:12.427 [2024-11-28 08:20:54.564414] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Ctrlr is in error state 00:22:12.427 [2024-11-28 08:20:54.564419] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] controller reinitialization failed 00:22:12.427 [2024-11-28 08:20:54.564426] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] in failed state. 00:22:12.427 [2024-11-28 08:20:54.564432] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Resetting controller failed. 
00:22:12.427 [2024-11-28 08:20:54.564477] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] Unable to perform failover, already in progress. 00:22:12.427 [2024-11-28 08:20:54.564488] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] Unable to perform failover, already in progress. 00:22:12.427 [2024-11-28 08:20:54.564499] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] Unable to perform failover, already in progress. 00:22:12.427 [2024-11-28 08:20:54.564836] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11a3e50 (9): Bad file descriptor 00:22:12.427 [2024-11-28 08:20:54.564850] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc8d610 (9): Bad file descriptor 00:22:12.427 [2024-11-28 08:20:54.564859] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11d6ef0 (9): Bad file descriptor 00:22:12.427 [2024-11-28 08:20:54.564867] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Ctrlr is in error state 00:22:12.427 [2024-11-28 08:20:54.564873] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] controller reinitialization failed 00:22:12.427 [2024-11-28 08:20:54.564879] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] in failed state. 00:22:12.427 [2024-11-28 08:20:54.564886] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Resetting controller failed. 
00:22:12.427 [2024-11-28 08:20:54.564893] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Ctrlr is in error state 00:22:12.427 [2024-11-28 08:20:54.564898] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] controller reinitialization failed 00:22:12.427 [2024-11-28 08:20:54.564905] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] in failed state. 00:22:12.427 [2024-11-28 08:20:54.564911] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Resetting controller failed. 00:22:12.427 [2024-11-28 08:20:54.564917] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Ctrlr is in error state 00:22:12.427 [2024-11-28 08:20:54.564923] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] controller reinitialization failed 00:22:12.427 [2024-11-28 08:20:54.564929] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] in failed state. 00:22:12.427 [2024-11-28 08:20:54.564935] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Resetting controller failed. 
00:22:12.427 [2024-11-28 08:20:54.565167] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] resetting controller 00:22:12.427 [2024-11-28 08:20:54.565181] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] resetting controller 00:22:12.427 [2024-11-28 08:20:54.565189] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:22:12.427 [2024-11-28 08:20:54.565197] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] resetting controller 00:22:12.427 [2024-11-28 08:20:54.565224] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Ctrlr is in error state 00:22:12.427 [2024-11-28 08:20:54.565234] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] controller reinitialization failed 00:22:12.427 [2024-11-28 08:20:54.565241] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] in failed state. 00:22:12.427 [2024-11-28 08:20:54.565247] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Resetting controller failed. 00:22:12.427 [2024-11-28 08:20:54.565254] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Ctrlr is in error state 00:22:12.428 [2024-11-28 08:20:54.565260] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] controller reinitialization failed 00:22:12.428 [2024-11-28 08:20:54.565266] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] in failed state. 00:22:12.428 [2024-11-28 08:20:54.565272] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Resetting controller failed. 
00:22:12.428 [2024-11-28 08:20:54.565278] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Ctrlr is in error state 00:22:12.428 [2024-11-28 08:20:54.565284] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] controller reinitialization failed 00:22:12.428 [2024-11-28 08:20:54.565290] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] in failed state. 00:22:12.428 [2024-11-28 08:20:54.565296] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Resetting controller failed. 00:22:12.428 [2024-11-28 08:20:54.565553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:12.428 [2024-11-28 08:20:54.565568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11a4e70 with addr=10.0.0.2, port=4420 00:22:12.428 [2024-11-28 08:20:54.565577] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a4e70 is same with the state(6) to be set 00:22:12.428 [2024-11-28 08:20:54.565727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:12.428 [2024-11-28 08:20:54.565738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd78d30 with addr=10.0.0.2, port=4420 00:22:12.428 [2024-11-28 08:20:54.565745] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd78d30 is same with the state(6) to be set 00:22:12.428 [2024-11-28 08:20:54.565883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:12.428 [2024-11-28 08:20:54.565893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd791c0 with addr=10.0.0.2, port=4420 00:22:12.428 [2024-11-28 08:20:54.565900] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd791c0 is same with the state(6) to be set 00:22:12.428 [2024-11-28 
08:20:54.566130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:12.428 [2024-11-28 08:20:54.566143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd6d200 with addr=10.0.0.2, port=4420 00:22:12.428 [2024-11-28 08:20:54.566150] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd6d200 is same with the state(6) to be set 00:22:12.428 [2024-11-28 08:20:54.566182] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11a4e70 (9): Bad file descriptor 00:22:12.428 [2024-11-28 08:20:54.566193] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd78d30 (9): Bad file descriptor 00:22:12.428 [2024-11-28 08:20:54.566202] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd791c0 (9): Bad file descriptor 00:22:12.428 [2024-11-28 08:20:54.566211] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd6d200 (9): Bad file descriptor 00:22:12.428 [2024-11-28 08:20:54.566239] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Ctrlr is in error state 00:22:12.428 [2024-11-28 08:20:54.566247] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] controller reinitialization failed 00:22:12.428 [2024-11-28 08:20:54.566256] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] in failed state. 00:22:12.428 [2024-11-28 08:20:54.566264] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Resetting controller failed. 
00:22:12.428 [2024-11-28 08:20:54.566271] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Ctrlr is in error state 00:22:12.428 [2024-11-28 08:20:54.566277] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] controller reinitialization failed 00:22:12.428 [2024-11-28 08:20:54.566283] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] in failed state. 00:22:12.428 [2024-11-28 08:20:54.566289] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Resetting controller failed. 00:22:12.428 [2024-11-28 08:20:54.566295] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:22:12.428 [2024-11-28 08:20:54.566301] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:22:12.428 [2024-11-28 08:20:54.566308] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:22:12.428 [2024-11-28 08:20:54.566313] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 00:22:12.428 [2024-11-28 08:20:54.566319] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Ctrlr is in error state 00:22:12.428 [2024-11-28 08:20:54.566325] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] controller reinitialization failed 00:22:12.428 [2024-11-28 08:20:54.566332] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state. 00:22:12.428 [2024-11-28 08:20:54.566338] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Resetting controller failed. 
00:22:12.688 08:20:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@137 -- # sleep 1 00:22:13.624 08:20:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@138 -- # NOT wait 1420753 00:22:13.624 08:20:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@652 -- # local es=0 00:22:13.624 08:20:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@654 -- # valid_exec_arg wait 1420753 00:22:13.624 08:20:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@640 -- # local arg=wait 00:22:13.625 08:20:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:13.625 08:20:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # type -t wait 00:22:13.625 08:20:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:13.625 08:20:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@655 -- # wait 1420753 00:22:13.625 08:20:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@655 -- # es=255 00:22:13.625 08:20:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:13.625 08:20:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@664 -- # es=127 00:22:13.884 08:20:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@665 -- # case "$es" in 00:22:13.884 08:20:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@672 -- # es=1 00:22:13.884 08:20:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 
00:22:13.884 08:20:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@140 -- # stoptarget 00:22:13.884 08:20:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:22:13.884 08:20:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:22:13.884 08:20:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:13.884 08:20:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@46 -- # nvmftestfini 00:22:13.884 08:20:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:13.884 08:20:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # sync 00:22:13.884 08:20:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:13.884 08:20:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set +e 00:22:13.884 08:20:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:13.884 08:20:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:13.884 rmmod nvme_tcp 00:22:13.884 rmmod nvme_fabrics 00:22:13.884 rmmod nvme_keyring 00:22:13.884 08:20:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:13.884 08:20:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@128 -- # set -e 00:22:13.884 08:20:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@129 -- # return 0 00:22:13.884 08:20:55 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@517 -- # '[' -n 1420693 ']' 00:22:13.884 08:20:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@518 -- # killprocess 1420693 00:22:13.884 08:20:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # '[' -z 1420693 ']' 00:22:13.884 08:20:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # kill -0 1420693 00:22:13.884 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (1420693) - No such process 00:22:13.884 08:20:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@981 -- # echo 'Process with pid 1420693 is not found' 00:22:13.884 Process with pid 1420693 is not found 00:22:13.884 08:20:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:13.884 08:20:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:13.884 08:20:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:13.884 08:20:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # iptr 00:22:13.884 08:20:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # iptables-save 00:22:13.884 08:20:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:13.884 08:20:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # iptables-restore 00:22:13.884 08:20:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:13.884 08:20:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # 
remove_spdk_ns 00:22:13.884 08:20:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:13.884 08:20:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:13.884 08:20:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:15.790 08:20:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:15.790 00:22:15.790 real 0m7.106s 00:22:15.790 user 0m16.221s 00:22:15.790 sys 0m1.303s 00:22:15.790 08:20:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:15.790 08:20:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:15.790 ************************************ 00:22:15.790 END TEST nvmf_shutdown_tc3 00:22:15.790 ************************************ 00:22:16.050 08:20:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ e810 == \e\8\1\0 ]] 00:22:16.050 08:20:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ tcp == \r\d\m\a ]] 00:22:16.050 08:20:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@167 -- # run_test nvmf_shutdown_tc4 nvmf_shutdown_tc4 00:22:16.050 08:20:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:22:16.050 08:20:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:16.050 08:20:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:22:16.050 ************************************ 00:22:16.050 START TEST nvmf_shutdown_tc4 00:22:16.050 ************************************ 00:22:16.050 08:20:58 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc4 00:22:16.050 08:20:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@145 -- # starttarget 00:22:16.050 08:20:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@16 -- # nvmftestinit 00:22:16.050 08:20:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:16.050 08:20:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:16.050 08:20:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:16.050 08:20:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:16.051 08:20:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:16.051 08:20:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:16.051 08:20:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:16.051 08:20:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:16.051 08:20:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:16.051 08:20:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:16.051 08:20:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@309 -- # xtrace_disable 00:22:16.051 08:20:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:22:16.051 08:20:58 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:16.051 08:20:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # pci_devs=() 00:22:16.051 08:20:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:16.051 08:20:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:16.051 08:20:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:16.051 08:20:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:16.051 08:20:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:16.051 08:20:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # net_devs=() 00:22:16.051 08:20:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:16.051 08:20:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # e810=() 00:22:16.051 08:20:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # local -ga e810 00:22:16.051 08:20:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # x722=() 00:22:16.051 08:20:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # local -ga x722 00:22:16.051 08:20:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # mlx=() 00:22:16.051 08:20:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # local -ga mlx 00:22:16.051 08:20:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 
00:22:16.051 08:20:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:16.051 08:20:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:16.051 08:20:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:16.051 08:20:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:16.051 08:20:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:16.051 08:20:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:16.051 08:20:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:16.051 08:20:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:16.051 08:20:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:16.051 08:20:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:16.051 08:20:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:16.051 08:20:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:16.051 08:20:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:16.051 08:20:58 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:16.051 08:20:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:16.051 08:20:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:16.051 08:20:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:16.051 08:20:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:16.051 08:20:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:22:16.051 Found 0000:86:00.0 (0x8086 - 0x159b) 00:22:16.051 08:20:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:16.051 08:20:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:16.051 08:20:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:16.051 08:20:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:16.051 08:20:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:16.051 08:20:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:16.051 08:20:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:22:16.051 Found 0000:86:00.1 (0x8086 - 0x159b) 00:22:16.051 08:20:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:16.051 08:20:58 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:16.051 08:20:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:16.051 08:20:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:16.051 08:20:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:16.051 08:20:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:16.051 08:20:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:16.051 08:20:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:16.051 08:20:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:16.051 08:20:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:16.051 08:20:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:16.051 08:20:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:16.051 08:20:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:16.051 08:20:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:16.051 08:20:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:16.051 08:20:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 
00:22:16.051 Found net devices under 0000:86:00.0: cvl_0_0 00:22:16.051 08:20:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:16.051 08:20:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:16.051 08:20:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:16.051 08:20:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:16.051 08:20:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:16.051 08:20:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:16.051 08:20:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:16.051 08:20:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:16.051 08:20:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:22:16.051 Found net devices under 0000:86:00.1: cvl_0_1 00:22:16.051 08:20:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:16.051 08:20:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:16.051 08:20:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # is_hw=yes 00:22:16.051 08:20:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:16.051 08:20:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@445 -- # [[ tcp == 
tcp ]] 00:22:16.051 08:20:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:16.051 08:20:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:16.051 08:20:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:16.051 08:20:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:16.051 08:20:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:16.051 08:20:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:16.051 08:20:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:16.051 08:20:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:16.051 08:20:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:16.051 08:20:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:16.051 08:20:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:16.051 08:20:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:16.051 08:20:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:16.051 08:20:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:16.051 08:20:58 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:16.052 08:20:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:16.052 08:20:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:16.052 08:20:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:16.052 08:20:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:16.052 08:20:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:16.311 08:20:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:16.311 08:20:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:16.311 08:20:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:16.311 08:20:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:16.311 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:22:16.311 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.399 ms 00:22:16.311 00:22:16.311 --- 10.0.0.2 ping statistics --- 00:22:16.311 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:16.311 rtt min/avg/max/mdev = 0.399/0.399/0.399/0.000 ms 00:22:16.311 08:20:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:16.311 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:16.311 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.247 ms 00:22:16.311 00:22:16.311 --- 10.0.0.1 ping statistics --- 00:22:16.311 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:16.311 rtt min/avg/max/mdev = 0.247/0.247/0.247/0.000 ms 00:22:16.311 08:20:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:16.311 08:20:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@450 -- # return 0 00:22:16.311 08:20:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:16.311 08:20:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:16.311 08:20:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:16.311 08:20:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:16.311 08:20:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:16.311 08:20:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:16.311 08:20:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:16.311 08:20:58 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:22:16.311 08:20:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:16.311 08:20:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:16.311 08:20:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:22:16.311 08:20:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@509 -- # nvmfpid=1422010 00:22:16.311 08:20:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@510 -- # waitforlisten 1422010 00:22:16.311 08:20:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:22:16.311 08:20:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@835 -- # '[' -z 1422010 ']' 00:22:16.311 08:20:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:16.311 08:20:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:16.311 08:20:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:16.312 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:22:16.312 08:20:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:16.312 08:20:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:22:16.312 [2024-11-28 08:20:58.508704] Starting SPDK v25.01-pre git sha1 27aaaa748 / DPDK 24.03.0 initialization... 00:22:16.312 [2024-11-28 08:20:58.508756] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:16.312 [2024-11-28 08:20:58.575080] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:16.571 [2024-11-28 08:20:58.619114] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:16.571 [2024-11-28 08:20:58.619152] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:16.571 [2024-11-28 08:20:58.619159] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:16.571 [2024-11-28 08:20:58.619167] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:16.571 [2024-11-28 08:20:58.619173] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:16.571 [2024-11-28 08:20:58.620610] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:16.571 [2024-11-28 08:20:58.620696] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:16.571 [2024-11-28 08:20:58.620800] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:16.571 [2024-11-28 08:20:58.620800] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:22:16.571 08:20:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:16.571 08:20:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@868 -- # return 0 00:22:16.571 08:20:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:16.571 08:20:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:16.571 08:20:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:22:16.571 08:20:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:16.571 08:20:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:16.571 08:20:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:16.571 08:20:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:22:16.571 [2024-11-28 08:20:58.767443] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:16.571 08:20:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:16.571 08:20:58 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:22:16.571 08:20:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:22:16.571 08:20:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:16.571 08:20:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:22:16.571 08:20:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:16.571 08:20:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:16.571 08:20:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:22:16.571 08:20:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:16.571 08:20:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:22:16.571 08:20:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:16.571 08:20:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:22:16.571 08:20:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:16.571 08:20:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:22:16.571 08:20:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:16.571 08:20:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 
00:22:16.571 08:20:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:16.571 08:20:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:22:16.571 08:20:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:16.571 08:20:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:22:16.571 08:20:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:16.571 08:20:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:22:16.571 08:20:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:16.571 08:20:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:22:16.571 08:20:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:16.571 08:20:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:22:16.571 08:20:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@36 -- # rpc_cmd 00:22:16.571 08:20:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:16.571 08:20:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:22:16.829 Malloc1 00:22:16.829 [2024-11-28 08:20:58.880425] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:16.829 Malloc2 00:22:16.829 Malloc3 00:22:16.829 Malloc4 00:22:16.829 Malloc5 00:22:16.829 Malloc6 00:22:17.088 Malloc7 00:22:17.088 Malloc8 00:22:17.088 Malloc9 
00:22:17.088 Malloc10 00:22:17.088 08:20:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:17.088 08:20:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:22:17.088 08:20:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:17.088 08:20:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:22:17.088 08:20:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@149 -- # perfpid=1422087 00:22:17.088 08:20:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@148 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 45056 -O 4096 -w randwrite -t 20 -r 'trtype:tcp adrfam:IPV4 traddr:10.0.0.2 trsvcid:4420' -P 4 00:22:17.088 08:20:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@150 -- # sleep 5 00:22:17.346 [2024-11-28 08:20:59.366022] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
00:22:22.617 08:21:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@152 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:22.617 08:21:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@155 -- # killprocess 1422010 00:22:22.617 08:21:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # '[' -z 1422010 ']' 00:22:22.617 08:21:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # kill -0 1422010 00:22:22.617 08:21:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@959 -- # uname 00:22:22.617 08:21:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:22.617 08:21:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1422010 00:22:22.617 08:21:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:22:22.617 08:21:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:22:22.617 08:21:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1422010' 00:22:22.617 killing process with pid 1422010 00:22:22.617 08:21:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@973 -- # kill 1422010 00:22:22.617 08:21:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@978 -- # wait 1422010 00:22:22.617 [2024-11-28 08:21:04.377154] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x190ed80 is same with the state(6) to be set 00:22:22.617 [2024-11-28 
08:21:04.377224] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x190ed80 is same with the state(6) to be set 00:22:22.617 [2024-11-28 08:21:04.377235] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x190ed80 is same with the state(6) to be set 00:22:22.617 [2024-11-28 08:21:04.377244] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x190ed80 is same with the state(6) to be set 00:22:22.617 [2024-11-28 08:21:04.377252] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x190ed80 is same with the state(6) to be set 00:22:22.617 [2024-11-28 08:21:04.377266] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x190ed80 is same with the state(6) to be set 00:22:22.617 [2024-11-28 08:21:04.377275] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x190ed80 is same with the state(6) to be set 00:22:22.617 [2024-11-28 08:21:04.377282] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x190ed80 is same with the state(6) to be set 00:22:22.617 [2024-11-28 08:21:04.378312] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x190f250 is same with the state(6) to be set 00:22:22.617 [2024-11-28 08:21:04.378345] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x190f250 is same with the state(6) to be set 00:22:22.617 [2024-11-28 08:21:04.378353] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x190f250 is same with the state(6) to be set 00:22:22.617 [2024-11-28 08:21:04.378360] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x190f250 is same with the state(6) to be set 00:22:22.617 [2024-11-28 08:21:04.378367] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x190f250 is same with the state(6) to be set 00:22:22.617 [2024-11-28 08:21:04.378373] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x190f250 is same with the state(6) to be set 00:22:22.617 [2024-11-28 08:21:04.378380] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x190f250 is same with the state(6) to be set 00:22:22.617 [2024-11-28 08:21:04.378387] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x190f250 is same with the state(6) to be set 00:22:22.617 [2024-11-28 08:21:04.378393] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x190f250 is same with the state(6) to be set 00:22:22.617 [2024-11-28 08:21:04.378399] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x190f250 is same with the state(6) to be set 00:22:22.617 [2024-11-28 08:21:04.378405] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x190f250 is same with the state(6) to be set 00:22:22.617 [2024-11-28 08:21:04.378411] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x190f250 is same with the state(6) to be set 00:22:22.617 [2024-11-28 08:21:04.378417] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x190f250 is same with the state(6) to be set 00:22:22.617 [2024-11-28 08:21:04.379240] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x190f720 is same with the state(6) to be set 00:22:22.617 [2024-11-28 08:21:04.379269] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x190f720 is same with the state(6) to be set 00:22:22.617 [2024-11-28 08:21:04.379277] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x190f720 is same with the state(6) to be set 00:22:22.617 [2024-11-28 08:21:04.379285] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x190f720 is same with the state(6) to be set 00:22:22.617 [2024-11-28 08:21:04.379291] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x190f720 is same with the state(6) to be set 00:22:22.617 [2024-11-28 08:21:04.379298] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x190f720 is same with the state(6) to be set 00:22:22.617 Write completed with error (sct=0, sc=8) 00:22:22.617 Write completed with error (sct=0, sc=8) 00:22:22.617 Write completed with error (sct=0, sc=8) 00:22:22.617 Write completed with error (sct=0, sc=8) 00:22:22.617 starting I/O failed: -6 00:22:22.617 Write completed with error (sct=0, sc=8) 00:22:22.617 Write completed with error (sct=0, sc=8) 00:22:22.617 Write completed with error (sct=0, sc=8) 00:22:22.617 Write completed with error (sct=0, sc=8) 00:22:22.617 starting I/O failed: -6 00:22:22.617 Write completed with error (sct=0, sc=8) 00:22:22.617 Write completed with error (sct=0, sc=8) 00:22:22.617 Write completed with error (sct=0, sc=8) 00:22:22.617 Write completed with error (sct=0, sc=8) 00:22:22.617 starting I/O failed: -6 00:22:22.617 Write completed with error (sct=0, sc=8) 00:22:22.617 Write completed with error (sct=0, sc=8) 00:22:22.617 Write completed with error (sct=0, sc=8) 00:22:22.617 Write completed with error (sct=0, sc=8) 00:22:22.617 starting I/O failed: -6 00:22:22.617 Write completed with error (sct=0, sc=8) 00:22:22.617 Write completed with error (sct=0, sc=8) 00:22:22.617 Write completed with error (sct=0, sc=8) 00:22:22.617 Write completed with error (sct=0, sc=8) 00:22:22.617 starting I/O failed: -6 00:22:22.617 Write completed with error (sct=0, sc=8) 00:22:22.617 Write completed with error (sct=0, sc=8) 00:22:22.617 Write completed with error (sct=0, sc=8) 00:22:22.617 Write completed with error (sct=0, sc=8) 00:22:22.617 starting I/O failed: -6 00:22:22.617 Write completed with error (sct=0, sc=8) 00:22:22.617 Write completed with error (sct=0, sc=8) 00:22:22.617 Write completed with error (sct=0, sc=8) 00:22:22.617 Write completed with 
error (sct=0, sc=8) 00:22:22.617 starting I/O failed: -6 00:22:22.617 Write completed with error (sct=0, sc=8) 00:22:22.617 Write completed with error (sct=0, sc=8) 00:22:22.617 Write completed with error (sct=0, sc=8) 00:22:22.617 Write completed with error (sct=0, sc=8) 00:22:22.617 starting I/O failed: -6 00:22:22.617 Write completed with error (sct=0, sc=8) 00:22:22.617 Write completed with error (sct=0, sc=8) 00:22:22.617 Write completed with error (sct=0, sc=8) 00:22:22.617 Write completed with error (sct=0, sc=8) 00:22:22.617 starting I/O failed: -6 00:22:22.617 Write completed with error (sct=0, sc=8) 00:22:22.617 Write completed with error (sct=0, sc=8) 00:22:22.617 Write completed with error (sct=0, sc=8) 00:22:22.617 [2024-11-28 08:21:04.384982] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:22:22.617 Write completed with error (sct=0, sc=8) 00:22:22.617 starting I/O failed: -6 00:22:22.617 Write completed with error (sct=0, sc=8) 00:22:22.617 Write completed with error (sct=0, sc=8) 00:22:22.617 Write completed with error (sct=0, sc=8) 00:22:22.617 starting I/O failed: -6 00:22:22.617 Write completed with error (sct=0, sc=8) 00:22:22.617 starting I/O failed: -6 00:22:22.617 Write completed with error (sct=0, sc=8) 00:22:22.617 Write completed with error (sct=0, sc=8) 00:22:22.617 Write completed with error (sct=0, sc=8) 00:22:22.617 starting I/O failed: -6 00:22:22.617 Write completed with error (sct=0, sc=8) 00:22:22.617 starting I/O failed: -6 00:22:22.618 Write completed with error (sct=0, sc=8) 00:22:22.618 Write completed with error (sct=0, sc=8) 00:22:22.618 Write completed with error (sct=0, sc=8) 00:22:22.618 starting I/O failed: -6 00:22:22.618 Write completed with error (sct=0, sc=8) 00:22:22.618 starting I/O failed: -6 00:22:22.618 Write completed with error (sct=0, sc=8) 00:22:22.618 Write completed with error (sct=0, sc=8) 
00:22:22.618 Write completed with error (sct=0, sc=8) 00:22:22.618 starting I/O failed: -6 00:22:22.618 Write completed with error (sct=0, sc=8) 00:22:22.618 starting I/O failed: -6 00:22:22.618 Write completed with error (sct=0, sc=8) 00:22:22.618 Write completed with error (sct=0, sc=8) 00:22:22.618 Write completed with error (sct=0, sc=8) 00:22:22.618 starting I/O failed: -6 00:22:22.618 Write completed with error (sct=0, sc=8) 00:22:22.618 starting I/O failed: -6 00:22:22.618 Write completed with error (sct=0, sc=8) 00:22:22.618 Write completed with error (sct=0, sc=8) 00:22:22.618 Write completed with error (sct=0, sc=8) 00:22:22.618 starting I/O failed: -6 00:22:22.618 Write completed with error (sct=0, sc=8) 00:22:22.618 starting I/O failed: -6 00:22:22.618 Write completed with error (sct=0, sc=8) 00:22:22.618 Write completed with error (sct=0, sc=8) 00:22:22.618 Write completed with error (sct=0, sc=8) 00:22:22.618 starting I/O failed: -6 00:22:22.618 Write completed with error (sct=0, sc=8) 00:22:22.618 starting I/O failed: -6 00:22:22.618 Write completed with error (sct=0, sc=8) 00:22:22.618 Write completed with error (sct=0, sc=8) 00:22:22.618 Write completed with error (sct=0, sc=8) 00:22:22.618 starting I/O failed: -6 00:22:22.618 Write completed with error (sct=0, sc=8) 00:22:22.618 starting I/O failed: -6 00:22:22.618 Write completed with error (sct=0, sc=8) 00:22:22.618 Write completed with error (sct=0, sc=8) 00:22:22.618 Write completed with error (sct=0, sc=8) 00:22:22.618 starting I/O failed: -6 00:22:22.618 Write completed with error (sct=0, sc=8) 00:22:22.618 starting I/O failed: -6 00:22:22.618 Write completed with error (sct=0, sc=8) 00:22:22.618 Write completed with error (sct=0, sc=8) 00:22:22.618 Write completed with error (sct=0, sc=8) 00:22:22.618 starting I/O failed: -6 00:22:22.618 Write completed with error (sct=0, sc=8) 00:22:22.618 starting I/O failed: -6 00:22:22.618 Write completed with error (sct=0, sc=8) 00:22:22.618 Write 
completed with error (sct=0, sc=8) 00:22:22.618 Write completed with error (sct=0, sc=8) 00:22:22.618 starting I/O failed: -6 00:22:22.618 Write completed with error (sct=0, sc=8) 00:22:22.618 starting I/O failed: -6 00:22:22.618 Write completed with error (sct=0, sc=8) 00:22:22.618 [2024-11-28 08:21:04.385922] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:22:22.618 Write completed with error (sct=0, sc=8) 00:22:22.618 starting I/O failed: -6 00:22:22.618 Write completed with error (sct=0, sc=8) 00:22:22.618 starting I/O failed: -6 00:22:22.618 Write completed with error (sct=0, sc=8) 00:22:22.618 Write completed with error (sct=0, sc=8) 00:22:22.618 starting I/O failed: -6 00:22:22.618 Write completed with error (sct=0, sc=8) 00:22:22.618 starting I/O failed: -6 00:22:22.618 Write completed with error (sct=0, sc=8) 00:22:22.618 starting I/O failed: -6 00:22:22.618 Write completed with error (sct=0, sc=8) 00:22:22.618 Write completed with error (sct=0, sc=8) 00:22:22.618 starting I/O failed: -6 00:22:22.618 Write completed with error (sct=0, sc=8) 00:22:22.618 starting I/O failed: -6 00:22:22.618 Write completed with error (sct=0, sc=8) 00:22:22.618 starting I/O failed: -6 00:22:22.618 Write completed with error (sct=0, sc=8) 00:22:22.618 Write completed with error (sct=0, sc=8) 00:22:22.618 starting I/O failed: -6 00:22:22.618 Write completed with error (sct=0, sc=8) 00:22:22.618 starting I/O failed: -6 00:22:22.618 Write completed with error (sct=0, sc=8) 00:22:22.618 starting I/O failed: -6 00:22:22.618 Write completed with error (sct=0, sc=8) 00:22:22.618 Write completed with error (sct=0, sc=8) 00:22:22.618 starting I/O failed: -6 00:22:22.618 Write completed with error (sct=0, sc=8) 00:22:22.618 starting I/O failed: -6 00:22:22.618 Write completed with error (sct=0, sc=8) 00:22:22.618 starting I/O failed: -6 00:22:22.618 Write completed with 
error (sct=0, sc=8) 00:22:22.618 Write completed with error (sct=0, sc=8) 00:22:22.618 starting I/O failed: -6 00:22:22.618 Write completed with error (sct=0, sc=8) 00:22:22.618 starting I/O failed: -6 00:22:22.618 Write completed with error (sct=0, sc=8) 00:22:22.618 starting I/O failed: -6 00:22:22.618 Write completed with error (sct=0, sc=8) 00:22:22.618 Write completed with error (sct=0, sc=8) 00:22:22.618 starting I/O failed: -6 00:22:22.618 Write completed with error (sct=0, sc=8) 00:22:22.618 starting I/O failed: -6 00:22:22.618 Write completed with error (sct=0, sc=8) 00:22:22.618 starting I/O failed: -6 00:22:22.618 Write completed with error (sct=0, sc=8) 00:22:22.618 Write completed with error (sct=0, sc=8) 00:22:22.618 starting I/O failed: -6 00:22:22.618 Write completed with error (sct=0, sc=8) 00:22:22.618 starting I/O failed: -6 00:22:22.618 Write completed with error (sct=0, sc=8) 00:22:22.618 starting I/O failed: -6 00:22:22.618 Write completed with error (sct=0, sc=8) 00:22:22.618 Write completed with error (sct=0, sc=8) 00:22:22.618 starting I/O failed: -6 00:22:22.618 Write completed with error (sct=0, sc=8) 00:22:22.618 starting I/O failed: -6 00:22:22.618 Write completed with error (sct=0, sc=8) 00:22:22.618 starting I/O failed: -6 00:22:22.618 Write completed with error (sct=0, sc=8) 00:22:22.618 Write completed with error (sct=0, sc=8) 00:22:22.618 starting I/O failed: -6 00:22:22.618 Write completed with error (sct=0, sc=8) 00:22:22.618 starting I/O failed: -6 00:22:22.618 Write completed with error (sct=0, sc=8) 00:22:22.618 starting I/O failed: -6 00:22:22.618 Write completed with error (sct=0, sc=8) 00:22:22.618 Write completed with error (sct=0, sc=8) 00:22:22.618 starting I/O failed: -6 00:22:22.618 Write completed with error (sct=0, sc=8) 00:22:22.618 starting I/O failed: -6 00:22:22.618 Write completed with error (sct=0, sc=8) 00:22:22.618 starting I/O failed: -6 00:22:22.618 Write completed with error (sct=0, sc=8) 00:22:22.618 
00:22:22.618 Write completed with error (sct=0, sc=8)
00:22:22.618 starting I/O failed: -6
[... "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" pair repeated ...]
00:22:22.618 [2024-11-28 08:21:04.386956] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 2
[... repeated write failures ...]
00:22:22.619 [2024-11-28 08:21:04.388595] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:22:22.619 NVMe io qpair process completion error
[... repeated write failures ...]
00:22:22.619 [2024-11-28 08:21:04.389574] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 1
[... repeated write failures ...]
00:22:22.619 [2024-11-28 08:21:04.390090] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1afd5f0 is same with the state(6) to be set
00:22:22.619 [2024-11-28 08:21:04.390117] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1afd5f0 is same with the state(6) to be set
00:22:22.619 [2024-11-28 08:21:04.390129] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1afd5f0 is same with the state(6) to be set
00:22:22.619 [2024-11-28 08:21:04.390141] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1afd5f0 is same with the state(6) to be set
00:22:22.619 [2024-11-28 08:21:04.390151] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1afd5f0 is same with the state(6) to be set
00:22:22.619 [2024-11-28 08:21:04.390160] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1afd5f0 is same with the state(6) to be set
00:22:22.619 [2024-11-28 08:21:04.390172] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1afd5f0 is same with the state(6) to be set
00:22:22.619 [2024-11-28 08:21:04.390181] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1afd5f0 is same with the state(6) to be set
00:22:22.619 [2024-11-28 08:21:04.390192] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1afd5f0 is same with the state(6) to be set
[... repeated write failures ...]
00:22:22.619 [2024-11-28 08:21:04.390498] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 2
[... repeated write failures ...]
00:22:22.620 [2024-11-28 08:21:04.391509] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 3
[... repeated write failures ...]
00:22:22.620 [2024-11-28 08:21:04.393256] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:22:22.620 NVMe io qpair process completion error
[... repeated write failures ...]
00:22:22.621 [2024-11-28 08:21:04.394176] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 1
[... repeated write failures ...]
00:22:22.621 [2024-11-28 08:21:04.395019] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 2
[... repeated write failures ...]
00:22:22.621 [2024-11-28 08:21:04.396060] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 4
[... repeated write failures ...]
00:22:22.622 [2024-11-28 08:21:04.397851] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:22:22.622 NVMe io qpair process completion error
[... repeated write failures continue ...]
starting I/O failed: -6 00:22:22.622 Write completed with error (sct=0, sc=8) 00:22:22.622 [2024-11-28 08:21:04.398906] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:22:22.622 Write completed with error (sct=0, sc=8) 00:22:22.622 Write completed with error (sct=0, sc=8) 00:22:22.622 Write completed with error (sct=0, sc=8) 00:22:22.622 starting I/O failed: -6 00:22:22.622 Write completed with error (sct=0, sc=8) 00:22:22.622 starting I/O failed: -6 00:22:22.622 Write completed with error (sct=0, sc=8) 00:22:22.622 Write completed with error (sct=0, sc=8) 00:22:22.622 Write completed with error (sct=0, sc=8) 00:22:22.622 starting I/O failed: -6 00:22:22.622 Write completed with error (sct=0, sc=8) 00:22:22.622 starting I/O failed: -6 00:22:22.622 Write completed with error (sct=0, sc=8) 00:22:22.622 Write completed with error (sct=0, sc=8) 00:22:22.622 Write completed with error (sct=0, sc=8) 00:22:22.622 starting I/O failed: -6 00:22:22.622 Write completed with error (sct=0, sc=8) 00:22:22.622 starting I/O failed: -6 00:22:22.622 Write completed with error (sct=0, sc=8) 00:22:22.622 Write completed with error (sct=0, sc=8) 00:22:22.622 Write completed with error (sct=0, sc=8) 00:22:22.622 starting I/O failed: -6 00:22:22.622 Write completed with error (sct=0, sc=8) 00:22:22.622 starting I/O failed: -6 00:22:22.622 Write completed with error (sct=0, sc=8) 00:22:22.622 Write completed with error (sct=0, sc=8) 00:22:22.622 Write completed with error (sct=0, sc=8) 00:22:22.622 starting I/O failed: -6 00:22:22.622 Write completed with error (sct=0, sc=8) 00:22:22.622 starting I/O failed: -6 00:22:22.622 Write completed with error (sct=0, sc=8) 00:22:22.622 Write completed with error (sct=0, sc=8) 00:22:22.622 Write completed with error (sct=0, sc=8) 00:22:22.622 starting I/O failed: -6 00:22:22.622 Write completed with error (sct=0, sc=8) 00:22:22.622 
starting I/O failed: -6 00:22:22.622 Write completed with error (sct=0, sc=8) 00:22:22.622 Write completed with error (sct=0, sc=8) 00:22:22.622 Write completed with error (sct=0, sc=8) 00:22:22.622 starting I/O failed: -6 00:22:22.622 Write completed with error (sct=0, sc=8) 00:22:22.622 starting I/O failed: -6 00:22:22.622 Write completed with error (sct=0, sc=8) 00:22:22.623 Write completed with error (sct=0, sc=8) 00:22:22.623 Write completed with error (sct=0, sc=8) 00:22:22.623 starting I/O failed: -6 00:22:22.623 Write completed with error (sct=0, sc=8) 00:22:22.623 starting I/O failed: -6 00:22:22.623 Write completed with error (sct=0, sc=8) 00:22:22.623 Write completed with error (sct=0, sc=8) 00:22:22.623 Write completed with error (sct=0, sc=8) 00:22:22.623 starting I/O failed: -6 00:22:22.623 Write completed with error (sct=0, sc=8) 00:22:22.623 starting I/O failed: -6 00:22:22.623 Write completed with error (sct=0, sc=8) 00:22:22.623 Write completed with error (sct=0, sc=8) 00:22:22.623 Write completed with error (sct=0, sc=8) 00:22:22.623 starting I/O failed: -6 00:22:22.623 Write completed with error (sct=0, sc=8) 00:22:22.623 starting I/O failed: -6 00:22:22.623 Write completed with error (sct=0, sc=8) 00:22:22.623 Write completed with error (sct=0, sc=8) 00:22:22.623 Write completed with error (sct=0, sc=8) 00:22:22.623 starting I/O failed: -6 00:22:22.623 Write completed with error (sct=0, sc=8) 00:22:22.623 starting I/O failed: -6 00:22:22.623 Write completed with error (sct=0, sc=8) 00:22:22.623 Write completed with error (sct=0, sc=8) 00:22:22.623 [2024-11-28 08:21:04.399812] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:22:22.623 Write completed with error (sct=0, sc=8) 00:22:22.623 starting I/O failed: -6 00:22:22.623 Write completed with error (sct=0, sc=8) 00:22:22.623 starting I/O failed: -6 00:22:22.623 Write completed with 
error (sct=0, sc=8) 00:22:22.623 Write completed with error (sct=0, sc=8) 00:22:22.623 starting I/O failed: -6 00:22:22.623 Write completed with error (sct=0, sc=8) 00:22:22.623 starting I/O failed: -6 00:22:22.623 Write completed with error (sct=0, sc=8) 00:22:22.623 starting I/O failed: -6 00:22:22.623 Write completed with error (sct=0, sc=8) 00:22:22.623 Write completed with error (sct=0, sc=8) 00:22:22.623 starting I/O failed: -6 00:22:22.623 Write completed with error (sct=0, sc=8) 00:22:22.623 starting I/O failed: -6 00:22:22.623 Write completed with error (sct=0, sc=8) 00:22:22.623 starting I/O failed: -6 00:22:22.623 Write completed with error (sct=0, sc=8) 00:22:22.623 Write completed with error (sct=0, sc=8) 00:22:22.623 starting I/O failed: -6 00:22:22.623 Write completed with error (sct=0, sc=8) 00:22:22.623 starting I/O failed: -6 00:22:22.623 Write completed with error (sct=0, sc=8) 00:22:22.623 starting I/O failed: -6 00:22:22.623 Write completed with error (sct=0, sc=8) 00:22:22.623 Write completed with error (sct=0, sc=8) 00:22:22.623 starting I/O failed: -6 00:22:22.623 Write completed with error (sct=0, sc=8) 00:22:22.623 starting I/O failed: -6 00:22:22.623 Write completed with error (sct=0, sc=8) 00:22:22.623 starting I/O failed: -6 00:22:22.623 Write completed with error (sct=0, sc=8) 00:22:22.623 Write completed with error (sct=0, sc=8) 00:22:22.623 starting I/O failed: -6 00:22:22.623 Write completed with error (sct=0, sc=8) 00:22:22.623 starting I/O failed: -6 00:22:22.623 Write completed with error (sct=0, sc=8) 00:22:22.623 starting I/O failed: -6 00:22:22.623 Write completed with error (sct=0, sc=8) 00:22:22.623 Write completed with error (sct=0, sc=8) 00:22:22.623 starting I/O failed: -6 00:22:22.623 Write completed with error (sct=0, sc=8) 00:22:22.623 starting I/O failed: -6 00:22:22.623 Write completed with error (sct=0, sc=8) 00:22:22.623 starting I/O failed: -6 00:22:22.623 Write completed with error (sct=0, sc=8) 00:22:22.623 
Write completed with error (sct=0, sc=8) 00:22:22.623 starting I/O failed: -6 00:22:22.623 Write completed with error (sct=0, sc=8) 00:22:22.623 starting I/O failed: -6 00:22:22.623 Write completed with error (sct=0, sc=8) 00:22:22.623 starting I/O failed: -6 00:22:22.623 Write completed with error (sct=0, sc=8) 00:22:22.623 Write completed with error (sct=0, sc=8) 00:22:22.623 starting I/O failed: -6 00:22:22.623 Write completed with error (sct=0, sc=8) 00:22:22.623 starting I/O failed: -6 00:22:22.623 Write completed with error (sct=0, sc=8) 00:22:22.623 starting I/O failed: -6 00:22:22.623 Write completed with error (sct=0, sc=8) 00:22:22.623 Write completed with error (sct=0, sc=8) 00:22:22.623 starting I/O failed: -6 00:22:22.623 Write completed with error (sct=0, sc=8) 00:22:22.623 starting I/O failed: -6 00:22:22.623 Write completed with error (sct=0, sc=8) 00:22:22.623 starting I/O failed: -6 00:22:22.623 Write completed with error (sct=0, sc=8) 00:22:22.623 Write completed with error (sct=0, sc=8) 00:22:22.623 starting I/O failed: -6 00:22:22.623 Write completed with error (sct=0, sc=8) 00:22:22.623 starting I/O failed: -6 00:22:22.623 Write completed with error (sct=0, sc=8) 00:22:22.623 starting I/O failed: -6 00:22:22.623 Write completed with error (sct=0, sc=8) 00:22:22.623 Write completed with error (sct=0, sc=8) 00:22:22.623 starting I/O failed: -6 00:22:22.623 Write completed with error (sct=0, sc=8) 00:22:22.623 starting I/O failed: -6 00:22:22.623 Write completed with error (sct=0, sc=8) 00:22:22.623 starting I/O failed: -6 00:22:22.623 Write completed with error (sct=0, sc=8) 00:22:22.623 Write completed with error (sct=0, sc=8) 00:22:22.623 starting I/O failed: -6 00:22:22.623 [2024-11-28 08:21:04.400811] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:22:22.623 Write completed with error (sct=0, sc=8) 00:22:22.623 starting I/O 
failed: -6 00:22:22.623 Write completed with error (sct=0, sc=8) 00:22:22.623 starting I/O failed: -6 00:22:22.623 Write completed with error (sct=0, sc=8) 00:22:22.623 starting I/O failed: -6 00:22:22.623 Write completed with error (sct=0, sc=8) 00:22:22.623 starting I/O failed: -6 00:22:22.623 Write completed with error (sct=0, sc=8) 00:22:22.623 starting I/O failed: -6 00:22:22.623 Write completed with error (sct=0, sc=8) 00:22:22.623 starting I/O failed: -6 00:22:22.623 Write completed with error (sct=0, sc=8) 00:22:22.623 starting I/O failed: -6 00:22:22.623 Write completed with error (sct=0, sc=8) 00:22:22.623 starting I/O failed: -6 00:22:22.623 Write completed with error (sct=0, sc=8) 00:22:22.623 starting I/O failed: -6 00:22:22.623 Write completed with error (sct=0, sc=8) 00:22:22.623 starting I/O failed: -6 00:22:22.623 Write completed with error (sct=0, sc=8) 00:22:22.623 starting I/O failed: -6 00:22:22.623 Write completed with error (sct=0, sc=8) 00:22:22.623 starting I/O failed: -6 00:22:22.623 Write completed with error (sct=0, sc=8) 00:22:22.623 starting I/O failed: -6 00:22:22.623 Write completed with error (sct=0, sc=8) 00:22:22.623 starting I/O failed: -6 00:22:22.623 Write completed with error (sct=0, sc=8) 00:22:22.623 starting I/O failed: -6 00:22:22.623 Write completed with error (sct=0, sc=8) 00:22:22.623 starting I/O failed: -6 00:22:22.623 Write completed with error (sct=0, sc=8) 00:22:22.623 starting I/O failed: -6 00:22:22.623 Write completed with error (sct=0, sc=8) 00:22:22.623 starting I/O failed: -6 00:22:22.623 Write completed with error (sct=0, sc=8) 00:22:22.623 starting I/O failed: -6 00:22:22.623 Write completed with error (sct=0, sc=8) 00:22:22.623 starting I/O failed: -6 00:22:22.623 Write completed with error (sct=0, sc=8) 00:22:22.623 starting I/O failed: -6 00:22:22.623 Write completed with error (sct=0, sc=8) 00:22:22.623 starting I/O failed: -6 00:22:22.623 Write completed with error (sct=0, sc=8) 00:22:22.623 starting 
I/O failed: -6 00:22:22.623 Write completed with error (sct=0, sc=8) 00:22:22.623 starting I/O failed: -6 00:22:22.623 Write completed with error (sct=0, sc=8) 00:22:22.623 starting I/O failed: -6 00:22:22.623 Write completed with error (sct=0, sc=8) 00:22:22.623 starting I/O failed: -6 00:22:22.623 Write completed with error (sct=0, sc=8) 00:22:22.623 starting I/O failed: -6 00:22:22.623 Write completed with error (sct=0, sc=8) 00:22:22.623 starting I/O failed: -6 00:22:22.623 Write completed with error (sct=0, sc=8) 00:22:22.623 starting I/O failed: -6 00:22:22.623 Write completed with error (sct=0, sc=8) 00:22:22.623 starting I/O failed: -6 00:22:22.623 Write completed with error (sct=0, sc=8) 00:22:22.623 starting I/O failed: -6 00:22:22.623 Write completed with error (sct=0, sc=8) 00:22:22.623 starting I/O failed: -6 00:22:22.623 Write completed with error (sct=0, sc=8) 00:22:22.623 starting I/O failed: -6 00:22:22.623 Write completed with error (sct=0, sc=8) 00:22:22.623 starting I/O failed: -6 00:22:22.623 Write completed with error (sct=0, sc=8) 00:22:22.623 starting I/O failed: -6 00:22:22.623 Write completed with error (sct=0, sc=8) 00:22:22.623 starting I/O failed: -6 00:22:22.623 Write completed with error (sct=0, sc=8) 00:22:22.623 starting I/O failed: -6 00:22:22.623 Write completed with error (sct=0, sc=8) 00:22:22.623 starting I/O failed: -6 00:22:22.623 Write completed with error (sct=0, sc=8) 00:22:22.623 starting I/O failed: -6 00:22:22.623 Write completed with error (sct=0, sc=8) 00:22:22.623 starting I/O failed: -6 00:22:22.623 Write completed with error (sct=0, sc=8) 00:22:22.623 starting I/O failed: -6 00:22:22.623 Write completed with error (sct=0, sc=8) 00:22:22.623 starting I/O failed: -6 00:22:22.623 Write completed with error (sct=0, sc=8) 00:22:22.623 starting I/O failed: -6 00:22:22.623 Write completed with error (sct=0, sc=8) 00:22:22.623 starting I/O failed: -6 00:22:22.623 Write completed with error (sct=0, sc=8) 00:22:22.623 
starting I/O failed: -6 00:22:22.623 Write completed with error (sct=0, sc=8) 00:22:22.623 starting I/O failed: -6 00:22:22.623 Write completed with error (sct=0, sc=8) 00:22:22.623 starting I/O failed: -6 00:22:22.623 Write completed with error (sct=0, sc=8) 00:22:22.623 starting I/O failed: -6 00:22:22.623 Write completed with error (sct=0, sc=8) 00:22:22.623 starting I/O failed: -6 00:22:22.623 Write completed with error (sct=0, sc=8) 00:22:22.623 starting I/O failed: -6 00:22:22.623 Write completed with error (sct=0, sc=8) 00:22:22.623 starting I/O failed: -6 00:22:22.623 Write completed with error (sct=0, sc=8) 00:22:22.623 starting I/O failed: -6 00:22:22.623 Write completed with error (sct=0, sc=8) 00:22:22.623 starting I/O failed: -6 00:22:22.624 Write completed with error (sct=0, sc=8) 00:22:22.624 starting I/O failed: -6 00:22:22.624 Write completed with error (sct=0, sc=8) 00:22:22.624 starting I/O failed: -6 00:22:22.624 Write completed with error (sct=0, sc=8) 00:22:22.624 starting I/O failed: -6 00:22:22.624 Write completed with error (sct=0, sc=8) 00:22:22.624 starting I/O failed: -6 00:22:22.624 Write completed with error (sct=0, sc=8) 00:22:22.624 starting I/O failed: -6 00:22:22.624 Write completed with error (sct=0, sc=8) 00:22:22.624 starting I/O failed: -6 00:22:22.624 Write completed with error (sct=0, sc=8) 00:22:22.624 starting I/O failed: -6 00:22:22.624 [2024-11-28 08:21:04.402694] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:22:22.624 NVMe io qpair process completion error 00:22:22.624 Write completed with error (sct=0, sc=8) 00:22:22.624 Write completed with error (sct=0, sc=8) 00:22:22.624 Write completed with error (sct=0, sc=8) 00:22:22.624 starting I/O failed: -6 00:22:22.624 Write completed with error (sct=0, sc=8) 00:22:22.624 Write completed with error (sct=0, sc=8) 00:22:22.624 Write completed with error (sct=0, 
sc=8) 00:22:22.624 Write completed with error (sct=0, sc=8) 00:22:22.624 starting I/O failed: -6 00:22:22.624 Write completed with error (sct=0, sc=8) 00:22:22.624 Write completed with error (sct=0, sc=8) 00:22:22.624 Write completed with error (sct=0, sc=8) 00:22:22.624 Write completed with error (sct=0, sc=8) 00:22:22.624 starting I/O failed: -6 00:22:22.624 Write completed with error (sct=0, sc=8) 00:22:22.624 Write completed with error (sct=0, sc=8) 00:22:22.624 Write completed with error (sct=0, sc=8) 00:22:22.624 Write completed with error (sct=0, sc=8) 00:22:22.624 starting I/O failed: -6 00:22:22.624 Write completed with error (sct=0, sc=8) 00:22:22.624 Write completed with error (sct=0, sc=8) 00:22:22.624 Write completed with error (sct=0, sc=8) 00:22:22.624 Write completed with error (sct=0, sc=8) 00:22:22.624 starting I/O failed: -6 00:22:22.624 Write completed with error (sct=0, sc=8) 00:22:22.624 Write completed with error (sct=0, sc=8) 00:22:22.624 Write completed with error (sct=0, sc=8) 00:22:22.624 Write completed with error (sct=0, sc=8) 00:22:22.624 starting I/O failed: -6 00:22:22.624 Write completed with error (sct=0, sc=8) 00:22:22.624 Write completed with error (sct=0, sc=8) 00:22:22.624 Write completed with error (sct=0, sc=8) 00:22:22.624 Write completed with error (sct=0, sc=8) 00:22:22.624 starting I/O failed: -6 00:22:22.624 Write completed with error (sct=0, sc=8) 00:22:22.624 Write completed with error (sct=0, sc=8) 00:22:22.624 Write completed with error (sct=0, sc=8) 00:22:22.624 Write completed with error (sct=0, sc=8) 00:22:22.624 starting I/O failed: -6 00:22:22.624 Write completed with error (sct=0, sc=8) 00:22:22.624 Write completed with error (sct=0, sc=8) 00:22:22.624 Write completed with error (sct=0, sc=8) 00:22:22.624 Write completed with error (sct=0, sc=8) 00:22:22.624 starting I/O failed: -6 00:22:22.624 Write completed with error (sct=0, sc=8) 00:22:22.624 Write completed with error (sct=0, sc=8) 00:22:22.624 Write 
completed with error (sct=0, sc=8) 00:22:22.624 Write completed with error (sct=0, sc=8) 00:22:22.624 starting I/O failed: -6 00:22:22.624 [2024-11-28 08:21:04.403695] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:22:22.624 Write completed with error (sct=0, sc=8) 00:22:22.624 starting I/O failed: -6 00:22:22.624 Write completed with error (sct=0, sc=8) 00:22:22.624 Write completed with error (sct=0, sc=8) 00:22:22.624 Write completed with error (sct=0, sc=8) 00:22:22.624 starting I/O failed: -6 00:22:22.624 Write completed with error (sct=0, sc=8) 00:22:22.624 starting I/O failed: -6 00:22:22.624 Write completed with error (sct=0, sc=8) 00:22:22.624 Write completed with error (sct=0, sc=8) 00:22:22.624 Write completed with error (sct=0, sc=8) 00:22:22.624 starting I/O failed: -6 00:22:22.624 Write completed with error (sct=0, sc=8) 00:22:22.624 starting I/O failed: -6 00:22:22.624 Write completed with error (sct=0, sc=8) 00:22:22.624 Write completed with error (sct=0, sc=8) 00:22:22.624 Write completed with error (sct=0, sc=8) 00:22:22.624 starting I/O failed: -6 00:22:22.624 Write completed with error (sct=0, sc=8) 00:22:22.624 starting I/O failed: -6 00:22:22.624 Write completed with error (sct=0, sc=8) 00:22:22.624 Write completed with error (sct=0, sc=8) 00:22:22.624 Write completed with error (sct=0, sc=8) 00:22:22.624 starting I/O failed: -6 00:22:22.624 Write completed with error (sct=0, sc=8) 00:22:22.624 starting I/O failed: -6 00:22:22.624 Write completed with error (sct=0, sc=8) 00:22:22.624 Write completed with error (sct=0, sc=8) 00:22:22.624 Write completed with error (sct=0, sc=8) 00:22:22.624 starting I/O failed: -6 00:22:22.624 Write completed with error (sct=0, sc=8) 00:22:22.624 starting I/O failed: -6 00:22:22.624 Write completed with error (sct=0, sc=8) 00:22:22.624 Write completed with error (sct=0, sc=8) 00:22:22.624 Write 
completed with error (sct=0, sc=8) 00:22:22.624 starting I/O failed: -6 00:22:22.624 Write completed with error (sct=0, sc=8) 00:22:22.624 starting I/O failed: -6 00:22:22.624 Write completed with error (sct=0, sc=8) 00:22:22.624 Write completed with error (sct=0, sc=8) 00:22:22.624 Write completed with error (sct=0, sc=8) 00:22:22.624 starting I/O failed: -6 00:22:22.624 Write completed with error (sct=0, sc=8) 00:22:22.624 starting I/O failed: -6 00:22:22.624 Write completed with error (sct=0, sc=8) 00:22:22.624 Write completed with error (sct=0, sc=8) 00:22:22.624 Write completed with error (sct=0, sc=8) 00:22:22.624 starting I/O failed: -6 00:22:22.624 Write completed with error (sct=0, sc=8) 00:22:22.624 starting I/O failed: -6 00:22:22.624 Write completed with error (sct=0, sc=8) 00:22:22.624 Write completed with error (sct=0, sc=8) 00:22:22.624 Write completed with error (sct=0, sc=8) 00:22:22.624 starting I/O failed: -6 00:22:22.624 Write completed with error (sct=0, sc=8) 00:22:22.624 starting I/O failed: -6 00:22:22.624 Write completed with error (sct=0, sc=8) 00:22:22.624 Write completed with error (sct=0, sc=8) 00:22:22.624 Write completed with error (sct=0, sc=8) 00:22:22.624 starting I/O failed: -6 00:22:22.624 Write completed with error (sct=0, sc=8) 00:22:22.624 starting I/O failed: -6 00:22:22.624 Write completed with error (sct=0, sc=8) 00:22:22.624 Write completed with error (sct=0, sc=8) 00:22:22.624 Write completed with error (sct=0, sc=8) 00:22:22.624 starting I/O failed: -6 00:22:22.624 [2024-11-28 08:21:04.404566] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:22:22.624 starting I/O failed: -6 00:22:22.624 starting I/O failed: -6 00:22:22.624 starting I/O failed: -6 00:22:22.624 starting I/O failed: -6 00:22:22.624 Write completed with error (sct=0, sc=8) 00:22:22.624 starting I/O failed: -6 00:22:22.624 Write completed with 
error (sct=0, sc=8) 00:22:22.624 starting I/O failed: -6 00:22:22.624 Write completed with error (sct=0, sc=8) 00:22:22.624 Write completed with error (sct=0, sc=8) 00:22:22.624 starting I/O failed: -6 00:22:22.624 Write completed with error (sct=0, sc=8) 00:22:22.624 starting I/O failed: -6 00:22:22.624 Write completed with error (sct=0, sc=8) 00:22:22.624 starting I/O failed: -6 00:22:22.624 Write completed with error (sct=0, sc=8) 00:22:22.624 Write completed with error (sct=0, sc=8) 00:22:22.624 starting I/O failed: -6 00:22:22.624 Write completed with error (sct=0, sc=8) 00:22:22.624 starting I/O failed: -6 00:22:22.624 Write completed with error (sct=0, sc=8) 00:22:22.624 starting I/O failed: -6 00:22:22.624 Write completed with error (sct=0, sc=8) 00:22:22.624 Write completed with error (sct=0, sc=8) 00:22:22.624 starting I/O failed: -6 00:22:22.624 Write completed with error (sct=0, sc=8) 00:22:22.624 starting I/O failed: -6 00:22:22.624 Write completed with error (sct=0, sc=8) 00:22:22.624 starting I/O failed: -6 00:22:22.624 Write completed with error (sct=0, sc=8) 00:22:22.624 Write completed with error (sct=0, sc=8) 00:22:22.624 starting I/O failed: -6 00:22:22.624 Write completed with error (sct=0, sc=8) 00:22:22.624 starting I/O failed: -6 00:22:22.624 Write completed with error (sct=0, sc=8) 00:22:22.624 starting I/O failed: -6 00:22:22.624 Write completed with error (sct=0, sc=8) 00:22:22.624 Write completed with error (sct=0, sc=8) 00:22:22.624 starting I/O failed: -6 00:22:22.624 Write completed with error (sct=0, sc=8) 00:22:22.624 starting I/O failed: -6 00:22:22.624 Write completed with error (sct=0, sc=8) 00:22:22.624 starting I/O failed: -6 00:22:22.624 Write completed with error (sct=0, sc=8) 00:22:22.624 Write completed with error (sct=0, sc=8) 00:22:22.624 starting I/O failed: -6 00:22:22.624 Write completed with error (sct=0, sc=8) 00:22:22.624 starting I/O failed: -6 00:22:22.624 Write completed with error (sct=0, sc=8) 00:22:22.624 
starting I/O failed: -6 00:22:22.624 Write completed with error (sct=0, sc=8) 00:22:22.624 Write completed with error (sct=0, sc=8) 00:22:22.624 starting I/O failed: -6 00:22:22.624 Write completed with error (sct=0, sc=8) 00:22:22.624 starting I/O failed: -6 00:22:22.624 Write completed with error (sct=0, sc=8) 00:22:22.624 starting I/O failed: -6 00:22:22.624 Write completed with error (sct=0, sc=8) 00:22:22.624 Write completed with error (sct=0, sc=8) 00:22:22.624 starting I/O failed: -6 00:22:22.624 Write completed with error (sct=0, sc=8) 00:22:22.624 starting I/O failed: -6 00:22:22.624 Write completed with error (sct=0, sc=8) 00:22:22.624 starting I/O failed: -6 00:22:22.624 Write completed with error (sct=0, sc=8) 00:22:22.624 Write completed with error (sct=0, sc=8) 00:22:22.624 starting I/O failed: -6 00:22:22.624 Write completed with error (sct=0, sc=8) 00:22:22.624 starting I/O failed: -6 00:22:22.624 Write completed with error (sct=0, sc=8) 00:22:22.624 starting I/O failed: -6 00:22:22.624 Write completed with error (sct=0, sc=8) 00:22:22.624 Write completed with error (sct=0, sc=8) 00:22:22.624 starting I/O failed: -6 00:22:22.624 Write completed with error (sct=0, sc=8) 00:22:22.624 starting I/O failed: -6 00:22:22.624 Write completed with error (sct=0, sc=8) 00:22:22.624 starting I/O failed: -6 00:22:22.625 [2024-11-28 08:21:04.405750] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:22:22.625 Write completed with error (sct=0, sc=8) 00:22:22.625 starting I/O failed: -6 00:22:22.625 Write completed with error (sct=0, sc=8) 00:22:22.625 starting I/O failed: -6 00:22:22.625 Write completed with error (sct=0, sc=8) 00:22:22.625 starting I/O failed: -6 00:22:22.625 Write completed with error (sct=0, sc=8) 00:22:22.625 starting I/O failed: -6 00:22:22.625 Write completed with error (sct=0, sc=8) 00:22:22.625 starting I/O failed: -6 
00:22:22.625 Write completed with error (sct=0, sc=8) 00:22:22.625 starting I/O failed: -6 00:22:22.625 Write completed with error (sct=0, sc=8) 00:22:22.625 starting I/O failed: -6 00:22:22.625 Write completed with error (sct=0, sc=8) 00:22:22.625 starting I/O failed: -6 00:22:22.625 Write completed with error (sct=0, sc=8) 00:22:22.625 starting I/O failed: -6 00:22:22.625 Write completed with error (sct=0, sc=8) 00:22:22.625 starting I/O failed: -6 00:22:22.625 Write completed with error (sct=0, sc=8) 00:22:22.625 starting I/O failed: -6 00:22:22.625 Write completed with error (sct=0, sc=8) 00:22:22.625 starting I/O failed: -6 00:22:22.625 Write completed with error (sct=0, sc=8) 00:22:22.625 starting I/O failed: -6 00:22:22.625 Write completed with error (sct=0, sc=8) 00:22:22.625 starting I/O failed: -6 00:22:22.625 Write completed with error (sct=0, sc=8) 00:22:22.625 starting I/O failed: -6 00:22:22.625 Write completed with error (sct=0, sc=8) 00:22:22.625 starting I/O failed: -6 00:22:22.625 Write completed with error (sct=0, sc=8) 00:22:22.625 starting I/O failed: -6 00:22:22.625 Write completed with error (sct=0, sc=8) 00:22:22.625 starting I/O failed: -6 00:22:22.625 Write completed with error (sct=0, sc=8) 00:22:22.625 starting I/O failed: -6 00:22:22.625 Write completed with error (sct=0, sc=8) 00:22:22.625 starting I/O failed: -6 00:22:22.625 Write completed with error (sct=0, sc=8) 00:22:22.625 starting I/O failed: -6 00:22:22.625 Write completed with error (sct=0, sc=8) 00:22:22.625 starting I/O failed: -6 00:22:22.625 Write completed with error (sct=0, sc=8) 00:22:22.625 starting I/O failed: -6 00:22:22.625 Write completed with error (sct=0, sc=8) 00:22:22.625 starting I/O failed: -6 00:22:22.625 Write completed with error (sct=0, sc=8) 00:22:22.625 starting I/O failed: -6 00:22:22.625 Write completed with error (sct=0, sc=8) 00:22:22.625 starting I/O failed: -6 00:22:22.625 Write completed with error (sct=0, sc=8) 00:22:22.625 starting I/O failed: 
-6 00:22:22.625 Write completed with error (sct=0, sc=8) 00:22:22.625 starting I/O failed: -6 00:22:22.625 Write completed with error (sct=0, sc=8) 00:22:22.625 starting I/O failed: -6 00:22:22.625 Write completed with error (sct=0, sc=8) 00:22:22.625 starting I/O failed: -6 00:22:22.625 Write completed with error (sct=0, sc=8) 00:22:22.625 starting I/O failed: -6 00:22:22.625 Write completed with error (sct=0, sc=8) 00:22:22.625 starting I/O failed: -6 00:22:22.625 Write completed with error (sct=0, sc=8) 00:22:22.625 starting I/O failed: -6 00:22:22.625 Write completed with error (sct=0, sc=8) 00:22:22.625 starting I/O failed: -6 00:22:22.625 Write completed with error (sct=0, sc=8) 00:22:22.625 starting I/O failed: -6 00:22:22.625 Write completed with error (sct=0, sc=8) 00:22:22.625 starting I/O failed: -6 00:22:22.625 Write completed with error (sct=0, sc=8) 00:22:22.625 starting I/O failed: -6 00:22:22.625 Write completed with error (sct=0, sc=8) 00:22:22.625 starting I/O failed: -6 00:22:22.625 Write completed with error (sct=0, sc=8) 00:22:22.625 starting I/O failed: -6 00:22:22.625 Write completed with error (sct=0, sc=8) 00:22:22.625 starting I/O failed: -6 00:22:22.625 Write completed with error (sct=0, sc=8) 00:22:22.625 starting I/O failed: -6 00:22:22.625 Write completed with error (sct=0, sc=8) 00:22:22.625 starting I/O failed: -6 00:22:22.625 Write completed with error (sct=0, sc=8) 00:22:22.625 starting I/O failed: -6 00:22:22.625 Write completed with error (sct=0, sc=8) 00:22:22.625 starting I/O failed: -6 00:22:22.625 Write completed with error (sct=0, sc=8) 00:22:22.625 starting I/O failed: -6 00:22:22.625 Write completed with error (sct=0, sc=8) 00:22:22.625 starting I/O failed: -6 00:22:22.625 Write completed with error (sct=0, sc=8) 00:22:22.625 starting I/O failed: -6 00:22:22.625 Write completed with error (sct=0, sc=8) 00:22:22.625 starting I/O failed: -6 00:22:22.625 Write completed with error (sct=0, sc=8) 00:22:22.625 starting I/O 
failed: -6
00:22:22.625 Write completed with error (sct=0, sc=8)
00:22:22.625 starting I/O failed: -6
[... "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" repeated for the remaining queued I/Os ...]
00:22:22.625 [2024-11-28 08:21:04.409619] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:22:22.625 NVMe io qpair process completion error
[... same repeated write-error messages ...]
00:22:22.625 [2024-11-28 08:21:04.410638] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 1
[... same repeated write-error messages ...]
00:22:22.626 [2024-11-28 08:21:04.411447] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 2
[... same repeated write-error messages ...]
00:22:22.626 [2024-11-28 08:21:04.412483] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 4
[... same repeated write-error messages ...]
00:22:22.627 [2024-11-28 08:21:04.415732] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:22:22.627 NVMe io qpair process completion error
[... same repeated write-error messages ...]
00:22:22.627 [2024-11-28 08:21:04.416658] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 1
[... same repeated write-error messages ...]
00:22:22.627 [2024-11-28 08:21:04.417608] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 2
[... same repeated write-error messages ...]
00:22:22.628 [2024-11-28 08:21:04.418629] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 3
[... same repeated write-error messages ...]
00:22:22.628 [2024-11-28 08:21:04.420617] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:22:22.628 NVMe io qpair process completion error
[... same repeated write-error messages ...]
00:22:22.628 [2024-11-28 08:21:04.421611] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 1
[... same repeated write-error messages ...]
00:22:22.629 [2024-11-28 08:21:04.422529] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:22:22.629 Write completed with error (sct=0, sc=8)
00:22:22.629 starting I/O failed: -6
00:22:22.629 Write completed with error (sct=0, sc=8) 00:22:22.629 starting I/O failed: -6 00:22:22.629 Write completed with error (sct=0, sc=8) 00:22:22.629 starting I/O failed: -6 00:22:22.629 Write completed with error (sct=0, sc=8) 00:22:22.629 Write completed with error (sct=0, sc=8) 00:22:22.629 starting I/O failed: -6 00:22:22.629 Write completed with error (sct=0, sc=8) 00:22:22.629 starting I/O failed: -6 00:22:22.629 Write completed with error (sct=0, sc=8) 00:22:22.629 starting I/O failed: -6 00:22:22.629 Write completed with error (sct=0, sc=8) 00:22:22.629 Write completed with error (sct=0, sc=8) 00:22:22.629 starting I/O failed: -6 00:22:22.629 Write completed with error (sct=0, sc=8) 00:22:22.629 starting I/O failed: -6 00:22:22.629 Write completed with error (sct=0, sc=8) 00:22:22.629 starting I/O failed: -6 00:22:22.629 Write completed with error (sct=0, sc=8) 00:22:22.629 Write completed with error (sct=0, sc=8) 00:22:22.629 starting I/O failed: -6 00:22:22.629 Write completed with error (sct=0, sc=8) 00:22:22.629 starting I/O failed: -6 00:22:22.629 Write completed with error (sct=0, sc=8) 00:22:22.629 starting I/O failed: -6 00:22:22.629 Write completed with error (sct=0, sc=8) 00:22:22.629 Write completed with error (sct=0, sc=8) 00:22:22.629 starting I/O failed: -6 00:22:22.629 Write completed with error (sct=0, sc=8) 00:22:22.629 starting I/O failed: -6 00:22:22.629 Write completed with error (sct=0, sc=8) 00:22:22.629 starting I/O failed: -6 00:22:22.629 Write completed with error (sct=0, sc=8) 00:22:22.629 Write completed with error (sct=0, sc=8) 00:22:22.629 starting I/O failed: -6 00:22:22.629 Write completed with error (sct=0, sc=8) 00:22:22.629 starting I/O failed: -6 00:22:22.629 Write completed with error (sct=0, sc=8) 00:22:22.629 starting I/O failed: -6 00:22:22.629 Write completed with error (sct=0, sc=8) 00:22:22.629 Write completed with error (sct=0, sc=8) 00:22:22.629 starting I/O failed: -6 00:22:22.629 Write completed with 
error (sct=0, sc=8) 00:22:22.629 starting I/O failed: -6 00:22:22.629 Write completed with error (sct=0, sc=8) 00:22:22.629 starting I/O failed: -6 00:22:22.629 Write completed with error (sct=0, sc=8) 00:22:22.629 Write completed with error (sct=0, sc=8) 00:22:22.629 starting I/O failed: -6 00:22:22.629 Write completed with error (sct=0, sc=8) 00:22:22.629 starting I/O failed: -6 00:22:22.629 Write completed with error (sct=0, sc=8) 00:22:22.629 starting I/O failed: -6 00:22:22.629 Write completed with error (sct=0, sc=8) 00:22:22.629 Write completed with error (sct=0, sc=8) 00:22:22.629 starting I/O failed: -6 00:22:22.629 Write completed with error (sct=0, sc=8) 00:22:22.629 starting I/O failed: -6 00:22:22.629 Write completed with error (sct=0, sc=8) 00:22:22.629 starting I/O failed: -6 00:22:22.629 Write completed with error (sct=0, sc=8) 00:22:22.629 Write completed with error (sct=0, sc=8) 00:22:22.629 starting I/O failed: -6 00:22:22.629 Write completed with error (sct=0, sc=8) 00:22:22.629 starting I/O failed: -6 00:22:22.629 Write completed with error (sct=0, sc=8) 00:22:22.629 starting I/O failed: -6 00:22:22.629 Write completed with error (sct=0, sc=8) 00:22:22.629 Write completed with error (sct=0, sc=8) 00:22:22.629 starting I/O failed: -6 00:22:22.629 [2024-11-28 08:21:04.423534] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:22:22.629 starting I/O failed: -6 00:22:22.629 starting I/O failed: -6 00:22:22.629 Write completed with error (sct=0, sc=8) 00:22:22.629 starting I/O failed: -6 00:22:22.629 Write completed with error (sct=0, sc=8) 00:22:22.629 starting I/O failed: -6 00:22:22.629 Write completed with error (sct=0, sc=8) 00:22:22.629 starting I/O failed: -6 00:22:22.629 Write completed with error (sct=0, sc=8) 00:22:22.629 starting I/O failed: -6 00:22:22.629 Write completed with error (sct=0, sc=8) 00:22:22.629 starting I/O 
failed: -6 00:22:22.629 Write completed with error (sct=0, sc=8) 00:22:22.629 starting I/O failed: -6 00:22:22.629 Write completed with error (sct=0, sc=8) 00:22:22.629 starting I/O failed: -6 00:22:22.629 Write completed with error (sct=0, sc=8) 00:22:22.629 starting I/O failed: -6 00:22:22.629 Write completed with error (sct=0, sc=8) 00:22:22.629 starting I/O failed: -6 00:22:22.629 Write completed with error (sct=0, sc=8) 00:22:22.629 starting I/O failed: -6 00:22:22.629 Write completed with error (sct=0, sc=8) 00:22:22.629 starting I/O failed: -6 00:22:22.629 Write completed with error (sct=0, sc=8) 00:22:22.629 starting I/O failed: -6 00:22:22.629 Write completed with error (sct=0, sc=8) 00:22:22.629 starting I/O failed: -6 00:22:22.629 Write completed with error (sct=0, sc=8) 00:22:22.629 starting I/O failed: -6 00:22:22.629 Write completed with error (sct=0, sc=8) 00:22:22.629 starting I/O failed: -6 00:22:22.629 Write completed with error (sct=0, sc=8) 00:22:22.629 starting I/O failed: -6 00:22:22.629 Write completed with error (sct=0, sc=8) 00:22:22.629 starting I/O failed: -6 00:22:22.629 Write completed with error (sct=0, sc=8) 00:22:22.629 starting I/O failed: -6 00:22:22.629 Write completed with error (sct=0, sc=8) 00:22:22.629 starting I/O failed: -6 00:22:22.629 Write completed with error (sct=0, sc=8) 00:22:22.629 starting I/O failed: -6 00:22:22.629 Write completed with error (sct=0, sc=8) 00:22:22.629 starting I/O failed: -6 00:22:22.629 Write completed with error (sct=0, sc=8) 00:22:22.629 starting I/O failed: -6 00:22:22.629 Write completed with error (sct=0, sc=8) 00:22:22.629 starting I/O failed: -6 00:22:22.629 Write completed with error (sct=0, sc=8) 00:22:22.629 starting I/O failed: -6 00:22:22.629 Write completed with error (sct=0, sc=8) 00:22:22.629 starting I/O failed: -6 00:22:22.629 Write completed with error (sct=0, sc=8) 00:22:22.629 starting I/O failed: -6 00:22:22.629 Write completed with error (sct=0, sc=8) 00:22:22.629 starting 
I/O failed: -6 00:22:22.629 Write completed with error (sct=0, sc=8) 00:22:22.629 starting I/O failed: -6 00:22:22.629 Write completed with error (sct=0, sc=8) 00:22:22.629 starting I/O failed: -6 00:22:22.629 Write completed with error (sct=0, sc=8) 00:22:22.629 starting I/O failed: -6 00:22:22.629 Write completed with error (sct=0, sc=8) 00:22:22.629 starting I/O failed: -6 00:22:22.629 Write completed with error (sct=0, sc=8) 00:22:22.630 starting I/O failed: -6 00:22:22.630 Write completed with error (sct=0, sc=8) 00:22:22.630 starting I/O failed: -6 00:22:22.630 Write completed with error (sct=0, sc=8) 00:22:22.630 starting I/O failed: -6 00:22:22.630 Write completed with error (sct=0, sc=8) 00:22:22.630 starting I/O failed: -6 00:22:22.630 Write completed with error (sct=0, sc=8) 00:22:22.630 starting I/O failed: -6 00:22:22.630 Write completed with error (sct=0, sc=8) 00:22:22.630 starting I/O failed: -6 00:22:22.630 Write completed with error (sct=0, sc=8) 00:22:22.630 starting I/O failed: -6 00:22:22.630 Write completed with error (sct=0, sc=8) 00:22:22.630 starting I/O failed: -6 00:22:22.630 Write completed with error (sct=0, sc=8) 00:22:22.630 starting I/O failed: -6 00:22:22.630 Write completed with error (sct=0, sc=8) 00:22:22.630 starting I/O failed: -6 00:22:22.630 Write completed with error (sct=0, sc=8) 00:22:22.630 starting I/O failed: -6 00:22:22.630 Write completed with error (sct=0, sc=8) 00:22:22.630 starting I/O failed: -6 00:22:22.630 Write completed with error (sct=0, sc=8) 00:22:22.630 starting I/O failed: -6 00:22:22.630 Write completed with error (sct=0, sc=8) 00:22:22.630 starting I/O failed: -6 00:22:22.630 Write completed with error (sct=0, sc=8) 00:22:22.630 starting I/O failed: -6 00:22:22.630 Write completed with error (sct=0, sc=8) 00:22:22.630 starting I/O failed: -6 00:22:22.630 Write completed with error (sct=0, sc=8) 00:22:22.630 starting I/O failed: -6 00:22:22.630 Write completed with error (sct=0, sc=8) 00:22:22.630 
starting I/O failed: -6 00:22:22.630 Write completed with error (sct=0, sc=8) 00:22:22.630 starting I/O failed: -6 00:22:22.630 Write completed with error (sct=0, sc=8) 00:22:22.630 starting I/O failed: -6 00:22:22.630 Write completed with error (sct=0, sc=8) 00:22:22.630 starting I/O failed: -6 00:22:22.630 Write completed with error (sct=0, sc=8) 00:22:22.630 starting I/O failed: -6 00:22:22.630 Write completed with error (sct=0, sc=8) 00:22:22.630 starting I/O failed: -6 00:22:22.630 Write completed with error (sct=0, sc=8) 00:22:22.630 starting I/O failed: -6 00:22:22.630 Write completed with error (sct=0, sc=8) 00:22:22.630 starting I/O failed: -6 00:22:22.630 Write completed with error (sct=0, sc=8) 00:22:22.630 starting I/O failed: -6 00:22:22.630 Write completed with error (sct=0, sc=8) 00:22:22.630 starting I/O failed: -6 00:22:22.630 [2024-11-28 08:21:04.425527] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:22:22.630 NVMe io qpair process completion error 00:22:22.630 Write completed with error (sct=0, sc=8) 00:22:22.630 Write completed with error (sct=0, sc=8) 00:22:22.630 starting I/O failed: -6 00:22:22.630 Write completed with error (sct=0, sc=8) 00:22:22.630 Write completed with error (sct=0, sc=8) 00:22:22.630 Write completed with error (sct=0, sc=8) 00:22:22.630 Write completed with error (sct=0, sc=8) 00:22:22.630 starting I/O failed: -6 00:22:22.630 Write completed with error (sct=0, sc=8) 00:22:22.630 starting I/O failed: -6 00:22:22.630 Write completed with error (sct=0, sc=8) 00:22:22.630 Write completed with error (sct=0, sc=8) 00:22:22.630 Write completed with error (sct=0, sc=8) 00:22:22.630 starting I/O failed: -6 00:22:22.630 Write completed with error (sct=0, sc=8) 00:22:22.630 starting I/O failed: -6 00:22:22.630 Write completed with error (sct=0, sc=8) 00:22:22.630 Write completed with error (sct=0, sc=8) 00:22:22.630 
Write completed with error (sct=0, sc=8) 00:22:22.630 starting I/O failed: -6 00:22:22.630 Write completed with error (sct=0, sc=8) 00:22:22.630 starting I/O failed: -6 00:22:22.630 Write completed with error (sct=0, sc=8) 00:22:22.630 Write completed with error (sct=0, sc=8) 00:22:22.630 Write completed with error (sct=0, sc=8) 00:22:22.630 starting I/O failed: -6 00:22:22.630 Write completed with error (sct=0, sc=8) 00:22:22.630 starting I/O failed: -6 00:22:22.630 Write completed with error (sct=0, sc=8) 00:22:22.630 Write completed with error (sct=0, sc=8) 00:22:22.630 Write completed with error (sct=0, sc=8) 00:22:22.630 starting I/O failed: -6 00:22:22.630 Write completed with error (sct=0, sc=8) 00:22:22.630 starting I/O failed: -6 00:22:22.630 Write completed with error (sct=0, sc=8) 00:22:22.630 Write completed with error (sct=0, sc=8) 00:22:22.630 Write completed with error (sct=0, sc=8) 00:22:22.630 starting I/O failed: -6 00:22:22.630 Write completed with error (sct=0, sc=8) 00:22:22.630 starting I/O failed: -6 00:22:22.630 Write completed with error (sct=0, sc=8) 00:22:22.630 Write completed with error (sct=0, sc=8) 00:22:22.630 Write completed with error (sct=0, sc=8) 00:22:22.630 starting I/O failed: -6 00:22:22.630 Write completed with error (sct=0, sc=8) 00:22:22.630 starting I/O failed: -6 00:22:22.630 Write completed with error (sct=0, sc=8) 00:22:22.630 Write completed with error (sct=0, sc=8) 00:22:22.630 Write completed with error (sct=0, sc=8) 00:22:22.630 starting I/O failed: -6 00:22:22.630 Write completed with error (sct=0, sc=8) 00:22:22.630 starting I/O failed: -6 00:22:22.630 Write completed with error (sct=0, sc=8) 00:22:22.630 Write completed with error (sct=0, sc=8) 00:22:22.630 Write completed with error (sct=0, sc=8) 00:22:22.630 starting I/O failed: -6 00:22:22.630 Write completed with error (sct=0, sc=8) 00:22:22.630 starting I/O failed: -6 00:22:22.630 Write completed with error (sct=0, sc=8) 00:22:22.630 Write completed with 
error (sct=0, sc=8) 00:22:22.630 Write completed with error (sct=0, sc=8) 00:22:22.630 starting I/O failed: -6 00:22:22.630 Write completed with error (sct=0, sc=8) 00:22:22.630 starting I/O failed: -6 00:22:22.630 Write completed with error (sct=0, sc=8) 00:22:22.630 Write completed with error (sct=0, sc=8) 00:22:22.630 Write completed with error (sct=0, sc=8) 00:22:22.630 starting I/O failed: -6 00:22:22.630 Write completed with error (sct=0, sc=8) 00:22:22.630 starting I/O failed: -6 00:22:22.630 Write completed with error (sct=0, sc=8) 00:22:22.630 Write completed with error (sct=0, sc=8) 00:22:22.630 Write completed with error (sct=0, sc=8) 00:22:22.630 starting I/O failed: -6 00:22:22.630 Write completed with error (sct=0, sc=8) 00:22:22.630 starting I/O failed: -6 00:22:22.630 Write completed with error (sct=0, sc=8) 00:22:22.630 Write completed with error (sct=0, sc=8) 00:22:22.630 Write completed with error (sct=0, sc=8) 00:22:22.630 starting I/O failed: -6 00:22:22.630 Write completed with error (sct=0, sc=8) 00:22:22.630 starting I/O failed: -6 00:22:22.630 Write completed with error (sct=0, sc=8) 00:22:22.630 Write completed with error (sct=0, sc=8) 00:22:22.630 Write completed with error (sct=0, sc=8) 00:22:22.630 starting I/O failed: -6 00:22:22.630 Write completed with error (sct=0, sc=8) 00:22:22.630 starting I/O failed: -6 00:22:22.630 Write completed with error (sct=0, sc=8) 00:22:22.630 Write completed with error (sct=0, sc=8) 00:22:22.630 Write completed with error (sct=0, sc=8) 00:22:22.630 starting I/O failed: -6 00:22:22.630 Write completed with error (sct=0, sc=8) 00:22:22.630 starting I/O failed: -6 00:22:22.630 Write completed with error (sct=0, sc=8) 00:22:22.630 Write completed with error (sct=0, sc=8) 00:22:22.630 Write completed with error (sct=0, sc=8) 00:22:22.630 starting I/O failed: -6 00:22:22.630 Write completed with error (sct=0, sc=8) 00:22:22.630 starting I/O failed: -6 00:22:22.630 Write completed with error (sct=0, sc=8) 
00:22:22.630 Write completed with error (sct=0, sc=8) 00:22:22.630 [2024-11-28 08:21:04.427909] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:22:22.630 Write completed with error (sct=0, sc=8) 00:22:22.630 starting I/O failed: -6 00:22:22.630 Write completed with error (sct=0, sc=8) 00:22:22.630 starting I/O failed: -6 00:22:22.630 Write completed with error (sct=0, sc=8) 00:22:22.630 starting I/O failed: -6 00:22:22.630 Write completed with error (sct=0, sc=8) 00:22:22.630 Write completed with error (sct=0, sc=8) 00:22:22.630 starting I/O failed: -6 00:22:22.630 Write completed with error (sct=0, sc=8) 00:22:22.630 starting I/O failed: -6 00:22:22.630 Write completed with error (sct=0, sc=8) 00:22:22.630 starting I/O failed: -6 00:22:22.630 Write completed with error (sct=0, sc=8) 00:22:22.630 Write completed with error (sct=0, sc=8) 00:22:22.630 starting I/O failed: -6 00:22:22.630 Write completed with error (sct=0, sc=8) 00:22:22.630 starting I/O failed: -6 00:22:22.630 Write completed with error (sct=0, sc=8) 00:22:22.630 starting I/O failed: -6 00:22:22.630 Write completed with error (sct=0, sc=8) 00:22:22.630 Write completed with error (sct=0, sc=8) 00:22:22.630 starting I/O failed: -6 00:22:22.630 Write completed with error (sct=0, sc=8) 00:22:22.630 starting I/O failed: -6 00:22:22.630 Write completed with error (sct=0, sc=8) 00:22:22.630 starting I/O failed: -6 00:22:22.630 Write completed with error (sct=0, sc=8) 00:22:22.630 Write completed with error (sct=0, sc=8) 00:22:22.630 starting I/O failed: -6 00:22:22.630 Write completed with error (sct=0, sc=8) 00:22:22.630 starting I/O failed: -6 00:22:22.630 Write completed with error (sct=0, sc=8) 00:22:22.630 starting I/O failed: -6 00:22:22.630 Write completed with error (sct=0, sc=8) 00:22:22.630 Write completed with error (sct=0, sc=8) 00:22:22.630 starting I/O failed: -6 00:22:22.630 
Write completed with error (sct=0, sc=8) 00:22:22.630 starting I/O failed: -6 00:22:22.630 Write completed with error (sct=0, sc=8) 00:22:22.630 starting I/O failed: -6 00:22:22.630 Write completed with error (sct=0, sc=8) 00:22:22.630 Write completed with error (sct=0, sc=8) 00:22:22.630 starting I/O failed: -6 00:22:22.630 Write completed with error (sct=0, sc=8) 00:22:22.630 starting I/O failed: -6 00:22:22.630 Write completed with error (sct=0, sc=8) 00:22:22.630 starting I/O failed: -6 00:22:22.630 Write completed with error (sct=0, sc=8) 00:22:22.630 Write completed with error (sct=0, sc=8) 00:22:22.630 starting I/O failed: -6 00:22:22.630 Write completed with error (sct=0, sc=8) 00:22:22.630 starting I/O failed: -6 00:22:22.630 Write completed with error (sct=0, sc=8) 00:22:22.630 starting I/O failed: -6 00:22:22.630 Write completed with error (sct=0, sc=8) 00:22:22.630 Write completed with error (sct=0, sc=8) 00:22:22.631 starting I/O failed: -6 00:22:22.631 Write completed with error (sct=0, sc=8) 00:22:22.631 starting I/O failed: -6 00:22:22.631 Write completed with error (sct=0, sc=8) 00:22:22.631 starting I/O failed: -6 00:22:22.631 Write completed with error (sct=0, sc=8) 00:22:22.631 Write completed with error (sct=0, sc=8) 00:22:22.631 starting I/O failed: -6 00:22:22.631 Write completed with error (sct=0, sc=8) 00:22:22.631 starting I/O failed: -6 00:22:22.631 Write completed with error (sct=0, sc=8) 00:22:22.631 starting I/O failed: -6 00:22:22.631 Write completed with error (sct=0, sc=8) 00:22:22.631 Write completed with error (sct=0, sc=8) 00:22:22.631 starting I/O failed: -6 00:22:22.631 Write completed with error (sct=0, sc=8) 00:22:22.631 starting I/O failed: -6 00:22:22.631 Write completed with error (sct=0, sc=8) 00:22:22.631 starting I/O failed: -6 00:22:22.631 Write completed with error (sct=0, sc=8) 00:22:22.631 Write completed with error (sct=0, sc=8) 00:22:22.631 starting I/O failed: -6 00:22:22.631 Write completed with error (sct=0, 
sc=8) 00:22:22.631 starting I/O failed: -6 00:22:22.631 Write completed with error (sct=0, sc=8) 00:22:22.631 starting I/O failed: -6 00:22:22.631 [2024-11-28 08:21:04.428917] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:22:22.631 Write completed with error (sct=0, sc=8) 00:22:22.631 starting I/O failed: -6 00:22:22.631 Write completed with error (sct=0, sc=8) 00:22:22.631 starting I/O failed: -6 00:22:22.631 Write completed with error (sct=0, sc=8) 00:22:22.631 starting I/O failed: -6 00:22:22.631 Write completed with error (sct=0, sc=8) 00:22:22.631 starting I/O failed: -6 00:22:22.631 Write completed with error (sct=0, sc=8) 00:22:22.631 starting I/O failed: -6 00:22:22.631 Write completed with error (sct=0, sc=8) 00:22:22.631 starting I/O failed: -6 00:22:22.631 Write completed with error (sct=0, sc=8) 00:22:22.631 starting I/O failed: -6 00:22:22.631 Write completed with error (sct=0, sc=8) 00:22:22.631 starting I/O failed: -6 00:22:22.631 Write completed with error (sct=0, sc=8) 00:22:22.631 starting I/O failed: -6 00:22:22.631 Write completed with error (sct=0, sc=8) 00:22:22.631 starting I/O failed: -6 00:22:22.631 Write completed with error (sct=0, sc=8) 00:22:22.631 starting I/O failed: -6 00:22:22.631 Write completed with error (sct=0, sc=8) 00:22:22.631 starting I/O failed: -6 00:22:22.631 Write completed with error (sct=0, sc=8) 00:22:22.631 starting I/O failed: -6 00:22:22.631 Write completed with error (sct=0, sc=8) 00:22:22.631 starting I/O failed: -6 00:22:22.631 Write completed with error (sct=0, sc=8) 00:22:22.631 starting I/O failed: -6 00:22:22.631 Write completed with error (sct=0, sc=8) 00:22:22.631 starting I/O failed: -6 00:22:22.631 Write completed with error (sct=0, sc=8) 00:22:22.631 starting I/O failed: -6 00:22:22.631 Write completed with error (sct=0, sc=8) 00:22:22.631 starting I/O failed: -6 00:22:22.631 Write 
completed with error (sct=0, sc=8) 00:22:22.631 starting I/O failed: -6 00:22:22.631 Write completed with error (sct=0, sc=8) 00:22:22.631 starting I/O failed: -6 00:22:22.631 Write completed with error (sct=0, sc=8) 00:22:22.631 starting I/O failed: -6 00:22:22.631 Write completed with error (sct=0, sc=8) 00:22:22.631 starting I/O failed: -6 00:22:22.631 Write completed with error (sct=0, sc=8) 00:22:22.631 starting I/O failed: -6 00:22:22.631 Write completed with error (sct=0, sc=8) 00:22:22.631 starting I/O failed: -6 00:22:22.631 Write completed with error (sct=0, sc=8) 00:22:22.631 starting I/O failed: -6 00:22:22.631 Write completed with error (sct=0, sc=8) 00:22:22.631 starting I/O failed: -6 00:22:22.631 Write completed with error (sct=0, sc=8) 00:22:22.631 starting I/O failed: -6 00:22:22.631 Write completed with error (sct=0, sc=8) 00:22:22.631 starting I/O failed: -6 00:22:22.631 Write completed with error (sct=0, sc=8) 00:22:22.631 starting I/O failed: -6 00:22:22.631 Write completed with error (sct=0, sc=8) 00:22:22.631 starting I/O failed: -6 00:22:22.631 Write completed with error (sct=0, sc=8) 00:22:22.631 starting I/O failed: -6 00:22:22.631 Write completed with error (sct=0, sc=8) 00:22:22.631 starting I/O failed: -6 00:22:22.631 Write completed with error (sct=0, sc=8) 00:22:22.631 starting I/O failed: -6 00:22:22.631 Write completed with error (sct=0, sc=8) 00:22:22.631 starting I/O failed: -6 00:22:22.631 Write completed with error (sct=0, sc=8) 00:22:22.631 starting I/O failed: -6 00:22:22.631 Write completed with error (sct=0, sc=8) 00:22:22.631 starting I/O failed: -6 00:22:22.631 Write completed with error (sct=0, sc=8) 00:22:22.631 starting I/O failed: -6 00:22:22.631 Write completed with error (sct=0, sc=8) 00:22:22.631 starting I/O failed: -6 00:22:22.631 Write completed with error (sct=0, sc=8) 00:22:22.631 starting I/O failed: -6 00:22:22.631 Write completed with error (sct=0, sc=8) 00:22:22.631 starting I/O failed: -6 00:22:22.631 
Write completed with error (sct=0, sc=8) 00:22:22.631 starting I/O failed: -6 00:22:22.631 Write completed with error (sct=0, sc=8) 00:22:22.631 starting I/O failed: -6 00:22:22.631 Write completed with error (sct=0, sc=8) 00:22:22.631 starting I/O failed: -6 00:22:22.631 Write completed with error (sct=0, sc=8) 00:22:22.631 starting I/O failed: -6 00:22:22.631 Write completed with error (sct=0, sc=8) 00:22:22.631 starting I/O failed: -6 00:22:22.631 Write completed with error (sct=0, sc=8) 00:22:22.631 starting I/O failed: -6 00:22:22.631 Write completed with error (sct=0, sc=8) 00:22:22.631 starting I/O failed: -6 00:22:22.631 Write completed with error (sct=0, sc=8) 00:22:22.631 starting I/O failed: -6 00:22:22.631 Write completed with error (sct=0, sc=8) 00:22:22.631 starting I/O failed: -6 00:22:22.631 Write completed with error (sct=0, sc=8) 00:22:22.631 starting I/O failed: -6 00:22:22.631 Write completed with error (sct=0, sc=8) 00:22:22.631 starting I/O failed: -6 00:22:22.631 Write completed with error (sct=0, sc=8) 00:22:22.631 starting I/O failed: -6 00:22:22.631 Write completed with error (sct=0, sc=8) 00:22:22.631 starting I/O failed: -6 00:22:22.631 Write completed with error (sct=0, sc=8) 00:22:22.631 starting I/O failed: -6 00:22:22.631 Write completed with error (sct=0, sc=8) 00:22:22.631 starting I/O failed: -6 00:22:22.631 Write completed with error (sct=0, sc=8) 00:22:22.631 starting I/O failed: -6 00:22:22.631 Write completed with error (sct=0, sc=8) 00:22:22.631 starting I/O failed: -6 00:22:22.631 Write completed with error (sct=0, sc=8) 00:22:22.631 starting I/O failed: -6 00:22:22.631 Write completed with error (sct=0, sc=8) 00:22:22.631 starting I/O failed: -6 00:22:22.631 [2024-11-28 08:21:04.432884] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:22:22.631 NVMe io qpair process completion error 00:22:22.631 Write completed 
with error (sct=0, sc=8) 00:22:22.631 Write completed with error (sct=0, sc=8) 00:22:22.631 Write completed with error (sct=0, sc=8) 00:22:22.631 starting I/O failed: -6 00:22:22.631 Write completed with error (sct=0, sc=8) 00:22:22.631 Write completed with error (sct=0, sc=8) 00:22:22.631 Write completed with error (sct=0, sc=8) 00:22:22.631 Write completed with error (sct=0, sc=8) 00:22:22.631 starting I/O failed: -6 00:22:22.631 Write completed with error (sct=0, sc=8) 00:22:22.631 Write completed with error (sct=0, sc=8) 00:22:22.631 Write completed with error (sct=0, sc=8) 00:22:22.631 Write completed with error (sct=0, sc=8) 00:22:22.631 starting I/O failed: -6 00:22:22.631 Write completed with error (sct=0, sc=8) 00:22:22.631 Write completed with error (sct=0, sc=8) 00:22:22.631 Write completed with error (sct=0, sc=8) 00:22:22.631 Write completed with error (sct=0, sc=8) 00:22:22.631 starting I/O failed: -6 00:22:22.631 Write completed with error (sct=0, sc=8) 00:22:22.631 Write completed with error (sct=0, sc=8) 00:22:22.631 Write completed with error (sct=0, sc=8) 00:22:22.631 Write completed with error (sct=0, sc=8) 00:22:22.631 starting I/O failed: -6 00:22:22.631 Write completed with error (sct=0, sc=8) 00:22:22.631 Write completed with error (sct=0, sc=8) 00:22:22.631 Write completed with error (sct=0, sc=8) 00:22:22.631 Write completed with error (sct=0, sc=8) 00:22:22.631 starting I/O failed: -6 00:22:22.631 Write completed with error (sct=0, sc=8) 00:22:22.631 Write completed with error (sct=0, sc=8) 00:22:22.631 Write completed with error (sct=0, sc=8) 00:22:22.631 Write completed with error (sct=0, sc=8) 00:22:22.631 starting I/O failed: -6 00:22:22.631 Write completed with error (sct=0, sc=8) 00:22:22.631 Write completed with error (sct=0, sc=8) 00:22:22.631 Write completed with error (sct=0, sc=8) 00:22:22.631 Write completed with error (sct=0, sc=8) 00:22:22.631 starting I/O failed: -6 00:22:22.631 Write completed with error (sct=0, sc=8) 
00:22:22.631 Write completed with error (sct=0, sc=8)
00:22:22.631 starting I/O failed: -6
[... repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" lines omitted ...]
00:22:22.631 [2024-11-28 08:21:04.434072] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 2
[... repeated write-error / I/O-failed lines omitted ...]
00:22:22.632 [2024-11-28 08:21:04.434925] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 1
[... repeated write-error / I/O-failed lines omitted ...]
00:22:22.632 [2024-11-28 08:21:04.436004] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 4
[... repeated write-error / I/O-failed lines omitted ...]
00:22:22.633 [2024-11-28 08:21:04.439135] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:22:22.633 NVMe io qpair process completion error
00:22:22.633 Initializing NVMe Controllers
00:22:22.633 Attached to NVMe over Fabrics controller
at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode10
00:22:22.633 Controller IO queue size 128, less than required.
00:22:22.633 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
[... the same "Controller IO queue size 128" warning pair repeated after each of the attach lines below ...]
00:22:22.633 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode2
00:22:22.633 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode3
00:22:22.633 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode5
00:22:22.633 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:22:22.633 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode9
00:22:22.633 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode4
00:22:22.633 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode6
00:22:22.633 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode7
00:22:22.633 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode8
00:22:22.633 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 with lcore 0
00:22:22.633 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 with lcore 0
00:22:22.633 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 with lcore 0
00:22:22.633 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 with lcore 0
00:22:22.633 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:22:22.633 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 with lcore 0
00:22:22.633 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 with lcore 0
00:22:22.633 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 with lcore 0
00:22:22.633 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 with lcore 0
00:22:22.633 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 with lcore 0
00:22:22.633 Initialization complete. Launching workers.
00:22:22.633 ========================================================
00:22:22.633 Latency(us)
00:22:22.633 Device Information                                                 :       IOPS      MiB/s    Average        min        max
00:22:22.633 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 from core 0:    2112.89      90.79   60585.69     720.19  100561.42
00:22:22.633 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 from core 0:     2133.17      91.66   60021.03     708.75  114615.76
00:22:22.633 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 from core 0:     2167.33      93.13   59086.44     766.44  112454.29
00:22:22.633 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 from core 0:     2185.04      93.89   58620.25     909.06  109040.46
00:22:22.633 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0:     2197.64      94.43   58299.63     731.40  102979.62
00:22:22.633 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 from core 0:     2192.94      94.23   58463.93     823.36  108287.56
00:22:22.633 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 from core 0:     2158.57      92.75   59423.63     941.40  107499.07
00:22:22.633 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 from core 0:     2109.69      90.65   60816.67     913.69  113639.73
00:22:22.633 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 from core 0:     2141.71      92.03   59929.36     526.77  107175.09
00:22:22.633 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 from core 0:     2170.31      93.26   59181.17     868.22  120839.97
00:22:22.633 ========================================================
00:22:22.633 Total                                                              :   21569.30     926.81   59431.26     526.77  120839.97
00:22:22.633
00:22:22.633 [2024-11-28 08:21:04.442172] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x738900 is same with the state(6) to be set
[... same recv-state error repeated for tqpair=0x738ae0, 0x736560, 0x736bc0, 0x738720, 0x737a70, 0x736890, 0x736ef0, 0x737410 and 0x737740 ...]
00:22:22.633 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:22:22.633 08:21:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@156 -- # sleep 1
00:22:23.568 08:21:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@158 -- # NOT wait 1422087
00:22:23.568 08:21:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@652 -- # local es=0
00:22:23.568 08:21:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@654 -- # valid_exec_arg wait 1422087
00:22:23.568 08:21:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@640
-- # local arg=wait 00:22:23.568 08:21:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:23.568 08:21:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # type -t wait 00:22:23.568 08:21:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:23.568 08:21:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@655 -- # wait 1422087 00:22:23.568 08:21:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@655 -- # es=1 00:22:23.568 08:21:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:23.568 08:21:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:23.568 08:21:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:23.568 08:21:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@159 -- # stoptarget 00:22:23.568 08:21:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:22:23.568 08:21:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:22:23.568 08:21:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:23.568 08:21:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@46 -- # nvmftestfini 00:22:23.569 08:21:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@516 -- # 
nvmfcleanup 00:22:23.569 08:21:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@121 -- # sync 00:22:23.569 08:21:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:23.569 08:21:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@124 -- # set +e 00:22:23.569 08:21:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:23.569 08:21:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:23.569 rmmod nvme_tcp 00:22:23.569 rmmod nvme_fabrics 00:22:23.569 rmmod nvme_keyring 00:22:23.828 08:21:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:23.828 08:21:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@128 -- # set -e 00:22:23.828 08:21:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@129 -- # return 0 00:22:23.828 08:21:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@517 -- # '[' -n 1422010 ']' 00:22:23.828 08:21:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@518 -- # killprocess 1422010 00:22:23.828 08:21:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # '[' -z 1422010 ']' 00:22:23.828 08:21:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # kill -0 1422010 00:22:23.828 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (1422010) - No such process 00:22:23.828 08:21:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@981 -- # echo 'Process with pid 1422010 is not found' 00:22:23.828 Process with pid 1422010 is not found 00:22:23.828 08:21:05 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:23.828 08:21:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:23.828 08:21:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:23.828 08:21:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@297 -- # iptr 00:22:23.829 08:21:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # iptables-save 00:22:23.829 08:21:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:23.829 08:21:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # iptables-restore 00:22:23.829 08:21:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:23.829 08:21:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:23.829 08:21:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:23.829 08:21:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:23.829 08:21:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:25.734 08:21:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:25.734 00:22:25.734 real 0m9.811s 00:22:25.734 user 0m24.920s 00:22:25.734 sys 0m5.220s 00:22:25.734 08:21:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:25.734 08:21:07 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:22:25.734 ************************************ 00:22:25.734 END TEST nvmf_shutdown_tc4 00:22:25.734 ************************************ 00:22:25.734 08:21:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@170 -- # trap - SIGINT SIGTERM EXIT 00:22:25.734 00:22:25.734 real 0m39.524s 00:22:25.734 user 1m36.880s 00:22:25.734 sys 0m13.581s 00:22:25.734 08:21:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:25.734 08:21:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:22:25.734 ************************************ 00:22:25.734 END TEST nvmf_shutdown 00:22:25.734 ************************************ 00:22:25.994 08:21:08 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@67 -- # run_test nvmf_nsid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:22:25.994 08:21:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:25.994 08:21:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:25.994 08:21:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:22:25.994 ************************************ 00:22:25.994 START TEST nvmf_nsid 00:22:25.994 ************************************ 00:22:25.994 08:21:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:22:25.994 * Looking for test storage... 
00:22:25.994 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:25.994 08:21:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:22:25.994 08:21:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1693 -- # lcov --version 00:22:25.994 08:21:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:22:25.994 08:21:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:22:25.994 08:21:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:25.994 08:21:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:25.994 08:21:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:25.994 08:21:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # IFS=.-: 00:22:25.994 08:21:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # read -ra ver1 00:22:25.994 08:21:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # IFS=.-: 00:22:25.994 08:21:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # read -ra ver2 00:22:25.994 08:21:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@338 -- # local 'op=<' 00:22:25.994 08:21:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@340 -- # ver1_l=2 00:22:25.994 08:21:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@341 -- # ver2_l=1 00:22:25.994 08:21:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:25.994 08:21:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@344 -- # case "$op" in 00:22:25.994 08:21:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@345 -- # : 1 00:22:25.994 08:21:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:25.994 
08:21:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:25.994 08:21:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # decimal 1 00:22:25.994 08:21:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=1 00:22:25.994 08:21:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:25.994 08:21:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 1 00:22:25.994 08:21:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # ver1[v]=1 00:22:25.994 08:21:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # decimal 2 00:22:25.994 08:21:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=2 00:22:25.994 08:21:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:25.994 08:21:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 2 00:22:25.994 08:21:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # ver2[v]=2 00:22:25.994 08:21:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:25.994 08:21:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:25.994 08:21:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # return 0 00:22:25.994 08:21:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:25.994 08:21:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:22:25.994 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:25.994 --rc genhtml_branch_coverage=1 00:22:25.994 --rc genhtml_function_coverage=1 00:22:25.994 --rc genhtml_legend=1 00:22:25.994 --rc geninfo_all_blocks=1 00:22:25.994 --rc 
geninfo_unexecuted_blocks=1 00:22:25.994 00:22:25.994 ' 00:22:25.994 08:21:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:22:25.994 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:25.994 --rc genhtml_branch_coverage=1 00:22:25.994 --rc genhtml_function_coverage=1 00:22:25.994 --rc genhtml_legend=1 00:22:25.994 --rc geninfo_all_blocks=1 00:22:25.994 --rc geninfo_unexecuted_blocks=1 00:22:25.994 00:22:25.994 ' 00:22:25.994 08:21:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:22:25.994 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:25.994 --rc genhtml_branch_coverage=1 00:22:25.994 --rc genhtml_function_coverage=1 00:22:25.994 --rc genhtml_legend=1 00:22:25.994 --rc geninfo_all_blocks=1 00:22:25.994 --rc geninfo_unexecuted_blocks=1 00:22:25.994 00:22:25.994 ' 00:22:25.994 08:21:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:22:25.994 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:25.994 --rc genhtml_branch_coverage=1 00:22:25.994 --rc genhtml_function_coverage=1 00:22:25.994 --rc genhtml_legend=1 00:22:25.994 --rc geninfo_all_blocks=1 00:22:25.994 --rc geninfo_unexecuted_blocks=1 00:22:25.994 00:22:25.994 ' 00:22:25.995 08:21:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:25.995 08:21:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # uname -s 00:22:25.995 08:21:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:25.995 08:21:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:25.995 08:21:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:25.995 08:21:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 
00:22:25.995 08:21:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:25.995 08:21:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:25.995 08:21:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:25.995 08:21:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:25.995 08:21:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:25.995 08:21:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:25.995 08:21:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:25.995 08:21:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:22:25.995 08:21:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:25.995 08:21:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:25.995 08:21:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:25.995 08:21:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:25.995 08:21:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:25.995 08:21:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@15 -- # shopt -s extglob 00:22:25.995 08:21:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:25.995 08:21:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:25.995 08:21:08 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:25.995 08:21:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:25.995 08:21:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:25.995 08:21:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:25.995 08:21:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@5 -- # export PATH 00:22:25.995 08:21:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:25.995 08:21:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@51 -- # : 0 00:22:25.995 08:21:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:25.995 08:21:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:25.995 08:21:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:25.995 08:21:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:26.254 08:21:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:26.254 08:21:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:26.254 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:26.254 08:21:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:26.254 08:21:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:26.254 08:21:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:26.254 08:21:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@11 -- # subnqn1=nqn.2024-10.io.spdk:cnode0 00:22:26.254 08:21:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@12 -- # subnqn2=nqn.2024-10.io.spdk:cnode1 00:22:26.254 08:21:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@13 -- # subnqn3=nqn.2024-10.io.spdk:cnode2 00:22:26.254 08:21:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@14 -- # tgt2sock=/var/tmp/tgt2.sock 00:22:26.254 08:21:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@15 -- # tgt2pid= 00:22:26.254 08:21:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@46 -- # nvmftestinit 00:22:26.254 08:21:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:26.254 08:21:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:26.254 08:21:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:26.254 08:21:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:26.254 08:21:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:26.254 08:21:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:26.254 08:21:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # 
eval '_remove_spdk_ns 15> /dev/null' 00:22:26.254 08:21:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:26.254 08:21:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:26.254 08:21:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:26.254 08:21:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@309 -- # xtrace_disable 00:22:26.254 08:21:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:22:31.526 08:21:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:31.526 08:21:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # pci_devs=() 00:22:31.526 08:21:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:31.526 08:21:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:31.526 08:21:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:31.526 08:21:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:31.526 08:21:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:31.526 08:21:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@319 -- # net_devs=() 00:22:31.526 08:21:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:31.526 08:21:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@320 -- # e810=() 00:22:31.526 08:21:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@320 -- # local -ga e810 00:22:31.526 08:21:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # x722=() 00:22:31.526 08:21:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # local -ga x722 00:22:31.526 08:21:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
nvmf/common.sh@322 -- # mlx=() 00:22:31.526 08:21:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@322 -- # local -ga mlx 00:22:31.526 08:21:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:31.526 08:21:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:31.526 08:21:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:31.526 08:21:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:31.526 08:21:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:31.526 08:21:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:31.526 08:21:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:31.526 08:21:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:31.526 08:21:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:31.526 08:21:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:31.526 08:21:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:31.526 08:21:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:31.526 08:21:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:31.526 08:21:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:31.526 08:21:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@353 -- # [[ e810 == 
mlx5 ]] 00:22:31.526 08:21:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:31.526 08:21:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:31.526 08:21:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:31.526 08:21:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:31.526 08:21:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:22:31.526 Found 0000:86:00.0 (0x8086 - 0x159b) 00:22:31.526 08:21:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:31.526 08:21:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:31.526 08:21:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:31.526 08:21:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:31.526 08:21:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:31.526 08:21:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:31.526 08:21:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:22:31.526 Found 0000:86:00.1 (0x8086 - 0x159b) 00:22:31.526 08:21:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:31.526 08:21:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:31.526 08:21:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:31.526 08:21:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:31.526 08:21:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@378 -- # [[ tcp == 
rdma ]] 00:22:31.526 08:21:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:31.526 08:21:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:31.526 08:21:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:31.526 08:21:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:31.526 08:21:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:31.526 08:21:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:31.526 08:21:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:31.526 08:21:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:31.526 08:21:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:31.526 08:21:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:31.526 08:21:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:22:31.526 Found net devices under 0000:86:00.0: cvl_0_0 00:22:31.526 08:21:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:31.526 08:21:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:31.526 08:21:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:31.526 08:21:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:31.526 08:21:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:31.526 08:21:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@418 -- # [[ up 
== up ]] 00:22:31.526 08:21:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:31.526 08:21:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:31.526 08:21:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:22:31.526 Found net devices under 0000:86:00.1: cvl_0_1 00:22:31.526 08:21:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:31.526 08:21:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:31.526 08:21:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # is_hw=yes 00:22:31.526 08:21:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:31.526 08:21:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:31.526 08:21:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:31.526 08:21:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:31.526 08:21:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:31.526 08:21:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:31.526 08:21:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:31.526 08:21:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:31.526 08:21:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:31.526 08:21:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:31.526 08:21:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:31.526 08:21:13 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:31.526 08:21:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:31.526 08:21:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:31.526 08:21:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:31.526 08:21:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:31.526 08:21:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:31.526 08:21:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:31.526 08:21:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:31.526 08:21:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:31.526 08:21:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:31.526 08:21:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:31.785 08:21:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:31.785 08:21:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:31.785 08:21:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:31.785 08:21:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:31.785 PING 10.0.0.2 (10.0.0.2) 
56(84) bytes of data. 00:22:31.785 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.342 ms 00:22:31.785 00:22:31.785 --- 10.0.0.2 ping statistics --- 00:22:31.785 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:31.785 rtt min/avg/max/mdev = 0.342/0.342/0.342/0.000 ms 00:22:31.785 08:21:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:31.785 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:31.785 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.194 ms 00:22:31.785 00:22:31.785 --- 10.0.0.1 ping statistics --- 00:22:31.785 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:31.785 rtt min/avg/max/mdev = 0.194/0.194/0.194/0.000 ms 00:22:31.785 08:21:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:31.785 08:21:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@450 -- # return 0 00:22:31.785 08:21:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:31.785 08:21:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:31.785 08:21:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:31.785 08:21:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:31.785 08:21:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:31.785 08:21:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:31.785 08:21:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:31.785 08:21:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@47 -- # nvmfappstart -m 1 00:22:31.785 08:21:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:31.785 08:21:13 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:31.785 08:21:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:22:31.785 08:21:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@509 -- # nvmfpid=1427222 00:22:31.785 08:21:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@510 -- # waitforlisten 1427222 00:22:31.785 08:21:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 1 00:22:31.785 08:21:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 1427222 ']' 00:22:31.785 08:21:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:31.785 08:21:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:31.785 08:21:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:31.785 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:31.785 08:21:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:31.785 08:21:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:22:31.785 [2024-11-28 08:21:13.982040] Starting SPDK v25.01-pre git sha1 27aaaa748 / DPDK 24.03.0 initialization... 
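The `waitforlisten 1427222` step above blocks until the freshly started `nvmf_tgt` process is up and listening on its RPC socket (`/var/tmp/spdk.sock`). A minimal sketch of that kind of wait loop, under stated assumptions: the real helper also checks that the pid is alive and probes the socket via `rpc.py`, and the function name, parameter names, and retry limit here are illustrative, not the actual implementation.

```shell
# Poll until an SPDK app has created its RPC unix socket, giving up after
# max_tries attempts (simplified stand-in for the waitforlisten helper;
# the real one also verifies the pid and issues an RPC probe).
wait_for_rpc_sock() {
    local sock=$1 max_tries=${2:-100} i=0
    while [ ! -S "$sock" ]; do        # -S: path exists and is a socket
        i=$((i + 1))
        [ "$i" -ge "$max_tries" ] && return 1
        sleep 0.1
    done
    return 0
}
```

Polling the filesystem for the socket is enough to know the app reached its listen call, which is why the trace can immediately follow up with `rpc.py -s /var/tmp/tgt2.sock` commands once the wait returns.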
00:22:31.785 [2024-11-28 08:21:13.982091] [ DPDK EAL parameters: nvmf -c 1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:31.785 [2024-11-28 08:21:14.049568] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:32.044 [2024-11-28 08:21:14.090774] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:32.045 [2024-11-28 08:21:14.090811] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:32.045 [2024-11-28 08:21:14.090819] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:32.045 [2024-11-28 08:21:14.090825] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:32.045 [2024-11-28 08:21:14.090831] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:32.045 [2024-11-28 08:21:14.091431] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:32.045 08:21:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:32.045 08:21:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:22:32.045 08:21:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:32.045 08:21:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:32.045 08:21:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:22:32.045 08:21:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:32.045 08:21:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@49 -- # trap cleanup SIGINT SIGTERM EXIT 00:22:32.045 08:21:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@52 -- # tgt2pid=1427276 00:22:32.045 08:21:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/tgt2.sock 00:22:32.045 08:21:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@54 -- # tgt1addr=10.0.0.2 00:22:32.045 08:21:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # get_main_ns_ip 00:22:32.045 08:21:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@769 -- # local ip 00:22:32.045 08:21:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:32.045 08:21:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:32.045 08:21:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:32.045 08:21:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:32.045 
08:21:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:22:32.045 08:21:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:32.045 08:21:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:22:32.045 08:21:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:22:32.045 08:21:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:22:32.045 08:21:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # tgt2addr=10.0.0.1 00:22:32.045 08:21:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # uuidgen 00:22:32.045 08:21:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # ns1uuid=49f0dcdd-e80c-4d97-abdf-d92bab3c6eb4 00:22:32.045 08:21:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # uuidgen 00:22:32.045 08:21:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # ns2uuid=914ae3cd-29a7-449e-b64d-bf991348ed43 00:22:32.045 08:21:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # uuidgen 00:22:32.045 08:21:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # ns3uuid=86b7a01c-31f9-407d-be45-c9cfe44eb0d7 00:22:32.045 08:21:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@63 -- # rpc_cmd 00:22:32.045 08:21:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:32.045 08:21:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:22:32.045 null0 00:22:32.045 null1 00:22:32.045 null2 00:22:32.045 [2024-11-28 08:21:14.273612] Starting SPDK v25.01-pre git sha1 27aaaa748 / DPDK 24.03.0 initialization... 
00:22:32.045 [2024-11-28 08:21:14.273655] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1427276 ] 00:22:32.045 [2024-11-28 08:21:14.276175] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:32.045 [2024-11-28 08:21:14.300383] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:32.304 [2024-11-28 08:21:14.334859] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:32.304 08:21:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:32.304 08:21:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@79 -- # waitforlisten 1427276 /var/tmp/tgt2.sock 00:22:32.304 08:21:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 1427276 ']' 00:22:32.304 08:21:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/tgt2.sock 00:22:32.304 08:21:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:32.304 08:21:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock...' 00:22:32.304 Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock... 
00:22:32.304 08:21:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:32.304 08:21:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:22:32.304 [2024-11-28 08:21:14.376665] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:32.563 08:21:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:32.563 08:21:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:22:32.563 08:21:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/tgt2.sock 00:22:32.822 [2024-11-28 08:21:14.906935] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:32.822 [2024-11-28 08:21:14.923050] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.1 port 4421 *** 00:22:32.822 nvme0n1 nvme0n2 00:22:32.822 nvme1n1 00:22:32.822 08:21:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # nvme_connect 00:22:32.822 08:21:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@23 -- # local ctrlr 00:22:32.822 08:21:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@25 -- # nvme connect -t tcp -a 10.0.0.1 -s 4421 -n nqn.2024-10.io.spdk:cnode2 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 00:22:33.758 08:21:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@28 -- # for ctrlr in /sys/class/nvme/nvme* 00:22:33.758 08:21:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ -e /sys/class/nvme/nvme0/subsysnqn ]] 00:22:33.758 08:21:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ nqn.2024-10.io.spdk:cnode2 == \n\q\n\.\2\0\2\4\-\1\0\.\i\o\.\s\p\d\k\:\c\n\o\d\e\2 ]] 00:22:33.758 08:21:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@31 -- # echo nvme0 
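After the `nvme connect` above, the trace's `nvme_connect` helper (nsid.sh@28-31) discovers which controller device the kernel assigned by scanning `/sys/class/nvme/nvme*` for a matching `subsysnqn`, then echoes the controller name (`nvme0`). A minimal sketch of that scan; the sysfs root is a parameter here purely for testability (an assumption — the real helper hardcodes `/sys/class/nvme`):

```shell
# Find the controller whose subsystem NQN matches the one we connected to,
# by reading each controller's subsysnqn attribute out of sysfs.
find_ctrlr_by_nqn() {
    local subnqn=$1 root=${2:-/sys/class/nvme} ctrlr
    for ctrlr in "$root"/nvme*; do
        [ -e "$ctrlr/subsysnqn" ] || continue
        if [ "$(cat "$ctrlr/subsysnqn")" = "$subnqn" ]; then
            basename "$ctrlr"   # controller name, e.g. nvme0
            return 0
        fi
    done
    return 1
}
```

Scanning sysfs rather than parsing `nvme connect` output keeps the lookup robust when several controllers already exist on the host, which is the situation on a shared CI node like this one.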
00:22:33.758 08:21:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@32 -- # return 0 00:22:33.758 08:21:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # ctrlr=nvme0 00:22:33.758 08:21:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@95 -- # waitforblk nvme0n1 00:22:33.758 08:21:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:22:33.758 08:21:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:22:33.758 08:21:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:22:33.758 08:21:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1241 -- # '[' 0 -lt 15 ']' 00:22:33.758 08:21:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1242 -- # i=1 00:22:33.758 08:21:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1243 -- # sleep 1 00:22:35.137 08:21:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:22:35.137 08:21:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:22:35.137 08:21:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:22:35.137 08:21:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:22:35.137 08:21:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:22:35.137 08:21:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # uuid2nguid 49f0dcdd-e80c-4d97-abdf-d92bab3c6eb4 00:22:35.137 08:21:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:22:35.137 08:21:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # nvme_get_nguid nvme0 1 00:22:35.137 08:21:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=1 nguid 00:22:35.137 
08:21:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n1 -o json 00:22:35.137 08:21:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:22:35.137 08:21:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=49f0dcdde80c4d97abdfd92bab3c6eb4 00:22:35.137 08:21:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 49F0DCDDE80C4D97ABDFD92BAB3C6EB4 00:22:35.137 08:21:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # [[ 49F0DCDDE80C4D97ABDFD92BAB3C6EB4 == \4\9\F\0\D\C\D\D\E\8\0\C\4\D\9\7\A\B\D\F\D\9\2\B\A\B\3\C\6\E\B\4 ]] 00:22:35.137 08:21:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@97 -- # waitforblk nvme0n2 00:22:35.137 08:21:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:22:35.137 08:21:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:22:35.137 08:21:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n2 00:22:35.137 08:21:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:22:35.137 08:21:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n2 00:22:35.137 08:21:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:22:35.137 08:21:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # uuid2nguid 914ae3cd-29a7-449e-b64d-bf991348ed43 00:22:35.137 08:21:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:22:35.137 08:21:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # nvme_get_nguid nvme0 2 00:22:35.137 08:21:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=2 nguid 00:22:35.137 08:21:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n2 -o json 
00:22:35.137 08:21:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:22:35.137 08:21:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=914ae3cd29a7449eb64dbf991348ed43 00:22:35.137 08:21:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 914AE3CD29A7449EB64DBF991348ED43 00:22:35.137 08:21:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # [[ 914AE3CD29A7449EB64DBF991348ED43 == \9\1\4\A\E\3\C\D\2\9\A\7\4\4\9\E\B\6\4\D\B\F\9\9\1\3\4\8\E\D\4\3 ]] 00:22:35.137 08:21:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@99 -- # waitforblk nvme0n3 00:22:35.137 08:21:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:22:35.138 08:21:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:22:35.138 08:21:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n3 00:22:35.138 08:21:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n3 00:22:35.138 08:21:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:22:35.138 08:21:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:22:35.138 08:21:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # uuid2nguid 86b7a01c-31f9-407d-be45-c9cfe44eb0d7 00:22:35.138 08:21:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:22:35.138 08:21:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # nvme_get_nguid nvme0 3 00:22:35.138 08:21:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=3 nguid 00:22:35.138 08:21:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n3 -o json 00:22:35.138 08:21:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 
00:22:35.138 08:21:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=86b7a01c31f9407dbe45c9cfe44eb0d7 00:22:35.138 08:21:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 86B7A01C31F9407DBE45C9CFE44EB0D7 00:22:35.138 08:21:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # [[ 86B7A01C31F9407DBE45C9CFE44EB0D7 == \8\6\B\7\A\0\1\C\3\1\F\9\4\0\7\D\B\E\4\5\C\9\C\F\E\4\4\E\B\0\D\7 ]] 00:22:35.138 08:21:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@101 -- # nvme disconnect -d /dev/nvme0 00:22:35.397 08:21:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@103 -- # trap - SIGINT SIGTERM EXIT 00:22:35.397 08:21:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@104 -- # cleanup 00:22:35.397 08:21:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@18 -- # killprocess 1427276 00:22:35.397 08:21:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 1427276 ']' 00:22:35.397 08:21:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 1427276 00:22:35.397 08:21:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:22:35.397 08:21:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:35.397 08:21:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1427276 00:22:35.397 08:21:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:22:35.397 08:21:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:22:35.397 08:21:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1427276' 00:22:35.397 killing process with pid 1427276 00:22:35.397 08:21:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 1427276 00:22:35.397 
08:21:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 1427276 00:22:35.657 08:21:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@19 -- # nvmftestfini 00:22:35.657 08:21:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:35.657 08:21:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@121 -- # sync 00:22:35.657 08:21:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:35.657 08:21:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@124 -- # set +e 00:22:35.657 08:21:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:35.657 08:21:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:35.657 rmmod nvme_tcp 00:22:35.657 rmmod nvme_fabrics 00:22:35.657 rmmod nvme_keyring 00:22:35.657 08:21:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:35.657 08:21:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@128 -- # set -e 00:22:35.657 08:21:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@129 -- # return 0 00:22:35.657 08:21:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@517 -- # '[' -n 1427222 ']' 00:22:35.657 08:21:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@518 -- # killprocess 1427222 00:22:35.657 08:21:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 1427222 ']' 00:22:35.657 08:21:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 1427222 00:22:35.657 08:21:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:22:35.657 08:21:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:35.657 08:21:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1427222 00:22:35.657 
08:21:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:35.657 08:21:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:35.657 08:21:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1427222' 00:22:35.657 killing process with pid 1427222 00:22:35.657 08:21:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 1427222 00:22:35.657 08:21:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 1427222 00:22:35.917 08:21:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:35.917 08:21:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:35.917 08:21:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:35.917 08:21:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@297 -- # iptr 00:22:35.917 08:21:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:35.917 08:21:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-save 00:22:35.917 08:21:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-restore 00:22:35.917 08:21:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:35.917 08:21:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:35.917 08:21:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:35.917 08:21:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:35.917 08:21:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:38.487 08:21:20 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:38.487 00:22:38.487 real 0m12.084s 00:22:38.487 user 0m9.512s 00:22:38.487 sys 0m5.264s 00:22:38.487 08:21:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:38.487 08:21:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:22:38.487 ************************************ 00:22:38.487 END TEST nvmf_nsid 00:22:38.487 ************************************ 00:22:38.487 08:21:20 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:22:38.487 00:22:38.487 real 11m46.727s 00:22:38.487 user 25m37.112s 00:22:38.487 sys 3m35.979s 00:22:38.487 08:21:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:38.487 08:21:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:22:38.487 ************************************ 00:22:38.487 END TEST nvmf_target_extra 00:22:38.487 ************************************ 00:22:38.487 08:21:20 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:22:38.487 08:21:20 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:38.487 08:21:20 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:38.487 08:21:20 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:38.487 ************************************ 00:22:38.487 START TEST nvmf_host 00:22:38.487 ************************************ 00:22:38.487 08:21:20 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:22:38.487 * Looking for test storage... 
00:22:38.487 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:22:38.487 08:21:20 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:22:38.487 08:21:20 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1693 -- # lcov --version 00:22:38.487 08:21:20 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:22:38.487 08:21:20 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:22:38.487 08:21:20 nvmf_tcp.nvmf_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:38.487 08:21:20 nvmf_tcp.nvmf_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:38.487 08:21:20 nvmf_tcp.nvmf_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:38.487 08:21:20 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # IFS=.-: 00:22:38.487 08:21:20 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # read -ra ver1 00:22:38.487 08:21:20 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # IFS=.-: 00:22:38.487 08:21:20 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # read -ra ver2 00:22:38.487 08:21:20 nvmf_tcp.nvmf_host -- scripts/common.sh@338 -- # local 'op=<' 00:22:38.487 08:21:20 nvmf_tcp.nvmf_host -- scripts/common.sh@340 -- # ver1_l=2 00:22:38.487 08:21:20 nvmf_tcp.nvmf_host -- scripts/common.sh@341 -- # ver2_l=1 00:22:38.487 08:21:20 nvmf_tcp.nvmf_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:38.487 08:21:20 nvmf_tcp.nvmf_host -- scripts/common.sh@344 -- # case "$op" in 00:22:38.487 08:21:20 nvmf_tcp.nvmf_host -- scripts/common.sh@345 -- # : 1 00:22:38.487 08:21:20 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:38.487 08:21:20 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:38.487 08:21:20 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # decimal 1 00:22:38.487 08:21:20 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=1 00:22:38.487 08:21:20 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:38.487 08:21:20 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 1 00:22:38.487 08:21:20 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # ver1[v]=1 00:22:38.487 08:21:20 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # decimal 2 00:22:38.487 08:21:20 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=2 00:22:38.487 08:21:20 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:38.487 08:21:20 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 2 00:22:38.487 08:21:20 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # ver2[v]=2 00:22:38.487 08:21:20 nvmf_tcp.nvmf_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:38.487 08:21:20 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:38.487 08:21:20 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # return 0 00:22:38.487 08:21:20 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:38.487 08:21:20 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:22:38.487 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:38.487 --rc genhtml_branch_coverage=1 00:22:38.487 --rc genhtml_function_coverage=1 00:22:38.487 --rc genhtml_legend=1 00:22:38.487 --rc geninfo_all_blocks=1 00:22:38.487 --rc geninfo_unexecuted_blocks=1 00:22:38.487 00:22:38.487 ' 00:22:38.487 08:21:20 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:22:38.487 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:38.487 --rc genhtml_branch_coverage=1 00:22:38.487 --rc genhtml_function_coverage=1 00:22:38.487 --rc genhtml_legend=1 00:22:38.487 --rc 
geninfo_all_blocks=1 00:22:38.487 --rc geninfo_unexecuted_blocks=1 00:22:38.487 00:22:38.487 ' 00:22:38.487 08:21:20 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:22:38.487 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:38.487 --rc genhtml_branch_coverage=1 00:22:38.487 --rc genhtml_function_coverage=1 00:22:38.487 --rc genhtml_legend=1 00:22:38.488 --rc geninfo_all_blocks=1 00:22:38.488 --rc geninfo_unexecuted_blocks=1 00:22:38.488 00:22:38.488 ' 00:22:38.488 08:21:20 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:22:38.488 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:38.488 --rc genhtml_branch_coverage=1 00:22:38.488 --rc genhtml_function_coverage=1 00:22:38.488 --rc genhtml_legend=1 00:22:38.488 --rc geninfo_all_blocks=1 00:22:38.488 --rc geninfo_unexecuted_blocks=1 00:22:38.488 00:22:38.488 ' 00:22:38.488 08:21:20 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:38.488 08:21:20 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:22:38.488 08:21:20 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:38.488 08:21:20 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:38.488 08:21:20 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:38.488 08:21:20 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:38.488 08:21:20 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:38.488 08:21:20 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:38.488 08:21:20 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:38.488 08:21:20 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:38.488 08:21:20 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:38.488 08:21:20 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 
-- # nvme gen-hostnqn 00:22:38.488 08:21:20 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:38.488 08:21:20 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:22:38.488 08:21:20 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:38.488 08:21:20 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:38.488 08:21:20 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:38.488 08:21:20 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:38.488 08:21:20 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:38.488 08:21:20 nvmf_tcp.nvmf_host -- scripts/common.sh@15 -- # shopt -s extglob 00:22:38.488 08:21:20 nvmf_tcp.nvmf_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:38.488 08:21:20 nvmf_tcp.nvmf_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:38.488 08:21:20 nvmf_tcp.nvmf_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:38.488 08:21:20 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:38.488 08:21:20 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:38.488 08:21:20 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:38.488 08:21:20 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:22:38.488 08:21:20 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:38.488 08:21:20 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # : 0 00:22:38.488 08:21:20 nvmf_tcp.nvmf_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:38.488 08:21:20 nvmf_tcp.nvmf_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:38.488 08:21:20 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:38.488 08:21:20 nvmf_tcp.nvmf_host -- 
nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:38.488 08:21:20 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:38.488 08:21:20 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:38.488 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:38.488 08:21:20 nvmf_tcp.nvmf_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:38.488 08:21:20 nvmf_tcp.nvmf_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:38.488 08:21:20 nvmf_tcp.nvmf_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:38.488 08:21:20 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:22:38.488 08:21:20 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:22:38.488 08:21:20 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 0 -eq 0 ]] 00:22:38.488 08:21:20 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@16 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:22:38.488 08:21:20 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:38.488 08:21:20 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:38.488 08:21:20 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:38.488 ************************************ 00:22:38.488 START TEST nvmf_multicontroller 00:22:38.488 ************************************ 00:22:38.488 08:21:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:22:38.488 * Looking for test storage... 
00:22:38.488 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:38.488 08:21:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:22:38.488 08:21:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1693 -- # lcov --version 00:22:38.488 08:21:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:22:38.488 08:21:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:22:38.488 08:21:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:38.488 08:21:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:38.488 08:21:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:38.488 08:21:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # IFS=.-: 00:22:38.488 08:21:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # read -ra ver1 00:22:38.488 08:21:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # IFS=.-: 00:22:38.488 08:21:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # read -ra ver2 00:22:38.488 08:21:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@338 -- # local 'op=<' 00:22:38.488 08:21:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@340 -- # ver1_l=2 00:22:38.488 08:21:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@341 -- # ver2_l=1 00:22:38.489 08:21:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:38.489 08:21:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@344 -- # case "$op" in 00:22:38.489 08:21:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@345 -- # : 1 00:22:38.489 08:21:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
scripts/common.sh@364 -- # (( v = 0 )) 00:22:38.489 08:21:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:38.489 08:21:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # decimal 1 00:22:38.489 08:21:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=1 00:22:38.489 08:21:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:38.489 08:21:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 1 00:22:38.489 08:21:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # ver1[v]=1 00:22:38.489 08:21:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # decimal 2 00:22:38.489 08:21:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=2 00:22:38.489 08:21:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:38.489 08:21:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 2 00:22:38.489 08:21:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # ver2[v]=2 00:22:38.489 08:21:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:38.489 08:21:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:38.489 08:21:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # return 0 00:22:38.489 08:21:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:38.489 08:21:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:22:38.489 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:38.489 --rc genhtml_branch_coverage=1 00:22:38.489 --rc genhtml_function_coverage=1 
00:22:38.489 --rc genhtml_legend=1 00:22:38.489 --rc geninfo_all_blocks=1 00:22:38.489 --rc geninfo_unexecuted_blocks=1 00:22:38.489 00:22:38.489 ' 00:22:38.489 08:21:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:22:38.489 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:38.489 --rc genhtml_branch_coverage=1 00:22:38.489 --rc genhtml_function_coverage=1 00:22:38.489 --rc genhtml_legend=1 00:22:38.489 --rc geninfo_all_blocks=1 00:22:38.489 --rc geninfo_unexecuted_blocks=1 00:22:38.489 00:22:38.489 ' 00:22:38.489 08:21:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:22:38.489 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:38.489 --rc genhtml_branch_coverage=1 00:22:38.489 --rc genhtml_function_coverage=1 00:22:38.489 --rc genhtml_legend=1 00:22:38.489 --rc geninfo_all_blocks=1 00:22:38.489 --rc geninfo_unexecuted_blocks=1 00:22:38.489 00:22:38.489 ' 00:22:38.489 08:21:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:22:38.489 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:38.489 --rc genhtml_branch_coverage=1 00:22:38.489 --rc genhtml_function_coverage=1 00:22:38.489 --rc genhtml_legend=1 00:22:38.489 --rc geninfo_all_blocks=1 00:22:38.489 --rc geninfo_unexecuted_blocks=1 00:22:38.489 00:22:38.489 ' 00:22:38.489 08:21:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:38.489 08:21:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:22:38.489 08:21:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:38.489 08:21:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:38.489 08:21:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:22:38.489 08:21:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:38.489 08:21:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:38.489 08:21:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:38.489 08:21:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:38.489 08:21:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:38.489 08:21:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:38.489 08:21:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:38.489 08:21:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:38.489 08:21:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:22:38.489 08:21:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:38.489 08:21:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:38.489 08:21:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:38.489 08:21:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:38.489 08:21:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:38.489 08:21:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@15 -- # shopt -s extglob 00:22:38.489 08:21:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh 
]] 00:22:38.489 08:21:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:38.489 08:21:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:38.489 08:21:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:38.489 08:21:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:38.489 08:21:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:38.490 08:21:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:22:38.490 08:21:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:38.490 08:21:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@51 -- # : 0 00:22:38.490 08:21:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:38.490 08:21:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:38.490 08:21:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:38.490 08:21:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:38.490 08:21:20 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:38.490 08:21:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:38.490 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:38.490 08:21:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:38.490 08:21:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:38.490 08:21:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:38.490 08:21:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:38.490 08:21:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:38.490 08:21:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:22:38.490 08:21:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:22:38.490 08:21:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:38.490 08:21:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:22:38.490 08:21:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:22:38.490 08:21:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:38.490 08:21:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:38.490 08:21:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:38.490 08:21:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:38.490 08:21:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@440 -- # remove_spdk_ns 00:22:38.490 08:21:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:38.490 08:21:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:38.490 08:21:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:38.490 08:21:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:38.490 08:21:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:38.490 08:21:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@309 -- # xtrace_disable 00:22:38.490 08:21:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:43.759 08:21:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:43.759 08:21:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # pci_devs=() 00:22:43.759 08:21:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:43.759 08:21:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:43.759 08:21:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:43.759 08:21:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:43.759 08:21:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:43.759 08:21:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # net_devs=() 00:22:43.759 08:21:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:43.760 08:21:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # e810=() 00:22:43.760 08:21:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@320 -- # local -ga e810 00:22:43.760 08:21:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # x722=() 00:22:43.760 08:21:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # local -ga x722 00:22:43.760 08:21:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # mlx=() 00:22:43.760 08:21:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # local -ga mlx 00:22:43.760 08:21:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:43.760 08:21:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:43.760 08:21:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:43.760 08:21:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:43.760 08:21:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:43.760 08:21:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:43.760 08:21:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:43.760 08:21:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:43.760 08:21:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:43.760 08:21:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:43.760 08:21:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:43.760 08:21:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@344 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:43.760 08:21:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:43.760 08:21:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:43.760 08:21:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:43.760 08:21:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:43.760 08:21:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:43.760 08:21:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:43.760 08:21:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:43.760 08:21:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:22:43.760 Found 0000:86:00.0 (0x8086 - 0x159b) 00:22:43.760 08:21:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:43.760 08:21:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:43.760 08:21:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:43.760 08:21:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:43.760 08:21:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:43.760 08:21:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:43.760 08:21:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:22:43.760 Found 0000:86:00.1 (0x8086 - 0x159b) 00:22:43.760 08:21:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:43.760 08:21:25 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:43.760 08:21:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:43.760 08:21:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:43.760 08:21:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:43.760 08:21:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:43.760 08:21:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:43.760 08:21:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:43.760 08:21:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:43.760 08:21:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:43.760 08:21:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:43.760 08:21:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:43.760 08:21:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:43.760 08:21:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:43.760 08:21:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:43.760 08:21:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:22:43.760 Found net devices under 0000:86:00.0: cvl_0_0 00:22:43.760 08:21:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:43.760 08:21:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # for pci in 
"${pci_devs[@]}" 00:22:43.760 08:21:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:43.760 08:21:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:43.760 08:21:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:43.760 08:21:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:43.760 08:21:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:43.760 08:21:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:43.760 08:21:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:22:43.760 Found net devices under 0000:86:00.1: cvl_0_1 00:22:43.760 08:21:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:43.760 08:21:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:43.760 08:21:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # is_hw=yes 00:22:43.760 08:21:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:43.760 08:21:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:43.760 08:21:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:43.760 08:21:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:43.760 08:21:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:43.760 08:21:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:43.760 08:21:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:43.760 08:21:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:43.760 08:21:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:43.760 08:21:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:43.760 08:21:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:43.760 08:21:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:43.760 08:21:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:43.760 08:21:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:43.760 08:21:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:43.760 08:21:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:43.760 08:21:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:43.760 08:21:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:43.760 08:21:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:43.761 08:21:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:43.761 08:21:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:43.761 08:21:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:43.761 08:21:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:43.761 08:21:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:43.761 08:21:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:43.761 08:21:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:43.761 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:43.761 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.458 ms 00:22:43.761 00:22:43.761 --- 10.0.0.2 ping statistics --- 00:22:43.761 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:43.761 rtt min/avg/max/mdev = 0.458/0.458/0.458/0.000 ms 00:22:43.761 08:21:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:43.761 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:43.761 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.204 ms 00:22:43.761 00:22:43.761 --- 10.0.0.1 ping statistics --- 00:22:43.761 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:43.761 rtt min/avg/max/mdev = 0.204/0.204/0.204/0.000 ms 00:22:43.761 08:21:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:43.761 08:21:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@450 -- # return 0 00:22:43.761 08:21:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:43.761 08:21:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:43.761 08:21:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:43.761 08:21:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:43.761 08:21:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:43.761 08:21:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:43.761 08:21:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:44.020 08:21:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:22:44.021 08:21:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:44.021 08:21:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:44.021 08:21:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:44.021 08:21:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@509 -- # nvmfpid=1431369 00:22:44.021 08:21:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@510 -- # waitforlisten 1431369 00:22:44.021 08:21:26 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:22:44.021 08:21:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # '[' -z 1431369 ']' 00:22:44.021 08:21:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:44.021 08:21:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:44.021 08:21:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:44.021 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:44.021 08:21:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:44.021 08:21:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:44.021 [2024-11-28 08:21:26.099164] Starting SPDK v25.01-pre git sha1 27aaaa748 / DPDK 24.03.0 initialization... 00:22:44.021 [2024-11-28 08:21:26.099215] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:44.021 [2024-11-28 08:21:26.165989] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:22:44.021 [2024-11-28 08:21:26.208985] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:44.021 [2024-11-28 08:21:26.209024] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:22:44.021 [2024-11-28 08:21:26.209031] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:44.021 [2024-11-28 08:21:26.209037] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:44.021 [2024-11-28 08:21:26.209042] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:44.021 [2024-11-28 08:21:26.210559] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:44.021 [2024-11-28 08:21:26.210645] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:44.021 [2024-11-28 08:21:26.210646] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:44.280 08:21:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:44.280 08:21:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@868 -- # return 0 00:22:44.280 08:21:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:44.280 08:21:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:44.280 08:21:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:44.280 08:21:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:44.280 08:21:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:44.280 08:21:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:44.280 08:21:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:44.280 [2024-11-28 08:21:26.348734] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:44.280 08:21:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:44.280 08:21:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:22:44.280 08:21:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:44.280 08:21:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:44.280 Malloc0 00:22:44.280 08:21:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:44.280 08:21:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:44.280 08:21:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:44.280 08:21:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:44.280 08:21:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:44.280 08:21:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:44.280 08:21:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:44.280 08:21:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:44.280 08:21:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:44.280 08:21:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:44.280 08:21:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:44.280 08:21:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:44.280 [2024-11-28 
08:21:26.411571] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:44.280 08:21:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:44.280 08:21:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:22:44.280 08:21:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:44.280 08:21:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:44.280 [2024-11-28 08:21:26.419505] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:22:44.280 08:21:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:44.280 08:21:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:22:44.280 08:21:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:44.280 08:21:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:44.280 Malloc1 00:22:44.280 08:21:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:44.280 08:21:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:22:44.280 08:21:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:44.280 08:21:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:44.280 08:21:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:44.280 08:21:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:22:44.280 08:21:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:44.280 08:21:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:44.280 08:21:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:44.280 08:21:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:22:44.280 08:21:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:44.280 08:21:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:44.280 08:21:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:44.280 08:21:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:22:44.280 08:21:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:44.280 08:21:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:44.280 08:21:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:44.280 08:21:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=1431468 00:22:44.280 08:21:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:22:44.280 08:21:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' 
SIGINT SIGTERM EXIT 00:22:44.280 08:21:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 1431468 /var/tmp/bdevperf.sock 00:22:44.280 08:21:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # '[' -z 1431468 ']' 00:22:44.280 08:21:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:44.280 08:21:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:44.280 08:21:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:44.280 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:44.280 08:21:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:44.280 08:21:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:44.538 08:21:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:44.538 08:21:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@868 -- # return 0 00:22:44.538 08:21:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:22:44.538 08:21:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:44.538 08:21:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:44.538 NVMe0n1 00:22:44.538 08:21:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:44.538 08:21:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s 
/var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:44.538 08:21:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:22:44.538 08:21:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:44.538 08:21:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:44.796 08:21:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:44.796 1 00:22:44.796 08:21:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:22:44.796 08:21:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:22:44.796 08:21:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:22:44.796 08:21:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:22:44.796 08:21:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:44.796 08:21:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:22:44.796 08:21:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:44.796 08:21:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:22:44.796 08:21:26 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:44.796 08:21:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:44.796 request: 00:22:44.796 { 00:22:44.796 "name": "NVMe0", 00:22:44.796 "trtype": "tcp", 00:22:44.796 "traddr": "10.0.0.2", 00:22:44.796 "adrfam": "ipv4", 00:22:44.796 "trsvcid": "4420", 00:22:44.796 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:44.796 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:22:44.796 "hostaddr": "10.0.0.1", 00:22:44.796 "prchk_reftag": false, 00:22:44.796 "prchk_guard": false, 00:22:44.796 "hdgst": false, 00:22:44.796 "ddgst": false, 00:22:44.796 "allow_unrecognized_csi": false, 00:22:44.796 "method": "bdev_nvme_attach_controller", 00:22:44.796 "req_id": 1 00:22:44.796 } 00:22:44.796 Got JSON-RPC error response 00:22:44.796 response: 00:22:44.796 { 00:22:44.796 "code": -114, 00:22:44.796 "message": "A controller named NVMe0 already exists with the specified network path" 00:22:44.796 } 00:22:44.796 08:21:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:22:44.796 08:21:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:22:44.796 08:21:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:44.796 08:21:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:44.796 08:21:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:44.796 08:21:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:22:44.796 08:21:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:22:44.796 08:21:26 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:22:44.796 08:21:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:22:44.796 08:21:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:44.796 08:21:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:22:44.796 08:21:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:44.796 08:21:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:22:44.796 08:21:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:44.796 08:21:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:44.796 request: 00:22:44.797 { 00:22:44.797 "name": "NVMe0", 00:22:44.797 "trtype": "tcp", 00:22:44.797 "traddr": "10.0.0.2", 00:22:44.797 "adrfam": "ipv4", 00:22:44.797 "trsvcid": "4420", 00:22:44.797 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:44.797 "hostaddr": "10.0.0.1", 00:22:44.797 "prchk_reftag": false, 00:22:44.797 "prchk_guard": false, 00:22:44.797 "hdgst": false, 00:22:44.797 "ddgst": false, 00:22:44.797 "allow_unrecognized_csi": false, 00:22:44.797 "method": "bdev_nvme_attach_controller", 00:22:44.797 "req_id": 1 00:22:44.797 } 00:22:44.797 Got JSON-RPC error response 00:22:44.797 response: 00:22:44.797 { 00:22:44.797 "code": -114, 00:22:44.797 "message": "A controller named NVMe0 already exists with the specified network path" 00:22:44.797 } 00:22:44.797 08:21:26 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:22:44.797 08:21:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:22:44.797 08:21:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:44.797 08:21:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:44.797 08:21:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:44.797 08:21:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:22:44.797 08:21:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:22:44.797 08:21:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:22:44.797 08:21:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:22:44.797 08:21:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:44.797 08:21:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:22:44.797 08:21:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:44.797 08:21:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:22:44.797 08:21:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:22:44.797 08:21:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:44.797 request: 00:22:44.797 { 00:22:44.797 "name": "NVMe0", 00:22:44.797 "trtype": "tcp", 00:22:44.797 "traddr": "10.0.0.2", 00:22:44.797 "adrfam": "ipv4", 00:22:44.797 "trsvcid": "4420", 00:22:44.797 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:44.797 "hostaddr": "10.0.0.1", 00:22:44.797 "prchk_reftag": false, 00:22:44.797 "prchk_guard": false, 00:22:44.797 "hdgst": false, 00:22:44.797 "ddgst": false, 00:22:44.797 "multipath": "disable", 00:22:44.797 "allow_unrecognized_csi": false, 00:22:44.797 "method": "bdev_nvme_attach_controller", 00:22:44.797 "req_id": 1 00:22:44.797 } 00:22:44.797 Got JSON-RPC error response 00:22:44.797 response: 00:22:44.797 { 00:22:44.797 "code": -114, 00:22:44.797 "message": "A controller named NVMe0 already exists and multipath is disabled" 00:22:44.797 } 00:22:44.797 08:21:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:22:44.797 08:21:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:22:44.797 08:21:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:44.797 08:21:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:44.797 08:21:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:44.797 08:21:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:22:44.797 08:21:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:22:44.797 08:21:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # 
valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:22:44.797 08:21:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:22:44.797 08:21:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:44.797 08:21:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:22:44.797 08:21:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:44.797 08:21:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:22:44.797 08:21:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:44.797 08:21:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:44.797 request: 00:22:44.797 { 00:22:44.797 "name": "NVMe0", 00:22:44.797 "trtype": "tcp", 00:22:44.797 "traddr": "10.0.0.2", 00:22:44.797 "adrfam": "ipv4", 00:22:44.797 "trsvcid": "4420", 00:22:44.797 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:44.797 "hostaddr": "10.0.0.1", 00:22:44.797 "prchk_reftag": false, 00:22:44.797 "prchk_guard": false, 00:22:44.797 "hdgst": false, 00:22:44.797 "ddgst": false, 00:22:44.797 "multipath": "failover", 00:22:44.797 "allow_unrecognized_csi": false, 00:22:44.797 "method": "bdev_nvme_attach_controller", 00:22:44.797 "req_id": 1 00:22:44.797 } 00:22:44.797 Got JSON-RPC error response 00:22:44.797 response: 00:22:44.797 { 00:22:44.797 "code": -114, 00:22:44.797 "message": "A controller named NVMe0 already exists with the specified network path" 00:22:44.797 } 00:22:44.797 08:21:26 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:22:44.797 08:21:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:22:44.797 08:21:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:44.797 08:21:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:44.797 08:21:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:44.797 08:21:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:44.797 08:21:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:44.797 08:21:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:45.055 NVMe0n1 00:22:45.055 08:21:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:45.055 08:21:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:45.055 08:21:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:45.055 08:21:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:45.055 08:21:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:45.055 08:21:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:22:45.055 08:21:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:22:45.055 08:21:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:45.313 00:22:45.313 08:21:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:45.313 08:21:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:45.313 08:21:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:22:45.313 08:21:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:45.313 08:21:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:45.313 08:21:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:45.313 08:21:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:22:45.313 08:21:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:46.246 { 00:22:46.246 "results": [ 00:22:46.246 { 00:22:46.246 "job": "NVMe0n1", 00:22:46.246 "core_mask": "0x1", 00:22:46.246 "workload": "write", 00:22:46.246 "status": "finished", 00:22:46.246 "queue_depth": 128, 00:22:46.246 "io_size": 4096, 00:22:46.246 "runtime": 1.006894, 00:22:46.246 "iops": 22742.21516862748, 00:22:46.246 "mibps": 88.8367780024511, 00:22:46.246 "io_failed": 0, 00:22:46.246 "io_timeout": 0, 00:22:46.246 "avg_latency_us": 5610.470640943121, 00:22:46.246 "min_latency_us": 4445.050434782609, 00:22:46.246 "max_latency_us": 15158.761739130436 00:22:46.246 } 00:22:46.246 ], 00:22:46.246 "core_count": 1 00:22:46.246 } 00:22:46.246 08:21:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock 
bdev_nvme_detach_controller NVMe1 00:22:46.246 08:21:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:46.246 08:21:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:46.246 08:21:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:46.246 08:21:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@100 -- # [[ -n '' ]] 00:22:46.246 08:21:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@116 -- # killprocess 1431468 00:22:46.246 08:21:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # '[' -z 1431468 ']' 00:22:46.246 08:21:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # kill -0 1431468 00:22:46.246 08:21:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # uname 00:22:46.246 08:21:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:46.246 08:21:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1431468 00:22:46.505 08:21:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:46.505 08:21:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:46.505 08:21:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1431468' 00:22:46.505 killing process with pid 1431468 00:22:46.505 08:21:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@973 -- # kill 1431468 00:22:46.505 08:21:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@978 -- # wait 1431468 00:22:46.505 08:21:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@118 -- # rpc_cmd nvmf_delete_subsystem 
nqn.2016-06.io.spdk:cnode1 00:22:46.505 08:21:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:46.505 08:21:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:46.505 08:21:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:46.505 08:21:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@119 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:22:46.505 08:21:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:46.505 08:21:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:46.505 08:21:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:46.505 08:21:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:22:46.505 08:21:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@123 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:22:46.505 08:21:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1599 -- # read -r file 00:22:46.506 08:21:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:22:46.506 08:21:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # sort -u 00:22:46.506 08:21:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1600 -- # cat 00:22:46.506 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:22:46.506 [2024-11-28 08:21:26.522865] Starting SPDK v25.01-pre git sha1 27aaaa748 / DPDK 24.03.0 initialization... 
00:22:46.506 [2024-11-28 08:21:26.522919] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1431468 ] 00:22:46.506 [2024-11-28 08:21:26.586373] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:46.506 [2024-11-28 08:21:26.630195] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:46.506 [2024-11-28 08:21:27.328006] bdev.c:4934:bdev_name_add: *ERROR*: Bdev name 8d3129c8-5c00-4c8f-a717-4db31ff0eb53 already exists 00:22:46.506 [2024-11-28 08:21:27.328035] bdev.c:8154:bdev_register: *ERROR*: Unable to add uuid:8d3129c8-5c00-4c8f-a717-4db31ff0eb53 alias for bdev NVMe1n1 00:22:46.506 [2024-11-28 08:21:27.328044] bdev_nvme.c:4659:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:22:46.506 Running I/O for 1 seconds... 00:22:46.506 22739.00 IOPS, 88.82 MiB/s 00:22:46.506 Latency(us) 00:22:46.506 [2024-11-28T07:21:28.775Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:46.506 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:22:46.506 NVMe0n1 : 1.01 22742.22 88.84 0.00 0.00 5610.47 4445.05 15158.76 00:22:46.506 [2024-11-28T07:21:28.775Z] =================================================================================================================== 00:22:46.506 [2024-11-28T07:21:28.775Z] Total : 22742.22 88.84 0.00 0.00 5610.47 4445.05 15158.76 00:22:46.506 Received shutdown signal, test time was about 1.000000 seconds 00:22:46.506 00:22:46.506 Latency(us) 00:22:46.506 [2024-11-28T07:21:28.775Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:46.506 [2024-11-28T07:21:28.775Z] =================================================================================================================== 00:22:46.506 [2024-11-28T07:21:28.775Z] Total : 0.00 0.00 0.00 
0.00 0.00 0.00 0.00 00:22:46.506 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:22:46.506 08:21:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1605 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:22:46.506 08:21:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1599 -- # read -r file 00:22:46.506 08:21:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@124 -- # nvmftestfini 00:22:46.506 08:21:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:46.506 08:21:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@121 -- # sync 00:22:46.506 08:21:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:46.506 08:21:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@124 -- # set +e 00:22:46.506 08:21:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:46.506 08:21:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:46.506 rmmod nvme_tcp 00:22:46.506 rmmod nvme_fabrics 00:22:46.506 rmmod nvme_keyring 00:22:46.765 08:21:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:46.765 08:21:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@128 -- # set -e 00:22:46.765 08:21:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@129 -- # return 0 00:22:46.765 08:21:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@517 -- # '[' -n 1431369 ']' 00:22:46.765 08:21:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@518 -- # killprocess 1431369 00:22:46.765 08:21:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # '[' -z 1431369 ']' 00:22:46.765 08:21:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # kill -0 1431369 
00:22:46.765 08:21:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # uname 00:22:46.765 08:21:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:46.765 08:21:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1431369 00:22:46.765 08:21:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:22:46.765 08:21:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:22:46.765 08:21:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1431369' 00:22:46.765 killing process with pid 1431369 00:22:46.765 08:21:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@973 -- # kill 1431369 00:22:46.765 08:21:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@978 -- # wait 1431369 00:22:47.024 08:21:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:47.024 08:21:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:47.024 08:21:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:47.024 08:21:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@297 -- # iptr 00:22:47.024 08:21:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-save 00:22:47.024 08:21:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:47.024 08:21:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-restore 00:22:47.024 08:21:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:47.024 08:21:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@302 -- # 
remove_spdk_ns 00:22:47.024 08:21:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:47.024 08:21:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:47.024 08:21:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:48.925 08:21:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:48.925 00:22:48.925 real 0m10.640s 00:22:48.925 user 0m12.524s 00:22:48.925 sys 0m4.658s 00:22:48.925 08:21:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:48.925 08:21:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:48.925 ************************************ 00:22:48.925 END TEST nvmf_multicontroller 00:22:48.925 ************************************ 00:22:48.925 08:21:31 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@17 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:22:48.925 08:21:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:48.925 08:21:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:48.925 08:21:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:49.184 ************************************ 00:22:49.184 START TEST nvmf_aer 00:22:49.184 ************************************ 00:22:49.184 08:21:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:22:49.184 * Looking for test storage... 
00:22:49.184 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:49.184 08:21:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:22:49.184 08:21:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1693 -- # lcov --version 00:22:49.184 08:21:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:22:49.184 08:21:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:22:49.184 08:21:31 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:49.184 08:21:31 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:49.184 08:21:31 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:49.184 08:21:31 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # IFS=.-: 00:22:49.184 08:21:31 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # read -ra ver1 00:22:49.184 08:21:31 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # IFS=.-: 00:22:49.184 08:21:31 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # read -ra ver2 00:22:49.184 08:21:31 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@338 -- # local 'op=<' 00:22:49.184 08:21:31 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@340 -- # ver1_l=2 00:22:49.184 08:21:31 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@341 -- # ver2_l=1 00:22:49.184 08:21:31 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:49.184 08:21:31 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@344 -- # case "$op" in 00:22:49.184 08:21:31 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@345 -- # : 1 00:22:49.184 08:21:31 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:49.184 08:21:31 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:49.184 08:21:31 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # decimal 1 00:22:49.184 08:21:31 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=1 00:22:49.184 08:21:31 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:49.184 08:21:31 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 1 00:22:49.184 08:21:31 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # ver1[v]=1 00:22:49.184 08:21:31 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # decimal 2 00:22:49.184 08:21:31 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=2 00:22:49.184 08:21:31 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:49.184 08:21:31 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 2 00:22:49.184 08:21:31 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # ver2[v]=2 00:22:49.184 08:21:31 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:49.184 08:21:31 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:49.184 08:21:31 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # return 0 00:22:49.184 08:21:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:49.184 08:21:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:22:49.184 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:49.184 --rc genhtml_branch_coverage=1 00:22:49.184 --rc genhtml_function_coverage=1 00:22:49.184 --rc genhtml_legend=1 00:22:49.184 --rc geninfo_all_blocks=1 00:22:49.184 --rc geninfo_unexecuted_blocks=1 00:22:49.184 00:22:49.184 ' 00:22:49.185 08:21:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:22:49.185 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:49.185 --rc 
genhtml_branch_coverage=1 00:22:49.185 --rc genhtml_function_coverage=1 00:22:49.185 --rc genhtml_legend=1 00:22:49.185 --rc geninfo_all_blocks=1 00:22:49.185 --rc geninfo_unexecuted_blocks=1 00:22:49.185 00:22:49.185 ' 00:22:49.185 08:21:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:22:49.185 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:49.185 --rc genhtml_branch_coverage=1 00:22:49.185 --rc genhtml_function_coverage=1 00:22:49.185 --rc genhtml_legend=1 00:22:49.185 --rc geninfo_all_blocks=1 00:22:49.185 --rc geninfo_unexecuted_blocks=1 00:22:49.185 00:22:49.185 ' 00:22:49.185 08:21:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:22:49.185 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:49.185 --rc genhtml_branch_coverage=1 00:22:49.185 --rc genhtml_function_coverage=1 00:22:49.185 --rc genhtml_legend=1 00:22:49.185 --rc geninfo_all_blocks=1 00:22:49.185 --rc geninfo_unexecuted_blocks=1 00:22:49.185 00:22:49.185 ' 00:22:49.185 08:21:31 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:49.185 08:21:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:22:49.185 08:21:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:49.185 08:21:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:49.185 08:21:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:49.185 08:21:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:49.185 08:21:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:49.185 08:21:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:49.185 08:21:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:49.185 08:21:31 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:49.185 08:21:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:49.185 08:21:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:49.185 08:21:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:49.185 08:21:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:22:49.185 08:21:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:49.185 08:21:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:49.185 08:21:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:49.185 08:21:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:49.185 08:21:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:49.185 08:21:31 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@15 -- # shopt -s extglob 00:22:49.185 08:21:31 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:49.185 08:21:31 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:49.185 08:21:31 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:49.185 08:21:31 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:49.185 08:21:31 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:49.185 08:21:31 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:49.185 08:21:31 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@5 -- # export PATH 
00:22:49.185 08:21:31 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:49.185 08:21:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@51 -- # : 0 00:22:49.185 08:21:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:49.185 08:21:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:49.185 08:21:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:49.185 08:21:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:49.185 08:21:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:49.185 08:21:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:49.185 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:49.185 08:21:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:49.185 08:21:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:49.185 08:21:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:49.185 08:21:31 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:22:49.185 08:21:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:49.185 08:21:31 nvmf_tcp.nvmf_host.nvmf_aer -- 
nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:49.185 08:21:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:49.185 08:21:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:49.185 08:21:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:49.185 08:21:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:49.185 08:21:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:49.185 08:21:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:49.185 08:21:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:49.185 08:21:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:49.185 08:21:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@309 -- # xtrace_disable 00:22:49.185 08:21:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:54.456 08:21:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:54.456 08:21:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # pci_devs=() 00:22:54.456 08:21:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:54.456 08:21:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:54.456 08:21:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:54.456 08:21:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:54.456 08:21:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:54.456 08:21:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # net_devs=() 00:22:54.456 08:21:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:54.456 08:21:36 nvmf_tcp.nvmf_host.nvmf_aer 
-- nvmf/common.sh@320 -- # e810=() 00:22:54.456 08:21:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # local -ga e810 00:22:54.456 08:21:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # x722=() 00:22:54.456 08:21:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # local -ga x722 00:22:54.456 08:21:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # mlx=() 00:22:54.456 08:21:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # local -ga mlx 00:22:54.456 08:21:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:54.456 08:21:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:54.456 08:21:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:54.456 08:21:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:54.456 08:21:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:54.456 08:21:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:54.456 08:21:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:54.456 08:21:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:54.456 08:21:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:54.456 08:21:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:54.456 08:21:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:54.456 08:21:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:54.456 08:21:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@346 -- 
# pci_devs+=("${e810[@]}") 00:22:54.456 08:21:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:54.456 08:21:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:54.456 08:21:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:54.456 08:21:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:54.456 08:21:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:54.456 08:21:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:54.456 08:21:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:22:54.456 Found 0000:86:00.0 (0x8086 - 0x159b) 00:22:54.457 08:21:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:54.457 08:21:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:54.457 08:21:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:54.457 08:21:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:54.457 08:21:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:54.457 08:21:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:54.457 08:21:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:22:54.457 Found 0000:86:00.1 (0x8086 - 0x159b) 00:22:54.457 08:21:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:54.457 08:21:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:54.457 08:21:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:54.457 08:21:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:54.457 08:21:36 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:54.457 08:21:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:54.457 08:21:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:54.457 08:21:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:54.457 08:21:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:54.457 08:21:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:54.457 08:21:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:54.457 08:21:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:54.457 08:21:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:54.457 08:21:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:54.457 08:21:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:54.457 08:21:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:22:54.457 Found net devices under 0000:86:00.0: cvl_0_0 00:22:54.457 08:21:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:54.457 08:21:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:54.457 08:21:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:54.457 08:21:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:54.457 08:21:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:54.457 08:21:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:54.457 08:21:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # 
(( 1 == 0 )) 00:22:54.457 08:21:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:54.457 08:21:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:22:54.457 Found net devices under 0000:86:00.1: cvl_0_1 00:22:54.457 08:21:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:54.457 08:21:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:54.457 08:21:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # is_hw=yes 00:22:54.457 08:21:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:54.457 08:21:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:54.457 08:21:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:54.457 08:21:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:54.457 08:21:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:54.457 08:21:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:54.457 08:21:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:54.457 08:21:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:54.457 08:21:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:54.457 08:21:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:54.457 08:21:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:54.457 08:21:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:54.457 08:21:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:54.457 08:21:36 nvmf_tcp.nvmf_host.nvmf_aer -- 
nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:54.457 08:21:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:54.457 08:21:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:54.457 08:21:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:54.457 08:21:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:54.457 08:21:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:54.457 08:21:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:54.457 08:21:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:54.457 08:21:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:54.457 08:21:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:54.457 08:21:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:54.457 08:21:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:54.457 08:21:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:54.457 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:22:54.457 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.234 ms 00:22:54.457 00:22:54.457 --- 10.0.0.2 ping statistics --- 00:22:54.457 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:54.457 rtt min/avg/max/mdev = 0.234/0.234/0.234/0.000 ms 00:22:54.457 08:21:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:54.457 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:54.457 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.124 ms 00:22:54.457 00:22:54.457 --- 10.0.0.1 ping statistics --- 00:22:54.457 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:54.457 rtt min/avg/max/mdev = 0.124/0.124/0.124/0.000 ms 00:22:54.457 08:21:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:54.457 08:21:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@450 -- # return 0 00:22:54.457 08:21:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:54.457 08:21:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:54.457 08:21:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:54.457 08:21:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:54.457 08:21:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:54.457 08:21:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:54.457 08:21:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:54.457 08:21:36 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:22:54.457 08:21:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:54.457 08:21:36 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:54.457 08:21:36 nvmf_tcp.nvmf_host.nvmf_aer -- 
common/autotest_common.sh@10 -- # set +x 00:22:54.457 08:21:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@509 -- # nvmfpid=1435262 00:22:54.457 08:21:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@510 -- # waitforlisten 1435262 00:22:54.457 08:21:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:22:54.457 08:21:36 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@835 -- # '[' -z 1435262 ']' 00:22:54.457 08:21:36 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:54.457 08:21:36 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:54.457 08:21:36 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:54.457 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:54.457 08:21:36 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:54.457 08:21:36 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:54.457 [2024-11-28 08:21:36.373164] Starting SPDK v25.01-pre git sha1 27aaaa748 / DPDK 24.03.0 initialization... 00:22:54.457 [2024-11-28 08:21:36.373213] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:54.457 [2024-11-28 08:21:36.440497] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:54.457 [2024-11-28 08:21:36.484832] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:22:54.457 [2024-11-28 08:21:36.484869] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:54.457 [2024-11-28 08:21:36.484876] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:54.457 [2024-11-28 08:21:36.484882] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:54.457 [2024-11-28 08:21:36.484887] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:54.457 [2024-11-28 08:21:36.486434] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:54.457 [2024-11-28 08:21:36.486530] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:54.457 [2024-11-28 08:21:36.486636] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:54.457 [2024-11-28 08:21:36.486638] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:54.457 08:21:36 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:54.457 08:21:36 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@868 -- # return 0 00:22:54.457 08:21:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:54.457 08:21:36 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:54.457 08:21:36 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:54.457 08:21:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:54.457 08:21:36 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:54.458 08:21:36 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:54.458 08:21:36 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:54.458 [2024-11-28 08:21:36.624539] 
tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:54.458 08:21:36 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:54.458 08:21:36 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:22:54.458 08:21:36 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:54.458 08:21:36 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:54.458 Malloc0 00:22:54.458 08:21:36 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:54.458 08:21:36 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:22:54.458 08:21:36 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:54.458 08:21:36 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:54.458 08:21:36 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:54.458 08:21:36 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:54.458 08:21:36 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:54.458 08:21:36 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:54.458 08:21:36 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:54.458 08:21:36 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:54.458 08:21:36 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:54.458 08:21:36 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:54.458 [2024-11-28 08:21:36.688709] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 
00:22:54.458 08:21:36 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:54.458 08:21:36 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:22:54.458 08:21:36 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:54.458 08:21:36 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:54.458 [ 00:22:54.458 { 00:22:54.458 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:22:54.458 "subtype": "Discovery", 00:22:54.458 "listen_addresses": [], 00:22:54.458 "allow_any_host": true, 00:22:54.458 "hosts": [] 00:22:54.458 }, 00:22:54.458 { 00:22:54.458 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:54.458 "subtype": "NVMe", 00:22:54.458 "listen_addresses": [ 00:22:54.458 { 00:22:54.458 "trtype": "TCP", 00:22:54.458 "adrfam": "IPv4", 00:22:54.458 "traddr": "10.0.0.2", 00:22:54.458 "trsvcid": "4420" 00:22:54.458 } 00:22:54.458 ], 00:22:54.458 "allow_any_host": true, 00:22:54.458 "hosts": [], 00:22:54.458 "serial_number": "SPDK00000000000001", 00:22:54.458 "model_number": "SPDK bdev Controller", 00:22:54.458 "max_namespaces": 2, 00:22:54.458 "min_cntlid": 1, 00:22:54.458 "max_cntlid": 65519, 00:22:54.458 "namespaces": [ 00:22:54.458 { 00:22:54.458 "nsid": 1, 00:22:54.458 "bdev_name": "Malloc0", 00:22:54.458 "name": "Malloc0", 00:22:54.458 "nguid": "E127CA79CABD469AAD28A93E405E9D15", 00:22:54.458 "uuid": "e127ca79-cabd-469a-ad28-a93e405e9d15" 00:22:54.458 } 00:22:54.458 ] 00:22:54.458 } 00:22:54.458 ] 00:22:54.458 08:21:36 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:54.458 08:21:36 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:22:54.458 08:21:36 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:22:54.458 08:21:36 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@33 -- # aerpid=1435399 00:22:54.458 08:21:36 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@27 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:22:54.458 08:21:36 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:22:54.458 08:21:36 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # local i=0 00:22:54.458 08:21:36 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:22:54.458 08:21:36 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 0 -lt 200 ']' 00:22:54.458 08:21:36 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=1 00:22:54.458 08:21:36 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:22:54.716 08:21:36 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:22:54.716 08:21:36 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 1 -lt 200 ']' 00:22:54.716 08:21:36 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=2 00:22:54.716 08:21:36 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:22:54.716 08:21:36 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:22:54.716 08:21:36 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1276 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:22:54.716 08:21:36 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1280 -- # return 0 00:22:54.716 08:21:36 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:22:54.716 08:21:36 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:54.716 08:21:36 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:54.716 Malloc1 00:22:54.716 08:21:36 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:54.716 08:21:36 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:22:54.716 08:21:36 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:54.716 08:21:36 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:54.716 08:21:36 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:54.716 08:21:36 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:22:54.716 08:21:36 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:54.716 08:21:36 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:54.716 Asynchronous Event Request test 00:22:54.716 Attaching to 10.0.0.2 00:22:54.716 Attached to 10.0.0.2 00:22:54.716 Registering asynchronous event callbacks... 00:22:54.716 Starting namespace attribute notice tests for all controllers... 00:22:54.716 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:22:54.716 aer_cb - Changed Namespace 00:22:54.716 Cleaning up... 
00:22:54.716 [ 00:22:54.716 { 00:22:54.716 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:22:54.975 "subtype": "Discovery", 00:22:54.975 "listen_addresses": [], 00:22:54.975 "allow_any_host": true, 00:22:54.975 "hosts": [] 00:22:54.975 }, 00:22:54.975 { 00:22:54.975 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:54.975 "subtype": "NVMe", 00:22:54.975 "listen_addresses": [ 00:22:54.975 { 00:22:54.975 "trtype": "TCP", 00:22:54.975 "adrfam": "IPv4", 00:22:54.975 "traddr": "10.0.0.2", 00:22:54.975 "trsvcid": "4420" 00:22:54.975 } 00:22:54.975 ], 00:22:54.975 "allow_any_host": true, 00:22:54.975 "hosts": [], 00:22:54.975 "serial_number": "SPDK00000000000001", 00:22:54.975 "model_number": "SPDK bdev Controller", 00:22:54.975 "max_namespaces": 2, 00:22:54.975 "min_cntlid": 1, 00:22:54.975 "max_cntlid": 65519, 00:22:54.975 "namespaces": [ 00:22:54.975 { 00:22:54.975 "nsid": 1, 00:22:54.975 "bdev_name": "Malloc0", 00:22:54.975 "name": "Malloc0", 00:22:54.975 "nguid": "E127CA79CABD469AAD28A93E405E9D15", 00:22:54.975 "uuid": "e127ca79-cabd-469a-ad28-a93e405e9d15" 00:22:54.975 }, 00:22:54.975 { 00:22:54.975 "nsid": 2, 00:22:54.975 "bdev_name": "Malloc1", 00:22:54.975 "name": "Malloc1", 00:22:54.975 "nguid": "862D7038CB594A019C29E4B0C65E47FD", 00:22:54.975 "uuid": "862d7038-cb59-4a01-9c29-e4b0c65e47fd" 00:22:54.975 } 00:22:54.975 ] 00:22:54.975 } 00:22:54.975 ] 00:22:54.975 08:21:36 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:54.975 08:21:36 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@43 -- # wait 1435399 00:22:54.975 08:21:36 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:22:54.975 08:21:36 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:54.975 08:21:36 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:54.975 08:21:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:54.975 08:21:37 
nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:22:54.975 08:21:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:54.975 08:21:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:54.975 08:21:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:54.975 08:21:37 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:54.975 08:21:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:54.975 08:21:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:54.975 08:21:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:54.975 08:21:37 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:22:54.975 08:21:37 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:22:54.975 08:21:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:54.975 08:21:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@121 -- # sync 00:22:54.975 08:21:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:54.975 08:21:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@124 -- # set +e 00:22:54.975 08:21:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:54.975 08:21:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:54.975 rmmod nvme_tcp 00:22:54.975 rmmod nvme_fabrics 00:22:54.975 rmmod nvme_keyring 00:22:54.975 08:21:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:54.975 08:21:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@128 -- # set -e 00:22:54.975 08:21:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@129 -- # return 0 00:22:54.975 08:21:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@517 -- # '[' -n 
1435262 ']' 00:22:54.975 08:21:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@518 -- # killprocess 1435262 00:22:54.975 08:21:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@954 -- # '[' -z 1435262 ']' 00:22:54.975 08:21:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@958 -- # kill -0 1435262 00:22:54.975 08:21:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@959 -- # uname 00:22:54.975 08:21:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:54.975 08:21:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1435262 00:22:54.975 08:21:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:54.975 08:21:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:54.975 08:21:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1435262' 00:22:54.975 killing process with pid 1435262 00:22:54.975 08:21:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@973 -- # kill 1435262 00:22:54.975 08:21:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@978 -- # wait 1435262 00:22:55.235 08:21:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:55.235 08:21:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:55.235 08:21:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:55.235 08:21:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@297 -- # iptr 00:22:55.235 08:21:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-save 00:22:55.235 08:21:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:55.235 08:21:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-restore 00:22:55.235 08:21:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == 
\n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:55.235 08:21:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:55.235 08:21:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:55.235 08:21:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:55.235 08:21:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:57.139 08:21:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:57.139 00:22:57.139 real 0m8.193s 00:22:57.139 user 0m4.603s 00:22:57.139 sys 0m4.123s 00:22:57.139 08:21:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:57.139 08:21:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:57.139 ************************************ 00:22:57.139 END TEST nvmf_aer 00:22:57.139 ************************************ 00:22:57.399 08:21:39 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@18 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:22:57.399 08:21:39 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:57.399 08:21:39 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:57.399 08:21:39 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:57.399 ************************************ 00:22:57.399 START TEST nvmf_async_init 00:22:57.399 ************************************ 00:22:57.399 08:21:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:22:57.399 * Looking for test storage... 
00:22:57.399 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:57.399 08:21:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:22:57.399 08:21:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1693 -- # lcov --version 00:22:57.399 08:21:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:22:57.399 08:21:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:22:57.399 08:21:39 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:57.399 08:21:39 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:57.399 08:21:39 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:57.399 08:21:39 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # IFS=.-: 00:22:57.399 08:21:39 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # read -ra ver1 00:22:57.399 08:21:39 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # IFS=.-: 00:22:57.399 08:21:39 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # read -ra ver2 00:22:57.399 08:21:39 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@338 -- # local 'op=<' 00:22:57.399 08:21:39 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@340 -- # ver1_l=2 00:22:57.399 08:21:39 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@341 -- # ver2_l=1 00:22:57.399 08:21:39 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:57.399 08:21:39 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@344 -- # case "$op" in 00:22:57.399 08:21:39 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@345 -- # : 1 00:22:57.399 08:21:39 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:57.399 08:21:39 
nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:57.399 08:21:39 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # decimal 1 00:22:57.399 08:21:39 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=1 00:22:57.399 08:21:39 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:57.399 08:21:39 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 1 00:22:57.399 08:21:39 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # ver1[v]=1 00:22:57.399 08:21:39 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # decimal 2 00:22:57.399 08:21:39 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=2 00:22:57.399 08:21:39 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:57.399 08:21:39 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 2 00:22:57.399 08:21:39 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # ver2[v]=2 00:22:57.399 08:21:39 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:57.399 08:21:39 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:57.399 08:21:39 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # return 0 00:22:57.399 08:21:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:57.399 08:21:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:22:57.399 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:57.399 --rc genhtml_branch_coverage=1 00:22:57.399 --rc genhtml_function_coverage=1 00:22:57.399 --rc genhtml_legend=1 00:22:57.399 --rc geninfo_all_blocks=1 00:22:57.399 --rc geninfo_unexecuted_blocks=1 00:22:57.399 
00:22:57.399 ' 00:22:57.399 08:21:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:22:57.399 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:57.399 --rc genhtml_branch_coverage=1 00:22:57.399 --rc genhtml_function_coverage=1 00:22:57.399 --rc genhtml_legend=1 00:22:57.399 --rc geninfo_all_blocks=1 00:22:57.399 --rc geninfo_unexecuted_blocks=1 00:22:57.399 00:22:57.399 ' 00:22:57.399 08:21:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:22:57.399 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:57.399 --rc genhtml_branch_coverage=1 00:22:57.399 --rc genhtml_function_coverage=1 00:22:57.399 --rc genhtml_legend=1 00:22:57.399 --rc geninfo_all_blocks=1 00:22:57.399 --rc geninfo_unexecuted_blocks=1 00:22:57.399 00:22:57.399 ' 00:22:57.399 08:21:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:22:57.399 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:57.399 --rc genhtml_branch_coverage=1 00:22:57.399 --rc genhtml_function_coverage=1 00:22:57.399 --rc genhtml_legend=1 00:22:57.399 --rc geninfo_all_blocks=1 00:22:57.399 --rc geninfo_unexecuted_blocks=1 00:22:57.399 00:22:57.399 ' 00:22:57.399 08:21:39 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:57.399 08:21:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:22:57.399 08:21:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:57.399 08:21:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:57.399 08:21:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:57.399 08:21:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:57.399 08:21:39 nvmf_tcp.nvmf_host.nvmf_async_init -- 
nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:57.399 08:21:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:57.399 08:21:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:57.399 08:21:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:57.399 08:21:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:57.399 08:21:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:57.399 08:21:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:57.399 08:21:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:22:57.399 08:21:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:57.399 08:21:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:57.399 08:21:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:57.399 08:21:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:57.399 08:21:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:57.399 08:21:39 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@15 -- # shopt -s extglob 00:22:57.399 08:21:39 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:57.399 08:21:39 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:57.399 08:21:39 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 
00:22:57.399 08:21:39 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:57.399 08:21:39 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:57.399 08:21:39 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:57.399 08:21:39 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:22:57.400 08:21:39 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:57.400 08:21:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@51 -- # : 0 00:22:57.400 08:21:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:57.400 08:21:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:57.400 08:21:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:57.400 08:21:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:57.400 08:21:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:22:57.400 08:21:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:57.400 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:57.400 08:21:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:57.400 08:21:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:57.400 08:21:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:57.400 08:21:39 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:22:57.400 08:21:39 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:22:57.400 08:21:39 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:22:57.400 08:21:39 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:22:57.400 08:21:39 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:22:57.658 08:21:39 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:22:57.658 08:21:39 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # nguid=e9f1ca89d11149d7b09fbb27df1186bf 00:22:57.658 08:21:39 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:22:57.658 08:21:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:57.658 08:21:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:57.658 08:21:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:57.658 08:21:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:57.658 08:21:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:57.658 08:21:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:22:57.658 08:21:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:57.658 08:21:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:57.658 08:21:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:57.658 08:21:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:57.658 08:21:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@309 -- # xtrace_disable 00:22:57.658 08:21:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:02.929 08:21:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:02.929 08:21:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # pci_devs=() 00:23:02.929 08:21:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:02.929 08:21:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:02.929 08:21:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:02.929 08:21:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:02.929 08:21:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:02.929 08:21:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # net_devs=() 00:23:02.929 08:21:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:02.929 08:21:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # e810=() 00:23:02.929 08:21:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # local -ga e810 00:23:02.929 08:21:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # x722=() 00:23:02.929 08:21:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- 
# local -ga x722 00:23:02.929 08:21:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # mlx=() 00:23:02.929 08:21:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # local -ga mlx 00:23:02.930 08:21:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:02.930 08:21:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:02.930 08:21:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:02.930 08:21:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:02.930 08:21:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:02.930 08:21:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:02.930 08:21:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:02.930 08:21:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:02.930 08:21:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:02.930 08:21:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:02.930 08:21:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:02.930 08:21:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:02.930 08:21:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:02.930 08:21:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:02.930 08:21:44 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:02.930 08:21:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:02.930 08:21:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:02.930 08:21:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:02.930 08:21:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:02.930 08:21:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:23:02.930 Found 0000:86:00.0 (0x8086 - 0x159b) 00:23:02.930 08:21:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:02.930 08:21:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:02.930 08:21:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:02.930 08:21:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:02.930 08:21:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:02.931 08:21:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:02.931 08:21:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:23:02.931 Found 0000:86:00.1 (0x8086 - 0x159b) 00:23:02.931 08:21:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:02.931 08:21:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:02.931 08:21:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:02.931 08:21:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:02.931 08:21:44 nvmf_tcp.nvmf_host.nvmf_async_init 
-- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:02.931 08:21:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:02.931 08:21:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:02.931 08:21:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:02.931 08:21:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:02.931 08:21:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:02.931 08:21:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:02.931 08:21:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:02.931 08:21:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:02.931 08:21:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:02.931 08:21:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:02.931 08:21:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:23:02.931 Found net devices under 0000:86:00.0: cvl_0_0 00:23:02.931 08:21:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:02.932 08:21:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:02.932 08:21:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:02.932 08:21:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:02.932 08:21:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:02.932 08:21:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # [[ 
up == up ]] 00:23:02.932 08:21:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:02.932 08:21:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:02.932 08:21:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:23:02.932 Found net devices under 0000:86:00.1: cvl_0_1 00:23:02.932 08:21:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:02.932 08:21:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:02.932 08:21:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # is_hw=yes 00:23:02.932 08:21:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:02.932 08:21:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:02.932 08:21:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:02.933 08:21:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:02.933 08:21:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:02.933 08:21:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:02.933 08:21:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:02.933 08:21:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:02.933 08:21:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:02.933 08:21:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:02.933 08:21:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:02.933 08:21:44 nvmf_tcp.nvmf_host.nvmf_async_init -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:02.933 08:21:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:02.933 08:21:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:02.933 08:21:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:02.933 08:21:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:02.933 08:21:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:02.933 08:21:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:02.934 08:21:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:02.934 08:21:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:02.934 08:21:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:02.934 08:21:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:02.934 08:21:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:02.934 08:21:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:02.934 08:21:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:02.934 08:21:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:02.934 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:23:02.934 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.380 ms 00:23:02.934 00:23:02.934 --- 10.0.0.2 ping statistics --- 00:23:02.934 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:02.934 rtt min/avg/max/mdev = 0.380/0.380/0.380/0.000 ms 00:23:02.934 08:21:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:02.934 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:02.934 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.217 ms 00:23:02.934 00:23:02.934 --- 10.0.0.1 ping statistics --- 00:23:02.934 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:02.934 rtt min/avg/max/mdev = 0.217/0.217/0.217/0.000 ms 00:23:02.934 08:21:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:02.934 08:21:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@450 -- # return 0 00:23:02.935 08:21:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:02.935 08:21:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:02.935 08:21:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:02.935 08:21:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:02.935 08:21:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:02.935 08:21:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:02.935 08:21:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:02.935 08:21:44 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:23:02.935 08:21:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:02.935 08:21:44 nvmf_tcp.nvmf_host.nvmf_async_init -- 
common/autotest_common.sh@726 -- # xtrace_disable 00:23:02.935 08:21:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:02.935 08:21:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@509 -- # nvmfpid=1438860 00:23:02.935 08:21:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:23:02.935 08:21:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@510 -- # waitforlisten 1438860 00:23:02.935 08:21:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@835 -- # '[' -z 1438860 ']' 00:23:02.936 08:21:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:02.936 08:21:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:02.936 08:21:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:02.936 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:02.936 08:21:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:02.936 08:21:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:02.936 [2024-11-28 08:21:44.961932] Starting SPDK v25.01-pre git sha1 27aaaa748 / DPDK 24.03.0 initialization... 
00:23:02.936 [2024-11-28 08:21:44.961997] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:02.936 [2024-11-28 08:21:45.028527] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:02.936 [2024-11-28 08:21:45.067613] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:02.936 [2024-11-28 08:21:45.067654] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:02.936 [2024-11-28 08:21:45.067662] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:02.936 [2024-11-28 08:21:45.067671] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:02.936 [2024-11-28 08:21:45.067676] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:02.936 [2024-11-28 08:21:45.068280] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:02.936 08:21:45 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:02.936 08:21:45 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@868 -- # return 0 00:23:02.936 08:21:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:02.936 08:21:45 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:02.936 08:21:45 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:03.203 08:21:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:03.203 08:21:45 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:23:03.203 08:21:45 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:03.203 08:21:45 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:03.203 [2024-11-28 08:21:45.204732] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:03.203 08:21:45 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:03.203 08:21:45 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:23:03.203 08:21:45 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:03.203 08:21:45 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:03.203 null0 00:23:03.203 08:21:45 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:03.203 08:21:45 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:23:03.203 08:21:45 nvmf_tcp.nvmf_host.nvmf_async_init -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:23:03.203 08:21:45 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:03.203 08:21:45 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:03.203 08:21:45 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:23:03.203 08:21:45 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:03.203 08:21:45 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:03.203 08:21:45 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:03.203 08:21:45 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g e9f1ca89d11149d7b09fbb27df1186bf 00:23:03.203 08:21:45 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:03.203 08:21:45 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:03.203 08:21:45 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:03.203 08:21:45 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:23:03.203 08:21:45 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:03.203 08:21:45 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:03.203 [2024-11-28 08:21:45.244992] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:03.203 08:21:45 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:03.203 08:21:45 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:23:03.203 08:21:45 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:03.203 08:21:45 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:03.461 nvme0n1 00:23:03.461 08:21:45 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:03.461 08:21:45 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:23:03.461 08:21:45 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:03.461 08:21:45 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:03.461 [ 00:23:03.461 { 00:23:03.461 "name": "nvme0n1", 00:23:03.461 "aliases": [ 00:23:03.461 "e9f1ca89-d111-49d7-b09f-bb27df1186bf" 00:23:03.461 ], 00:23:03.461 "product_name": "NVMe disk", 00:23:03.461 "block_size": 512, 00:23:03.461 "num_blocks": 2097152, 00:23:03.461 "uuid": "e9f1ca89-d111-49d7-b09f-bb27df1186bf", 00:23:03.461 "numa_id": 1, 00:23:03.461 "assigned_rate_limits": { 00:23:03.461 "rw_ios_per_sec": 0, 00:23:03.461 "rw_mbytes_per_sec": 0, 00:23:03.461 "r_mbytes_per_sec": 0, 00:23:03.461 "w_mbytes_per_sec": 0 00:23:03.461 }, 00:23:03.461 "claimed": false, 00:23:03.461 "zoned": false, 00:23:03.461 "supported_io_types": { 00:23:03.461 "read": true, 00:23:03.461 "write": true, 00:23:03.461 "unmap": false, 00:23:03.461 "flush": true, 00:23:03.461 "reset": true, 00:23:03.461 "nvme_admin": true, 00:23:03.461 "nvme_io": true, 00:23:03.461 "nvme_io_md": false, 00:23:03.461 "write_zeroes": true, 00:23:03.461 "zcopy": false, 00:23:03.461 "get_zone_info": false, 00:23:03.461 "zone_management": false, 00:23:03.461 "zone_append": false, 00:23:03.461 "compare": true, 00:23:03.461 "compare_and_write": true, 00:23:03.461 "abort": true, 00:23:03.461 "seek_hole": false, 00:23:03.461 "seek_data": false, 00:23:03.461 "copy": true, 00:23:03.461 
"nvme_iov_md": false 00:23:03.461 }, 00:23:03.461 "memory_domains": [ 00:23:03.461 { 00:23:03.461 "dma_device_id": "system", 00:23:03.461 "dma_device_type": 1 00:23:03.461 } 00:23:03.461 ], 00:23:03.461 "driver_specific": { 00:23:03.461 "nvme": [ 00:23:03.461 { 00:23:03.461 "trid": { 00:23:03.461 "trtype": "TCP", 00:23:03.461 "adrfam": "IPv4", 00:23:03.461 "traddr": "10.0.0.2", 00:23:03.461 "trsvcid": "4420", 00:23:03.461 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:23:03.461 }, 00:23:03.461 "ctrlr_data": { 00:23:03.461 "cntlid": 1, 00:23:03.461 "vendor_id": "0x8086", 00:23:03.461 "model_number": "SPDK bdev Controller", 00:23:03.461 "serial_number": "00000000000000000000", 00:23:03.461 "firmware_revision": "25.01", 00:23:03.461 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:03.461 "oacs": { 00:23:03.461 "security": 0, 00:23:03.461 "format": 0, 00:23:03.461 "firmware": 0, 00:23:03.461 "ns_manage": 0 00:23:03.461 }, 00:23:03.461 "multi_ctrlr": true, 00:23:03.461 "ana_reporting": false 00:23:03.461 }, 00:23:03.461 "vs": { 00:23:03.461 "nvme_version": "1.3" 00:23:03.461 }, 00:23:03.461 "ns_data": { 00:23:03.461 "id": 1, 00:23:03.461 "can_share": true 00:23:03.461 } 00:23:03.461 } 00:23:03.461 ], 00:23:03.461 "mp_policy": "active_passive" 00:23:03.461 } 00:23:03.461 } 00:23:03.461 ] 00:23:03.461 08:21:45 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:03.461 08:21:45 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:23:03.461 08:21:45 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:03.461 08:21:45 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:03.461 [2024-11-28 08:21:45.493466] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:23:03.461 [2024-11-28 08:21:45.493521] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed 
to flush tqpair=0x788e20 (9): Bad file descriptor 00:23:03.461 [2024-11-28 08:21:45.625026] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 00:23:03.461 08:21:45 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:03.461 08:21:45 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:23:03.461 08:21:45 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:03.461 08:21:45 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:03.461 [ 00:23:03.461 { 00:23:03.461 "name": "nvme0n1", 00:23:03.461 "aliases": [ 00:23:03.461 "e9f1ca89-d111-49d7-b09f-bb27df1186bf" 00:23:03.462 ], 00:23:03.462 "product_name": "NVMe disk", 00:23:03.462 "block_size": 512, 00:23:03.462 "num_blocks": 2097152, 00:23:03.462 "uuid": "e9f1ca89-d111-49d7-b09f-bb27df1186bf", 00:23:03.462 "numa_id": 1, 00:23:03.462 "assigned_rate_limits": { 00:23:03.462 "rw_ios_per_sec": 0, 00:23:03.462 "rw_mbytes_per_sec": 0, 00:23:03.462 "r_mbytes_per_sec": 0, 00:23:03.462 "w_mbytes_per_sec": 0 00:23:03.462 }, 00:23:03.462 "claimed": false, 00:23:03.462 "zoned": false, 00:23:03.462 "supported_io_types": { 00:23:03.462 "read": true, 00:23:03.462 "write": true, 00:23:03.462 "unmap": false, 00:23:03.462 "flush": true, 00:23:03.462 "reset": true, 00:23:03.462 "nvme_admin": true, 00:23:03.462 "nvme_io": true, 00:23:03.462 "nvme_io_md": false, 00:23:03.462 "write_zeroes": true, 00:23:03.462 "zcopy": false, 00:23:03.462 "get_zone_info": false, 00:23:03.462 "zone_management": false, 00:23:03.462 "zone_append": false, 00:23:03.462 "compare": true, 00:23:03.462 "compare_and_write": true, 00:23:03.462 "abort": true, 00:23:03.462 "seek_hole": false, 00:23:03.462 "seek_data": false, 00:23:03.462 "copy": true, 00:23:03.462 "nvme_iov_md": false 00:23:03.462 }, 00:23:03.462 "memory_domains": [ 
00:23:03.462 { 00:23:03.462 "dma_device_id": "system", 00:23:03.462 "dma_device_type": 1 00:23:03.462 } 00:23:03.462 ], 00:23:03.462 "driver_specific": { 00:23:03.462 "nvme": [ 00:23:03.462 { 00:23:03.462 "trid": { 00:23:03.462 "trtype": "TCP", 00:23:03.462 "adrfam": "IPv4", 00:23:03.462 "traddr": "10.0.0.2", 00:23:03.462 "trsvcid": "4420", 00:23:03.462 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:23:03.462 }, 00:23:03.462 "ctrlr_data": { 00:23:03.462 "cntlid": 2, 00:23:03.462 "vendor_id": "0x8086", 00:23:03.462 "model_number": "SPDK bdev Controller", 00:23:03.462 "serial_number": "00000000000000000000", 00:23:03.462 "firmware_revision": "25.01", 00:23:03.462 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:03.462 "oacs": { 00:23:03.462 "security": 0, 00:23:03.462 "format": 0, 00:23:03.462 "firmware": 0, 00:23:03.462 "ns_manage": 0 00:23:03.462 }, 00:23:03.462 "multi_ctrlr": true, 00:23:03.462 "ana_reporting": false 00:23:03.462 }, 00:23:03.462 "vs": { 00:23:03.462 "nvme_version": "1.3" 00:23:03.462 }, 00:23:03.462 "ns_data": { 00:23:03.462 "id": 1, 00:23:03.462 "can_share": true 00:23:03.462 } 00:23:03.462 } 00:23:03.462 ], 00:23:03.462 "mp_policy": "active_passive" 00:23:03.462 } 00:23:03.462 } 00:23:03.462 ] 00:23:03.462 08:21:45 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:03.462 08:21:45 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:03.462 08:21:45 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:03.462 08:21:45 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:03.462 08:21:45 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:03.462 08:21:45 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:23:03.462 08:21:45 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.F8b81ZtUMb 
00:23:03.462 08:21:45 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:23:03.462 08:21:45 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.F8b81ZtUMb 00:23:03.462 08:21:45 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd keyring_file_add_key key0 /tmp/tmp.F8b81ZtUMb 00:23:03.462 08:21:45 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:03.462 08:21:45 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:03.462 08:21:45 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:03.462 08:21:45 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:23:03.462 08:21:45 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:03.462 08:21:45 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:03.462 08:21:45 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:03.462 08:21:45 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@58 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:23:03.462 08:21:45 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:03.462 08:21:45 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:03.462 [2024-11-28 08:21:45.678039] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:03.462 [2024-11-28 08:21:45.678134] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:03.462 08:21:45 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:23:03.462 08:21:45 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@60 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0 00:23:03.462 08:21:45 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:03.462 08:21:45 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:03.462 08:21:45 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:03.462 08:21:45 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@66 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0 00:23:03.462 08:21:45 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:03.462 08:21:45 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:03.462 [2024-11-28 08:21:45.694078] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:03.721 nvme0n1 00:23:03.721 08:21:45 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:03.721 08:21:45 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@70 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:23:03.721 08:21:45 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:03.721 08:21:45 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:03.721 [ 00:23:03.721 { 00:23:03.721 "name": "nvme0n1", 00:23:03.721 "aliases": [ 00:23:03.721 "e9f1ca89-d111-49d7-b09f-bb27df1186bf" 00:23:03.721 ], 00:23:03.721 "product_name": "NVMe disk", 00:23:03.721 "block_size": 512, 00:23:03.721 "num_blocks": 2097152, 00:23:03.721 "uuid": "e9f1ca89-d111-49d7-b09f-bb27df1186bf", 00:23:03.721 "numa_id": 1, 00:23:03.721 "assigned_rate_limits": { 00:23:03.721 "rw_ios_per_sec": 0, 00:23:03.721 
"rw_mbytes_per_sec": 0, 00:23:03.721 "r_mbytes_per_sec": 0, 00:23:03.721 "w_mbytes_per_sec": 0 00:23:03.721 }, 00:23:03.721 "claimed": false, 00:23:03.721 "zoned": false, 00:23:03.721 "supported_io_types": { 00:23:03.721 "read": true, 00:23:03.721 "write": true, 00:23:03.721 "unmap": false, 00:23:03.721 "flush": true, 00:23:03.721 "reset": true, 00:23:03.721 "nvme_admin": true, 00:23:03.721 "nvme_io": true, 00:23:03.721 "nvme_io_md": false, 00:23:03.721 "write_zeroes": true, 00:23:03.721 "zcopy": false, 00:23:03.721 "get_zone_info": false, 00:23:03.721 "zone_management": false, 00:23:03.721 "zone_append": false, 00:23:03.721 "compare": true, 00:23:03.721 "compare_and_write": true, 00:23:03.721 "abort": true, 00:23:03.721 "seek_hole": false, 00:23:03.721 "seek_data": false, 00:23:03.721 "copy": true, 00:23:03.721 "nvme_iov_md": false 00:23:03.721 }, 00:23:03.721 "memory_domains": [ 00:23:03.721 { 00:23:03.721 "dma_device_id": "system", 00:23:03.721 "dma_device_type": 1 00:23:03.721 } 00:23:03.721 ], 00:23:03.721 "driver_specific": { 00:23:03.721 "nvme": [ 00:23:03.721 { 00:23:03.721 "trid": { 00:23:03.721 "trtype": "TCP", 00:23:03.721 "adrfam": "IPv4", 00:23:03.721 "traddr": "10.0.0.2", 00:23:03.721 "trsvcid": "4421", 00:23:03.721 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:23:03.721 }, 00:23:03.721 "ctrlr_data": { 00:23:03.721 "cntlid": 3, 00:23:03.721 "vendor_id": "0x8086", 00:23:03.721 "model_number": "SPDK bdev Controller", 00:23:03.721 "serial_number": "00000000000000000000", 00:23:03.721 "firmware_revision": "25.01", 00:23:03.721 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:03.721 "oacs": { 00:23:03.721 "security": 0, 00:23:03.721 "format": 0, 00:23:03.721 "firmware": 0, 00:23:03.721 "ns_manage": 0 00:23:03.721 }, 00:23:03.721 "multi_ctrlr": true, 00:23:03.721 "ana_reporting": false 00:23:03.721 }, 00:23:03.721 "vs": { 00:23:03.721 "nvme_version": "1.3" 00:23:03.721 }, 00:23:03.721 "ns_data": { 00:23:03.721 "id": 1, 00:23:03.721 "can_share": true 00:23:03.721 } 
00:23:03.721 } 00:23:03.721 ], 00:23:03.721 "mp_policy": "active_passive" 00:23:03.721 } 00:23:03.721 } 00:23:03.721 ] 00:23:03.722 08:21:45 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:03.722 08:21:45 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@73 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:03.722 08:21:45 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:03.722 08:21:45 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:03.722 08:21:45 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:03.722 08:21:45 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@76 -- # rm -f /tmp/tmp.F8b81ZtUMb 00:23:03.722 08:21:45 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@78 -- # trap - SIGINT SIGTERM EXIT 00:23:03.722 08:21:45 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@79 -- # nvmftestfini 00:23:03.722 08:21:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:03.722 08:21:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@121 -- # sync 00:23:03.722 08:21:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:03.722 08:21:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@124 -- # set +e 00:23:03.722 08:21:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:03.722 08:21:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:03.722 rmmod nvme_tcp 00:23:03.722 rmmod nvme_fabrics 00:23:03.722 rmmod nvme_keyring 00:23:03.722 08:21:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:03.722 08:21:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@128 -- # set -e 00:23:03.722 08:21:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@129 -- # return 0 00:23:03.722 08:21:45 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@517 -- # '[' -n 1438860 ']' 00:23:03.722 08:21:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@518 -- # killprocess 1438860 00:23:03.722 08:21:45 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@954 -- # '[' -z 1438860 ']' 00:23:03.722 08:21:45 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@958 -- # kill -0 1438860 00:23:03.722 08:21:45 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@959 -- # uname 00:23:03.722 08:21:45 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:03.722 08:21:45 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1438860 00:23:03.722 08:21:45 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:03.722 08:21:45 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:03.722 08:21:45 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1438860' 00:23:03.722 killing process with pid 1438860 00:23:03.722 08:21:45 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@973 -- # kill 1438860 00:23:03.722 08:21:45 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@978 -- # wait 1438860 00:23:03.981 08:21:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:03.981 08:21:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:03.981 08:21:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:03.981 08:21:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@297 -- # iptr 00:23:03.981 08:21:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-save 00:23:03.981 08:21:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:03.981 
08:21:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-restore 00:23:03.981 08:21:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:03.981 08:21:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:03.981 08:21:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:03.981 08:21:46 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:03.981 08:21:46 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:05.885 08:21:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:05.885 00:23:05.885 real 0m8.632s 00:23:05.885 user 0m2.624s 00:23:05.885 sys 0m4.367s 00:23:05.885 08:21:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:05.885 08:21:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:05.885 ************************************ 00:23:05.885 END TEST nvmf_async_init 00:23:05.885 ************************************ 00:23:05.886 08:21:48 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@19 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:23:05.886 08:21:48 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:05.886 08:21:48 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:05.886 08:21:48 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:06.145 ************************************ 00:23:06.145 START TEST dma 00:23:06.145 ************************************ 00:23:06.145 08:21:48 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 
00:23:06.145 * Looking for test storage... 00:23:06.145 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:06.145 08:21:48 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:23:06.145 08:21:48 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1693 -- # lcov --version 00:23:06.145 08:21:48 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:23:06.145 08:21:48 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:23:06.145 08:21:48 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:06.145 08:21:48 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:06.145 08:21:48 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:06.145 08:21:48 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # IFS=.-: 00:23:06.145 08:21:48 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # read -ra ver1 00:23:06.145 08:21:48 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # IFS=.-: 00:23:06.145 08:21:48 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # read -ra ver2 00:23:06.145 08:21:48 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@338 -- # local 'op=<' 00:23:06.145 08:21:48 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@340 -- # ver1_l=2 00:23:06.145 08:21:48 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@341 -- # ver2_l=1 00:23:06.145 08:21:48 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:06.145 08:21:48 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@344 -- # case "$op" in 00:23:06.145 08:21:48 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@345 -- # : 1 00:23:06.145 08:21:48 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:06.145 08:21:48 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:06.145 08:21:48 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # decimal 1 00:23:06.145 08:21:48 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=1 00:23:06.145 08:21:48 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:06.145 08:21:48 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 1 00:23:06.145 08:21:48 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # ver1[v]=1 00:23:06.145 08:21:48 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # decimal 2 00:23:06.145 08:21:48 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=2 00:23:06.145 08:21:48 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:06.145 08:21:48 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 2 00:23:06.145 08:21:48 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # ver2[v]=2 00:23:06.145 08:21:48 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:06.145 08:21:48 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:06.145 08:21:48 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # return 0 00:23:06.145 08:21:48 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:06.145 08:21:48 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:23:06.145 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:06.145 --rc genhtml_branch_coverage=1 00:23:06.145 --rc genhtml_function_coverage=1 00:23:06.145 --rc genhtml_legend=1 00:23:06.145 --rc geninfo_all_blocks=1 00:23:06.145 --rc geninfo_unexecuted_blocks=1 00:23:06.145 00:23:06.145 ' 00:23:06.145 08:21:48 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:23:06.145 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:06.145 --rc genhtml_branch_coverage=1 00:23:06.145 --rc genhtml_function_coverage=1 
00:23:06.145 --rc genhtml_legend=1 00:23:06.145 --rc geninfo_all_blocks=1 00:23:06.145 --rc geninfo_unexecuted_blocks=1 00:23:06.145 00:23:06.145 ' 00:23:06.145 08:21:48 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:23:06.145 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:06.145 --rc genhtml_branch_coverage=1 00:23:06.145 --rc genhtml_function_coverage=1 00:23:06.145 --rc genhtml_legend=1 00:23:06.145 --rc geninfo_all_blocks=1 00:23:06.145 --rc geninfo_unexecuted_blocks=1 00:23:06.145 00:23:06.145 ' 00:23:06.145 08:21:48 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:23:06.145 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:06.145 --rc genhtml_branch_coverage=1 00:23:06.145 --rc genhtml_function_coverage=1 00:23:06.145 --rc genhtml_legend=1 00:23:06.145 --rc geninfo_all_blocks=1 00:23:06.145 --rc geninfo_unexecuted_blocks=1 00:23:06.145 00:23:06.145 ' 00:23:06.146 08:21:48 nvmf_tcp.nvmf_host.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:06.146 08:21:48 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # uname -s 00:23:06.146 08:21:48 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:06.146 08:21:48 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:06.146 08:21:48 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:06.146 08:21:48 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:06.146 08:21:48 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:06.146 08:21:48 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:06.146 08:21:48 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:06.146 08:21:48 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:06.146 08:21:48 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@16 
-- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:06.146 08:21:48 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:06.146 08:21:48 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:23:06.146 08:21:48 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:23:06.146 08:21:48 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:06.146 08:21:48 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:06.146 08:21:48 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:06.146 08:21:48 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:06.146 08:21:48 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:06.146 08:21:48 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@15 -- # shopt -s extglob 00:23:06.146 08:21:48 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:06.146 08:21:48 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:06.146 08:21:48 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:06.146 08:21:48 nvmf_tcp.nvmf_host.dma -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:06.146 08:21:48 nvmf_tcp.nvmf_host.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:06.146 08:21:48 nvmf_tcp.nvmf_host.dma -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:06.146 08:21:48 nvmf_tcp.nvmf_host.dma -- paths/export.sh@5 -- # export PATH 00:23:06.146 
08:21:48 nvmf_tcp.nvmf_host.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:06.146 08:21:48 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@51 -- # : 0 00:23:06.146 08:21:48 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:06.146 08:21:48 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:06.146 08:21:48 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:06.146 08:21:48 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:06.146 08:21:48 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:06.146 08:21:48 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:06.146 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:06.146 08:21:48 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:06.146 08:21:48 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:06.146 08:21:48 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:06.146 08:21:48 nvmf_tcp.nvmf_host.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:23:06.146 08:21:48 nvmf_tcp.nvmf_host.dma -- host/dma.sh@13 -- # exit 0 00:23:06.146 00:23:06.146 real 0m0.183s 00:23:06.146 user 0m0.109s 00:23:06.146 sys 0m0.086s 00:23:06.146 08:21:48 
nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:06.146 08:21:48 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:23:06.146 ************************************ 00:23:06.146 END TEST dma 00:23:06.146 ************************************ 00:23:06.146 08:21:48 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:23:06.146 08:21:48 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:06.146 08:21:48 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:06.146 08:21:48 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:06.146 ************************************ 00:23:06.146 START TEST nvmf_identify 00:23:06.146 ************************************ 00:23:06.406 08:21:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:23:06.406 * Looking for test storage... 
00:23:06.406 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:06.406 08:21:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:23:06.406 08:21:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1693 -- # lcov --version 00:23:06.406 08:21:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:23:06.406 08:21:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:23:06.406 08:21:48 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:06.406 08:21:48 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:06.406 08:21:48 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:06.406 08:21:48 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # IFS=.-: 00:23:06.406 08:21:48 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # read -ra ver1 00:23:06.406 08:21:48 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # IFS=.-: 00:23:06.406 08:21:48 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # read -ra ver2 00:23:06.406 08:21:48 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@338 -- # local 'op=<' 00:23:06.406 08:21:48 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@340 -- # ver1_l=2 00:23:06.406 08:21:48 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@341 -- # ver2_l=1 00:23:06.406 08:21:48 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:06.406 08:21:48 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@344 -- # case "$op" in 00:23:06.406 08:21:48 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@345 -- # : 1 00:23:06.406 08:21:48 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:06.406 08:21:48 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( 
v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:06.406 08:21:48 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # decimal 1 00:23:06.406 08:21:48 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=1 00:23:06.406 08:21:48 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:06.406 08:21:48 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 1 00:23:06.406 08:21:48 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # ver1[v]=1 00:23:06.406 08:21:48 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # decimal 2 00:23:06.406 08:21:48 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=2 00:23:06.406 08:21:48 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:06.406 08:21:48 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 2 00:23:06.406 08:21:48 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # ver2[v]=2 00:23:06.406 08:21:48 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:06.406 08:21:48 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:06.406 08:21:48 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # return 0 00:23:06.406 08:21:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:06.406 08:21:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:23:06.406 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:06.406 --rc genhtml_branch_coverage=1 00:23:06.406 --rc genhtml_function_coverage=1 00:23:06.406 --rc genhtml_legend=1 00:23:06.406 --rc geninfo_all_blocks=1 00:23:06.406 --rc geninfo_unexecuted_blocks=1 00:23:06.406 00:23:06.406 ' 00:23:06.406 08:21:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1706 -- 
# LCOV_OPTS=' 00:23:06.406 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:06.406 --rc genhtml_branch_coverage=1 00:23:06.406 --rc genhtml_function_coverage=1 00:23:06.406 --rc genhtml_legend=1 00:23:06.406 --rc geninfo_all_blocks=1 00:23:06.406 --rc geninfo_unexecuted_blocks=1 00:23:06.406 00:23:06.406 ' 00:23:06.406 08:21:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:23:06.406 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:06.406 --rc genhtml_branch_coverage=1 00:23:06.406 --rc genhtml_function_coverage=1 00:23:06.406 --rc genhtml_legend=1 00:23:06.406 --rc geninfo_all_blocks=1 00:23:06.406 --rc geninfo_unexecuted_blocks=1 00:23:06.406 00:23:06.406 ' 00:23:06.406 08:21:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:23:06.406 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:06.406 --rc genhtml_branch_coverage=1 00:23:06.406 --rc genhtml_function_coverage=1 00:23:06.406 --rc genhtml_legend=1 00:23:06.406 --rc geninfo_all_blocks=1 00:23:06.406 --rc geninfo_unexecuted_blocks=1 00:23:06.406 00:23:06.406 ' 00:23:06.406 08:21:48 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:06.406 08:21:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:23:06.406 08:21:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:06.406 08:21:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:06.406 08:21:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:06.406 08:21:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:06.406 08:21:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:06.406 08:21:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # 
NVMF_IP_LEAST_ADDR=8 00:23:06.406 08:21:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:06.406 08:21:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:06.406 08:21:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:06.406 08:21:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:06.406 08:21:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:23:06.406 08:21:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:23:06.406 08:21:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:06.406 08:21:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:06.406 08:21:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:06.406 08:21:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:06.406 08:21:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:06.406 08:21:48 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@15 -- # shopt -s extglob 00:23:06.406 08:21:48 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:06.406 08:21:48 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:06.406 08:21:48 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:06.406 08:21:48 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:06.406 08:21:48 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:06.406 08:21:48 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:06.406 08:21:48 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 
-- # export PATH 00:23:06.406 08:21:48 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:06.407 08:21:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # : 0 00:23:06.407 08:21:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:06.407 08:21:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:06.407 08:21:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:06.407 08:21:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:06.407 08:21:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:06.407 08:21:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:06.407 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:06.407 08:21:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:06.407 08:21:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:06.407 08:21:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:06.407 08:21:48 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:06.407 08:21:48 nvmf_tcp.nvmf_host.nvmf_identify -- 
host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:06.407 08:21:48 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:23:06.407 08:21:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:06.407 08:21:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:06.407 08:21:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:06.407 08:21:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:06.407 08:21:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:06.407 08:21:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:06.407 08:21:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:06.407 08:21:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:06.407 08:21:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:06.407 08:21:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:06.407 08:21:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@309 -- # xtrace_disable 00:23:06.407 08:21:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:11.680 08:21:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:11.680 08:21:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # pci_devs=() 00:23:11.680 08:21:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:11.680 08:21:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:11.680 08:21:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:11.680 08:21:53 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:11.680 08:21:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:11.680 08:21:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # net_devs=() 00:23:11.680 08:21:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:11.680 08:21:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # e810=() 00:23:11.680 08:21:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # local -ga e810 00:23:11.680 08:21:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # x722=() 00:23:11.680 08:21:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # local -ga x722 00:23:11.680 08:21:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # mlx=() 00:23:11.680 08:21:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # local -ga mlx 00:23:11.680 08:21:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:11.680 08:21:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:11.680 08:21:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:11.680 08:21:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:11.680 08:21:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:11.680 08:21:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:11.680 08:21:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:11.680 08:21:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:11.680 08:21:53 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:11.681 08:21:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:11.681 08:21:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:11.681 08:21:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:11.681 08:21:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:11.681 08:21:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:11.681 08:21:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:11.681 08:21:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:11.681 08:21:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:11.681 08:21:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:11.681 08:21:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:11.681 08:21:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:23:11.681 Found 0000:86:00.0 (0x8086 - 0x159b) 00:23:11.681 08:21:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:11.681 08:21:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:11.681 08:21:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:11.681 08:21:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:11.681 08:21:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:11.681 08:21:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:11.681 
08:21:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:23:11.681 Found 0000:86:00.1 (0x8086 - 0x159b) 00:23:11.681 08:21:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:11.681 08:21:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:11.681 08:21:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:11.681 08:21:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:11.681 08:21:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:11.681 08:21:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:11.681 08:21:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:11.681 08:21:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:11.681 08:21:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:11.681 08:21:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:11.681 08:21:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:11.681 08:21:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:11.681 08:21:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:11.681 08:21:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:11.681 08:21:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:11.681 08:21:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:23:11.681 Found net devices under 0000:86:00.0: cvl_0_0 00:23:11.681 08:21:53 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:11.681 08:21:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:11.681 08:21:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:11.681 08:21:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:11.681 08:21:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:11.681 08:21:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:11.681 08:21:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:11.681 08:21:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:11.681 08:21:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:23:11.681 Found net devices under 0000:86:00.1: cvl_0_1 00:23:11.681 08:21:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:11.681 08:21:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:11.681 08:21:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # is_hw=yes 00:23:11.681 08:21:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:11.681 08:21:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:11.681 08:21:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:11.681 08:21:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:11.681 08:21:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:11.681 08:21:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 
00:23:11.681 08:21:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:11.681 08:21:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:11.681 08:21:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:11.681 08:21:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:11.681 08:21:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:11.681 08:21:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:11.681 08:21:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:11.681 08:21:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:11.681 08:21:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:11.681 08:21:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:11.681 08:21:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:11.681 08:21:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:11.940 08:21:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:11.940 08:21:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:11.940 08:21:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:11.940 08:21:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:11.940 08:21:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo 
up 00:23:11.940 08:21:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:11.940 08:21:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:11.940 08:21:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:11.940 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:11.940 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.506 ms 00:23:11.940 00:23:11.940 --- 10.0.0.2 ping statistics --- 00:23:11.940 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:11.940 rtt min/avg/max/mdev = 0.506/0.506/0.506/0.000 ms 00:23:11.941 08:21:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:11.941 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:11.941 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.234 ms 00:23:11.941 00:23:11.941 --- 10.0.0.1 ping statistics --- 00:23:11.941 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:11.941 rtt min/avg/max/mdev = 0.234/0.234/0.234/0.000 ms 00:23:11.941 08:21:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:11.941 08:21:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@450 -- # return 0 00:23:11.941 08:21:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:11.941 08:21:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:11.941 08:21:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:11.941 08:21:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:11.941 08:21:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@493 -- # 
NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:11.941 08:21:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:11.941 08:21:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:11.941 08:21:54 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:23:11.941 08:21:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:11.941 08:21:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:11.941 08:21:54 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=1442520 00:23:11.941 08:21:54 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:23:11.941 08:21:54 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:11.941 08:21:54 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 1442520 00:23:11.941 08:21:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@835 -- # '[' -z 1442520 ']' 00:23:11.941 08:21:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:11.941 08:21:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:11.941 08:21:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:11.941 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:23:11.941 08:21:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:11.941 08:21:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:11.941 [2024-11-28 08:21:54.185992] Starting SPDK v25.01-pre git sha1 27aaaa748 / DPDK 24.03.0 initialization... 00:23:11.941 [2024-11-28 08:21:54.186041] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:12.200 [2024-11-28 08:21:54.252165] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:12.200 [2024-11-28 08:21:54.294814] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:12.200 [2024-11-28 08:21:54.294853] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:12.200 [2024-11-28 08:21:54.294860] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:12.200 [2024-11-28 08:21:54.294866] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:12.200 [2024-11-28 08:21:54.294871] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:12.200 [2024-11-28 08:21:54.296486] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:12.200 [2024-11-28 08:21:54.296585] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:12.200 [2024-11-28 08:21:54.296670] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:23:12.200 [2024-11-28 08:21:54.296671] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:12.200 08:21:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:12.200 08:21:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@868 -- # return 0 00:23:12.200 08:21:54 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:12.200 08:21:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:12.200 08:21:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:12.200 [2024-11-28 08:21:54.407235] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:12.200 08:21:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:12.200 08:21:54 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:23:12.200 08:21:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:12.200 08:21:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:12.200 08:21:54 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:23:12.200 08:21:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:12.200 08:21:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:12.464 Malloc0 00:23:12.464 08:21:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:12.464 08:21:54 
nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:12.464 08:21:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:12.464 08:21:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:12.464 08:21:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:12.464 08:21:54 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:23:12.464 08:21:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:12.464 08:21:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:12.464 08:21:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:12.464 08:21:54 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:12.464 08:21:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:12.464 08:21:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:12.464 [2024-11-28 08:21:54.506687] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:12.464 08:21:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:12.464 08:21:54 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:23:12.464 08:21:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:12.464 08:21:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:12.464 08:21:54 
nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:12.464 08:21:54 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:23:12.464 08:21:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:12.464 08:21:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:12.464 [ 00:23:12.464 { 00:23:12.464 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:23:12.464 "subtype": "Discovery", 00:23:12.464 "listen_addresses": [ 00:23:12.464 { 00:23:12.464 "trtype": "TCP", 00:23:12.464 "adrfam": "IPv4", 00:23:12.464 "traddr": "10.0.0.2", 00:23:12.464 "trsvcid": "4420" 00:23:12.464 } 00:23:12.464 ], 00:23:12.464 "allow_any_host": true, 00:23:12.464 "hosts": [] 00:23:12.464 }, 00:23:12.464 { 00:23:12.464 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:12.464 "subtype": "NVMe", 00:23:12.464 "listen_addresses": [ 00:23:12.464 { 00:23:12.464 "trtype": "TCP", 00:23:12.464 "adrfam": "IPv4", 00:23:12.464 "traddr": "10.0.0.2", 00:23:12.464 "trsvcid": "4420" 00:23:12.464 } 00:23:12.464 ], 00:23:12.464 "allow_any_host": true, 00:23:12.464 "hosts": [], 00:23:12.464 "serial_number": "SPDK00000000000001", 00:23:12.464 "model_number": "SPDK bdev Controller", 00:23:12.464 "max_namespaces": 32, 00:23:12.464 "min_cntlid": 1, 00:23:12.464 "max_cntlid": 65519, 00:23:12.464 "namespaces": [ 00:23:12.464 { 00:23:12.464 "nsid": 1, 00:23:12.464 "bdev_name": "Malloc0", 00:23:12.464 "name": "Malloc0", 00:23:12.464 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:23:12.464 "eui64": "ABCDEF0123456789", 00:23:12.464 "uuid": "581cf719-77fc-44e3-bf62-3dc3ca2d1c4b" 00:23:12.464 } 00:23:12.464 ] 00:23:12.464 } 00:23:12.464 ] 00:23:12.464 08:21:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:12.464 08:21:54 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:23:12.464 [2024-11-28 08:21:54.560417] Starting SPDK v25.01-pre git sha1 27aaaa748 / DPDK 24.03.0 initialization... 00:23:12.464 [2024-11-28 08:21:54.560462] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1442615 ] 00:23:12.464 [2024-11-28 08:21:54.602798] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to connect adminq (no timeout) 00:23:12.464 [2024-11-28 08:21:54.602850] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:23:12.464 [2024-11-28 08:21:54.602856] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:23:12.464 [2024-11-28 08:21:54.602874] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:23:12.464 [2024-11-28 08:21:54.602884] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:23:12.464 [2024-11-28 08:21:54.606260] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to wait for connect adminq (no timeout) 00:23:12.464 [2024-11-28 08:21:54.606295] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0xd50690 0 00:23:12.464 [2024-11-28 08:21:54.613955] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:23:12.464 [2024-11-28 08:21:54.613970] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:23:12.464 [2024-11-28 08:21:54.613976] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:23:12.464 [2024-11-28 08:21:54.613981] 
nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:23:12.464 [2024-11-28 08:21:54.614015] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:12.464 [2024-11-28 08:21:54.614020] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:12.464 [2024-11-28 08:21:54.614024] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xd50690) 00:23:12.464 [2024-11-28 08:21:54.614036] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:23:12.464 [2024-11-28 08:21:54.614055] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdb2100, cid 0, qid 0 00:23:12.464 [2024-11-28 08:21:54.620957] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:12.464 [2024-11-28 08:21:54.620967] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:12.464 [2024-11-28 08:21:54.620970] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:12.464 [2024-11-28 08:21:54.620974] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdb2100) on tqpair=0xd50690 00:23:12.464 [2024-11-28 08:21:54.620984] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:23:12.464 [2024-11-28 08:21:54.620991] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs (no timeout) 00:23:12.464 [2024-11-28 08:21:54.620995] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs wait for vs (no timeout) 00:23:12.464 [2024-11-28 08:21:54.621010] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:12.464 [2024-11-28 08:21:54.621014] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:12.464 [2024-11-28 08:21:54.621017] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xd50690) 
00:23:12.464 [2024-11-28 08:21:54.621024] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.464 [2024-11-28 08:21:54.621037] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdb2100, cid 0, qid 0 00:23:12.464 [2024-11-28 08:21:54.621222] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:12.464 [2024-11-28 08:21:54.621228] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:12.464 [2024-11-28 08:21:54.621230] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:12.464 [2024-11-28 08:21:54.621234] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdb2100) on tqpair=0xd50690 00:23:12.464 [2024-11-28 08:21:54.621241] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap (no timeout) 00:23:12.464 [2024-11-28 08:21:54.621248] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap wait for cap (no timeout) 00:23:12.464 [2024-11-28 08:21:54.621255] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:12.464 [2024-11-28 08:21:54.621258] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:12.464 [2024-11-28 08:21:54.621261] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xd50690) 00:23:12.464 [2024-11-28 08:21:54.621270] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.464 [2024-11-28 08:21:54.621280] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdb2100, cid 0, qid 0 00:23:12.464 [2024-11-28 08:21:54.621364] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:12.464 [2024-11-28 08:21:54.621370] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 
00:23:12.464 [2024-11-28 08:21:54.621373] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:12.464 [2024-11-28 08:21:54.621377] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdb2100) on tqpair=0xd50690 00:23:12.464 [2024-11-28 08:21:54.621381] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en (no timeout) 00:23:12.464 [2024-11-28 08:21:54.621388] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en wait for cc (timeout 15000 ms) 00:23:12.464 [2024-11-28 08:21:54.621394] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:12.464 [2024-11-28 08:21:54.621397] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:12.464 [2024-11-28 08:21:54.621400] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xd50690) 00:23:12.464 [2024-11-28 08:21:54.621406] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.465 [2024-11-28 08:21:54.621415] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdb2100, cid 0, qid 0 00:23:12.465 [2024-11-28 08:21:54.621481] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:12.465 [2024-11-28 08:21:54.621487] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:12.465 [2024-11-28 08:21:54.621490] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:12.465 [2024-11-28 08:21:54.621493] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdb2100) on tqpair=0xd50690 00:23:12.465 [2024-11-28 08:21:54.621497] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:23:12.465 [2024-11-28 08:21:54.621507] nvme_tcp.c: 
732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:12.465 [2024-11-28 08:21:54.621510] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:12.465 [2024-11-28 08:21:54.621514] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xd50690) 00:23:12.465 [2024-11-28 08:21:54.621519] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.465 [2024-11-28 08:21:54.621528] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdb2100, cid 0, qid 0 00:23:12.465 [2024-11-28 08:21:54.621608] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:12.465 [2024-11-28 08:21:54.621613] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:12.465 [2024-11-28 08:21:54.621617] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:12.465 [2024-11-28 08:21:54.621620] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdb2100) on tqpair=0xd50690 00:23:12.465 [2024-11-28 08:21:54.621624] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 0 && CSTS.RDY = 0 00:23:12.465 [2024-11-28 08:21:54.621628] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to controller is disabled (timeout 15000 ms) 00:23:12.465 [2024-11-28 08:21:54.621635] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:23:12.465 [2024-11-28 08:21:54.621743] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Setting CC.EN = 1 00:23:12.465 [2024-11-28 08:21:54.621748] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 
15000 ms) 00:23:12.465 [2024-11-28 08:21:54.621759] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:12.465 [2024-11-28 08:21:54.621762] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:12.465 [2024-11-28 08:21:54.621765] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xd50690) 00:23:12.465 [2024-11-28 08:21:54.621771] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.465 [2024-11-28 08:21:54.621781] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdb2100, cid 0, qid 0 00:23:12.465 [2024-11-28 08:21:54.621861] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:12.465 [2024-11-28 08:21:54.621867] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:12.465 [2024-11-28 08:21:54.621869] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:12.465 [2024-11-28 08:21:54.621873] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdb2100) on tqpair=0xd50690 00:23:12.465 [2024-11-28 08:21:54.621877] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:23:12.465 [2024-11-28 08:21:54.621886] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:12.465 [2024-11-28 08:21:54.621890] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:12.465 [2024-11-28 08:21:54.621893] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xd50690) 00:23:12.465 [2024-11-28 08:21:54.621898] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.465 [2024-11-28 08:21:54.621908] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdb2100, cid 0, qid 0 00:23:12.465 [2024-11-28 
08:21:54.622001] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:12.465 [2024-11-28 08:21:54.622007] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:12.465 [2024-11-28 08:21:54.622010] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:12.465 [2024-11-28 08:21:54.622013] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdb2100) on tqpair=0xd50690 00:23:12.465 [2024-11-28 08:21:54.622017] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:23:12.465 [2024-11-28 08:21:54.622022] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to reset admin queue (timeout 30000 ms) 00:23:12.465 [2024-11-28 08:21:54.622028] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to identify controller (no timeout) 00:23:12.465 [2024-11-28 08:21:54.622036] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for identify controller (timeout 30000 ms) 00:23:12.465 [2024-11-28 08:21:54.622044] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:12.465 [2024-11-28 08:21:54.622048] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xd50690) 00:23:12.465 [2024-11-28 08:21:54.622054] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.465 [2024-11-28 08:21:54.622064] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdb2100, cid 0, qid 0 00:23:12.465 [2024-11-28 08:21:54.622155] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:12.465 [2024-11-28 08:21:54.622161] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu 
type =7 00:23:12.465 [2024-11-28 08:21:54.622164] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:12.465 [2024-11-28 08:21:54.622167] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xd50690): datao=0, datal=4096, cccid=0 00:23:12.465 [2024-11-28 08:21:54.622172] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xdb2100) on tqpair(0xd50690): expected_datao=0, payload_size=4096 00:23:12.465 [2024-11-28 08:21:54.622177] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:12.465 [2024-11-28 08:21:54.622202] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:12.465 [2024-11-28 08:21:54.622207] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:12.465 [2024-11-28 08:21:54.622249] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:12.465 [2024-11-28 08:21:54.622255] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:12.465 [2024-11-28 08:21:54.622258] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:12.465 [2024-11-28 08:21:54.622262] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdb2100) on tqpair=0xd50690 00:23:12.465 [2024-11-28 08:21:54.622269] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_xfer_size 4294967295 00:23:12.465 [2024-11-28 08:21:54.622273] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] MDTS max_xfer_size 131072 00:23:12.465 [2024-11-28 08:21:54.622277] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CNTLID 0x0001 00:23:12.465 [2024-11-28 08:21:54.622281] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_sges 16 00:23:12.465 [2024-11-28 08:21:54.622285] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] 
fuses compare and write: 1 00:23:12.465 [2024-11-28 08:21:54.622290] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to configure AER (timeout 30000 ms) 00:23:12.465 [2024-11-28 08:21:54.622297] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for configure aer (timeout 30000 ms) 00:23:12.465 [2024-11-28 08:21:54.622302] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:12.465 [2024-11-28 08:21:54.622306] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:12.465 [2024-11-28 08:21:54.622309] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xd50690) 00:23:12.465 [2024-11-28 08:21:54.622315] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:23:12.465 [2024-11-28 08:21:54.622325] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdb2100, cid 0, qid 0 00:23:12.465 [2024-11-28 08:21:54.622393] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:12.465 [2024-11-28 08:21:54.622399] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:12.465 [2024-11-28 08:21:54.622402] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:12.465 [2024-11-28 08:21:54.622405] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdb2100) on tqpair=0xd50690 00:23:12.465 [2024-11-28 08:21:54.622412] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:12.465 [2024-11-28 08:21:54.622415] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:12.465 [2024-11-28 08:21:54.622418] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xd50690) 00:23:12.465 [2024-11-28 08:21:54.622423] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT 
REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:12.465 [2024-11-28 08:21:54.622429] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:12.465 [2024-11-28 08:21:54.622432] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:12.465 [2024-11-28 08:21:54.622435] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0xd50690) 00:23:12.465 [2024-11-28 08:21:54.622440] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:12.465 [2024-11-28 08:21:54.622445] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:12.465 [2024-11-28 08:21:54.622448] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:12.465 [2024-11-28 08:21:54.622451] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0xd50690) 00:23:12.465 [2024-11-28 08:21:54.622456] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:12.465 [2024-11-28 08:21:54.622463] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:12.465 [2024-11-28 08:21:54.622467] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:12.465 [2024-11-28 08:21:54.622470] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd50690) 00:23:12.465 [2024-11-28 08:21:54.622475] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:12.465 [2024-11-28 08:21:54.622479] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:23:12.465 [2024-11-28 08:21:54.622489] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for set keep 
alive timeout (timeout 30000 ms) 00:23:12.465 [2024-11-28 08:21:54.622495] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:12.465 [2024-11-28 08:21:54.622498] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xd50690) 00:23:12.465 [2024-11-28 08:21:54.622504] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.465 [2024-11-28 08:21:54.622514] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdb2100, cid 0, qid 0 00:23:12.466 [2024-11-28 08:21:54.622521] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdb2280, cid 1, qid 0 00:23:12.466 [2024-11-28 08:21:54.622525] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdb2400, cid 2, qid 0 00:23:12.466 [2024-11-28 08:21:54.622529] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdb2580, cid 3, qid 0 00:23:12.466 [2024-11-28 08:21:54.622533] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdb2700, cid 4, qid 0 00:23:12.466 [2024-11-28 08:21:54.622651] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:12.466 [2024-11-28 08:21:54.622657] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:12.466 [2024-11-28 08:21:54.622660] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:12.466 [2024-11-28 08:21:54.622663] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdb2700) on tqpair=0xd50690 00:23:12.466 [2024-11-28 08:21:54.622667] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Sending keep alive every 5000000 us 00:23:12.466 [2024-11-28 08:21:54.622672] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to ready (no timeout) 00:23:12.466 [2024-11-28 08:21:54.622681] nvme_tcp.c: 
909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:12.466 [2024-11-28 08:21:54.622685] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xd50690) 00:23:12.466 [2024-11-28 08:21:54.622690] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.466 [2024-11-28 08:21:54.622700] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdb2700, cid 4, qid 0 00:23:12.466 [2024-11-28 08:21:54.622778] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:12.466 [2024-11-28 08:21:54.622784] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:12.466 [2024-11-28 08:21:54.622787] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:12.466 [2024-11-28 08:21:54.622790] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xd50690): datao=0, datal=4096, cccid=4 00:23:12.466 [2024-11-28 08:21:54.622794] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xdb2700) on tqpair(0xd50690): expected_datao=0, payload_size=4096 00:23:12.466 [2024-11-28 08:21:54.622798] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:12.466 [2024-11-28 08:21:54.622804] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:12.466 [2024-11-28 08:21:54.622807] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:12.466 [2024-11-28 08:21:54.622851] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:12.466 [2024-11-28 08:21:54.622858] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:12.466 [2024-11-28 08:21:54.622862] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:12.466 [2024-11-28 08:21:54.622865] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdb2700) on tqpair=0xd50690 00:23:12.466 [2024-11-28 08:21:54.622875] 
nvme_ctrlr.c:4202:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Ctrlr already in ready state 00:23:12.466 [2024-11-28 08:21:54.622894] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:12.466 [2024-11-28 08:21:54.622898] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xd50690) 00:23:12.466 [2024-11-28 08:21:54.622904] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.466 [2024-11-28 08:21:54.622910] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:12.466 [2024-11-28 08:21:54.622913] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:12.466 [2024-11-28 08:21:54.622916] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xd50690) 00:23:12.466 [2024-11-28 08:21:54.622922] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:23:12.466 [2024-11-28 08:21:54.622935] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdb2700, cid 4, qid 0 00:23:12.466 [2024-11-28 08:21:54.622940] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdb2880, cid 5, qid 0 00:23:12.466 [2024-11-28 08:21:54.623059] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:12.466 [2024-11-28 08:21:54.623066] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:12.466 [2024-11-28 08:21:54.623068] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:12.466 [2024-11-28 08:21:54.623072] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xd50690): datao=0, datal=1024, cccid=4 00:23:12.466 [2024-11-28 08:21:54.623075] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xdb2700) on tqpair(0xd50690): expected_datao=0, 
payload_size=1024 00:23:12.466 [2024-11-28 08:21:54.623079] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:12.466 [2024-11-28 08:21:54.623085] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:12.466 [2024-11-28 08:21:54.623090] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:12.466 [2024-11-28 08:21:54.623094] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:12.466 [2024-11-28 08:21:54.623099] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:12.466 [2024-11-28 08:21:54.623102] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:12.466 [2024-11-28 08:21:54.623106] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdb2880) on tqpair=0xd50690 00:23:12.466 [2024-11-28 08:21:54.667958] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:12.466 [2024-11-28 08:21:54.667970] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:12.466 [2024-11-28 08:21:54.667974] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:12.466 [2024-11-28 08:21:54.667977] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdb2700) on tqpair=0xd50690 00:23:12.466 [2024-11-28 08:21:54.667990] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:12.466 [2024-11-28 08:21:54.667994] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xd50690) 00:23:12.466 [2024-11-28 08:21:54.668001] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.466 [2024-11-28 08:21:54.668018] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdb2700, cid 4, qid 0 00:23:12.466 [2024-11-28 08:21:54.668193] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:12.466 [2024-11-28 08:21:54.668199] 
nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:12.466 [2024-11-28 08:21:54.668202] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:12.466 [2024-11-28 08:21:54.668208] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xd50690): datao=0, datal=3072, cccid=4 00:23:12.466 [2024-11-28 08:21:54.668212] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xdb2700) on tqpair(0xd50690): expected_datao=0, payload_size=3072 00:23:12.466 [2024-11-28 08:21:54.668216] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:12.466 [2024-11-28 08:21:54.668241] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:12.466 [2024-11-28 08:21:54.668244] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:12.466 [2024-11-28 08:21:54.668327] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:12.466 [2024-11-28 08:21:54.668332] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:12.466 [2024-11-28 08:21:54.668336] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:12.466 [2024-11-28 08:21:54.668339] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdb2700) on tqpair=0xd50690 00:23:12.466 [2024-11-28 08:21:54.668347] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:12.466 [2024-11-28 08:21:54.668351] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xd50690) 00:23:12.466 [2024-11-28 08:21:54.668356] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.466 [2024-11-28 08:21:54.668370] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdb2700, cid 4, qid 0 00:23:12.466 [2024-11-28 08:21:54.668445] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:12.466 [2024-11-28 
08:21:54.668451] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:12.466 [2024-11-28 08:21:54.668454] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:12.466 [2024-11-28 08:21:54.668457] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xd50690): datao=0, datal=8, cccid=4 00:23:12.466 [2024-11-28 08:21:54.668461] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xdb2700) on tqpair(0xd50690): expected_datao=0, payload_size=8 00:23:12.466 [2024-11-28 08:21:54.668465] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:12.466 [2024-11-28 08:21:54.668470] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:12.466 [2024-11-28 08:21:54.668473] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:12.466 [2024-11-28 08:21:54.709077] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:12.466 [2024-11-28 08:21:54.709088] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:12.466 [2024-11-28 08:21:54.709092] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:12.466 [2024-11-28 08:21:54.709095] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdb2700) on tqpair=0xd50690 00:23:12.466 ===================================================== 00:23:12.466 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:23:12.466 ===================================================== 00:23:12.466 Controller Capabilities/Features 00:23:12.466 ================================ 00:23:12.466 Vendor ID: 0000 00:23:12.466 Subsystem Vendor ID: 0000 00:23:12.466 Serial Number: .................... 00:23:12.466 Model Number: ........................................ 
00:23:12.466 Firmware Version: 25.01 00:23:12.466 Recommended Arb Burst: 0 00:23:12.466 IEEE OUI Identifier: 00 00 00 00:23:12.466 Multi-path I/O 00:23:12.466 May have multiple subsystem ports: No 00:23:12.466 May have multiple controllers: No 00:23:12.466 Associated with SR-IOV VF: No 00:23:12.466 Max Data Transfer Size: 131072 00:23:12.466 Max Number of Namespaces: 0 00:23:12.466 Max Number of I/O Queues: 1024 00:23:12.466 NVMe Specification Version (VS): 1.3 00:23:12.466 NVMe Specification Version (Identify): 1.3 00:23:12.466 Maximum Queue Entries: 128 00:23:12.466 Contiguous Queues Required: Yes 00:23:12.466 Arbitration Mechanisms Supported 00:23:12.466 Weighted Round Robin: Not Supported 00:23:12.466 Vendor Specific: Not Supported 00:23:12.466 Reset Timeout: 15000 ms 00:23:12.466 Doorbell Stride: 4 bytes 00:23:12.466 NVM Subsystem Reset: Not Supported 00:23:12.466 Command Sets Supported 00:23:12.466 NVM Command Set: Supported 00:23:12.466 Boot Partition: Not Supported 00:23:12.466 Memory Page Size Minimum: 4096 bytes 00:23:12.466 Memory Page Size Maximum: 4096 bytes 00:23:12.466 Persistent Memory Region: Not Supported 00:23:12.466 Optional Asynchronous Events Supported 00:23:12.466 Namespace Attribute Notices: Not Supported 00:23:12.466 Firmware Activation Notices: Not Supported 00:23:12.467 ANA Change Notices: Not Supported 00:23:12.467 PLE Aggregate Log Change Notices: Not Supported 00:23:12.467 LBA Status Info Alert Notices: Not Supported 00:23:12.467 EGE Aggregate Log Change Notices: Not Supported 00:23:12.467 Normal NVM Subsystem Shutdown event: Not Supported 00:23:12.467 Zone Descriptor Change Notices: Not Supported 00:23:12.467 Discovery Log Change Notices: Supported 00:23:12.467 Controller Attributes 00:23:12.467 128-bit Host Identifier: Not Supported 00:23:12.467 Non-Operational Permissive Mode: Not Supported 00:23:12.467 NVM Sets: Not Supported 00:23:12.467 Read Recovery Levels: Not Supported 00:23:12.467 Endurance Groups: Not Supported 00:23:12.467 
Predictable Latency Mode: Not Supported 00:23:12.467 Traffic Based Keep ALive: Not Supported 00:23:12.467 Namespace Granularity: Not Supported 00:23:12.467 SQ Associations: Not Supported 00:23:12.467 UUID List: Not Supported 00:23:12.467 Multi-Domain Subsystem: Not Supported 00:23:12.467 Fixed Capacity Management: Not Supported 00:23:12.467 Variable Capacity Management: Not Supported 00:23:12.467 Delete Endurance Group: Not Supported 00:23:12.467 Delete NVM Set: Not Supported 00:23:12.467 Extended LBA Formats Supported: Not Supported 00:23:12.467 Flexible Data Placement Supported: Not Supported 00:23:12.467 00:23:12.467 Controller Memory Buffer Support 00:23:12.467 ================================ 00:23:12.467 Supported: No 00:23:12.467 00:23:12.467 Persistent Memory Region Support 00:23:12.467 ================================ 00:23:12.467 Supported: No 00:23:12.467 00:23:12.467 Admin Command Set Attributes 00:23:12.467 ============================ 00:23:12.467 Security Send/Receive: Not Supported 00:23:12.467 Format NVM: Not Supported 00:23:12.467 Firmware Activate/Download: Not Supported 00:23:12.467 Namespace Management: Not Supported 00:23:12.467 Device Self-Test: Not Supported 00:23:12.467 Directives: Not Supported 00:23:12.467 NVMe-MI: Not Supported 00:23:12.467 Virtualization Management: Not Supported 00:23:12.467 Doorbell Buffer Config: Not Supported 00:23:12.467 Get LBA Status Capability: Not Supported 00:23:12.467 Command & Feature Lockdown Capability: Not Supported 00:23:12.467 Abort Command Limit: 1 00:23:12.467 Async Event Request Limit: 4 00:23:12.467 Number of Firmware Slots: N/A 00:23:12.467 Firmware Slot 1 Read-Only: N/A 00:23:12.467 Firmware Activation Without Reset: N/A 00:23:12.467 Multiple Update Detection Support: N/A 00:23:12.467 Firmware Update Granularity: No Information Provided 00:23:12.467 Per-Namespace SMART Log: No 00:23:12.467 Asymmetric Namespace Access Log Page: Not Supported 00:23:12.467 Subsystem NQN: 
nqn.2014-08.org.nvmexpress.discovery 00:23:12.467 Command Effects Log Page: Not Supported 00:23:12.467 Get Log Page Extended Data: Supported 00:23:12.467 Telemetry Log Pages: Not Supported 00:23:12.467 Persistent Event Log Pages: Not Supported 00:23:12.467 Supported Log Pages Log Page: May Support 00:23:12.467 Commands Supported & Effects Log Page: Not Supported 00:23:12.467 Feature Identifiers & Effects Log Page:May Support 00:23:12.467 NVMe-MI Commands & Effects Log Page: May Support 00:23:12.467 Data Area 4 for Telemetry Log: Not Supported 00:23:12.467 Error Log Page Entries Supported: 128 00:23:12.467 Keep Alive: Not Supported 00:23:12.467 00:23:12.467 NVM Command Set Attributes 00:23:12.467 ========================== 00:23:12.467 Submission Queue Entry Size 00:23:12.467 Max: 1 00:23:12.467 Min: 1 00:23:12.467 Completion Queue Entry Size 00:23:12.467 Max: 1 00:23:12.467 Min: 1 00:23:12.467 Number of Namespaces: 0 00:23:12.467 Compare Command: Not Supported 00:23:12.467 Write Uncorrectable Command: Not Supported 00:23:12.467 Dataset Management Command: Not Supported 00:23:12.467 Write Zeroes Command: Not Supported 00:23:12.467 Set Features Save Field: Not Supported 00:23:12.467 Reservations: Not Supported 00:23:12.467 Timestamp: Not Supported 00:23:12.467 Copy: Not Supported 00:23:12.467 Volatile Write Cache: Not Present 00:23:12.467 Atomic Write Unit (Normal): 1 00:23:12.467 Atomic Write Unit (PFail): 1 00:23:12.467 Atomic Compare & Write Unit: 1 00:23:12.467 Fused Compare & Write: Supported 00:23:12.467 Scatter-Gather List 00:23:12.467 SGL Command Set: Supported 00:23:12.467 SGL Keyed: Supported 00:23:12.467 SGL Bit Bucket Descriptor: Not Supported 00:23:12.467 SGL Metadata Pointer: Not Supported 00:23:12.467 Oversized SGL: Not Supported 00:23:12.467 SGL Metadata Address: Not Supported 00:23:12.467 SGL Offset: Supported 00:23:12.467 Transport SGL Data Block: Not Supported 00:23:12.467 Replay Protected Memory Block: Not Supported 00:23:12.467 00:23:12.467 
Firmware Slot Information 00:23:12.467 ========================= 00:23:12.467 Active slot: 0 00:23:12.467 00:23:12.467 00:23:12.467 Error Log 00:23:12.467 ========= 00:23:12.467 00:23:12.467 Active Namespaces 00:23:12.467 ================= 00:23:12.467 Discovery Log Page 00:23:12.467 ================== 00:23:12.467 Generation Counter: 2 00:23:12.467 Number of Records: 2 00:23:12.467 Record Format: 0 00:23:12.467 00:23:12.467 Discovery Log Entry 0 00:23:12.467 ---------------------- 00:23:12.467 Transport Type: 3 (TCP) 00:23:12.467 Address Family: 1 (IPv4) 00:23:12.467 Subsystem Type: 3 (Current Discovery Subsystem) 00:23:12.467 Entry Flags: 00:23:12.467 Duplicate Returned Information: 1 00:23:12.467 Explicit Persistent Connection Support for Discovery: 1 00:23:12.467 Transport Requirements: 00:23:12.467 Secure Channel: Not Required 00:23:12.467 Port ID: 0 (0x0000) 00:23:12.467 Controller ID: 65535 (0xffff) 00:23:12.467 Admin Max SQ Size: 128 00:23:12.467 Transport Service Identifier: 4420 00:23:12.467 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:23:12.467 Transport Address: 10.0.0.2 00:23:12.467 Discovery Log Entry 1 00:23:12.467 ---------------------- 00:23:12.467 Transport Type: 3 (TCP) 00:23:12.467 Address Family: 1 (IPv4) 00:23:12.467 Subsystem Type: 2 (NVM Subsystem) 00:23:12.467 Entry Flags: 00:23:12.467 Duplicate Returned Information: 0 00:23:12.467 Explicit Persistent Connection Support for Discovery: 0 00:23:12.467 Transport Requirements: 00:23:12.467 Secure Channel: Not Required 00:23:12.467 Port ID: 0 (0x0000) 00:23:12.467 Controller ID: 65535 (0xffff) 00:23:12.467 Admin Max SQ Size: 128 00:23:12.467 Transport Service Identifier: 4420 00:23:12.467 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:23:12.467 Transport Address: 10.0.0.2 [2024-11-28 08:21:54.709179] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Prepare to destruct SSD 00:23:12.467 [2024-11-28 
08:21:54.709191] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdb2100) on tqpair=0xd50690 00:23:12.467 [2024-11-28 08:21:54.709197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.467 [2024-11-28 08:21:54.709202] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdb2280) on tqpair=0xd50690 00:23:12.467 [2024-11-28 08:21:54.709206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.467 [2024-11-28 08:21:54.709210] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdb2400) on tqpair=0xd50690 00:23:12.467 [2024-11-28 08:21:54.709214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.467 [2024-11-28 08:21:54.709218] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdb2580) on tqpair=0xd50690 00:23:12.467 [2024-11-28 08:21:54.709223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.467 [2024-11-28 08:21:54.709232] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:12.467 [2024-11-28 08:21:54.709236] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:12.467 [2024-11-28 08:21:54.709239] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd50690) 00:23:12.467 [2024-11-28 08:21:54.709246] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.467 [2024-11-28 08:21:54.709260] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdb2580, cid 3, qid 0 00:23:12.467 [2024-11-28 08:21:54.709328] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:12.467 [2024-11-28 
08:21:54.709335] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:12.467 [2024-11-28 08:21:54.709338] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:12.467 [2024-11-28 08:21:54.709342] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdb2580) on tqpair=0xd50690 00:23:12.467 [2024-11-28 08:21:54.709348] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:12.467 [2024-11-28 08:21:54.709351] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:12.467 [2024-11-28 08:21:54.709354] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd50690) 00:23:12.467 [2024-11-28 08:21:54.709360] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.468 [2024-11-28 08:21:54.709373] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdb2580, cid 3, qid 0 00:23:12.468 [2024-11-28 08:21:54.709477] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:12.468 [2024-11-28 08:21:54.709483] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:12.468 [2024-11-28 08:21:54.709486] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:12.468 [2024-11-28 08:21:54.709489] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdb2580) on tqpair=0xd50690 00:23:12.468 [2024-11-28 08:21:54.709493] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] RTD3E = 0 us 00:23:12.468 [2024-11-28 08:21:54.709498] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown timeout = 10000 ms 00:23:12.468 [2024-11-28 08:21:54.709506] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:12.468 [2024-11-28 08:21:54.709510] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:12.468 
[2024-11-28 08:21:54.709513] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd50690) 00:23:12.468 [2024-11-28 08:21:54.709519] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.468 [2024-11-28 08:21:54.709528] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdb2580, cid 3, qid 0 00:23:12.468 [2024-11-28 08:21:54.709589] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:12.468 [2024-11-28 08:21:54.709595] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:12.468 [2024-11-28 08:21:54.709598] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:12.468 [2024-11-28 08:21:54.709601] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdb2580) on tqpair=0xd50690 00:23:12.468 [2024-11-28 08:21:54.709610] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:12.468 [2024-11-28 08:21:54.709614] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:12.468 [2024-11-28 08:21:54.709617] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd50690) 00:23:12.468 [2024-11-28 08:21:54.709623] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.468 [2024-11-28 08:21:54.709632] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdb2580, cid 3, qid 0 00:23:12.468 [2024-11-28 08:21:54.709729] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:12.468 [2024-11-28 08:21:54.709735] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:12.468 [2024-11-28 08:21:54.709738] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:12.468 [2024-11-28 08:21:54.709743] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdb2580) on tqpair=0xd50690 
00:23:12.468 [2024-11-28 08:21:54.709752] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:12.468 [2024-11-28 08:21:54.709756] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:12.468 [2024-11-28 08:21:54.709759] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd50690) 00:23:12.468 [2024-11-28 08:21:54.709764] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.468 [2024-11-28 08:21:54.709774] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdb2580, cid 3, qid 0 00:23:12.468 [2024-11-28 08:21:54.709900] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:12.468 [2024-11-28 08:21:54.709905] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:12.468 [2024-11-28 08:21:54.709909] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:12.468 [2024-11-28 08:21:54.709912] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdb2580) on tqpair=0xd50690 00:23:12.468 [2024-11-28 08:21:54.709921] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:12.468 [2024-11-28 08:21:54.709924] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:12.468 [2024-11-28 08:21:54.709928] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd50690) 00:23:12.468 [2024-11-28 08:21:54.709934] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.468 [2024-11-28 08:21:54.709943] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdb2580, cid 3, qid 0 00:23:12.468 [2024-11-28 08:21:54.710033] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:12.468 [2024-11-28 08:21:54.710039] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:12.468 
[2024-11-28 08:21:54.710043] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:12.468 [2024-11-28 08:21:54.710046] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdb2580) on tqpair=0xd50690 00:23:12.468 [2024-11-28 08:21:54.710054] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:12.468 [2024-11-28 08:21:54.710058] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:12.468 [2024-11-28 08:21:54.710061] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd50690) 00:23:12.468 [2024-11-28 08:21:54.710067] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.468 [2024-11-28 08:21:54.710077] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdb2580, cid 3, qid 0 00:23:12.468 [2024-11-28 08:21:54.710140] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:12.468 [2024-11-28 08:21:54.710145] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:12.468 [2024-11-28 08:21:54.710149] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:12.468 [2024-11-28 08:21:54.710152] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdb2580) on tqpair=0xd50690 00:23:12.468 [2024-11-28 08:21:54.710160] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:12.468 [2024-11-28 08:21:54.710164] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:12.468 [2024-11-28 08:21:54.710167] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd50690) 00:23:12.468 [2024-11-28 08:21:54.710173] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.468 [2024-11-28 08:21:54.710182] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdb2580, cid 3, qid 0 
00:23:12.468 [2024-11-28 08:21:54.710283] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:12.468 [2024-11-28 08:21:54.710288] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:12.468 [2024-11-28 08:21:54.710292] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:12.468 [2024-11-28 08:21:54.710295] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdb2580) on tqpair=0xd50690 00:23:12.468 [2024-11-28 08:21:54.710307] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:12.468 [2024-11-28 08:21:54.710311] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:12.468 [2024-11-28 08:21:54.710314] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd50690) 00:23:12.468 [2024-11-28 08:21:54.710320] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.468 [2024-11-28 08:21:54.710329] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdb2580, cid 3, qid 0 00:23:12.468 [2024-11-28 08:21:54.710434] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:12.468 [2024-11-28 08:21:54.710440] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:12.468 [2024-11-28 08:21:54.710443] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:12.468 [2024-11-28 08:21:54.710447] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdb2580) on tqpair=0xd50690 00:23:12.468 [2024-11-28 08:21:54.710455] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:12.468 [2024-11-28 08:21:54.710458] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:12.468 [2024-11-28 08:21:54.710462] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd50690) 00:23:12.468 [2024-11-28 08:21:54.710467] nvme_qpair.c: 
218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.468 [2024-11-28 08:21:54.710477] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdb2580, cid 3, qid 0 00:23:12.468 [2024-11-28 08:21:54.710597] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:12.468 [2024-11-28 08:21:54.710602] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:12.468 [2024-11-28 08:21:54.710605] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:12.468 [2024-11-28 08:21:54.710608] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdb2580) on tqpair=0xd50690 00:23:12.468 [2024-11-28 08:21:54.710617] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:12.468 [2024-11-28 08:21:54.710621] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:12.468 [2024-11-28 08:21:54.710624] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd50690) 00:23:12.468 [2024-11-28 08:21:54.710630] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.468 [2024-11-28 08:21:54.710640] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdb2580, cid 3, qid 0 00:23:12.468 [2024-11-28 08:21:54.710702] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:12.468 [2024-11-28 08:21:54.710708] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:12.468 [2024-11-28 08:21:54.710711] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:12.468 [2024-11-28 08:21:54.710714] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdb2580) on tqpair=0xd50690 00:23:12.468 [2024-11-28 08:21:54.710722] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:12.469 [2024-11-28 08:21:54.710726] nvme_tcp.c: 
909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:12.469 [2024-11-28 08:21:54.710729] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd50690) 00:23:12.469 [2024-11-28 08:21:54.710735] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.469 [2024-11-28 08:21:54.710744] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdb2580, cid 3, qid 0 00:23:12.469 [2024-11-28 08:21:54.710850] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:12.469 [2024-11-28 08:21:54.710856] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:12.469 [2024-11-28 08:21:54.710859] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:12.469 [2024-11-28 08:21:54.710862] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdb2580) on tqpair=0xd50690 00:23:12.469 [2024-11-28 08:21:54.710872] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:12.469 [2024-11-28 08:21:54.710877] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:12.469 [2024-11-28 08:21:54.710880] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd50690) 00:23:12.469 [2024-11-28 08:21:54.710886] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.469 [2024-11-28 08:21:54.710896] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdb2580, cid 3, qid 0 00:23:12.469 [2024-11-28 08:21:54.710989] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:12.469 [2024-11-28 08:21:54.710995] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:12.469 [2024-11-28 08:21:54.710998] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:12.469 [2024-11-28 08:21:54.711002] 
nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdb2580) on tqpair=0xd50690 00:23:12.469 [2024-11-28 08:21:54.711010] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:12.469 [2024-11-28 08:21:54.711014] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:12.469 [2024-11-28 08:21:54.711017] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd50690) 00:23:12.469 [2024-11-28 08:21:54.711023] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.469 [2024-11-28 08:21:54.711032] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdb2580, cid 3, qid 0 00:23:12.469 [2024-11-28 08:21:54.711140] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:12.469 [2024-11-28 08:21:54.711146] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:12.469 [2024-11-28 08:21:54.711149] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:12.469 [2024-11-28 08:21:54.711152] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdb2580) on tqpair=0xd50690 00:23:12.469 [2024-11-28 08:21:54.711161] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:12.469 [2024-11-28 08:21:54.711165] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:12.469 [2024-11-28 08:21:54.711168] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd50690) 00:23:12.469 [2024-11-28 08:21:54.711173] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.469 [2024-11-28 08:21:54.711183] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdb2580, cid 3, qid 0 00:23:12.469 [2024-11-28 08:21:54.711243] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:12.469 [2024-11-28 
08:21:54.711249] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:12.469 [2024-11-28 08:21:54.711252] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:12.469 [2024-11-28 08:21:54.711256] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdb2580) on tqpair=0xd50690 00:23:12.469 [2024-11-28 08:21:54.711264] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:12.469 [2024-11-28 08:21:54.711268] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:12.469 [2024-11-28 08:21:54.711271] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd50690) 00:23:12.469 [2024-11-28 08:21:54.711276] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.469 [2024-11-28 08:21:54.711286] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdb2580, cid 3, qid 0 00:23:12.469 [2024-11-28 08:21:54.711392] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:12.469 [2024-11-28 08:21:54.711398] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:12.469 [2024-11-28 08:21:54.711401] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:12.469 [2024-11-28 08:21:54.711405] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdb2580) on tqpair=0xd50690 00:23:12.469 [2024-11-28 08:21:54.711413] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:12.469 [2024-11-28 08:21:54.711417] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:12.469 [2024-11-28 08:21:54.711422] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd50690) 00:23:12.469 [2024-11-28 08:21:54.711428] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.469 [2024-11-28 
08:21:54.711437] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdb2580, cid 3, qid 0 00:23:12.469 [2024-11-28 08:21:54.714954] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:12.469 [2024-11-28 08:21:54.714963] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:12.469 [2024-11-28 08:21:54.714966] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:12.469 [2024-11-28 08:21:54.714970] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdb2580) on tqpair=0xd50690 00:23:12.469 [2024-11-28 08:21:54.714980] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:12.469 [2024-11-28 08:21:54.714984] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:12.469 [2024-11-28 08:21:54.714987] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd50690) 00:23:12.469 [2024-11-28 08:21:54.714993] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.469 [2024-11-28 08:21:54.715004] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdb2580, cid 3, qid 0 00:23:12.469 [2024-11-28 08:21:54.715189] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:12.469 [2024-11-28 08:21:54.715195] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:12.469 [2024-11-28 08:21:54.715198] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:12.469 [2024-11-28 08:21:54.715202] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdb2580) on tqpair=0xd50690 00:23:12.469 [2024-11-28 08:21:54.715208] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown complete in 5 milliseconds 00:23:12.469 00:23:12.733 08:21:54 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:23:12.733 [2024-11-28 08:21:54.751923] Starting SPDK v25.01-pre git sha1 27aaaa748 / DPDK 24.03.0 initialization... 00:23:12.733 [2024-11-28 08:21:54.751970] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1442725 ] 00:23:12.733 [2024-11-28 08:21:54.793381] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to connect adminq (no timeout) 00:23:12.733 [2024-11-28 08:21:54.793425] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:23:12.733 [2024-11-28 08:21:54.793430] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:23:12.733 [2024-11-28 08:21:54.793445] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:23:12.733 [2024-11-28 08:21:54.793453] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:23:12.733 [2024-11-28 08:21:54.793880] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to wait for connect adminq (no timeout) 00:23:12.733 [2024-11-28 08:21:54.793905] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1d11690 0 00:23:12.733 [2024-11-28 08:21:54.803959] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:23:12.733 [2024-11-28 08:21:54.803973] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:23:12.733 [2024-11-28 08:21:54.803976] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:23:12.733 [2024-11-28 08:21:54.803979] nvme_tcp.c:1502:nvme_tcp_icresp_handle: 
*DEBUG*: host_ddgst_enable: 0 00:23:12.733 [2024-11-28 08:21:54.804009] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:12.733 [2024-11-28 08:21:54.804014] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:12.733 [2024-11-28 08:21:54.804017] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1d11690) 00:23:12.733 [2024-11-28 08:21:54.804027] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:23:12.733 [2024-11-28 08:21:54.804044] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d73100, cid 0, qid 0 00:23:12.733 [2024-11-28 08:21:54.814958] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:12.733 [2024-11-28 08:21:54.814966] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:12.733 [2024-11-28 08:21:54.814970] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:12.733 [2024-11-28 08:21:54.814974] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d73100) on tqpair=0x1d11690 00:23:12.733 [2024-11-28 08:21:54.814984] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:23:12.733 [2024-11-28 08:21:54.814990] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs (no timeout) 00:23:12.733 [2024-11-28 08:21:54.814995] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs wait for vs (no timeout) 00:23:12.733 [2024-11-28 08:21:54.815008] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:12.733 [2024-11-28 08:21:54.815012] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:12.733 [2024-11-28 08:21:54.815015] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1d11690) 00:23:12.733 [2024-11-28 08:21:54.815022] nvme_qpair.c: 
218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.733 [2024-11-28 08:21:54.815035] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d73100, cid 0, qid 0 00:23:12.733 [2024-11-28 08:21:54.815124] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:12.733 [2024-11-28 08:21:54.815130] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:12.733 [2024-11-28 08:21:54.815133] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:12.733 [2024-11-28 08:21:54.815137] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d73100) on tqpair=0x1d11690 00:23:12.733 [2024-11-28 08:21:54.815143] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap (no timeout) 00:23:12.733 [2024-11-28 08:21:54.815150] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap wait for cap (no timeout) 00:23:12.733 [2024-11-28 08:21:54.815156] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:12.733 [2024-11-28 08:21:54.815159] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:12.733 [2024-11-28 08:21:54.815162] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1d11690) 00:23:12.733 [2024-11-28 08:21:54.815168] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.733 [2024-11-28 08:21:54.815178] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d73100, cid 0, qid 0 00:23:12.733 [2024-11-28 08:21:54.815246] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:12.733 [2024-11-28 08:21:54.815252] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:12.733 [2024-11-28 08:21:54.815255] 
nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:12.733 [2024-11-28 08:21:54.815259] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d73100) on tqpair=0x1d11690 00:23:12.733 [2024-11-28 08:21:54.815263] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en (no timeout) 00:23:12.733 [2024-11-28 08:21:54.815270] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en wait for cc (timeout 15000 ms) 00:23:12.733 [2024-11-28 08:21:54.815276] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:12.733 [2024-11-28 08:21:54.815282] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:12.733 [2024-11-28 08:21:54.815285] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1d11690) 00:23:12.733 [2024-11-28 08:21:54.815291] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.733 [2024-11-28 08:21:54.815301] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d73100, cid 0, qid 0 00:23:12.733 [2024-11-28 08:21:54.815367] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:12.733 [2024-11-28 08:21:54.815373] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:12.733 [2024-11-28 08:21:54.815376] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:12.733 [2024-11-28 08:21:54.815379] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d73100) on tqpair=0x1d11690 00:23:12.733 [2024-11-28 08:21:54.815384] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:23:12.733 [2024-11-28 08:21:54.815392] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:12.733 [2024-11-28 
08:21:54.815396] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:12.733 [2024-11-28 08:21:54.815399] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1d11690) 00:23:12.733 [2024-11-28 08:21:54.815405] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.733 [2024-11-28 08:21:54.815414] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d73100, cid 0, qid 0 00:23:12.733 [2024-11-28 08:21:54.815485] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:12.733 [2024-11-28 08:21:54.815491] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:12.733 [2024-11-28 08:21:54.815494] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:12.733 [2024-11-28 08:21:54.815497] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d73100) on tqpair=0x1d11690 00:23:12.733 [2024-11-28 08:21:54.815501] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 0 && CSTS.RDY = 0 00:23:12.734 [2024-11-28 08:21:54.815505] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to controller is disabled (timeout 15000 ms) 00:23:12.734 [2024-11-28 08:21:54.815512] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:23:12.734 [2024-11-28 08:21:54.815619] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Setting CC.EN = 1 00:23:12.734 [2024-11-28 08:21:54.815623] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:23:12.734 [2024-11-28 08:21:54.815630] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 
00:23:12.734 [2024-11-28 08:21:54.815633] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:12.734 [2024-11-28 08:21:54.815637] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1d11690) 00:23:12.734 [2024-11-28 08:21:54.815642] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.734 [2024-11-28 08:21:54.815652] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d73100, cid 0, qid 0 00:23:12.734 [2024-11-28 08:21:54.815714] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:12.734 [2024-11-28 08:21:54.815720] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:12.734 [2024-11-28 08:21:54.815723] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:12.734 [2024-11-28 08:21:54.815726] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d73100) on tqpair=0x1d11690 00:23:12.734 [2024-11-28 08:21:54.815730] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:23:12.734 [2024-11-28 08:21:54.815742] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:12.734 [2024-11-28 08:21:54.815746] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:12.734 [2024-11-28 08:21:54.815749] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1d11690) 00:23:12.734 [2024-11-28 08:21:54.815755] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.734 [2024-11-28 08:21:54.815764] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d73100, cid 0, qid 0 00:23:12.734 [2024-11-28 08:21:54.815835] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:12.734 [2024-11-28 08:21:54.815840] 
nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:12.734 [2024-11-28 08:21:54.815843] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:12.734 [2024-11-28 08:21:54.815846] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d73100) on tqpair=0x1d11690 00:23:12.734 [2024-11-28 08:21:54.815850] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:23:12.734 [2024-11-28 08:21:54.815854] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to reset admin queue (timeout 30000 ms) 00:23:12.734 [2024-11-28 08:21:54.815861] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller (no timeout) 00:23:12.734 [2024-11-28 08:21:54.815871] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify controller (timeout 30000 ms) 00:23:12.734 [2024-11-28 08:21:54.815879] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:12.734 [2024-11-28 08:21:54.815882] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1d11690) 00:23:12.734 [2024-11-28 08:21:54.815888] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.734 [2024-11-28 08:21:54.815898] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d73100, cid 0, qid 0 00:23:12.734 [2024-11-28 08:21:54.816001] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:12.734 [2024-11-28 08:21:54.816008] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:12.734 [2024-11-28 08:21:54.816011] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:12.734 [2024-11-28 08:21:54.816014] 
nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1d11690): datao=0, datal=4096, cccid=0 00:23:12.734 [2024-11-28 08:21:54.816018] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1d73100) on tqpair(0x1d11690): expected_datao=0, payload_size=4096 00:23:12.734 [2024-11-28 08:21:54.816022] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:12.734 [2024-11-28 08:21:54.816028] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:12.734 [2024-11-28 08:21:54.816031] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:12.734 [2024-11-28 08:21:54.816044] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:12.734 [2024-11-28 08:21:54.816049] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:12.734 [2024-11-28 08:21:54.816052] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:12.734 [2024-11-28 08:21:54.816055] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d73100) on tqpair=0x1d11690 00:23:12.734 [2024-11-28 08:21:54.816062] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_xfer_size 4294967295 00:23:12.734 [2024-11-28 08:21:54.816066] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] MDTS max_xfer_size 131072 00:23:12.734 [2024-11-28 08:21:54.816070] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CNTLID 0x0001 00:23:12.734 [2024-11-28 08:21:54.816074] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_sges 16 00:23:12.734 [2024-11-28 08:21:54.816080] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] fuses compare and write: 1 00:23:12.734 [2024-11-28 08:21:54.816084] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to configure AER (timeout 30000 ms) 
00:23:12.734 [2024-11-28 08:21:54.816091] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for configure aer (timeout 30000 ms) 00:23:12.734 [2024-11-28 08:21:54.816097] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:12.734 [2024-11-28 08:21:54.816101] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:12.734 [2024-11-28 08:21:54.816104] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1d11690) 00:23:12.734 [2024-11-28 08:21:54.816109] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:23:12.734 [2024-11-28 08:21:54.816120] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d73100, cid 0, qid 0 00:23:12.734 [2024-11-28 08:21:54.816186] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:12.734 [2024-11-28 08:21:54.816192] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:12.734 [2024-11-28 08:21:54.816194] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:12.734 [2024-11-28 08:21:54.816198] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d73100) on tqpair=0x1d11690 00:23:12.734 [2024-11-28 08:21:54.816203] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:12.734 [2024-11-28 08:21:54.816207] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:12.734 [2024-11-28 08:21:54.816210] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1d11690) 00:23:12.734 [2024-11-28 08:21:54.816215] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:12.734 [2024-11-28 08:21:54.816220] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:12.734 [2024-11-28 08:21:54.816223] 
nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:12.734 [2024-11-28 08:21:54.816226] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1d11690) 00:23:12.734 [2024-11-28 08:21:54.816231] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:12.734 [2024-11-28 08:21:54.816237] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:12.734 [2024-11-28 08:21:54.816240] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:12.734 [2024-11-28 08:21:54.816243] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1d11690) 00:23:12.734 [2024-11-28 08:21:54.816248] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:12.734 [2024-11-28 08:21:54.816253] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:12.734 [2024-11-28 08:21:54.816256] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:12.734 [2024-11-28 08:21:54.816259] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d11690) 00:23:12.734 [2024-11-28 08:21:54.816264] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:12.734 [2024-11-28 08:21:54.816268] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:23:12.734 [2024-11-28 08:21:54.816278] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:23:12.734 [2024-11-28 08:21:54.816284] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:12.734 [2024-11-28 08:21:54.816287] nvme_tcp.c: 
918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1d11690) 00:23:12.734 [2024-11-28 08:21:54.816293] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.734 [2024-11-28 08:21:54.816305] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d73100, cid 0, qid 0 00:23:12.734 [2024-11-28 08:21:54.816310] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d73280, cid 1, qid 0 00:23:12.734 [2024-11-28 08:21:54.816314] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d73400, cid 2, qid 0 00:23:12.734 [2024-11-28 08:21:54.816318] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d73580, cid 3, qid 0 00:23:12.734 [2024-11-28 08:21:54.816322] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d73700, cid 4, qid 0 00:23:12.734 [2024-11-28 08:21:54.816419] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:12.734 [2024-11-28 08:21:54.816425] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:12.734 [2024-11-28 08:21:54.816428] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:12.734 [2024-11-28 08:21:54.816431] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d73700) on tqpair=0x1d11690 00:23:12.734 [2024-11-28 08:21:54.816435] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Sending keep alive every 5000000 us 00:23:12.734 [2024-11-28 08:21:54.816440] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller iocs specific (timeout 30000 ms) 00:23:12.734 [2024-11-28 08:21:54.816449] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set number of queues (timeout 30000 ms) 00:23:12.734 [2024-11-28 08:21:54.816455] 
nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set number of queues (timeout 30000 ms) 00:23:12.734 [2024-11-28 08:21:54.816460] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:12.734 [2024-11-28 08:21:54.816464] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:12.734 [2024-11-28 08:21:54.816467] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1d11690) 00:23:12.735 [2024-11-28 08:21:54.816472] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:23:12.735 [2024-11-28 08:21:54.816482] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d73700, cid 4, qid 0 00:23:12.735 [2024-11-28 08:21:54.816553] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:12.735 [2024-11-28 08:21:54.816558] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:12.735 [2024-11-28 08:21:54.816561] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:12.735 [2024-11-28 08:21:54.816565] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d73700) on tqpair=0x1d11690 00:23:12.735 [2024-11-28 08:21:54.816617] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify active ns (timeout 30000 ms) 00:23:12.735 [2024-11-28 08:21:54.816626] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify active ns (timeout 30000 ms) 00:23:12.735 [2024-11-28 08:21:54.816632] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:12.735 [2024-11-28 08:21:54.816636] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1d11690) 00:23:12.735 [2024-11-28 08:21:54.816641] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY 
(06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.735 [2024-11-28 08:21:54.816651] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d73700, cid 4, qid 0 00:23:12.735 [2024-11-28 08:21:54.816726] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:12.735 [2024-11-28 08:21:54.816732] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:12.735 [2024-11-28 08:21:54.816734] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:12.735 [2024-11-28 08:21:54.816738] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1d11690): datao=0, datal=4096, cccid=4 00:23:12.735 [2024-11-28 08:21:54.816742] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1d73700) on tqpair(0x1d11690): expected_datao=0, payload_size=4096 00:23:12.735 [2024-11-28 08:21:54.816747] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:12.735 [2024-11-28 08:21:54.816765] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:12.735 [2024-11-28 08:21:54.816769] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:12.735 [2024-11-28 08:21:54.816805] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:12.735 [2024-11-28 08:21:54.816810] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:12.735 [2024-11-28 08:21:54.816813] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:12.735 [2024-11-28 08:21:54.816816] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d73700) on tqpair=0x1d11690 00:23:12.735 [2024-11-28 08:21:54.816828] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Namespace 1 was added 00:23:12.735 [2024-11-28 08:21:54.816835] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns (timeout 30000 ms) 00:23:12.735 [2024-11-28 
08:21:54.816844] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify ns (timeout 30000 ms) 00:23:12.735 [2024-11-28 08:21:54.816850] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:12.735 [2024-11-28 08:21:54.816853] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1d11690) 00:23:12.735 [2024-11-28 08:21:54.816859] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.735 [2024-11-28 08:21:54.816869] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d73700, cid 4, qid 0 00:23:12.735 [2024-11-28 08:21:54.816960] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:12.735 [2024-11-28 08:21:54.816966] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:12.735 [2024-11-28 08:21:54.816969] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:12.735 [2024-11-28 08:21:54.816972] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1d11690): datao=0, datal=4096, cccid=4 00:23:12.735 [2024-11-28 08:21:54.816976] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1d73700) on tqpair(0x1d11690): expected_datao=0, payload_size=4096 00:23:12.735 [2024-11-28 08:21:54.816979] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:12.735 [2024-11-28 08:21:54.816990] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:12.735 [2024-11-28 08:21:54.816993] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:12.735 [2024-11-28 08:21:54.817030] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:12.735 [2024-11-28 08:21:54.817035] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:12.735 [2024-11-28 08:21:54.817038] 
nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:12.735 [2024-11-28 08:21:54.817042] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d73700) on tqpair=0x1d11690 00:23:12.735 [2024-11-28 08:21:54.817050] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:23:12.735 [2024-11-28 08:21:54.817060] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:23:12.735 [2024-11-28 08:21:54.817066] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:12.735 [2024-11-28 08:21:54.817070] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1d11690) 00:23:12.735 [2024-11-28 08:21:54.817076] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.735 [2024-11-28 08:21:54.817086] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d73700, cid 4, qid 0 00:23:12.735 [2024-11-28 08:21:54.817161] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:12.735 [2024-11-28 08:21:54.817167] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:12.735 [2024-11-28 08:21:54.817171] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:12.735 [2024-11-28 08:21:54.817174] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1d11690): datao=0, datal=4096, cccid=4 00:23:12.735 [2024-11-28 08:21:54.817178] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1d73700) on tqpair(0x1d11690): expected_datao=0, payload_size=4096 00:23:12.735 [2024-11-28 08:21:54.817182] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:12.735 [2024-11-28 08:21:54.817193] 
nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:12.735 [2024-11-28 08:21:54.817196] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:12.735 [2024-11-28 08:21:54.817229] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:12.735 [2024-11-28 08:21:54.817235] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:12.735 [2024-11-28 08:21:54.817238] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:12.735 [2024-11-28 08:21:54.817241] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d73700) on tqpair=0x1d11690 00:23:12.735 [2024-11-28 08:21:54.817250] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns iocs specific (timeout 30000 ms) 00:23:12.735 [2024-11-28 08:21:54.817258] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported log pages (timeout 30000 ms) 00:23:12.735 [2024-11-28 08:21:54.817265] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported features (timeout 30000 ms) 00:23:12.735 [2024-11-28 08:21:54.817270] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host behavior support feature (timeout 30000 ms) 00:23:12.735 [2024-11-28 08:21:54.817275] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set doorbell buffer config (timeout 30000 ms) 00:23:12.735 [2024-11-28 08:21:54.817280] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host ID (timeout 30000 ms) 00:23:12.735 [2024-11-28 08:21:54.817284] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] NVMe-oF transport - not sending Set Features - Host ID 00:23:12.735 [2024-11-28 08:21:54.817288] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: 
*DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to transport ready (timeout 30000 ms) 00:23:12.735 [2024-11-28 08:21:54.817293] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to ready (no timeout) 00:23:12.735 [2024-11-28 08:21:54.817306] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:12.735 [2024-11-28 08:21:54.817310] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1d11690) 00:23:12.735 [2024-11-28 08:21:54.817315] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.735 [2024-11-28 08:21:54.817321] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:12.735 [2024-11-28 08:21:54.817325] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:12.735 [2024-11-28 08:21:54.817327] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1d11690) 00:23:12.735 [2024-11-28 08:21:54.817333] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:23:12.735 [2024-11-28 08:21:54.817345] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d73700, cid 4, qid 0 00:23:12.735 [2024-11-28 08:21:54.817350] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d73880, cid 5, qid 0 00:23:12.735 [2024-11-28 08:21:54.817447] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:12.735 [2024-11-28 08:21:54.817453] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:12.735 [2024-11-28 08:21:54.817456] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:12.735 [2024-11-28 08:21:54.817459] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d73700) on tqpair=0x1d11690 00:23:12.735 [2024-11-28 08:21:54.817466] 
nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:12.735 [2024-11-28 08:21:54.817471] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:12.735 [2024-11-28 08:21:54.817474] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:12.735 [2024-11-28 08:21:54.817477] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d73880) on tqpair=0x1d11690 00:23:12.735 [2024-11-28 08:21:54.817486] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:12.735 [2024-11-28 08:21:54.817489] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1d11690) 00:23:12.735 [2024-11-28 08:21:54.817495] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.735 [2024-11-28 08:21:54.817505] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d73880, cid 5, qid 0 00:23:12.735 [2024-11-28 08:21:54.817585] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:12.735 [2024-11-28 08:21:54.817591] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:12.735 [2024-11-28 08:21:54.817593] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:12.735 [2024-11-28 08:21:54.817597] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d73880) on tqpair=0x1d11690 00:23:12.735 [2024-11-28 08:21:54.817605] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:12.735 [2024-11-28 08:21:54.817609] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1d11690) 00:23:12.736 [2024-11-28 08:21:54.817614] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.736 [2024-11-28 08:21:54.817624] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 
0x1d73880, cid 5, qid 0 00:23:12.736 [2024-11-28 08:21:54.817694] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:12.736 [2024-11-28 08:21:54.817700] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:12.736 [2024-11-28 08:21:54.817703] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:12.736 [2024-11-28 08:21:54.817706] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d73880) on tqpair=0x1d11690 00:23:12.736 [2024-11-28 08:21:54.817713] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:12.736 [2024-11-28 08:21:54.817717] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1d11690) 00:23:12.736 [2024-11-28 08:21:54.817722] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.736 [2024-11-28 08:21:54.817731] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d73880, cid 5, qid 0 00:23:12.736 [2024-11-28 08:21:54.817799] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:12.736 [2024-11-28 08:21:54.817805] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:12.736 [2024-11-28 08:21:54.817808] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:12.736 [2024-11-28 08:21:54.817811] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d73880) on tqpair=0x1d11690 00:23:12.736 [2024-11-28 08:21:54.817825] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:12.736 [2024-11-28 08:21:54.817830] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1d11690) 00:23:12.736 [2024-11-28 08:21:54.817835] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.736 [2024-11-28 
08:21:54.817841] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:12.736 [2024-11-28 08:21:54.817844] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1d11690) 00:23:12.736 [2024-11-28 08:21:54.817849] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.736 [2024-11-28 08:21:54.817857] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:12.736 [2024-11-28 08:21:54.817861] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x1d11690) 00:23:12.736 [2024-11-28 08:21:54.817866] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.736 [2024-11-28 08:21:54.817872] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:12.736 [2024-11-28 08:21:54.817875] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x1d11690) 00:23:12.736 [2024-11-28 08:21:54.817880] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.736 [2024-11-28 08:21:54.817891] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d73880, cid 5, qid 0 00:23:12.736 [2024-11-28 08:21:54.817896] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d73700, cid 4, qid 0 00:23:12.736 [2024-11-28 08:21:54.817900] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d73a00, cid 6, qid 0 00:23:12.736 [2024-11-28 08:21:54.817904] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d73b80, cid 7, qid 0 00:23:12.736 [2024-11-28 08:21:54.818052] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 
00:23:12.736 [2024-11-28 08:21:54.818058] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:12.736 [2024-11-28 08:21:54.818062] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:12.736 [2024-11-28 08:21:54.818065] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1d11690): datao=0, datal=8192, cccid=5 00:23:12.736 [2024-11-28 08:21:54.818069] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1d73880) on tqpair(0x1d11690): expected_datao=0, payload_size=8192 00:23:12.736 [2024-11-28 08:21:54.818073] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:12.736 [2024-11-28 08:21:54.818085] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:12.736 [2024-11-28 08:21:54.818089] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:12.736 [2024-11-28 08:21:54.818097] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:12.736 [2024-11-28 08:21:54.818102] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:12.736 [2024-11-28 08:21:54.818105] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:12.736 [2024-11-28 08:21:54.818108] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1d11690): datao=0, datal=512, cccid=4 00:23:12.736 [2024-11-28 08:21:54.818112] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1d73700) on tqpair(0x1d11690): expected_datao=0, payload_size=512 00:23:12.736 [2024-11-28 08:21:54.818116] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:12.736 [2024-11-28 08:21:54.818121] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:12.736 [2024-11-28 08:21:54.818124] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:12.736 [2024-11-28 08:21:54.818129] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:12.736 [2024-11-28 08:21:54.818134] 
nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:12.736 [2024-11-28 08:21:54.818136] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:12.736 [2024-11-28 08:21:54.818139] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1d11690): datao=0, datal=512, cccid=6 00:23:12.736 [2024-11-28 08:21:54.818143] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1d73a00) on tqpair(0x1d11690): expected_datao=0, payload_size=512 00:23:12.736 [2024-11-28 08:21:54.818147] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:12.736 [2024-11-28 08:21:54.818152] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:12.736 [2024-11-28 08:21:54.818155] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:12.736 [2024-11-28 08:21:54.818160] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:12.736 [2024-11-28 08:21:54.818165] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:12.736 [2024-11-28 08:21:54.818169] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:12.736 [2024-11-28 08:21:54.818172] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1d11690): datao=0, datal=4096, cccid=7 00:23:12.736 [2024-11-28 08:21:54.818176] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1d73b80) on tqpair(0x1d11690): expected_datao=0, payload_size=4096 00:23:12.736 [2024-11-28 08:21:54.818180] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:12.736 [2024-11-28 08:21:54.818186] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:12.736 [2024-11-28 08:21:54.818189] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:12.736 [2024-11-28 08:21:54.818196] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:12.736 [2024-11-28 08:21:54.818200] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type 
=5 00:23:12.736 [2024-11-28 08:21:54.818203] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:12.736 [2024-11-28 08:21:54.818207] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d73880) on tqpair=0x1d11690 00:23:12.736 [2024-11-28 08:21:54.818216] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:12.736 [2024-11-28 08:21:54.818221] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:12.736 [2024-11-28 08:21:54.818224] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:12.736 [2024-11-28 08:21:54.818227] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d73700) on tqpair=0x1d11690 00:23:12.736 [2024-11-28 08:21:54.818236] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:12.736 [2024-11-28 08:21:54.818241] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:12.736 [2024-11-28 08:21:54.818244] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:12.736 [2024-11-28 08:21:54.818247] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d73a00) on tqpair=0x1d11690 00:23:12.736 [2024-11-28 08:21:54.818253] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:12.736 [2024-11-28 08:21:54.818257] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:12.736 [2024-11-28 08:21:54.818261] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:12.736 [2024-11-28 08:21:54.818264] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d73b80) on tqpair=0x1d11690 00:23:12.736 ===================================================== 00:23:12.736 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:12.736 ===================================================== 00:23:12.736 Controller Capabilities/Features 00:23:12.736 ================================ 00:23:12.736 Vendor ID: 8086 00:23:12.736 Subsystem Vendor 
ID: 8086 00:23:12.736 Serial Number: SPDK00000000000001 00:23:12.736 Model Number: SPDK bdev Controller 00:23:12.736 Firmware Version: 25.01 00:23:12.736 Recommended Arb Burst: 6 00:23:12.736 IEEE OUI Identifier: e4 d2 5c 00:23:12.736 Multi-path I/O 00:23:12.736 May have multiple subsystem ports: Yes 00:23:12.736 May have multiple controllers: Yes 00:23:12.736 Associated with SR-IOV VF: No 00:23:12.736 Max Data Transfer Size: 131072 00:23:12.736 Max Number of Namespaces: 32 00:23:12.736 Max Number of I/O Queues: 127 00:23:12.736 NVMe Specification Version (VS): 1.3 00:23:12.736 NVMe Specification Version (Identify): 1.3 00:23:12.736 Maximum Queue Entries: 128 00:23:12.736 Contiguous Queues Required: Yes 00:23:12.736 Arbitration Mechanisms Supported 00:23:12.736 Weighted Round Robin: Not Supported 00:23:12.736 Vendor Specific: Not Supported 00:23:12.736 Reset Timeout: 15000 ms 00:23:12.736 Doorbell Stride: 4 bytes 00:23:12.736 NVM Subsystem Reset: Not Supported 00:23:12.736 Command Sets Supported 00:23:12.736 NVM Command Set: Supported 00:23:12.736 Boot Partition: Not Supported 00:23:12.736 Memory Page Size Minimum: 4096 bytes 00:23:12.736 Memory Page Size Maximum: 4096 bytes 00:23:12.736 Persistent Memory Region: Not Supported 00:23:12.736 Optional Asynchronous Events Supported 00:23:12.736 Namespace Attribute Notices: Supported 00:23:12.736 Firmware Activation Notices: Not Supported 00:23:12.736 ANA Change Notices: Not Supported 00:23:12.736 PLE Aggregate Log Change Notices: Not Supported 00:23:12.736 LBA Status Info Alert Notices: Not Supported 00:23:12.736 EGE Aggregate Log Change Notices: Not Supported 00:23:12.736 Normal NVM Subsystem Shutdown event: Not Supported 00:23:12.736 Zone Descriptor Change Notices: Not Supported 00:23:12.736 Discovery Log Change Notices: Not Supported 00:23:12.736 Controller Attributes 00:23:12.737 128-bit Host Identifier: Supported 00:23:12.737 Non-Operational Permissive Mode: Not Supported 00:23:12.737 NVM Sets: Not Supported 
00:23:12.737 Read Recovery Levels: Not Supported 00:23:12.737 Endurance Groups: Not Supported 00:23:12.737 Predictable Latency Mode: Not Supported 00:23:12.737 Traffic Based Keep ALive: Not Supported 00:23:12.737 Namespace Granularity: Not Supported 00:23:12.737 SQ Associations: Not Supported 00:23:12.737 UUID List: Not Supported 00:23:12.737 Multi-Domain Subsystem: Not Supported 00:23:12.737 Fixed Capacity Management: Not Supported 00:23:12.737 Variable Capacity Management: Not Supported 00:23:12.737 Delete Endurance Group: Not Supported 00:23:12.737 Delete NVM Set: Not Supported 00:23:12.737 Extended LBA Formats Supported: Not Supported 00:23:12.737 Flexible Data Placement Supported: Not Supported 00:23:12.737 00:23:12.737 Controller Memory Buffer Support 00:23:12.737 ================================ 00:23:12.737 Supported: No 00:23:12.737 00:23:12.737 Persistent Memory Region Support 00:23:12.737 ================================ 00:23:12.737 Supported: No 00:23:12.737 00:23:12.737 Admin Command Set Attributes 00:23:12.737 ============================ 00:23:12.737 Security Send/Receive: Not Supported 00:23:12.737 Format NVM: Not Supported 00:23:12.737 Firmware Activate/Download: Not Supported 00:23:12.737 Namespace Management: Not Supported 00:23:12.737 Device Self-Test: Not Supported 00:23:12.737 Directives: Not Supported 00:23:12.737 NVMe-MI: Not Supported 00:23:12.737 Virtualization Management: Not Supported 00:23:12.737 Doorbell Buffer Config: Not Supported 00:23:12.737 Get LBA Status Capability: Not Supported 00:23:12.737 Command & Feature Lockdown Capability: Not Supported 00:23:12.737 Abort Command Limit: 4 00:23:12.737 Async Event Request Limit: 4 00:23:12.737 Number of Firmware Slots: N/A 00:23:12.737 Firmware Slot 1 Read-Only: N/A 00:23:12.737 Firmware Activation Without Reset: N/A 00:23:12.737 Multiple Update Detection Support: N/A 00:23:12.737 Firmware Update Granularity: No Information Provided 00:23:12.737 Per-Namespace SMART Log: No 00:23:12.737 
Asymmetric Namespace Access Log Page: Not Supported 00:23:12.737 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:23:12.737 Command Effects Log Page: Supported 00:23:12.737 Get Log Page Extended Data: Supported 00:23:12.737 Telemetry Log Pages: Not Supported 00:23:12.737 Persistent Event Log Pages: Not Supported 00:23:12.737 Supported Log Pages Log Page: May Support 00:23:12.737 Commands Supported & Effects Log Page: Not Supported 00:23:12.737 Feature Identifiers & Effects Log Page:May Support 00:23:12.737 NVMe-MI Commands & Effects Log Page: May Support 00:23:12.737 Data Area 4 for Telemetry Log: Not Supported 00:23:12.737 Error Log Page Entries Supported: 128 00:23:12.737 Keep Alive: Supported 00:23:12.737 Keep Alive Granularity: 10000 ms 00:23:12.737 00:23:12.737 NVM Command Set Attributes 00:23:12.737 ========================== 00:23:12.737 Submission Queue Entry Size 00:23:12.737 Max: 64 00:23:12.737 Min: 64 00:23:12.737 Completion Queue Entry Size 00:23:12.737 Max: 16 00:23:12.737 Min: 16 00:23:12.737 Number of Namespaces: 32 00:23:12.737 Compare Command: Supported 00:23:12.737 Write Uncorrectable Command: Not Supported 00:23:12.737 Dataset Management Command: Supported 00:23:12.737 Write Zeroes Command: Supported 00:23:12.737 Set Features Save Field: Not Supported 00:23:12.737 Reservations: Supported 00:23:12.737 Timestamp: Not Supported 00:23:12.737 Copy: Supported 00:23:12.737 Volatile Write Cache: Present 00:23:12.737 Atomic Write Unit (Normal): 1 00:23:12.737 Atomic Write Unit (PFail): 1 00:23:12.737 Atomic Compare & Write Unit: 1 00:23:12.737 Fused Compare & Write: Supported 00:23:12.737 Scatter-Gather List 00:23:12.737 SGL Command Set: Supported 00:23:12.737 SGL Keyed: Supported 00:23:12.737 SGL Bit Bucket Descriptor: Not Supported 00:23:12.737 SGL Metadata Pointer: Not Supported 00:23:12.737 Oversized SGL: Not Supported 00:23:12.737 SGL Metadata Address: Not Supported 00:23:12.737 SGL Offset: Supported 00:23:12.737 Transport SGL Data Block: Not Supported 
00:23:12.737 Replay Protected Memory Block: Not Supported 00:23:12.737 00:23:12.737 Firmware Slot Information 00:23:12.737 ========================= 00:23:12.737 Active slot: 1 00:23:12.737 Slot 1 Firmware Revision: 25.01 00:23:12.737 00:23:12.737 00:23:12.737 Commands Supported and Effects 00:23:12.737 ============================== 00:23:12.737 Admin Commands 00:23:12.737 -------------- 00:23:12.737 Get Log Page (02h): Supported 00:23:12.737 Identify (06h): Supported 00:23:12.737 Abort (08h): Supported 00:23:12.737 Set Features (09h): Supported 00:23:12.737 Get Features (0Ah): Supported 00:23:12.737 Asynchronous Event Request (0Ch): Supported 00:23:12.737 Keep Alive (18h): Supported 00:23:12.737 I/O Commands 00:23:12.737 ------------ 00:23:12.737 Flush (00h): Supported LBA-Change 00:23:12.737 Write (01h): Supported LBA-Change 00:23:12.737 Read (02h): Supported 00:23:12.737 Compare (05h): Supported 00:23:12.737 Write Zeroes (08h): Supported LBA-Change 00:23:12.737 Dataset Management (09h): Supported LBA-Change 00:23:12.737 Copy (19h): Supported LBA-Change 00:23:12.737 00:23:12.737 Error Log 00:23:12.737 ========= 00:23:12.737 00:23:12.737 Arbitration 00:23:12.737 =========== 00:23:12.737 Arbitration Burst: 1 00:23:12.737 00:23:12.737 Power Management 00:23:12.737 ================ 00:23:12.737 Number of Power States: 1 00:23:12.737 Current Power State: Power State #0 00:23:12.737 Power State #0: 00:23:12.737 Max Power: 0.00 W 00:23:12.737 Non-Operational State: Operational 00:23:12.737 Entry Latency: Not Reported 00:23:12.737 Exit Latency: Not Reported 00:23:12.737 Relative Read Throughput: 0 00:23:12.737 Relative Read Latency: 0 00:23:12.737 Relative Write Throughput: 0 00:23:12.737 Relative Write Latency: 0 00:23:12.737 Idle Power: Not Reported 00:23:12.737 Active Power: Not Reported 00:23:12.737 Non-Operational Permissive Mode: Not Supported 00:23:12.737 00:23:12.737 Health Information 00:23:12.737 ================== 00:23:12.737 Critical Warnings: 00:23:12.737 
Available Spare Space: OK 00:23:12.737 Temperature: OK 00:23:12.737 Device Reliability: OK 00:23:12.737 Read Only: No 00:23:12.737 Volatile Memory Backup: OK 00:23:12.737 Current Temperature: 0 Kelvin (-273 Celsius) 00:23:12.737 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:23:12.737 Available Spare: 0% 00:23:12.737 Available Spare Threshold: 0% 00:23:12.737 Life Percentage Used:[2024-11-28 08:21:54.818342] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:12.737 [2024-11-28 08:21:54.818346] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x1d11690) 00:23:12.737 [2024-11-28 08:21:54.818352] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.737 [2024-11-28 08:21:54.818364] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d73b80, cid 7, qid 0 00:23:12.737 [2024-11-28 08:21:54.818434] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:12.737 [2024-11-28 08:21:54.818440] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:12.737 [2024-11-28 08:21:54.818443] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:12.737 [2024-11-28 08:21:54.818446] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d73b80) on tqpair=0x1d11690 00:23:12.737 [2024-11-28 08:21:54.818475] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Prepare to destruct SSD 00:23:12.737 [2024-11-28 08:21:54.818484] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d73100) on tqpair=0x1d11690 00:23:12.737 [2024-11-28 08:21:54.818489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.738 [2024-11-28 08:21:54.818494] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d73280) on tqpair=0x1d11690 
00:23:12.738 [2024-11-28 08:21:54.818498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.738 [2024-11-28 08:21:54.818502] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d73400) on tqpair=0x1d11690 00:23:12.738 [2024-11-28 08:21:54.818506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.738 [2024-11-28 08:21:54.818512] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d73580) on tqpair=0x1d11690 00:23:12.738 [2024-11-28 08:21:54.818516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.738 [2024-11-28 08:21:54.818523] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:12.738 [2024-11-28 08:21:54.818526] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:12.738 [2024-11-28 08:21:54.818529] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d11690) 00:23:12.738 [2024-11-28 08:21:54.818535] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.738 [2024-11-28 08:21:54.818546] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d73580, cid 3, qid 0 00:23:12.738 [2024-11-28 08:21:54.818621] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:12.738 [2024-11-28 08:21:54.818626] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:12.738 [2024-11-28 08:21:54.818629] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:12.738 [2024-11-28 08:21:54.818633] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d73580) on tqpair=0x1d11690 00:23:12.738 [2024-11-28 08:21:54.818638] nvme_tcp.c: 732:nvme_tcp_build_contig_request: 
*DEBUG*: enter 00:23:12.738 [2024-11-28 08:21:54.818641] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:12.738 [2024-11-28 08:21:54.818645] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d11690) 00:23:12.738 [2024-11-28 08:21:54.818650] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.738 [2024-11-28 08:21:54.818662] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d73580, cid 3, qid 0 00:23:12.738 [2024-11-28 08:21:54.818740] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:12.738 [2024-11-28 08:21:54.818745] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:12.738 [2024-11-28 08:21:54.818748] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:12.738 [2024-11-28 08:21:54.818751] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d73580) on tqpair=0x1d11690 00:23:12.738 [2024-11-28 08:21:54.818755] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] RTD3E = 0 us 00:23:12.738 [2024-11-28 08:21:54.818759] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown timeout = 10000 ms 00:23:12.738 [2024-11-28 08:21:54.818767] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:12.738 [2024-11-28 08:21:54.818771] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:12.738 [2024-11-28 08:21:54.818774] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d11690) 00:23:12.738 [2024-11-28 08:21:54.818780] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.738 [2024-11-28 08:21:54.818789] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d73580, cid 3, qid 0 
00:23:12.738 [2024-11-28 08:21:54.818852] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:12.738 [2024-11-28 08:21:54.818857] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:12.738 [2024-11-28 08:21:54.818861] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:12.738 [2024-11-28 08:21:54.818864] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d73580) on tqpair=0x1d11690 00:23:12.738 [2024-11-28 08:21:54.818872] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:12.738 [2024-11-28 08:21:54.818875] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:12.738 [2024-11-28 08:21:54.818878] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d11690) 00:23:12.738 [2024-11-28 08:21:54.818884] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.738 [2024-11-28 08:21:54.818895] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d73580, cid 3, qid 0 00:23:12.738 [2024-11-28 08:21:54.822955] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:12.738 [2024-11-28 08:21:54.822963] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:12.738 [2024-11-28 08:21:54.822966] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:12.738 [2024-11-28 08:21:54.822969] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d73580) on tqpair=0x1d11690 00:23:12.738 [2024-11-28 08:21:54.822979] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:12.738 [2024-11-28 08:21:54.822982] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:12.738 [2024-11-28 08:21:54.822985] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d11690) 00:23:12.738 [2024-11-28 08:21:54.822991] nvme_qpair.c: 
218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.738 [2024-11-28 08:21:54.823002] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d73580, cid 3, qid 0 00:23:12.738 [2024-11-28 08:21:54.823074] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:12.738 [2024-11-28 08:21:54.823079] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:12.738 [2024-11-28 08:21:54.823082] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:12.738 [2024-11-28 08:21:54.823085] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d73580) on tqpair=0x1d11690 00:23:12.738 [2024-11-28 08:21:54.823092] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown complete in 4 milliseconds 00:23:12.738 0% 00:23:12.738 Data Units Read: 0 00:23:12.738 Data Units Written: 0 00:23:12.738 Host Read Commands: 0 00:23:12.738 Host Write Commands: 0 00:23:12.738 Controller Busy Time: 0 minutes 00:23:12.738 Power Cycles: 0 00:23:12.738 Power On Hours: 0 hours 00:23:12.738 Unsafe Shutdowns: 0 00:23:12.738 Unrecoverable Media Errors: 0 00:23:12.738 Lifetime Error Log Entries: 0 00:23:12.738 Warning Temperature Time: 0 minutes 00:23:12.738 Critical Temperature Time: 0 minutes 00:23:12.738 00:23:12.738 Number of Queues 00:23:12.738 ================ 00:23:12.738 Number of I/O Submission Queues: 127 00:23:12.738 Number of I/O Completion Queues: 127 00:23:12.738 00:23:12.738 Active Namespaces 00:23:12.738 ================= 00:23:12.738 Namespace ID:1 00:23:12.738 Error Recovery Timeout: Unlimited 00:23:12.738 Command Set Identifier: NVM (00h) 00:23:12.738 Deallocate: Supported 00:23:12.738 Deallocated/Unwritten Error: Not Supported 00:23:12.738 Deallocated Read Value: Unknown 00:23:12.738 Deallocate in Write Zeroes: Not Supported 00:23:12.738 Deallocated Guard Field: 0xFFFF 00:23:12.738 Flush: Supported 
00:23:12.738 Reservation: Supported 00:23:12.738 Namespace Sharing Capabilities: Multiple Controllers 00:23:12.738 Size (in LBAs): 131072 (0GiB) 00:23:12.738 Capacity (in LBAs): 131072 (0GiB) 00:23:12.738 Utilization (in LBAs): 131072 (0GiB) 00:23:12.738 NGUID: ABCDEF0123456789ABCDEF0123456789 00:23:12.738 EUI64: ABCDEF0123456789 00:23:12.738 UUID: 581cf719-77fc-44e3-bf62-3dc3ca2d1c4b 00:23:12.738 Thin Provisioning: Not Supported 00:23:12.738 Per-NS Atomic Units: Yes 00:23:12.738 Atomic Boundary Size (Normal): 0 00:23:12.738 Atomic Boundary Size (PFail): 0 00:23:12.738 Atomic Boundary Offset: 0 00:23:12.738 Maximum Single Source Range Length: 65535 00:23:12.738 Maximum Copy Length: 65535 00:23:12.738 Maximum Source Range Count: 1 00:23:12.738 NGUID/EUI64 Never Reused: No 00:23:12.738 Namespace Write Protected: No 00:23:12.738 Number of LBA Formats: 1 00:23:12.738 Current LBA Format: LBA Format #00 00:23:12.738 LBA Format #00: Data Size: 512 Metadata Size: 0 00:23:12.738 00:23:12.738 08:21:54 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:23:12.738 08:21:54 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:12.738 08:21:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:12.738 08:21:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:12.738 08:21:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:12.738 08:21:54 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:23:12.738 08:21:54 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:23:12.738 08:21:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:12.738 08:21:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # sync 00:23:12.738 08:21:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # '[' 
tcp == tcp ']' 00:23:12.738 08:21:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set +e 00:23:12.738 08:21:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:12.738 08:21:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:12.738 rmmod nvme_tcp 00:23:12.738 rmmod nvme_fabrics 00:23:12.738 rmmod nvme_keyring 00:23:12.738 08:21:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:12.738 08:21:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@128 -- # set -e 00:23:12.738 08:21:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@129 -- # return 0 00:23:12.738 08:21:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@517 -- # '[' -n 1442520 ']' 00:23:12.738 08:21:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@518 -- # killprocess 1442520 00:23:12.739 08:21:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # '[' -z 1442520 ']' 00:23:12.739 08:21:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@958 -- # kill -0 1442520 00:23:12.739 08:21:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # uname 00:23:12.739 08:21:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:12.739 08:21:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1442520 00:23:12.739 08:21:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:12.739 08:21:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:12.739 08:21:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1442520' 00:23:12.739 killing process with pid 1442520 00:23:12.739 08:21:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@973 -- # kill 1442520 00:23:12.739 08:21:54 
nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@978 -- # wait 1442520 00:23:12.998 08:21:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:12.998 08:21:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:12.998 08:21:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:12.998 08:21:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # iptr 00:23:12.998 08:21:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:12.998 08:21:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-save 00:23:12.999 08:21:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-restore 00:23:12.999 08:21:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:12.999 08:21:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:12.999 08:21:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:12.999 08:21:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:12.999 08:21:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:15.535 08:21:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:15.535 00:23:15.535 real 0m8.796s 00:23:15.535 user 0m4.952s 00:23:15.535 sys 0m4.598s 00:23:15.535 08:21:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:15.535 08:21:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:15.535 ************************************ 00:23:15.535 END TEST nvmf_identify 00:23:15.535 ************************************ 00:23:15.535 08:21:57 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:23:15.535 08:21:57 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:15.535 08:21:57 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:15.535 08:21:57 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:15.535 ************************************ 00:23:15.535 START TEST nvmf_perf 00:23:15.535 ************************************ 00:23:15.535 08:21:57 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:23:15.535 * Looking for test storage... 00:23:15.535 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:15.535 08:21:57 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:23:15.535 08:21:57 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1693 -- # lcov --version 00:23:15.535 08:21:57 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:23:15.535 08:21:57 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:23:15.535 08:21:57 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:15.535 08:21:57 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:15.535 08:21:57 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:15.535 08:21:57 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # IFS=.-: 00:23:15.535 08:21:57 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # read -ra ver1 00:23:15.535 08:21:57 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # IFS=.-: 00:23:15.535 08:21:57 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # read -ra ver2 00:23:15.535 08:21:57 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@338 -- # local 
'op=<' 00:23:15.535 08:21:57 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@340 -- # ver1_l=2 00:23:15.535 08:21:57 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@341 -- # ver2_l=1 00:23:15.535 08:21:57 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:15.535 08:21:57 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@344 -- # case "$op" in 00:23:15.535 08:21:57 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@345 -- # : 1 00:23:15.535 08:21:57 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:15.535 08:21:57 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:15.535 08:21:57 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # decimal 1 00:23:15.535 08:21:57 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=1 00:23:15.535 08:21:57 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:15.535 08:21:57 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 1 00:23:15.535 08:21:57 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # ver1[v]=1 00:23:15.535 08:21:57 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # decimal 2 00:23:15.535 08:21:57 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=2 00:23:15.535 08:21:57 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:15.535 08:21:57 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 2 00:23:15.535 08:21:57 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # ver2[v]=2 00:23:15.535 08:21:57 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:15.535 08:21:57 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:15.535 08:21:57 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # return 0 00:23:15.535 08:21:57 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1694 
-- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:15.535 08:21:57 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:23:15.535 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:15.535 --rc genhtml_branch_coverage=1 00:23:15.535 --rc genhtml_function_coverage=1 00:23:15.535 --rc genhtml_legend=1 00:23:15.535 --rc geninfo_all_blocks=1 00:23:15.535 --rc geninfo_unexecuted_blocks=1 00:23:15.535 00:23:15.535 ' 00:23:15.535 08:21:57 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:23:15.535 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:15.535 --rc genhtml_branch_coverage=1 00:23:15.535 --rc genhtml_function_coverage=1 00:23:15.535 --rc genhtml_legend=1 00:23:15.535 --rc geninfo_all_blocks=1 00:23:15.535 --rc geninfo_unexecuted_blocks=1 00:23:15.535 00:23:15.535 ' 00:23:15.535 08:21:57 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:23:15.535 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:15.535 --rc genhtml_branch_coverage=1 00:23:15.535 --rc genhtml_function_coverage=1 00:23:15.535 --rc genhtml_legend=1 00:23:15.535 --rc geninfo_all_blocks=1 00:23:15.535 --rc geninfo_unexecuted_blocks=1 00:23:15.535 00:23:15.535 ' 00:23:15.536 08:21:57 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:23:15.536 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:15.536 --rc genhtml_branch_coverage=1 00:23:15.536 --rc genhtml_function_coverage=1 00:23:15.536 --rc genhtml_legend=1 00:23:15.536 --rc geninfo_all_blocks=1 00:23:15.536 --rc geninfo_unexecuted_blocks=1 00:23:15.536 00:23:15.536 ' 00:23:15.536 08:21:57 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:15.536 08:21:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:23:15.536 08:21:57 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:15.536 08:21:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:15.536 08:21:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:15.536 08:21:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:15.536 08:21:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:15.536 08:21:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:15.536 08:21:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:15.536 08:21:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:15.536 08:21:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:15.536 08:21:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:15.536 08:21:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:23:15.536 08:21:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:23:15.536 08:21:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:15.536 08:21:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:15.536 08:21:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:15.536 08:21:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:15.536 08:21:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:15.536 08:21:57 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@15 -- # shopt -s extglob 00:23:15.536 08:21:57 
nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:15.536 08:21:57 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:15.536 08:21:57 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:15.536 08:21:57 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:15.536 08:21:57 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:15.536 08:21:57 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:15.536 08:21:57 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:23:15.536 08:21:57 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:15.536 08:21:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # : 0 00:23:15.536 08:21:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:15.536 08:21:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:15.536 08:21:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:15.536 08:21:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:15.536 08:21:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:15.536 08:21:57 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:15.536 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:15.536 08:21:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:15.536 08:21:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:15.536 08:21:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:15.536 08:21:57 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:23:15.536 08:21:57 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:23:15.536 08:21:57 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:23:15.536 08:21:57 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:23:15.536 08:21:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:15.536 08:21:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:15.536 08:21:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:15.536 08:21:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:15.536 08:21:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:15.536 08:21:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:15.536 08:21:57 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:15.536 08:21:57 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:15.536 08:21:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:15.536 08:21:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:15.536 08:21:57 nvmf_tcp.nvmf_host.nvmf_perf -- 
nvmf/common.sh@309 -- # xtrace_disable 00:23:15.536 08:21:57 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:23:20.808 08:22:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:20.808 08:22:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # pci_devs=() 00:23:20.808 08:22:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:20.808 08:22:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:20.808 08:22:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:20.808 08:22:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:20.808 08:22:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:20.808 08:22:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # net_devs=() 00:23:20.808 08:22:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:20.808 08:22:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # e810=() 00:23:20.808 08:22:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # local -ga e810 00:23:20.808 08:22:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # x722=() 00:23:20.808 08:22:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # local -ga x722 00:23:20.808 08:22:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # mlx=() 00:23:20.808 08:22:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # local -ga mlx 00:23:20.808 08:22:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:20.808 08:22:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:20.808 08:22:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:20.808 08:22:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@330 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:20.808 08:22:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:20.808 08:22:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:20.808 08:22:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:20.808 08:22:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:20.808 08:22:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:20.808 08:22:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:20.808 08:22:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:20.808 08:22:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:20.808 08:22:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:20.808 08:22:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:20.808 08:22:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:20.808 08:22:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:20.808 08:22:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:20.808 08:22:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:20.808 08:22:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:20.808 08:22:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:23:20.808 Found 0000:86:00.0 (0x8086 - 0x159b) 00:23:20.808 08:22:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:20.808 
08:22:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:20.808 08:22:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:20.808 08:22:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:20.808 08:22:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:20.808 08:22:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:20.808 08:22:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:23:20.808 Found 0000:86:00.1 (0x8086 - 0x159b) 00:23:20.808 08:22:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:20.808 08:22:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:20.808 08:22:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:20.808 08:22:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:20.808 08:22:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:20.808 08:22:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:20.808 08:22:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:20.808 08:22:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:20.808 08:22:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:20.808 08:22:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:20.808 08:22:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:20.808 08:22:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:20.808 08:22:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # [[ up == up 
]] 00:23:20.808 08:22:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:20.808 08:22:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:20.808 08:22:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:23:20.808 Found net devices under 0000:86:00.0: cvl_0_0 00:23:20.808 08:22:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:20.808 08:22:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:20.808 08:22:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:20.808 08:22:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:20.808 08:22:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:20.808 08:22:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:20.808 08:22:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:20.808 08:22:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:20.808 08:22:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:23:20.808 Found net devices under 0000:86:00.1: cvl_0_1 00:23:20.808 08:22:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:20.808 08:22:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:20.808 08:22:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # is_hw=yes 00:23:20.808 08:22:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:20.808 08:22:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:20.808 08:22:02 nvmf_tcp.nvmf_host.nvmf_perf -- 
nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:20.808 08:22:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:20.808 08:22:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:20.809 08:22:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:20.809 08:22:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:20.809 08:22:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:20.809 08:22:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:20.809 08:22:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:20.809 08:22:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:20.809 08:22:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:20.809 08:22:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:20.809 08:22:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:20.809 08:22:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:20.809 08:22:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:20.809 08:22:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:20.809 08:22:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:20.809 08:22:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:20.809 08:22:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:20.809 08:22:02 nvmf_tcp.nvmf_host.nvmf_perf -- 
nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:20.809 08:22:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:20.809 08:22:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:20.809 08:22:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:20.809 08:22:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:20.809 08:22:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:20.809 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:20.809 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.349 ms 00:23:20.809 00:23:20.809 --- 10.0.0.2 ping statistics --- 00:23:20.809 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:20.809 rtt min/avg/max/mdev = 0.349/0.349/0.349/0.000 ms 00:23:20.809 08:22:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:20.809 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:20.809 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.191 ms 00:23:20.809 00:23:20.809 --- 10.0.0.1 ping statistics --- 00:23:20.809 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:20.809 rtt min/avg/max/mdev = 0.191/0.191/0.191/0.000 ms 00:23:20.809 08:22:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:20.809 08:22:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@450 -- # return 0 00:23:20.809 08:22:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:20.809 08:22:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:20.809 08:22:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:20.809 08:22:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:20.809 08:22:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:20.809 08:22:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:20.809 08:22:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:20.809 08:22:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:23:20.809 08:22:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:20.809 08:22:02 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:20.809 08:22:02 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:23:20.809 08:22:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@509 -- # nvmfpid=1446080 00:23:20.809 08:22:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@510 -- # waitforlisten 1446080 00:23:20.809 08:22:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:23:20.809 
08:22:02 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@835 -- # '[' -z 1446080 ']' 00:23:20.809 08:22:02 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:20.809 08:22:02 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:20.809 08:22:02 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:20.809 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:20.809 08:22:02 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:20.809 08:22:02 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:23:20.809 [2024-11-28 08:22:02.968164] Starting SPDK v25.01-pre git sha1 27aaaa748 / DPDK 24.03.0 initialization... 00:23:20.809 [2024-11-28 08:22:02.968217] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:20.809 [2024-11-28 08:22:03.036310] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:21.066 [2024-11-28 08:22:03.081055] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:21.066 [2024-11-28 08:22:03.081093] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:21.066 [2024-11-28 08:22:03.081101] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:21.066 [2024-11-28 08:22:03.081108] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:21.066 [2024-11-28 08:22:03.081114] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:21.066 [2024-11-28 08:22:03.082587] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:21.066 [2024-11-28 08:22:03.082685] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:21.066 [2024-11-28 08:22:03.082772] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:23:21.066 [2024-11-28 08:22:03.082774] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:21.066 08:22:03 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:21.066 08:22:03 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@868 -- # return 0 00:23:21.066 08:22:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:21.066 08:22:03 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:21.066 08:22:03 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:23:21.066 08:22:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:21.066 08:22:03 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:23:21.066 08:22:03 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:23:24.349 08:22:06 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:23:24.349 08:22:06 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:23:24.349 08:22:06 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:5e:00.0 00:23:24.349 08:22:06 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:23:24.607 08:22:06 
nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:23:24.607 08:22:06 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:5e:00.0 ']' 00:23:24.607 08:22:06 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:23:24.607 08:22:06 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:23:24.607 08:22:06 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:24.607 [2024-11-28 08:22:06.856283] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:24.866 08:22:06 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:24.866 08:22:07 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:23:24.866 08:22:07 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:25.124 08:22:07 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:23:25.124 08:22:07 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:23:25.382 08:22:07 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:25.640 [2024-11-28 08:22:07.695430] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:25.640 08:22:07 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 
4420 00:23:25.899 08:22:07 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:5e:00.0 ']' 00:23:25.899 08:22:07 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:5e:00.0' 00:23:25.899 08:22:07 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:23:25.899 08:22:07 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:5e:00.0' 00:23:27.276 Initializing NVMe Controllers 00:23:27.276 Attached to NVMe Controller at 0000:5e:00.0 [8086:0a54] 00:23:27.276 Associating PCIE (0000:5e:00.0) NSID 1 with lcore 0 00:23:27.276 Initialization complete. Launching workers. 00:23:27.276 ======================================================== 00:23:27.276 Latency(us) 00:23:27.276 Device Information : IOPS MiB/s Average min max 00:23:27.276 PCIE (0000:5e:00.0) NSID 1 from core 0: 97163.56 379.55 328.79 25.22 5196.37 00:23:27.276 ======================================================== 00:23:27.276 Total : 97163.56 379.55 328.79 25.22 5196.37 00:23:27.276 00:23:27.276 08:22:09 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:23:28.651 Initializing NVMe Controllers 00:23:28.651 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:28.651 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:23:28.651 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:23:28.651 Initialization complete. Launching workers. 
00:23:28.651 ======================================================== 00:23:28.651 Latency(us) 00:23:28.651 Device Information : IOPS MiB/s Average min max 00:23:28.651 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 97.00 0.38 10614.86 116.54 45667.03 00:23:28.651 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 76.00 0.30 13232.46 7006.04 47899.28 00:23:28.651 ======================================================== 00:23:28.651 Total : 173.00 0.68 11764.79 116.54 47899.28 00:23:28.651 00:23:28.651 08:22:10 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:23:29.586 Initializing NVMe Controllers 00:23:29.586 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:29.586 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:23:29.586 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:23:29.586 Initialization complete. Launching workers. 
00:23:29.587 ======================================================== 00:23:29.587 Latency(us) 00:23:29.587 Device Information : IOPS MiB/s Average min max 00:23:29.587 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 10841.00 42.35 2962.76 485.88 6398.46 00:23:29.587 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3859.00 15.07 8331.55 6495.15 16082.11 00:23:29.587 ======================================================== 00:23:29.587 Total : 14700.00 57.42 4372.16 485.88 16082.11 00:23:29.587 00:23:29.587 08:22:11 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:23:29.587 08:22:11 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:23:29.587 08:22:11 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:23:32.120 Initializing NVMe Controllers 00:23:32.120 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:32.120 Controller IO queue size 128, less than required. 00:23:32.120 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:32.120 Controller IO queue size 128, less than required. 00:23:32.120 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:32.120 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:23:32.120 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:23:32.120 Initialization complete. Launching workers. 
00:23:32.120 ======================================================== 00:23:32.120 Latency(us) 00:23:32.120 Device Information : IOPS MiB/s Average min max 00:23:32.120 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1792.26 448.06 72467.22 49282.42 112278.62 00:23:32.120 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 606.42 151.60 221867.43 85893.76 317447.92 00:23:32.120 ======================================================== 00:23:32.120 Total : 2398.68 599.67 110237.64 49282.42 317447.92 00:23:32.120 00:23:32.120 08:22:14 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:23:32.379 No valid NVMe controllers or AIO or URING devices found 00:23:32.379 Initializing NVMe Controllers 00:23:32.379 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:32.379 Controller IO queue size 128, less than required. 00:23:32.379 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:32.379 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:23:32.379 Controller IO queue size 128, less than required. 00:23:32.379 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:32.379 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. 
Removing this ns from test 00:23:32.379 WARNING: Some requested NVMe devices were skipped 00:23:32.379 08:22:14 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:23:34.917 Initializing NVMe Controllers 00:23:34.917 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:34.917 Controller IO queue size 128, less than required. 00:23:34.917 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:34.917 Controller IO queue size 128, less than required. 00:23:34.917 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:34.917 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:23:34.917 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:23:34.917 Initialization complete. Launching workers. 
00:23:34.917 00:23:34.917 ==================== 00:23:34.917 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:23:34.917 TCP transport: 00:23:34.917 polls: 12490 00:23:34.917 idle_polls: 8960 00:23:34.917 sock_completions: 3530 00:23:34.917 nvme_completions: 6227 00:23:34.917 submitted_requests: 9346 00:23:34.917 queued_requests: 1 00:23:34.917 00:23:34.917 ==================== 00:23:34.917 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:23:34.917 TCP transport: 00:23:34.917 polls: 12096 00:23:34.917 idle_polls: 7615 00:23:34.917 sock_completions: 4481 00:23:34.917 nvme_completions: 6485 00:23:34.917 submitted_requests: 9620 00:23:34.917 queued_requests: 1 00:23:34.917 ======================================================== 00:23:34.918 Latency(us) 00:23:34.918 Device Information : IOPS MiB/s Average min max 00:23:34.918 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1554.14 388.53 85464.50 54066.32 151045.12 00:23:34.918 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1618.54 404.63 79596.53 49087.44 112839.53 00:23:34.918 ======================================================== 00:23:34.918 Total : 3172.68 793.17 82470.96 49087.44 151045.12 00:23:34.918 00:23:34.918 08:22:17 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync 00:23:34.918 08:22:17 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:35.177 08:22:17 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:23:35.177 08:22:17 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:23:35.177 08:22:17 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:23:35.177 08:22:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:35.177 08:22:17 nvmf_tcp.nvmf_host.nvmf_perf 
-- nvmf/common.sh@121 -- # sync 00:23:35.177 08:22:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:35.177 08:22:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set +e 00:23:35.177 08:22:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:35.177 08:22:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:35.177 rmmod nvme_tcp 00:23:35.177 rmmod nvme_fabrics 00:23:35.177 rmmod nvme_keyring 00:23:35.177 08:22:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:35.177 08:22:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@128 -- # set -e 00:23:35.177 08:22:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@129 -- # return 0 00:23:35.177 08:22:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@517 -- # '[' -n 1446080 ']' 00:23:35.177 08:22:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@518 -- # killprocess 1446080 00:23:35.177 08:22:17 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # '[' -z 1446080 ']' 00:23:35.177 08:22:17 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@958 -- # kill -0 1446080 00:23:35.177 08:22:17 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # uname 00:23:35.177 08:22:17 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:35.177 08:22:17 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1446080 00:23:35.177 08:22:17 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:35.177 08:22:17 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:35.177 08:22:17 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1446080' 00:23:35.177 killing process with pid 1446080 00:23:35.177 08:22:17 nvmf_tcp.nvmf_host.nvmf_perf -- 
common/autotest_common.sh@973 -- # kill 1446080 00:23:35.177 08:22:17 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@978 -- # wait 1446080 00:23:37.080 08:22:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:37.080 08:22:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:37.080 08:22:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:37.080 08:22:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # iptr 00:23:37.080 08:22:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:37.080 08:22:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-save 00:23:37.080 08:22:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-restore 00:23:37.080 08:22:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:37.080 08:22:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:37.080 08:22:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:37.080 08:22:18 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:37.080 08:22:18 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:38.988 08:22:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:38.988 00:23:38.988 real 0m23.636s 00:23:38.988 user 1m2.594s 00:23:38.988 sys 0m7.901s 00:23:38.988 08:22:20 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:38.988 08:22:20 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:23:38.988 ************************************ 00:23:38.988 END TEST nvmf_perf 00:23:38.988 ************************************ 00:23:38.988 08:22:20 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:23:38.988 08:22:20 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:38.988 08:22:20 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:38.988 08:22:20 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:38.988 ************************************ 00:23:38.988 START TEST nvmf_fio_host 00:23:38.988 ************************************ 00:23:38.988 08:22:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:23:38.988 * Looking for test storage... 00:23:38.988 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:38.988 08:22:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:23:38.988 08:22:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1693 -- # lcov --version 00:23:38.988 08:22:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:23:38.988 08:22:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:23:38.988 08:22:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:38.988 08:22:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:38.988 08:22:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:38.988 08:22:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # IFS=.-: 00:23:38.988 08:22:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # read -ra ver1 00:23:38.988 08:22:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # IFS=.-: 00:23:38.988 08:22:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # read -ra ver2 00:23:38.988 08:22:21 
nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@338 -- # local 'op=<' 00:23:38.988 08:22:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@340 -- # ver1_l=2 00:23:38.988 08:22:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@341 -- # ver2_l=1 00:23:38.988 08:22:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:38.988 08:22:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@344 -- # case "$op" in 00:23:38.988 08:22:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@345 -- # : 1 00:23:38.988 08:22:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:38.988 08:22:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:38.988 08:22:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # decimal 1 00:23:38.988 08:22:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=1 00:23:38.988 08:22:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:38.988 08:22:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 1 00:23:38.988 08:22:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # ver1[v]=1 00:23:38.988 08:22:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # decimal 2 00:23:38.988 08:22:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=2 00:23:38.988 08:22:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:38.988 08:22:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 2 00:23:38.988 08:22:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # ver2[v]=2 00:23:38.988 08:22:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:38.988 08:22:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:38.988 08:22:21 
nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # return 0 00:23:38.988 08:22:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:38.988 08:22:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:23:38.988 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:38.988 --rc genhtml_branch_coverage=1 00:23:38.988 --rc genhtml_function_coverage=1 00:23:38.988 --rc genhtml_legend=1 00:23:38.988 --rc geninfo_all_blocks=1 00:23:38.988 --rc geninfo_unexecuted_blocks=1 00:23:38.988 00:23:38.988 ' 00:23:38.988 08:22:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:23:38.988 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:38.988 --rc genhtml_branch_coverage=1 00:23:38.988 --rc genhtml_function_coverage=1 00:23:38.988 --rc genhtml_legend=1 00:23:38.988 --rc geninfo_all_blocks=1 00:23:38.988 --rc geninfo_unexecuted_blocks=1 00:23:38.988 00:23:38.988 ' 00:23:38.988 08:22:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:23:38.988 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:38.988 --rc genhtml_branch_coverage=1 00:23:38.988 --rc genhtml_function_coverage=1 00:23:38.988 --rc genhtml_legend=1 00:23:38.988 --rc geninfo_all_blocks=1 00:23:38.988 --rc geninfo_unexecuted_blocks=1 00:23:38.988 00:23:38.988 ' 00:23:38.988 08:22:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:23:38.988 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:38.988 --rc genhtml_branch_coverage=1 00:23:38.988 --rc genhtml_function_coverage=1 00:23:38.988 --rc genhtml_legend=1 00:23:38.988 --rc geninfo_all_blocks=1 00:23:38.988 --rc geninfo_unexecuted_blocks=1 00:23:38.988 00:23:38.988 ' 00:23:38.988 08:22:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:38.988 08:22:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:23:38.988 08:22:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:38.988 08:22:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:38.988 08:22:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:38.988 08:22:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:38.988 08:22:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:38.988 08:22:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:38.988 08:22:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:23:38.989 08:22:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:38.989 08:22:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:38.989 08:22:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:23:38.989 08:22:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:38.989 08:22:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:38.989 08:22:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:38.989 08:22:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:38.989 08:22:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:38.989 08:22:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:38.989 08:22:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:38.989 08:22:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:38.989 08:22:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:38.989 08:22:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:38.989 08:22:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:23:38.989 08:22:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:23:38.989 08:22:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:38.989 08:22:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:38.989 08:22:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:38.989 08:22:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:38.989 08:22:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:38.989 08:22:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:23:38.989 08:22:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:38.989 08:22:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:38.989 08:22:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:38.989 08:22:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:38.989 08:22:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:38.989 08:22:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:38.989 08:22:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:23:38.989 08:22:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:38.989 08:22:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # : 0 00:23:38.989 08:22:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:38.989 08:22:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:38.989 08:22:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:38.989 08:22:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i 
"$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:38.989 08:22:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:38.989 08:22:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:38.989 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:38.989 08:22:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:38.989 08:22:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:38.989 08:22:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:38.989 08:22:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:23:38.989 08:22:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:23:38.989 08:22:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:38.989 08:22:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:38.989 08:22:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:38.989 08:22:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:38.989 08:22:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:38.989 08:22:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:38.989 08:22:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:38.989 08:22:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:38.989 08:22:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:38.989 08:22:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:38.989 08:22:21 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@309 -- # xtrace_disable 00:23:38.989 08:22:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:23:44.260 08:22:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:44.260 08:22:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # pci_devs=() 00:23:44.260 08:22:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:44.260 08:22:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:44.260 08:22:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:44.260 08:22:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:44.260 08:22:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:44.260 08:22:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # net_devs=() 00:23:44.260 08:22:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:44.260 08:22:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # e810=() 00:23:44.260 08:22:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # local -ga e810 00:23:44.260 08:22:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # x722=() 00:23:44.260 08:22:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # local -ga x722 00:23:44.260 08:22:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # mlx=() 00:23:44.260 08:22:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # local -ga mlx 00:23:44.260 08:22:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:44.260 08:22:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:44.260 08:22:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:44.260 08:22:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:44.260 08:22:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:44.260 08:22:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:44.260 08:22:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:44.260 08:22:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:44.260 08:22:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:44.260 08:22:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:44.260 08:22:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:44.260 08:22:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:44.260 08:22:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:44.260 08:22:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:44.260 08:22:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:44.260 08:22:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:44.260 08:22:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:44.260 08:22:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:44.260 08:22:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:44.260 08:22:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 
0000:86:00.0 (0x8086 - 0x159b)' 00:23:44.260 Found 0000:86:00.0 (0x8086 - 0x159b) 00:23:44.260 08:22:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:44.260 08:22:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:44.260 08:22:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:44.260 08:22:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:44.260 08:22:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:44.260 08:22:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:44.260 08:22:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:23:44.260 Found 0000:86:00.1 (0x8086 - 0x159b) 00:23:44.260 08:22:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:44.260 08:22:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:44.260 08:22:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:44.261 08:22:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:44.261 08:22:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:44.261 08:22:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:44.261 08:22:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:44.261 08:22:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:44.261 08:22:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:44.261 08:22:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:44.261 08:22:26 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:44.261 08:22:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:44.261 08:22:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:44.261 08:22:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:44.261 08:22:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:44.261 08:22:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:23:44.261 Found net devices under 0000:86:00.0: cvl_0_0 00:23:44.261 08:22:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:44.261 08:22:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:44.261 08:22:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:44.261 08:22:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:44.261 08:22:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:44.261 08:22:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:44.261 08:22:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:44.261 08:22:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:44.261 08:22:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:23:44.261 Found net devices under 0000:86:00.1: cvl_0_1 00:23:44.261 08:22:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:44.261 08:22:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@432 -- # (( 2 == 0 )) 
00:23:44.261 08:22:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # is_hw=yes 00:23:44.261 08:22:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:44.261 08:22:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:44.261 08:22:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:44.261 08:22:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:44.261 08:22:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:44.261 08:22:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:44.261 08:22:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:44.261 08:22:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:44.261 08:22:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:44.261 08:22:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:44.261 08:22:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:44.261 08:22:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:44.261 08:22:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:44.261 08:22:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:44.261 08:22:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:44.261 08:22:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:44.261 08:22:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:44.261 08:22:26 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:44.261 08:22:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:44.261 08:22:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:44.261 08:22:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:44.261 08:22:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:44.521 08:22:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:44.521 08:22:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:44.521 08:22:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:44.521 08:22:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:44.521 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:44.521 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.347 ms 00:23:44.521 00:23:44.521 --- 10.0.0.2 ping statistics --- 00:23:44.521 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:44.521 rtt min/avg/max/mdev = 0.347/0.347/0.347/0.000 ms 00:23:44.521 08:22:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:44.521 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:44.521 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.199 ms 00:23:44.521 00:23:44.521 --- 10.0.0.1 ping statistics --- 00:23:44.521 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:44.521 rtt min/avg/max/mdev = 0.199/0.199/0.199/0.000 ms 00:23:44.521 08:22:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:44.521 08:22:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@450 -- # return 0 00:23:44.521 08:22:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:44.521 08:22:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:44.521 08:22:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:44.521 08:22:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:44.521 08:22:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:44.521 08:22:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:44.521 08:22:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:44.521 08:22:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:23:44.521 08:22:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:23:44.521 08:22:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:44.521 08:22:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:23:44.521 08:22:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=1452168 00:23:44.521 08:22:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:23:44.521 08:22:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # 
trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:44.521 08:22:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 1452168 00:23:44.521 08:22:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@835 -- # '[' -z 1452168 ']' 00:23:44.521 08:22:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:44.521 08:22:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:44.521 08:22:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:44.521 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:44.521 08:22:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:44.521 08:22:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:23:44.521 [2024-11-28 08:22:26.695830] Starting SPDK v25.01-pre git sha1 27aaaa748 / DPDK 24.03.0 initialization... 00:23:44.521 [2024-11-28 08:22:26.695877] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:44.521 [2024-11-28 08:22:26.763140] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:44.780 [2024-11-28 08:22:26.806933] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:44.780 [2024-11-28 08:22:26.806973] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:23:44.780 [2024-11-28 08:22:26.806981] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:44.780 [2024-11-28 08:22:26.806987] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:44.780 [2024-11-28 08:22:26.806992] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:44.780 [2024-11-28 08:22:26.808520] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:44.780 [2024-11-28 08:22:26.808616] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:44.780 [2024-11-28 08:22:26.808704] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:23:44.780 [2024-11-28 08:22:26.808705] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:44.780 08:22:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:44.780 08:22:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@868 -- # return 0 00:23:44.780 08:22:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:23:45.038 [2024-11-28 08:22:27.083121] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:45.038 08:22:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:23:45.038 08:22:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:45.038 08:22:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:23:45.038 08:22:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:23:45.297 Malloc1 00:23:45.297 08:22:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:45.556 08:22:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:23:45.556 08:22:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:45.814 [2024-11-28 08:22:27.929511] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:45.814 08:22:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:23:46.073 08:22:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:23:46.073 08:22:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:23:46.073 08:22:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:23:46.073 08:22:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:23:46.073 08:22:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:23:46.073 08:22:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:23:46.073 08:22:28 
nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:23:46.073 08:22:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:23:46.073 08:22:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:23:46.073 08:22:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:23:46.073 08:22:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:23:46.073 08:22:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:23:46.073 08:22:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:23:46.073 08:22:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:23:46.073 08:22:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:23:46.073 08:22:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:23:46.073 08:22:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:23:46.073 08:22:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:23:46.073 08:22:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:23:46.073 08:22:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:23:46.073 08:22:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:23:46.074 08:22:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:23:46.074 08:22:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:23:46.332 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:23:46.332 fio-3.35 00:23:46.332 Starting 1 thread 00:23:48.864 [2024-11-28 08:22:30.891861] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1219130 is same with the state(6) to be set 00:23:48.864 00:23:48.864 test: (groupid=0, jobs=1): err= 0: pid=1452759: Thu Nov 28 08:22:30 2024 00:23:48.864 read: IOPS=11.4k, BW=44.7MiB/s (46.9MB/s)(89.6MiB/2005msec) 00:23:48.864 slat (nsec): min=1610, max=251074, avg=1788.43, stdev=2290.23 00:23:48.864 clat (usec): min=3135, max=10893, avg=6211.18, stdev=462.50 00:23:48.864 lat (usec): min=3169, max=10894, avg=6212.97, stdev=462.43 00:23:48.864 clat percentiles (usec): 00:23:48.864 | 1.00th=[ 5145], 5.00th=[ 5473], 10.00th=[ 5669], 20.00th=[ 5866], 00:23:48.864 | 30.00th=[ 5997], 40.00th=[ 6128], 50.00th=[ 6194], 60.00th=[ 6325], 00:23:48.864 | 70.00th=[ 6456], 80.00th=[ 6587], 90.00th=[ 6783], 95.00th=[ 6915], 00:23:48.864 | 99.00th=[ 7177], 99.50th=[ 7308], 99.90th=[ 8848], 99.95th=[10028], 00:23:48.864 | 99.99th=[10290] 00:23:48.864 bw ( KiB/s): min=45024, max=46448, per=99.93%, avg=45750.00, stdev=659.16, samples=4 00:23:48.864 iops : min=11256, max=11612, avg=11437.50, stdev=164.79, samples=4 00:23:48.864 write: IOPS=11.4k, BW=44.4MiB/s (46.5MB/s)(89.0MiB/2005msec); 0 zone resets 00:23:48.864 slat (nsec): min=1657, max=237204, avg=1849.44, stdev=1727.32 00:23:48.864 clat (usec): min=2439, max=9981, avg=4989.54, stdev=379.52 00:23:48.864 lat (usec): min=2454, max=9983, avg=4991.39, stdev=379.56 00:23:48.864 clat percentiles (usec): 
00:23:48.864 | 1.00th=[ 4113], 5.00th=[ 4424], 10.00th=[ 4555], 20.00th=[ 4686], 00:23:48.864 | 30.00th=[ 4817], 40.00th=[ 4883], 50.00th=[ 5014], 60.00th=[ 5080], 00:23:48.864 | 70.00th=[ 5145], 80.00th=[ 5276], 90.00th=[ 5473], 95.00th=[ 5538], 00:23:48.864 | 99.00th=[ 5866], 99.50th=[ 5932], 99.90th=[ 6849], 99.95th=[ 7701], 00:23:48.864 | 99.99th=[ 9634] 00:23:48.864 bw ( KiB/s): min=45248, max=45760, per=100.00%, avg=45458.00, stdev=217.66, samples=4 00:23:48.864 iops : min=11312, max=11440, avg=11364.50, stdev=54.42, samples=4 00:23:48.864 lat (msec) : 4=0.33%, 10=99.64%, 20=0.03% 00:23:48.864 cpu : usr=73.85%, sys=24.80%, ctx=102, majf=0, minf=2 00:23:48.864 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:23:48.864 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:48.864 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:23:48.864 issued rwts: total=22949,22782,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:48.864 latency : target=0, window=0, percentile=100.00%, depth=128 00:23:48.864 00:23:48.864 Run status group 0 (all jobs): 00:23:48.864 READ: bw=44.7MiB/s (46.9MB/s), 44.7MiB/s-44.7MiB/s (46.9MB/s-46.9MB/s), io=89.6MiB (94.0MB), run=2005-2005msec 00:23:48.864 WRITE: bw=44.4MiB/s (46.5MB/s), 44.4MiB/s-44.4MiB/s (46.5MB/s-46.5MB/s), io=89.0MiB (93.3MB), run=2005-2005msec 00:23:48.864 08:22:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:23:48.864 08:22:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:23:48.864 08:22:30 nvmf_tcp.nvmf_host.nvmf_fio_host 
-- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:23:48.864 08:22:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:23:48.864 08:22:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:23:48.864 08:22:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:23:48.864 08:22:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:23:48.864 08:22:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:23:48.864 08:22:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:23:48.864 08:22:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:23:48.864 08:22:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:23:48.864 08:22:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:23:48.864 08:22:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:23:48.864 08:22:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:23:48.864 08:22:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:23:48.864 08:22:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:23:48.864 08:22:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:23:48.864 08:22:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:23:48.864 08:22:30 
nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:23:48.864 08:22:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:23:48.864 08:22:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:23:48.864 08:22:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:23:49.122 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:23:49.122 fio-3.35 00:23:49.122 Starting 1 thread 00:23:51.656 00:23:51.656 test: (groupid=0, jobs=1): err= 0: pid=1453300: Thu Nov 28 08:22:33 2024 00:23:51.656 read: IOPS=10.7k, BW=167MiB/s (175MB/s)(335MiB/2007msec) 00:23:51.656 slat (nsec): min=2534, max=88187, avg=2826.89, stdev=1235.75 00:23:51.656 clat (usec): min=2068, max=13284, avg=6946.47, stdev=1644.91 00:23:51.656 lat (usec): min=2071, max=13286, avg=6949.29, stdev=1645.02 00:23:51.656 clat percentiles (usec): 00:23:51.656 | 1.00th=[ 3556], 5.00th=[ 4424], 10.00th=[ 4817], 20.00th=[ 5538], 00:23:51.656 | 30.00th=[ 5932], 40.00th=[ 6390], 50.00th=[ 6915], 60.00th=[ 7439], 00:23:51.656 | 70.00th=[ 7832], 80.00th=[ 8291], 90.00th=[ 9110], 95.00th=[ 9765], 00:23:51.656 | 99.00th=[11076], 99.50th=[11600], 99.90th=[12649], 99.95th=[12911], 00:23:51.656 | 99.99th=[13304] 00:23:51.656 bw ( KiB/s): min=81248, max=94208, per=50.63%, avg=86448.00, stdev=5777.66, samples=4 00:23:51.656 iops : min= 5078, max= 5888, avg=5403.00, stdev=361.10, samples=4 00:23:51.656 write: IOPS=6253, BW=97.7MiB/s (102MB/s)(176MiB/1805msec); 0 zone resets 00:23:51.656 slat (usec): min=30, max=297, avg=31.65, stdev= 6.41 00:23:51.656 clat (usec): min=4476, max=16962, avg=8864.00, 
stdev=1602.13 00:23:51.656 lat (usec): min=4507, max=16993, avg=8895.65, stdev=1603.15 00:23:51.656 clat percentiles (usec): 00:23:51.656 | 1.00th=[ 5669], 5.00th=[ 6652], 10.00th=[ 7046], 20.00th=[ 7504], 00:23:51.656 | 30.00th=[ 7898], 40.00th=[ 8291], 50.00th=[ 8586], 60.00th=[ 8979], 00:23:51.656 | 70.00th=[ 9503], 80.00th=[10159], 90.00th=[11207], 95.00th=[11731], 00:23:51.656 | 99.00th=[13042], 99.50th=[13960], 99.90th=[16450], 99.95th=[16712], 00:23:51.656 | 99.99th=[16909] 00:23:51.656 bw ( KiB/s): min=84640, max=98304, per=89.91%, avg=89960.00, stdev=6132.45, samples=4 00:23:51.656 iops : min= 5290, max= 6144, avg=5622.50, stdev=383.28, samples=4 00:23:51.656 lat (msec) : 4=1.58%, 10=88.40%, 20=10.01% 00:23:51.656 cpu : usr=85.44%, sys=13.71%, ctx=33, majf=0, minf=2 00:23:51.656 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:23:51.656 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:51.656 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:23:51.656 issued rwts: total=21416,11287,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:51.656 latency : target=0, window=0, percentile=100.00%, depth=128 00:23:51.656 00:23:51.656 Run status group 0 (all jobs): 00:23:51.656 READ: bw=167MiB/s (175MB/s), 167MiB/s-167MiB/s (175MB/s-175MB/s), io=335MiB (351MB), run=2007-2007msec 00:23:51.656 WRITE: bw=97.7MiB/s (102MB/s), 97.7MiB/s-97.7MiB/s (102MB/s-102MB/s), io=176MiB (185MB), run=1805-1805msec 00:23:51.656 08:22:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:51.656 08:22:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:23:51.656 08:22:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:23:51.656 08:22:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 
00:23:51.656 08:22:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:23:51.656 08:22:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:51.656 08:22:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # sync 00:23:51.656 08:22:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:51.656 08:22:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set +e 00:23:51.656 08:22:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:51.656 08:22:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:51.656 rmmod nvme_tcp 00:23:51.656 rmmod nvme_fabrics 00:23:51.656 rmmod nvme_keyring 00:23:51.656 08:22:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:51.656 08:22:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@128 -- # set -e 00:23:51.656 08:22:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@129 -- # return 0 00:23:51.656 08:22:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@517 -- # '[' -n 1452168 ']' 00:23:51.656 08:22:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@518 -- # killprocess 1452168 00:23:51.656 08:22:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # '[' -z 1452168 ']' 00:23:51.656 08:22:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@958 -- # kill -0 1452168 00:23:51.656 08:22:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # uname 00:23:51.656 08:22:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:51.656 08:22:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1452168 00:23:51.656 08:22:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:51.656 08:22:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:51.656 08:22:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1452168' 00:23:51.656 killing process with pid 1452168 00:23:51.656 08:22:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@973 -- # kill 1452168 00:23:51.656 08:22:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@978 -- # wait 1452168 00:23:51.915 08:22:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:51.915 08:22:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:51.915 08:22:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:51.915 08:22:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # iptr 00:23:51.915 08:22:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:51.915 08:22:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-save 00:23:51.915 08:22:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-restore 00:23:51.915 08:22:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:51.915 08:22:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:51.915 08:22:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:51.915 08:22:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:51.915 08:22:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:54.451 08:22:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:54.451 00:23:54.451 real 0m15.134s 00:23:54.451 user 0m45.710s 00:23:54.451 sys 0m6.093s 00:23:54.451 08:22:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:23:54.451 08:22:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:23:54.451 ************************************ 00:23:54.451 END TEST nvmf_fio_host 00:23:54.451 ************************************ 00:23:54.451 08:22:36 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:23:54.451 08:22:36 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:54.451 08:22:36 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:54.451 08:22:36 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:54.451 ************************************ 00:23:54.451 START TEST nvmf_failover 00:23:54.451 ************************************ 00:23:54.451 08:22:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:23:54.451 * Looking for test storage... 
00:23:54.451 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:54.451 08:22:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:23:54.451 08:22:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1693 -- # lcov --version 00:23:54.451 08:22:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:23:54.451 08:22:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:23:54.451 08:22:36 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:54.451 08:22:36 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:54.451 08:22:36 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:54.451 08:22:36 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # IFS=.-: 00:23:54.451 08:22:36 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # read -ra ver1 00:23:54.451 08:22:36 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # IFS=.-: 00:23:54.451 08:22:36 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # read -ra ver2 00:23:54.451 08:22:36 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@338 -- # local 'op=<' 00:23:54.451 08:22:36 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@340 -- # ver1_l=2 00:23:54.451 08:22:36 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@341 -- # ver2_l=1 00:23:54.451 08:22:36 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:54.451 08:22:36 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@344 -- # case "$op" in 00:23:54.451 08:22:36 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@345 -- # : 1 00:23:54.451 08:22:36 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:54.451 08:22:36 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( 
v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:54.451 08:22:36 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # decimal 1 00:23:54.451 08:22:36 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=1 00:23:54.451 08:22:36 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:54.451 08:22:36 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 1 00:23:54.451 08:22:36 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # ver1[v]=1 00:23:54.451 08:22:36 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # decimal 2 00:23:54.451 08:22:36 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=2 00:23:54.451 08:22:36 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:54.451 08:22:36 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 2 00:23:54.451 08:22:36 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # ver2[v]=2 00:23:54.451 08:22:36 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:54.451 08:22:36 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:54.451 08:22:36 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # return 0 00:23:54.451 08:22:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:54.451 08:22:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:23:54.451 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:54.451 --rc genhtml_branch_coverage=1 00:23:54.451 --rc genhtml_function_coverage=1 00:23:54.451 --rc genhtml_legend=1 00:23:54.451 --rc geninfo_all_blocks=1 00:23:54.451 --rc geninfo_unexecuted_blocks=1 00:23:54.451 00:23:54.451 ' 00:23:54.451 08:22:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1706 -- 
# LCOV_OPTS=' 00:23:54.451 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:54.451 --rc genhtml_branch_coverage=1 00:23:54.451 --rc genhtml_function_coverage=1 00:23:54.451 --rc genhtml_legend=1 00:23:54.451 --rc geninfo_all_blocks=1 00:23:54.451 --rc geninfo_unexecuted_blocks=1 00:23:54.451 00:23:54.451 ' 00:23:54.451 08:22:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:23:54.451 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:54.451 --rc genhtml_branch_coverage=1 00:23:54.451 --rc genhtml_function_coverage=1 00:23:54.451 --rc genhtml_legend=1 00:23:54.451 --rc geninfo_all_blocks=1 00:23:54.451 --rc geninfo_unexecuted_blocks=1 00:23:54.451 00:23:54.451 ' 00:23:54.451 08:22:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:23:54.451 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:54.451 --rc genhtml_branch_coverage=1 00:23:54.451 --rc genhtml_function_coverage=1 00:23:54.451 --rc genhtml_legend=1 00:23:54.451 --rc geninfo_all_blocks=1 00:23:54.451 --rc geninfo_unexecuted_blocks=1 00:23:54.451 00:23:54.451 ' 00:23:54.451 08:22:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:54.451 08:22:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:23:54.451 08:22:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:54.451 08:22:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:54.451 08:22:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:54.451 08:22:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:54.451 08:22:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:54.451 08:22:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # 
NVMF_IP_LEAST_ADDR=8 00:23:54.451 08:22:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:54.451 08:22:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:54.451 08:22:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:54.451 08:22:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:54.451 08:22:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:23:54.451 08:22:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:23:54.451 08:22:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:54.451 08:22:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:54.451 08:22:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:54.451 08:22:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:54.451 08:22:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:54.451 08:22:36 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@15 -- # shopt -s extglob 00:23:54.451 08:22:36 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:54.451 08:22:36 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:54.451 08:22:36 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:54.451 08:22:36 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:54.451 08:22:36 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:54.452 08:22:36 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:54.452 08:22:36 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 
-- # export PATH 00:23:54.452 08:22:36 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:54.452 08:22:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # : 0 00:23:54.452 08:22:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:54.452 08:22:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:54.452 08:22:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:54.452 08:22:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:54.452 08:22:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:54.452 08:22:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:54.452 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:54.452 08:22:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:54.452 08:22:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:54.452 08:22:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:54.452 08:22:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:54.452 08:22:36 nvmf_tcp.nvmf_host.nvmf_failover -- 
host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:54.452 08:22:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:23:54.452 08:22:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:54.452 08:22:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:23:54.452 08:22:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:54.452 08:22:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:54.452 08:22:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:54.452 08:22:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:54.452 08:22:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:54.452 08:22:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:54.452 08:22:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:54.452 08:22:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:54.452 08:22:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:54.452 08:22:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:54.452 08:22:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@309 -- # xtrace_disable 00:23:54.452 08:22:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:23:59.723 08:22:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:59.723 08:22:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # pci_devs=() 00:23:59.723 08:22:41 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@315 -- # local -a pci_devs 00:23:59.723 08:22:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:59.723 08:22:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:59.723 08:22:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:59.723 08:22:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:59.723 08:22:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # net_devs=() 00:23:59.723 08:22:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:59.723 08:22:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # e810=() 00:23:59.723 08:22:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # local -ga e810 00:23:59.723 08:22:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # x722=() 00:23:59.723 08:22:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # local -ga x722 00:23:59.723 08:22:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # mlx=() 00:23:59.723 08:22:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # local -ga mlx 00:23:59.723 08:22:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:59.723 08:22:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:59.723 08:22:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:59.723 08:22:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:59.723 08:22:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:59.723 08:22:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:59.723 08:22:41 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:59.723 08:22:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:59.723 08:22:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:59.723 08:22:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:59.723 08:22:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:59.723 08:22:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:59.723 08:22:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:59.723 08:22:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:59.723 08:22:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:59.723 08:22:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:59.723 08:22:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:59.723 08:22:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:59.723 08:22:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:59.723 08:22:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:23:59.723 Found 0000:86:00.0 (0x8086 - 0x159b) 00:23:59.723 08:22:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:59.723 08:22:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:59.723 08:22:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:59.723 08:22:41 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:59.723 08:22:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:59.723 08:22:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:59.723 08:22:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:23:59.723 Found 0000:86:00.1 (0x8086 - 0x159b) 00:23:59.723 08:22:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:59.723 08:22:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:59.723 08:22:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:59.723 08:22:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:59.723 08:22:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:59.723 08:22:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:59.723 08:22:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:59.723 08:22:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:59.723 08:22:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:59.723 08:22:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:59.723 08:22:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:59.723 08:22:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:59.723 08:22:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:59.723 08:22:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:59.723 08:22:41 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:59.723 08:22:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:23:59.723 Found net devices under 0000:86:00.0: cvl_0_0 00:23:59.723 08:22:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:59.723 08:22:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:59.723 08:22:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:59.723 08:22:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:59.723 08:22:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:59.723 08:22:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:59.723 08:22:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:59.723 08:22:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:59.723 08:22:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:23:59.723 Found net devices under 0000:86:00.1: cvl_0_1 00:23:59.723 08:22:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:59.723 08:22:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:59.723 08:22:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # is_hw=yes 00:23:59.723 08:22:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:59.723 08:22:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:59.723 08:22:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:59.723 08:22:41 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:59.723 08:22:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:59.723 08:22:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:59.723 08:22:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:59.723 08:22:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:59.723 08:22:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:59.723 08:22:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:59.723 08:22:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:59.723 08:22:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:59.723 08:22:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:59.723 08:22:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:59.723 08:22:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:59.723 08:22:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:59.723 08:22:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:59.723 08:22:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:59.724 08:22:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:59.724 08:22:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:59.724 08:22:41 nvmf_tcp.nvmf_host.nvmf_failover 
-- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:59.724 08:22:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:59.724 08:22:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:59.724 08:22:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:59.724 08:22:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:59.724 08:22:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:59.724 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:59.724 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.471 ms 00:23:59.724 00:23:59.724 --- 10.0.0.2 ping statistics --- 00:23:59.724 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:59.724 rtt min/avg/max/mdev = 0.471/0.471/0.471/0.000 ms 00:23:59.724 08:22:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:59.724 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:59.724 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.200 ms 00:23:59.724 00:23:59.724 --- 10.0.0.1 ping statistics --- 00:23:59.724 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:59.724 rtt min/avg/max/mdev = 0.200/0.200/0.200/0.000 ms 00:23:59.724 08:22:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:59.724 08:22:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@450 -- # return 0 00:23:59.724 08:22:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:59.724 08:22:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:59.724 08:22:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:59.724 08:22:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:59.724 08:22:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:59.724 08:22:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:59.724 08:22:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:59.724 08:22:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:23:59.724 08:22:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:59.724 08:22:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:59.724 08:22:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:23:59.724 08:22:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@509 -- # nvmfpid=1457077 00:23:59.724 08:22:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@510 -- # waitforlisten 1457077 00:23:59.724 08:22:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 1457077 ']' 00:23:59.724 08:22:41 
nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:59.724 08:22:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:59.724 08:22:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:59.724 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:59.724 08:22:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:59.724 08:22:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:23:59.724 08:22:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:23:59.724 [2024-11-28 08:22:41.583184] Starting SPDK v25.01-pre git sha1 27aaaa748 / DPDK 24.03.0 initialization... 00:23:59.724 [2024-11-28 08:22:41.583232] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:59.724 [2024-11-28 08:22:41.648270] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:23:59.724 [2024-11-28 08:22:41.690994] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:59.724 [2024-11-28 08:22:41.691031] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:59.724 [2024-11-28 08:22:41.691038] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:59.724 [2024-11-28 08:22:41.691044] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:23:59.724 [2024-11-28 08:22:41.691049] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:59.724 [2024-11-28 08:22:41.692399] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:59.724 [2024-11-28 08:22:41.692483] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:23:59.724 [2024-11-28 08:22:41.692485] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:59.724 08:22:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:59.724 08:22:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:23:59.724 08:22:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:59.724 08:22:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:59.724 08:22:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:23:59.724 08:22:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:59.724 08:22:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:23:59.983 [2024-11-28 08:22:41.995370] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:59.983 08:22:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:23:59.983 Malloc0 00:24:00.241 08:22:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:00.241 08:22:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:00.500 08:22:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:00.760 [2024-11-28 08:22:42.826515] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:00.760 08:22:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:24:01.018 [2024-11-28 08:22:43.027101] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:24:01.018 08:22:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:24:01.018 [2024-11-28 08:22:43.231765] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:24:01.018 08:22:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=1457339 00:24:01.018 08:22:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:24:01.018 08:22:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:01.018 08:22:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 1457339 /var/tmp/bdevperf.sock 00:24:01.018 08:22:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 
-- # '[' -z 1457339 ']' 00:24:01.018 08:22:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:01.018 08:22:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:01.018 08:22:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:01.018 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:01.018 08:22:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:01.018 08:22:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:01.277 08:22:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:01.277 08:22:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:24:01.277 08:22:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:24:01.843 NVMe0n1 00:24:01.843 08:22:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:24:02.102 00:24:02.102 08:22:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=1457536 00:24:02.102 08:22:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:02.102 08:22:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1 
00:24:03.040 08:22:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:03.298 [2024-11-28 08:22:45.348093] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbc82d0 is same with the state(6) to be set 00:24:03.298 [2024-11-28 08:22:45.348144] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbc82d0 is same with the state(6) to be set 00:24:03.298 [2024-11-28 08:22:45.348155] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbc82d0 is same with the state(6) to be set 00:24:03.298 08:22:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:24:06.583 08:22:48 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:24:06.583 00:24:06.583 08:22:48 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:24:06.842 08:22:48 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:24:10.131 08:22:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:10.131 [2024-11-28 08:22:52.136954] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:10.131 08:22:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:24:11.067 08:22:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:24:11.326 [2024-11-28 08:22:53.352602] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbc9ce0 is same with the state(6) to be set 00:24:11.326 [2024-11-28 08:22:53.352643] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbc9ce0 is same with the state(6) to be set 00:24:11.326 [2024-11-28 08:22:53.352651] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbc9ce0 is same with the state(6) to be set 00:24:11.326 [2024-11-28 08:22:53.352657] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbc9ce0 is same with the state(6) to be set 00:24:11.326 [2024-11-28 08:22:53.352664] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbc9ce0 is same with the state(6) to be set 00:24:11.326 [2024-11-28 08:22:53.352670] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbc9ce0 is same with the state(6) to be set 00:24:11.326 [2024-11-28 08:22:53.352677] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbc9ce0 is same with the state(6) to be set 00:24:11.326 [2024-11-28 08:22:53.352683] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbc9ce0 is same with the state(6) to be set 00:24:11.326 [2024-11-28 08:22:53.352689] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbc9ce0 is same with the state(6) to be set 00:24:11.326 08:22:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 1457536 00:24:17.901 { 00:24:17.901 "results": [ 00:24:17.901 { 00:24:17.901 "job": "NVMe0n1", 00:24:17.901 "core_mask": "0x1", 00:24:17.901 "workload": "verify", 00:24:17.901 "status": "finished", 00:24:17.901 "verify_range": { 00:24:17.901 "start": 0, 00:24:17.901 "length": 16384 
00:24:17.901 }, 00:24:17.901 "queue_depth": 128, 00:24:17.901 "io_size": 4096, 00:24:17.901 "runtime": 15.010813, 00:24:17.901 "iops": 10639.863410462844, 00:24:17.901 "mibps": 41.56196644712048, 00:24:17.901 "io_failed": 11533, 00:24:17.901 "io_timeout": 0, 00:24:17.901 "avg_latency_us": 11197.569970116725, 00:24:17.901 "min_latency_us": 448.7791304347826, 00:24:17.901 "max_latency_us": 18919.958260869564 00:24:17.901 } 00:24:17.901 ], 00:24:17.901 "core_count": 1 00:24:17.901 } 00:24:17.901 08:22:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 1457339 00:24:17.901 08:22:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 1457339 ']' 00:24:17.901 08:22:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 1457339 00:24:17.901 08:22:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:24:17.901 08:22:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:17.901 08:22:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1457339 00:24:17.901 08:22:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:17.901 08:22:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:17.901 08:22:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1457339' 00:24:17.901 killing process with pid 1457339 00:24:17.901 08:22:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 1457339 00:24:17.901 08:22:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 1457339 00:24:17.901 08:22:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:24:17.901 [2024-11-28 08:22:43.308967] Starting SPDK 
v25.01-pre git sha1 27aaaa748 / DPDK 24.03.0 initialization... 00:24:17.901 [2024-11-28 08:22:43.309020] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1457339 ] 00:24:17.901 [2024-11-28 08:22:43.371362] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:17.901 [2024-11-28 08:22:43.413452] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:17.901 Running I/O for 15 seconds... 00:24:17.901 10774.00 IOPS, 42.09 MiB/s [2024-11-28T07:23:00.170Z] [2024-11-28 08:22:45.349404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:94456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.901 [2024-11-28 08:22:45.349437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.901 [2024-11-28 08:22:45.349454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:94464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.901 [2024-11-28 08:22:45.349462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.901 [2024-11-28 08:22:45.349472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:94472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.901 [2024-11-28 08:22:45.349479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.901 [2024-11-28 08:22:45.349488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:94480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.901 [2024-11-28 08:22:45.349495] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.901 [2024-11-28 08:22:45.349503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:94488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.901 [2024-11-28 08:22:45.349510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.901 [2024-11-28 08:22:45.349519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:94944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.901 [2024-11-28 08:22:45.349526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.901 [2024-11-28 08:22:45.349534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:94952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.901 [2024-11-28 08:22:45.349541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.901 [2024-11-28 08:22:45.349550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:94960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.901 [2024-11-28 08:22:45.349556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.901 [2024-11-28 08:22:45.349565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:94968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.901 [2024-11-28 08:22:45.349571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.902 [2024-11-28 08:22:45.349579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:94976 len:8 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:24:17.902 [2024-11-28 08:22:45.349586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.902 [2024-11-28 08:22:45.349594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:94984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.902 [2024-11-28 08:22:45.349601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.902 [2024-11-28 08:22:45.349616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:94992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.902 [2024-11-28 08:22:45.349622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.902 [2024-11-28 08:22:45.349630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:95000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.902 [2024-11-28 08:22:45.349638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.902 [2024-11-28 08:22:45.349646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:95008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.902 [2024-11-28 08:22:45.349653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.902 [2024-11-28 08:22:45.349663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:95016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.902 [2024-11-28 08:22:45.349670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.902 [2024-11-28 08:22:45.349679] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:95024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.902 [2024-11-28 08:22:45.349685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.902 [2024-11-28 08:22:45.349693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:95032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.902 [2024-11-28 08:22:45.349700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.902 [2024-11-28 08:22:45.349708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:95040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.902 [2024-11-28 08:22:45.349715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.902 [2024-11-28 08:22:45.349723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:95048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.902 [2024-11-28 08:22:45.349730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.902 [2024-11-28 08:22:45.349738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:95056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.902 [2024-11-28 08:22:45.349745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.902 [2024-11-28 08:22:45.349754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:95064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.902 [2024-11-28 08:22:45.349760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.902 [2024-11-28 08:22:45.349768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:95072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.902 [2024-11-28 08:22:45.349775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.902 [2024-11-28 08:22:45.349783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:95080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.902 [2024-11-28 08:22:45.349790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.902 [2024-11-28 08:22:45.349798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:95088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.902 [2024-11-28 08:22:45.349807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.902 [2024-11-28 08:22:45.349815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:95096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.902 [2024-11-28 08:22:45.349822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.902 [2024-11-28 08:22:45.349830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:95104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.902 [2024-11-28 08:22:45.349837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.902 [2024-11-28 08:22:45.349845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:95112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.902 
[2024-11-28 08:22:45.349852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.902 [2024-11-28 08:22:45.349860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:95120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.902 [2024-11-28 08:22:45.349866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.902 [2024-11-28 08:22:45.349874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:95128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.902 [2024-11-28 08:22:45.349881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.902 [2024-11-28 08:22:45.349889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:95136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.902 [2024-11-28 08:22:45.349895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.902 [2024-11-28 08:22:45.349903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:95144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.902 [2024-11-28 08:22:45.349910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.902 [2024-11-28 08:22:45.349918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:95152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.902 [2024-11-28 08:22:45.349924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.902 [2024-11-28 08:22:45.349932] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:95160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.902 [2024-11-28 08:22:45.349939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.902 [2024-11-28 08:22:45.349952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:95168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.902 [2024-11-28 08:22:45.349959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.902 [2024-11-28 08:22:45.349967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:95176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.902 [2024-11-28 08:22:45.349973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.902 [2024-11-28 08:22:45.349982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:95184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.902 [2024-11-28 08:22:45.349988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.902 [2024-11-28 08:22:45.349998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:95192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.902 [2024-11-28 08:22:45.350005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.902 [2024-11-28 08:22:45.350013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:95200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.902 [2024-11-28 08:22:45.350020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:24:17.902 [2024-11-28 08:22:45.350029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:95208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.902 [2024-11-28 08:22:45.350035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.902 [2024-11-28 08:22:45.350043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:95216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.902 [2024-11-28 08:22:45.350050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.902 [2024-11-28 08:22:45.350058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:94496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.902 [2024-11-28 08:22:45.350065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.902 [2024-11-28 08:22:45.350073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:94504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.902 [2024-11-28 08:22:45.350080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.902 [2024-11-28 08:22:45.350088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:94512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.902 [2024-11-28 08:22:45.350094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.902 [2024-11-28 08:22:45.350103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:94520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.902 [2024-11-28 08:22:45.350109] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.902 [2024-11-28 08:22:45.350118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:94528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.902 [2024-11-28 08:22:45.350124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.902 [2024-11-28 08:22:45.350132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:94536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.902 [2024-11-28 08:22:45.350138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.902 [2024-11-28 08:22:45.350147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:94544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.902 [2024-11-28 08:22:45.350153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.902 [2024-11-28 08:22:45.350161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:94552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.902 [2024-11-28 08:22:45.350168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.902 [2024-11-28 08:22:45.350176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:94560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.903 [2024-11-28 08:22:45.350182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.903 [2024-11-28 08:22:45.350195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:35 nsid:1 lba:94568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.903 [2024-11-28 08:22:45.350202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.903 [2024-11-28 08:22:45.350210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:94576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.903 [2024-11-28 08:22:45.350217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.903 [2024-11-28 08:22:45.350225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:94584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.903 [2024-11-28 08:22:45.350231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.903 [2024-11-28 08:22:45.350240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:94592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.903 [2024-11-28 08:22:45.350247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.903 [2024-11-28 08:22:45.350255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:94600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.903 [2024-11-28 08:22:45.350261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.903 [2024-11-28 08:22:45.350269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:94608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.903 [2024-11-28 08:22:45.350276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:24:17.903 [2024-11-28 08:22:45.350284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:94616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.903 [2024-11-28 08:22:45.350291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.903 [2024-11-28 08:22:45.350298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:95224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.903 [2024-11-28 08:22:45.350305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.903 [2024-11-28 08:22:45.350313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:95232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.903 [2024-11-28 08:22:45.350319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.903 [2024-11-28 08:22:45.350329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:95240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.903 [2024-11-28 08:22:45.350336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.903 [2024-11-28 08:22:45.350344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:95248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.903 [2024-11-28 08:22:45.350351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.903 [2024-11-28 08:22:45.350359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:95256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.903 [2024-11-28 08:22:45.350365] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.903 [2024-11-28 08:22:45.350373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:95264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.903 [2024-11-28 08:22:45.350381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.903 [2024-11-28 08:22:45.350389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:95272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.903 [2024-11-28 08:22:45.350396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.903 [2024-11-28 08:22:45.350404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:95280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.903 [2024-11-28 08:22:45.350410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.903 [2024-11-28 08:22:45.350418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:95288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.903 [2024-11-28 08:22:45.350425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.903 [2024-11-28 08:22:45.350433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:95296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.903 [2024-11-28 08:22:45.350439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.903 [2024-11-28 08:22:45.350447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 
lba:95304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.903 [2024-11-28 08:22:45.350454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.903 [2024-11-28 08:22:45.350462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:95312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.903 [2024-11-28 08:22:45.350469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.903 [2024-11-28 08:22:45.350478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:95320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.903 [2024-11-28 08:22:45.350485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.903 [2024-11-28 08:22:45.350493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:95328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.903 [2024-11-28 08:22:45.350500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.903 [2024-11-28 08:22:45.350508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:95336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.903 [2024-11-28 08:22:45.350514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.903 [2024-11-28 08:22:45.350523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:95344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.903 [2024-11-28 08:22:45.350529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.903 
[2024-11-28 08:22:45.350537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:95352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.903 [2024-11-28 08:22:45.350544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.903 [2024-11-28 08:22:45.350551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:95360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.903 [2024-11-28 08:22:45.350558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.903 [2024-11-28 08:22:45.350568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:95368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.903 [2024-11-28 08:22:45.350575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.903 [2024-11-28 08:22:45.350583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:95376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.903 [2024-11-28 08:22:45.350589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.903 [2024-11-28 08:22:45.350597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:95384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.903 [2024-11-28 08:22:45.350604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.903 [2024-11-28 08:22:45.350612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:95392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.903 [2024-11-28 08:22:45.350618] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.903 [2024-11-28 08:22:45.350626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:95400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.903 [2024-11-28 08:22:45.350632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.903 [2024-11-28 08:22:45.350641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:95408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.903 [2024-11-28 08:22:45.350648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.903 [2024-11-28 08:22:45.350656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:95416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.903 [2024-11-28 08:22:45.350662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.903 [2024-11-28 08:22:45.350671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:95424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.903 [2024-11-28 08:22:45.350677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.903 [2024-11-28 08:22:45.350686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:95432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.903 [2024-11-28 08:22:45.350692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.903 [2024-11-28 08:22:45.350700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 
lba:95440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.903 [2024-11-28 08:22:45.350709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.903 [2024-11-28 08:22:45.350717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.903 [2024-11-28 08:22:45.350724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.903 [2024-11-28 08:22:45.350732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:95456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.903 [2024-11-28 08:22:45.350738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.903 [2024-11-28 08:22:45.350747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:95464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.903 [2024-11-28 08:22:45.350754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.903 [2024-11-28 08:22:45.350763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:95472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.903 [2024-11-28 08:22:45.350769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.903 [2024-11-28 08:22:45.350789] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:17.904 [2024-11-28 08:22:45.350797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:94624 len:8 PRP1 0x0 PRP2 0x0 00:24:17.904 [2024-11-28 08:22:45.350804] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.904 [2024-11-28 08:22:45.350813] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:17.904 [2024-11-28 08:22:45.350818] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:17.904 [2024-11-28 08:22:45.350824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:94632 len:8 PRP1 0x0 PRP2 0x0 00:24:17.904 [2024-11-28 08:22:45.350830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.904 [2024-11-28 08:22:45.350837] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:17.904 [2024-11-28 08:22:45.350842] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:17.904 [2024-11-28 08:22:45.350848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:94640 len:8 PRP1 0x0 PRP2 0x0 00:24:17.904 [2024-11-28 08:22:45.350854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.904 [2024-11-28 08:22:45.350860] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:17.904 [2024-11-28 08:22:45.350865] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:17.904 [2024-11-28 08:22:45.350871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:94648 len:8 PRP1 0x0 PRP2 0x0 00:24:17.904 [2024-11-28 08:22:45.350877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.904 [2024-11-28 08:22:45.350884] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:17.904 
[2024-11-28 08:22:45.350888] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:17.904 [2024-11-28 08:22:45.350895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:94656 len:8 PRP1 0x0 PRP2 0x0 00:24:17.904 [2024-11-28 08:22:45.350901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.904 [2024-11-28 08:22:45.350908] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:17.904 [2024-11-28 08:22:45.350913] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:17.904 [2024-11-28 08:22:45.350919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:94664 len:8 PRP1 0x0 PRP2 0x0 00:24:17.904 [2024-11-28 08:22:45.350925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.904 [2024-11-28 08:22:45.350933] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:17.904 [2024-11-28 08:22:45.350938] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:17.904 [2024-11-28 08:22:45.350943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:94672 len:8 PRP1 0x0 PRP2 0x0 00:24:17.904 [2024-11-28 08:22:45.350955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.904 [2024-11-28 08:22:45.350964] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:17.904 [2024-11-28 08:22:45.350970] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:17.904 [2024-11-28 08:22:45.350975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 
lba:94680 len:8 PRP1 0x0 PRP2 0x0 00:24:17.904 [2024-11-28 08:22:45.350982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.904 [2024-11-28 08:22:45.350988] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:17.904 [2024-11-28 08:22:45.350993] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:17.904 [2024-11-28 08:22:45.350999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:94688 len:8 PRP1 0x0 PRP2 0x0 00:24:17.904 [2024-11-28 08:22:45.351005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.904 [2024-11-28 08:22:45.351012] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:17.904 [2024-11-28 08:22:45.351017] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:17.904 [2024-11-28 08:22:45.351022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:94696 len:8 PRP1 0x0 PRP2 0x0 00:24:17.904 [2024-11-28 08:22:45.351028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.904 [2024-11-28 08:22:45.351035] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:17.904 [2024-11-28 08:22:45.351040] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:17.904 [2024-11-28 08:22:45.351045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:94704 len:8 PRP1 0x0 PRP2 0x0 00:24:17.904 [2024-11-28 08:22:45.351052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.904 [2024-11-28 08:22:45.351059] 
nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:17.904 [2024-11-28 08:22:45.351064] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:17.904 [2024-11-28 08:22:45.351069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:94712 len:8 PRP1 0x0 PRP2 0x0 00:24:17.904 [2024-11-28 08:22:45.351077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.904 [2024-11-28 08:22:45.351084] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:17.904 [2024-11-28 08:22:45.351088] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:17.904 [2024-11-28 08:22:45.351094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:94720 len:8 PRP1 0x0 PRP2 0x0 00:24:17.904 [2024-11-28 08:22:45.351100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.904 [2024-11-28 08:22:45.351107] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:17.904 [2024-11-28 08:22:45.351112] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:17.904 [2024-11-28 08:22:45.351117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:94728 len:8 PRP1 0x0 PRP2 0x0 00:24:17.904 [2024-11-28 08:22:45.351124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.904 [2024-11-28 08:22:45.351131] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:17.904 [2024-11-28 08:22:45.351137] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:17.904 [2024-11-28 08:22:45.351142] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:94736 len:8 PRP1 0x0 PRP2 0x0 00:24:17.904 [2024-11-28 08:22:45.351151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.904 [2024-11-28 08:22:45.351158] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:17.904 [2024-11-28 08:22:45.351163] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:17.904 [2024-11-28 08:22:45.351168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:94744 len:8 PRP1 0x0 PRP2 0x0 00:24:17.904 [2024-11-28 08:22:45.351175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.904 [2024-11-28 08:22:45.351181] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:17.904 [2024-11-28 08:22:45.351186] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:17.904 [2024-11-28 08:22:45.351192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:94752 len:8 PRP1 0x0 PRP2 0x0 00:24:17.904 [2024-11-28 08:22:45.351198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.904 [2024-11-28 08:22:45.351205] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:17.904 [2024-11-28 08:22:45.351210] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:17.904 [2024-11-28 08:22:45.351215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:94760 len:8 PRP1 0x0 PRP2 0x0 00:24:17.904 [2024-11-28 08:22:45.351222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.904 [2024-11-28 08:22:45.351229] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:17.904 [2024-11-28 08:22:45.351235] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:17.904 [2024-11-28 08:22:45.351240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:94768 len:8 PRP1 0x0 PRP2 0x0 00:24:17.904 [2024-11-28 08:22:45.351246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.904 [2024-11-28 08:22:45.351253] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:17.904 [2024-11-28 08:22:45.351258] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:17.904 [2024-11-28 08:22:45.351264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:94776 len:8 PRP1 0x0 PRP2 0x0 00:24:17.904 [2024-11-28 08:22:45.351270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.904 [2024-11-28 08:22:45.351276] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:17.904 [2024-11-28 08:22:45.351281] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:17.904 [2024-11-28 08:22:45.351287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:94784 len:8 PRP1 0x0 PRP2 0x0 00:24:17.904 [2024-11-28 08:22:45.351293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.904 [2024-11-28 08:22:45.351299] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:17.904 [2024-11-28 08:22:45.351304] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: 
*NOTICE*: Command completed manually: 00:24:17.904 [2024-11-28 08:22:45.351310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:94792 len:8 PRP1 0x0 PRP2 0x0 00:24:17.904 [2024-11-28 08:22:45.351316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.904 [2024-11-28 08:22:45.351323] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:17.904 [2024-11-28 08:22:45.351328] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:17.904 [2024-11-28 08:22:45.351335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:94800 len:8 PRP1 0x0 PRP2 0x0 00:24:17.904 [2024-11-28 08:22:45.351341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.904 [2024-11-28 08:22:45.351348] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:17.904 [2024-11-28 08:22:45.351353] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:17.904 [2024-11-28 08:22:45.351358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:94808 len:8 PRP1 0x0 PRP2 0x0 00:24:17.905 [2024-11-28 08:22:45.351365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.905 [2024-11-28 08:22:45.351371] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:17.905 [2024-11-28 08:22:45.351376] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:17.905 [2024-11-28 08:22:45.351382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:94816 len:8 PRP1 0x0 PRP2 0x0 00:24:17.905 [2024-11-28 08:22:45.351388] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.905 [2024-11-28 08:22:45.351395] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:17.905 [2024-11-28 08:22:45.351400] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:17.905 [2024-11-28 08:22:45.351405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:94824 len:8 PRP1 0x0 PRP2 0x0 00:24:17.905 [2024-11-28 08:22:45.351412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.905 [2024-11-28 08:22:45.351418] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:17.905 [2024-11-28 08:22:45.351423] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:17.905 [2024-11-28 08:22:45.351429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:94832 len:8 PRP1 0x0 PRP2 0x0 00:24:17.905 [2024-11-28 08:22:45.351435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.905 [2024-11-28 08:22:45.351442] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:17.905 [2024-11-28 08:22:45.351447] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:17.905 [2024-11-28 08:22:45.351452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:94840 len:8 PRP1 0x0 PRP2 0x0 00:24:17.905 [2024-11-28 08:22:45.351458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.905 [2024-11-28 08:22:45.351465] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:17.905 
[2024-11-28 08:22:45.351469] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:17.905 [2024-11-28 08:22:45.351475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:94848 len:8 PRP1 0x0 PRP2 0x0 00:24:17.905 [2024-11-28 08:22:45.351481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.905 [2024-11-28 08:22:45.351488] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:17.905 [2024-11-28 08:22:45.351493] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:17.905 [2024-11-28 08:22:45.351498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:94856 len:8 PRP1 0x0 PRP2 0x0 00:24:17.905 [2024-11-28 08:22:45.351504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.905 [2024-11-28 08:22:45.351511] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:17.905 [2024-11-28 08:22:45.351518] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:17.905 [2024-11-28 08:22:45.351524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:94864 len:8 PRP1 0x0 PRP2 0x0 00:24:17.905 [2024-11-28 08:22:45.351530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.905 [2024-11-28 08:22:45.351538] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:17.905 [2024-11-28 08:22:45.351543] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:17.905 [2024-11-28 08:22:45.351549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 
lba:94872 len:8 PRP1 0x0 PRP2 0x0 00:24:17.905 [2024-11-28 08:22:45.351555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.905 [2024-11-28 08:22:45.351562] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:17.905 [2024-11-28 08:22:45.351567] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:17.905 [2024-11-28 08:22:45.351573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:94880 len:8 PRP1 0x0 PRP2 0x0 00:24:17.905 [2024-11-28 08:22:45.351579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.905 [2024-11-28 08:22:45.351586] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:17.905 [2024-11-28 08:22:45.351591] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:17.905 [2024-11-28 08:22:45.351596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:94888 len:8 PRP1 0x0 PRP2 0x0 00:24:17.905 [2024-11-28 08:22:45.351603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.905 [2024-11-28 08:22:45.351609] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:17.905 [2024-11-28 08:22:45.351614] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:17.905 [2024-11-28 08:22:45.351620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:94896 len:8 PRP1 0x0 PRP2 0x0 00:24:17.905 [2024-11-28 08:22:45.351626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.905 [2024-11-28 08:22:45.351633] 
nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:17.905 [2024-11-28 08:22:45.351638] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:17.905 [2024-11-28 08:22:45.351643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:94904 len:8 PRP1 0x0 PRP2 0x0 00:24:17.905 [2024-11-28 08:22:45.351650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.905 [2024-11-28 08:22:45.351657] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:17.905 [2024-11-28 08:22:45.351662] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:17.905 [2024-11-28 08:22:45.351667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:94912 len:8 PRP1 0x0 PRP2 0x0 00:24:17.905 [2024-11-28 08:22:45.351673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.905 [2024-11-28 08:22:45.351680] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:17.905 [2024-11-28 08:22:45.351685] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:17.905 [2024-11-28 08:22:45.351690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:94920 len:8 PRP1 0x0 PRP2 0x0 00:24:17.905 [2024-11-28 08:22:45.351697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.905 [2024-11-28 08:22:45.351705] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:17.905 [2024-11-28 08:22:45.351710] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:17.905 [2024-11-28 08:22:45.351716] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:94928 len:8 PRP1 0x0 PRP2 0x0
00:24:17.905 [2024-11-28 08:22:45.351722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:17.905 [2024-11-28 08:22:45.351729] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:24:17.905 [2024-11-28 08:22:45.351734] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:24:17.905 [2024-11-28 08:22:45.351740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:94936 len:8 PRP1 0x0 PRP2 0x0
00:24:17.905 [2024-11-28 08:22:45.351746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:17.905 [2024-11-28 08:22:45.351789] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Start failover from 10.0.0.2:4420 to 10.0.0.2:4421
00:24:17.905 [2024-11-28 08:22:45.351811] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:24:17.905 [2024-11-28 08:22:45.351819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:17.905 [2024-11-28 08:22:45.351826] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:24:17.905 [2024-11-28 08:22:45.351833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:17.905 [2024-11-28 08:22:45.351840] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:24:17.905 [2024-11-28 08:22:45.351847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:17.905 [2024-11-28 08:22:45.351854] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:24:17.905 [2024-11-28 08:22:45.351861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:17.905 [2024-11-28 08:22:45.351873] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state.
00:24:17.905 [2024-11-28 08:22:45.354761] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller
00:24:17.905 [2024-11-28 08:22:45.354789] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1716370 (9): Bad file descriptor
00:24:17.905 [2024-11-28 08:22:45.498394] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful.
00:24:17.905 9977.50 IOPS, 38.97 MiB/s [2024-11-28T07:23:00.174Z] 10272.33 IOPS, 40.13 MiB/s [2024-11-28T07:23:00.174Z] 10452.25 IOPS, 40.83 MiB/s [2024-11-28T07:23:00.174Z]
00:24:17.905 [2024-11-28 08:22:48.925197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:52080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:17.905 [2024-11-28 08:22:48.925244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical *NOTICE* command/completion pairs repeated for READ lba:52088 through lba:52328, each aborted with "ABORTED - SQ DELETION (00/08)", elided ...]
00:24:17.906 [2024-11-28 08:22:48.925734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:52352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:17.906 [2024-11-28 08:22:48.925740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical *NOTICE* command/completion pairs repeated for WRITE lba:52360 through lba:52728, each aborted with "ABORTED - SQ DELETION (00/08)", elided ...]
00:24:17.908 [2024-11-28 08:22:48.926470] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:24:17.908 [2024-11-28 08:22:48.926477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:52736 len:8 PRP1 0x0 PRP2 0x0
00:24:17.908 [2024-11-28 08:22:48.926484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical "aborting queued i/o" / "Command completed manually" / WRITE / "ABORTED - SQ DELETION" sequences repeated for lba:52744 through lba:52864, elided ...]
[2024-11-28 08:22:48.926879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.908 [2024-11-28 08:22:48.926886] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:17.908 [2024-11-28 08:22:48.926891] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:17.908 [2024-11-28 08:22:48.926896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:52872 len:8 PRP1 0x0 PRP2 0x0 00:24:17.908 [2024-11-28 08:22:48.926902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.908 [2024-11-28 08:22:48.926909] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:17.908 [2024-11-28 08:22:48.926914] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:17.908 [2024-11-28 08:22:48.926919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:52880 len:8 PRP1 0x0 PRP2 0x0 00:24:17.908 [2024-11-28 08:22:48.926926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.908 [2024-11-28 08:22:48.926932] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:17.908 [2024-11-28 08:22:48.926937] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:17.908 [2024-11-28 08:22:48.926943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:52888 len:8 PRP1 0x0 PRP2 0x0 00:24:17.908 [2024-11-28 08:22:48.926954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.908 [2024-11-28 08:22:48.926961] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: 
*ERROR*: aborting queued i/o 00:24:17.908 [2024-11-28 08:22:48.926965] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:17.908 [2024-11-28 08:22:48.926973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:52896 len:8 PRP1 0x0 PRP2 0x0 00:24:17.908 [2024-11-28 08:22:48.926979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.908 [2024-11-28 08:22:48.926986] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:17.908 [2024-11-28 08:22:48.926991] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:17.908 [2024-11-28 08:22:48.926997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:52904 len:8 PRP1 0x0 PRP2 0x0 00:24:17.908 [2024-11-28 08:22:48.927003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.908 [2024-11-28 08:22:48.927010] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:17.908 [2024-11-28 08:22:48.927015] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:17.908 [2024-11-28 08:22:48.927021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:52912 len:8 PRP1 0x0 PRP2 0x0 00:24:17.909 [2024-11-28 08:22:48.927028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.909 [2024-11-28 08:22:48.927034] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:17.909 [2024-11-28 08:22:48.927039] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:17.909 [2024-11-28 08:22:48.927045] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:52920 len:8 PRP1 0x0 PRP2 0x0 00:24:17.909 [2024-11-28 08:22:48.927051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.909 [2024-11-28 08:22:48.927057] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:17.909 [2024-11-28 08:22:48.927062] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:17.909 [2024-11-28 08:22:48.927068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:52928 len:8 PRP1 0x0 PRP2 0x0 00:24:17.909 [2024-11-28 08:22:48.927075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.909 [2024-11-28 08:22:48.927082] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:17.909 [2024-11-28 08:22:48.927087] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:17.909 [2024-11-28 08:22:48.927092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:52936 len:8 PRP1 0x0 PRP2 0x0 00:24:17.909 [2024-11-28 08:22:48.927099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.909 [2024-11-28 08:22:48.927105] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:17.909 [2024-11-28 08:22:48.927110] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:17.909 [2024-11-28 08:22:48.927115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:52944 len:8 PRP1 0x0 PRP2 0x0 00:24:17.909 [2024-11-28 08:22:48.927122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:24:17.909 [2024-11-28 08:22:48.927128] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:17.909 [2024-11-28 08:22:48.927133] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:17.909 [2024-11-28 08:22:48.927139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:52952 len:8 PRP1 0x0 PRP2 0x0 00:24:17.909 [2024-11-28 08:22:48.927145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.909 [2024-11-28 08:22:48.927152] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:17.909 [2024-11-28 08:22:48.927157] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:17.909 [2024-11-28 08:22:48.927162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:52960 len:8 PRP1 0x0 PRP2 0x0 00:24:17.909 [2024-11-28 08:22:48.927169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.909 [2024-11-28 08:22:48.927176] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:17.909 [2024-11-28 08:22:48.927181] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:17.909 [2024-11-28 08:22:48.927186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:52968 len:8 PRP1 0x0 PRP2 0x0 00:24:17.909 [2024-11-28 08:22:48.927192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.909 [2024-11-28 08:22:48.927199] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:17.909 [2024-11-28 08:22:48.927205] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: 
Command completed manually: 00:24:17.909 [2024-11-28 08:22:48.927210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:52976 len:8 PRP1 0x0 PRP2 0x0 00:24:17.909 [2024-11-28 08:22:48.927217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.909 [2024-11-28 08:22:48.927224] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:17.909 [2024-11-28 08:22:48.927228] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:17.909 [2024-11-28 08:22:48.927234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:52984 len:8 PRP1 0x0 PRP2 0x0 00:24:17.909 [2024-11-28 08:22:48.927241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.909 [2024-11-28 08:22:48.927247] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:17.909 [2024-11-28 08:22:48.927252] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:17.909 [2024-11-28 08:22:48.927257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:52992 len:8 PRP1 0x0 PRP2 0x0 00:24:17.909 [2024-11-28 08:22:48.927265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.909 [2024-11-28 08:22:48.927272] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:17.909 [2024-11-28 08:22:48.927277] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:17.909 [2024-11-28 08:22:48.927283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:53000 len:8 PRP1 0x0 PRP2 0x0 00:24:17.909 [2024-11-28 08:22:48.927289] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.909 [2024-11-28 08:22:48.927296] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:17.909 [2024-11-28 08:22:48.927301] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:17.909 [2024-11-28 08:22:48.927306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:53008 len:8 PRP1 0x0 PRP2 0x0 00:24:17.909 [2024-11-28 08:22:48.927312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.909 [2024-11-28 08:22:48.927319] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:17.909 [2024-11-28 08:22:48.927324] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:17.909 [2024-11-28 08:22:48.927329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:53016 len:8 PRP1 0x0 PRP2 0x0 00:24:17.909 [2024-11-28 08:22:48.927336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.909 [2024-11-28 08:22:48.927342] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:17.909 [2024-11-28 08:22:48.927347] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:17.909 [2024-11-28 08:22:48.927353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:53024 len:8 PRP1 0x0 PRP2 0x0 00:24:17.909 [2024-11-28 08:22:48.927359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.909 [2024-11-28 08:22:48.927365] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:17.909 
[2024-11-28 08:22:48.927370] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:17.909 [2024-11-28 08:22:48.927377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:53032 len:8 PRP1 0x0 PRP2 0x0 00:24:17.909 [2024-11-28 08:22:48.927384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.909 [2024-11-28 08:22:48.927395] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:17.909 [2024-11-28 08:22:48.927400] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:17.909 [2024-11-28 08:22:48.927406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:53040 len:8 PRP1 0x0 PRP2 0x0 00:24:17.909 [2024-11-28 08:22:48.927412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.909 [2024-11-28 08:22:48.927419] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:17.909 [2024-11-28 08:22:48.927424] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:17.909 [2024-11-28 08:22:48.927430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:53048 len:8 PRP1 0x0 PRP2 0x0 00:24:17.909 [2024-11-28 08:22:48.927437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.909 [2024-11-28 08:22:48.927444] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:17.909 [2024-11-28 08:22:48.927448] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:17.909 [2024-11-28 08:22:48.927454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 
lba:53056 len:8 PRP1 0x0 PRP2 0x0 00:24:17.909 [2024-11-28 08:22:48.927462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.909 [2024-11-28 08:22:48.927469] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:17.909 [2024-11-28 08:22:48.927474] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:17.909 [2024-11-28 08:22:48.927480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:53064 len:8 PRP1 0x0 PRP2 0x0 00:24:17.909 [2024-11-28 08:22:48.927486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.909 [2024-11-28 08:22:48.927493] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:17.909 [2024-11-28 08:22:48.927499] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:17.909 [2024-11-28 08:22:48.927504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:53072 len:8 PRP1 0x0 PRP2 0x0 00:24:17.909 [2024-11-28 08:22:48.927511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.909 [2024-11-28 08:22:48.927517] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:17.909 [2024-11-28 08:22:48.927522] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:17.909 [2024-11-28 08:22:48.927528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:53080 len:8 PRP1 0x0 PRP2 0x0 00:24:17.909 [2024-11-28 08:22:48.927534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.909 [2024-11-28 08:22:48.927541] 
nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:17.909 [2024-11-28 08:22:48.927547] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:17.909 [2024-11-28 08:22:48.927554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:53088 len:8 PRP1 0x0 PRP2 0x0 00:24:17.909 [2024-11-28 08:22:48.927560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.909 [2024-11-28 08:22:48.927567] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:17.909 [2024-11-28 08:22:48.927572] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:17.910 [2024-11-28 08:22:48.927577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:53096 len:8 PRP1 0x0 PRP2 0x0 00:24:17.910 [2024-11-28 08:22:48.927585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.910 [2024-11-28 08:22:48.927592] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:17.910 [2024-11-28 08:22:48.927597] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:17.910 [2024-11-28 08:22:48.927602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:52336 len:8 PRP1 0x0 PRP2 0x0 00:24:17.910 [2024-11-28 08:22:48.927608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.910 [2024-11-28 08:22:48.927615] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:17.910 [2024-11-28 08:22:48.927620] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:17.910 [2024-11-28 
08:22:48.927625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:52344 len:8 PRP1 0x0 PRP2 0x0 00:24:17.910 [2024-11-28 08:22:48.927631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.910 [2024-11-28 08:22:48.927673] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:24:17.910 [2024-11-28 08:22:48.927696] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:17.910 [2024-11-28 08:22:48.927704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.910 [2024-11-28 08:22:48.927712] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:17.910 [2024-11-28 08:22:48.927719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.910 [2024-11-28 08:22:48.927726] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:17.910 [2024-11-28 08:22:48.927732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.910 [2024-11-28 08:22:48.927739] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:17.910 [2024-11-28 08:22:48.927746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.910 [2024-11-28 08:22:48.927752] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed 
state. 00:24:17.910 [2024-11-28 08:22:48.927775] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1716370 (9): Bad file descriptor 00:24:17.910 [2024-11-28 08:22:48.930639] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:24:17.910 [2024-11-28 08:22:48.956649] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller successful. 00:24:17.910 10415.00 IOPS, 40.68 MiB/s [2024-11-28T07:23:00.179Z] 10474.67 IOPS, 40.92 MiB/s [2024-11-28T07:23:00.179Z] 10507.57 IOPS, 41.05 MiB/s [2024-11-28T07:23:00.179Z] 10558.25 IOPS, 41.24 MiB/s [2024-11-28T07:23:00.179Z] 10593.44 IOPS, 41.38 MiB/s [2024-11-28T07:23:00.179Z] [2024-11-28 08:22:53.355049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:49368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.910 [2024-11-28 08:22:53.355084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.910 [2024-11-28 08:22:53.355100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:49376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.910 [2024-11-28 08:22:53.355109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.910 [2024-11-28 08:22:53.355124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:49384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.910 [2024-11-28 08:22:53.355131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.910 [2024-11-28 08:22:53.355139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:49392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.910 
[2024-11-28 08:22:53.355146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.910 [2024-11-28 08:22:53.355154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:49400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.910 [2024-11-28 08:22:53.355161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.910 [2024-11-28 08:22:53.355170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:49408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.910 [2024-11-28 08:22:53.355177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.910 [2024-11-28 08:22:53.355185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:49416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.910 [2024-11-28 08:22:53.355192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.910 [2024-11-28 08:22:53.355200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:49424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.910 [2024-11-28 08:22:53.355208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.910 [2024-11-28 08:22:53.355216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:49432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.910 [2024-11-28 08:22:53.355223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.910 [2024-11-28 08:22:53.355231] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:49440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.910 [2024-11-28 08:22:53.355237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.910 [2024-11-28 08:22:53.355245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:49448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.910 [2024-11-28 08:22:53.355252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.910 [2024-11-28 08:22:53.355261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:49456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.910 [2024-11-28 08:22:53.355267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.910 [2024-11-28 08:22:53.355275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:49464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.910 [2024-11-28 08:22:53.355282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.910 [2024-11-28 08:22:53.355290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:49472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.910 [2024-11-28 08:22:53.355297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.910 [2024-11-28 08:22:53.355306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:49480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.910 [2024-11-28 08:22:53.355314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:24:17.910 [2024-11-28 08:22:53.355323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:49488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.910 [2024-11-28 08:22:53.355329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.910 [2024-11-28 08:22:53.355337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:49496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.910 [2024-11-28 08:22:53.355344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.910 [2024-11-28 08:22:53.355352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:49504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.910 [2024-11-28 08:22:53.355358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.910 [2024-11-28 08:22:53.355367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:49512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.910 [2024-11-28 08:22:53.355374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.910 [2024-11-28 08:22:53.355382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:49520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.910 [2024-11-28 08:22:53.355389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.910 [2024-11-28 08:22:53.355397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:49528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.910 [2024-11-28 08:22:53.355405] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.910 [2024-11-28 08:22:53.355414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:49536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.910 [2024-11-28 08:22:53.355421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.910 [2024-11-28 08:22:53.355428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:49544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.910 [2024-11-28 08:22:53.355435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.910 [2024-11-28 08:22:53.355443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:49552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.910 [2024-11-28 08:22:53.355450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.911 [2024-11-28 08:22:53.355457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:49560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.911 [2024-11-28 08:22:53.355465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.911 [2024-11-28 08:22:53.355473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:49568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.911 [2024-11-28 08:22:53.355480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.911 [2024-11-28 08:22:53.355488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 
nsid:1 lba:49576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.911 [2024-11-28 08:22:53.355495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.911 [2024-11-28 08:22:53.355503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:49584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.911 [2024-11-28 08:22:53.355511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.911 [2024-11-28 08:22:53.355519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:49592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.911 [2024-11-28 08:22:53.355526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.911 [2024-11-28 08:22:53.355534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:49600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.911 [2024-11-28 08:22:53.355541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.911 [2024-11-28 08:22:53.355549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:49608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.911 [2024-11-28 08:22:53.355555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.911 [2024-11-28 08:22:53.355563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:49616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.911 [2024-11-28 08:22:53.355570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.911 
[2024-11-28 08:22:53.355577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:49624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.911 [2024-11-28 08:22:53.355584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.911 [2024-11-28 08:22:53.355592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:49632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.911 [2024-11-28 08:22:53.355599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.911 [2024-11-28 08:22:53.355607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:49640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.911 [2024-11-28 08:22:53.355613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.911 [2024-11-28 08:22:53.355621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:49648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.911 [2024-11-28 08:22:53.355628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.911 [2024-11-28 08:22:53.355636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:49656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.911 [2024-11-28 08:22:53.355643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.911 [2024-11-28 08:22:53.355651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:49664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.911 [2024-11-28 08:22:53.355658] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.911 [2024-11-28 08:22:53.355666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:49672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.911 [2024-11-28 08:22:53.355672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.911 [2024-11-28 08:22:53.355680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:49680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.911 [2024-11-28 08:22:53.355687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.911 [2024-11-28 08:22:53.355696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:49688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.911 [2024-11-28 08:22:53.355703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.911 [2024-11-28 08:22:53.355711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:49696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.911 [2024-11-28 08:22:53.355718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.911 [2024-11-28 08:22:53.355726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:49704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.911 [2024-11-28 08:22:53.355733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.911 [2024-11-28 08:22:53.355741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 
lba:49712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.911 [2024-11-28 08:22:53.355747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.911 [2024-11-28 08:22:53.355755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:49720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.911 [2024-11-28 08:22:53.355761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.911 [2024-11-28 08:22:53.355770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:49728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.911 [2024-11-28 08:22:53.355776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.911 [2024-11-28 08:22:53.355784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:49736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.911 [2024-11-28 08:22:53.355791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.911 [2024-11-28 08:22:53.355798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:49744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.911 [2024-11-28 08:22:53.355805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.911 [2024-11-28 08:22:53.355813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:49752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.911 [2024-11-28 08:22:53.355819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.911 [2024-11-28 
08:22:53.355827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:49760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.911 [2024-11-28 08:22:53.355834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.911 [2024-11-28 08:22:53.355842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:49768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.911 [2024-11-28 08:22:53.355849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.911 [2024-11-28 08:22:53.355857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:49776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.911 [2024-11-28 08:22:53.355864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.911 [2024-11-28 08:22:53.355872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:49784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.911 [2024-11-28 08:22:53.355881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.911 [2024-11-28 08:22:53.355889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:49792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.911 [2024-11-28 08:22:53.355896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.911 [2024-11-28 08:22:53.355904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:49800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.911 [2024-11-28 08:22:53.355911] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.911 [2024-11-28 08:22:53.355919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:49808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.911 [2024-11-28 08:22:53.355925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.911 [2024-11-28 08:22:53.355933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:49816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.911 [2024-11-28 08:22:53.355940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.911 [2024-11-28 08:22:53.355953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:49824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.911 [2024-11-28 08:22:53.355961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.911 [2024-11-28 08:22:53.355969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:49832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.911 [2024-11-28 08:22:53.355976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.911 [2024-11-28 08:22:53.355984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:49840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.911 [2024-11-28 08:22:53.355990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.911 [2024-11-28 08:22:53.355999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:49848 len:8 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:24:17.911 [2024-11-28 08:22:53.356005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.911 [2024-11-28 08:22:53.356013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:49856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.911 [2024-11-28 08:22:53.356020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.911 [2024-11-28 08:22:53.356028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:49864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.911 [2024-11-28 08:22:53.356034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.911 [2024-11-28 08:22:53.356043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:49872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.911 [2024-11-28 08:22:53.356049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.912 [2024-11-28 08:22:53.356057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:49880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.912 [2024-11-28 08:22:53.356064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.912 [2024-11-28 08:22:53.356073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:49888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.912 [2024-11-28 08:22:53.356080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.912 [2024-11-28 08:22:53.356089] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:49896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.912 [2024-11-28 08:22:53.356095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.912 [2024-11-28 08:22:53.356103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:49904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.912 [2024-11-28 08:22:53.356109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.912 [2024-11-28 08:22:53.356117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:49912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.912 [2024-11-28 08:22:53.356124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.912 [2024-11-28 08:22:53.356133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:49920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.912 [2024-11-28 08:22:53.356139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.912 [2024-11-28 08:22:53.356147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:49928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.912 [2024-11-28 08:22:53.356154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.912 [2024-11-28 08:22:53.356162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:49224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.912 [2024-11-28 08:22:53.356168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.912 [2024-11-28 08:22:53.356177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:49232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.912 [2024-11-28 08:22:53.356184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.912 [2024-11-28 08:22:53.356192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:49936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.912 [2024-11-28 08:22:53.356199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.912 [2024-11-28 08:22:53.356207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:49944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.912 [2024-11-28 08:22:53.356213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.912 [2024-11-28 08:22:53.356221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:49952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.912 [2024-11-28 08:22:53.356228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.912 [2024-11-28 08:22:53.356236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:49960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.912 [2024-11-28 08:22:53.356243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.912 [2024-11-28 08:22:53.356251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:49968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.912 
[2024-11-28 08:22:53.356257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.912 [2024-11-28 08:22:53.356270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:49976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.912 [2024-11-28 08:22:53.356277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.912 [2024-11-28 08:22:53.356285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:49984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.912 [2024-11-28 08:22:53.356292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.912 [2024-11-28 08:22:53.356300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:49992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.912 [2024-11-28 08:22:53.356307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.912 [2024-11-28 08:22:53.356315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:50000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.912 [2024-11-28 08:22:53.356322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.912 [2024-11-28 08:22:53.356330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:50008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.912 [2024-11-28 08:22:53.356337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.912 [2024-11-28 08:22:53.356345] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:50016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.912 [2024-11-28 08:22:53.356351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.912 [2024-11-28 08:22:53.356360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:50024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.912 [2024-11-28 08:22:53.356367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.912 [2024-11-28 08:22:53.356375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:50032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.912 [2024-11-28 08:22:53.356382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.912 [2024-11-28 08:22:53.356390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:50040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.912 [2024-11-28 08:22:53.356397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.912 [2024-11-28 08:22:53.356404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:50048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.912 [2024-11-28 08:22:53.356411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.912 [2024-11-28 08:22:53.356419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:50056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.912 [2024-11-28 08:22:53.356426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:24:17.912 [2024-11-28 08:22:53.356434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:50064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.912 [2024-11-28 08:22:53.356440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.912 [2024-11-28 08:22:53.356449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:50072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.912 [2024-11-28 08:22:53.356456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.912 [2024-11-28 08:22:53.356465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:50080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.912 [2024-11-28 08:22:53.356472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.912 [2024-11-28 08:22:53.356480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:50088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.912 [2024-11-28 08:22:53.356486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.912 [2024-11-28 08:22:53.356494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:50096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.912 [2024-11-28 08:22:53.356501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.912 [2024-11-28 08:22:53.356509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:50104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.912 [2024-11-28 08:22:53.356515] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.912 [2024-11-28 08:22:53.356523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:50112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.912 [2024-11-28 08:22:53.356530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.912 [2024-11-28 08:22:53.356538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:50120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.912 [2024-11-28 08:22:53.356544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.912 [2024-11-28 08:22:53.356564] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:17.912 [2024-11-28 08:22:53.356571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:50128 len:8 PRP1 0x0 PRP2 0x0 00:24:17.912 [2024-11-28 08:22:53.356577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.912 [2024-11-28 08:22:53.356614] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:17.912 [2024-11-28 08:22:53.356623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.912 [2024-11-28 08:22:53.356631] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:17.912 [2024-11-28 08:22:53.356637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.912 [2024-11-28 
08:22:53.356645] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:17.912 [2024-11-28 08:22:53.356652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.912 [2024-11-28 08:22:53.356659] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:17.912 [2024-11-28 08:22:53.356666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.912 [2024-11-28 08:22:53.356672] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1716370 is same with the state(6) to be set 00:24:17.912 [2024-11-28 08:22:53.356838] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:17.912 [2024-11-28 08:22:53.356847] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:17.912 [2024-11-28 08:22:53.356853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:50136 len:8 PRP1 0x0 PRP2 0x0 00:24:17.913 [2024-11-28 08:22:53.356860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.913 [2024-11-28 08:22:53.356869] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:17.913 [2024-11-28 08:22:53.356874] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:17.913 [2024-11-28 08:22:53.356880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:50144 len:8 PRP1 0x0 PRP2 0x0 00:24:17.913 [2024-11-28 08:22:53.356886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:24:17.913 [2024-11-28 08:22:53.356893] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:17.913 [2024-11-28 08:22:53.356898] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:17.913 [2024-11-28 08:22:53.356904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:50152 len:8 PRP1 0x0 PRP2 0x0 00:24:17.913 [2024-11-28 08:22:53.356910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.913 [2024-11-28 08:22:53.356917] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:17.913 [2024-11-28 08:22:53.356922] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:17.913 [2024-11-28 08:22:53.356927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:50160 len:8 PRP1 0x0 PRP2 0x0 00:24:17.913 [2024-11-28 08:22:53.356933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.913 [2024-11-28 08:22:53.356940] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:17.913 [2024-11-28 08:22:53.356945] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:17.913 [2024-11-28 08:22:53.356956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:50168 len:8 PRP1 0x0 PRP2 0x0 00:24:17.913 [2024-11-28 08:22:53.356963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.913 [2024-11-28 08:22:53.356969] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:17.913 [2024-11-28 08:22:53.356974] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed 
manually: 00:24:17.913 [2024-11-28 08:22:53.356980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:50176 len:8 PRP1 0x0 PRP2 0x0 00:24:17.913 [2024-11-28 08:22:53.356986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.913 [2024-11-28 08:22:53.356993] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:17.913 [2024-11-28 08:22:53.356998] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:17.913 [2024-11-28 08:22:53.357004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:50184 len:8 PRP1 0x0 PRP2 0x0 00:24:17.913 [2024-11-28 08:22:53.357011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.913 [2024-11-28 08:22:53.357018] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:17.913 [2024-11-28 08:22:53.357024] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:17.913 [2024-11-28 08:22:53.357029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:50192 len:8 PRP1 0x0 PRP2 0x0 00:24:17.913 [2024-11-28 08:22:53.357036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.913 [2024-11-28 08:22:53.357044] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:17.913 [2024-11-28 08:22:53.357050] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:17.913 [2024-11-28 08:22:53.357056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:50200 len:8 PRP1 0x0 PRP2 0x0 00:24:17.913 [2024-11-28 08:22:53.357062] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.913 [2024-11-28 08:22:53.357069] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:17.913 [2024-11-28 08:22:53.357075] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:17.913 [2024-11-28 08:22:53.357080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:50208 len:8 PRP1 0x0 PRP2 0x0 00:24:17.913 [2024-11-28 08:22:53.357086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.913 [2024-11-28 08:22:53.357093] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:17.913 [2024-11-28 08:22:53.357098] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:17.913 [2024-11-28 08:22:53.357103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:50216 len:8 PRP1 0x0 PRP2 0x0 00:24:17.913 [2024-11-28 08:22:53.357110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.913 [2024-11-28 08:22:53.357117] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:17.913 [2024-11-28 08:22:53.357121] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:17.913 [2024-11-28 08:22:53.357127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:50224 len:8 PRP1 0x0 PRP2 0x0 00:24:17.913 [2024-11-28 08:22:53.357133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.913 [2024-11-28 08:22:53.357140] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:17.913 
[2024-11-28 08:22:53.357145] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:17.913 [2024-11-28 08:22:53.357150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:50232 len:8 PRP1 0x0 PRP2 0x0 00:24:17.913 [2024-11-28 08:22:53.357157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.913 [2024-11-28 08:22:53.357163] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:17.913 [2024-11-28 08:22:53.357169] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:17.913 [2024-11-28 08:22:53.357174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:49240 len:8 PRP1 0x0 PRP2 0x0 00:24:17.913 [2024-11-28 08:22:53.357180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.913 [2024-11-28 08:22:53.357187] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:17.913 [2024-11-28 08:22:53.357191] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:17.913 [2024-11-28 08:22:53.357197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:49248 len:8 PRP1 0x0 PRP2 0x0 00:24:17.913 [2024-11-28 08:22:53.357204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.913 [2024-11-28 08:22:53.357210] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:17.913 [2024-11-28 08:22:53.357215] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:17.913 [2024-11-28 08:22:53.357220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 
lba:49256 len:8 PRP1 0x0 PRP2 0x0 00:24:17.913 [2024-11-28 08:22:53.357229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.913 [2024-11-28 08:22:53.357235] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:17.913 [2024-11-28 08:22:53.357240] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:17.913 [2024-11-28 08:22:53.357246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:49264 len:8 PRP1 0x0 PRP2 0x0 00:24:17.913 [2024-11-28 08:22:53.357253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.913 [2024-11-28 08:22:53.357259] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:17.913 [2024-11-28 08:22:53.357265] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:17.913 [2024-11-28 08:22:53.357271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:49272 len:8 PRP1 0x0 PRP2 0x0 00:24:17.913 [2024-11-28 08:22:53.357277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.913 [2024-11-28 08:22:53.357283] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:17.913 [2024-11-28 08:22:53.357288] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:17.913 [2024-11-28 08:22:53.357294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:49280 len:8 PRP1 0x0 PRP2 0x0 00:24:17.913 [2024-11-28 08:22:53.357300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.913 [2024-11-28 08:22:53.357307] 
nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:17.913 [2024-11-28 08:22:53.357311] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:17.913 [2024-11-28 08:22:53.357317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:49288 len:8 PRP1 0x0 PRP2 0x0 00:24:17.913 [2024-11-28 08:22:53.357323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.913 [2024-11-28 08:22:53.357330] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:17.913 [2024-11-28 08:22:53.357334] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:17.913 [2024-11-28 08:22:53.357340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:49296 len:8 PRP1 0x0 PRP2 0x0 00:24:17.913 [2024-11-28 08:22:53.357346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.913 [2024-11-28 08:22:53.357353] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:17.913 [2024-11-28 08:22:53.357358] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:17.913 [2024-11-28 08:22:53.357363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:49304 len:8 PRP1 0x0 PRP2 0x0 00:24:17.913 [2024-11-28 08:22:53.357369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.913 [2024-11-28 08:22:53.357376] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:17.913 [2024-11-28 08:22:53.357381] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:17.913 [2024-11-28 08:22:53.357386] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:49312 len:8 PRP1 0x0 PRP2 0x0 00:24:17.913 [2024-11-28 08:22:53.357393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.913 [2024-11-28 08:22:53.357400] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:17.913 [2024-11-28 08:22:53.357404] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:17.913 [2024-11-28 08:22:53.357411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:49320 len:8 PRP1 0x0 PRP2 0x0 00:24:17.913 [2024-11-28 08:22:53.357418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.913 [2024-11-28 08:22:53.357425] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:17.914 [2024-11-28 08:22:53.357429] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:17.914 [2024-11-28 08:22:53.357435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:49328 len:8 PRP1 0x0 PRP2 0x0 00:24:17.914 [2024-11-28 08:22:53.357441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.914 [2024-11-28 08:22:53.357448] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:17.914 [2024-11-28 08:22:53.357453] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:17.914 [2024-11-28 08:22:53.357459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:49336 len:8 PRP1 0x0 PRP2 0x0 00:24:17.914 [2024-11-28 08:22:53.357465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.914 [2024-11-28 08:22:53.357471] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:17.914 [2024-11-28 08:22:53.357476] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:17.914 [2024-11-28 08:22:53.357482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:49344 len:8 PRP1 0x0 PRP2 0x0 00:24:17.914 [2024-11-28 08:22:53.357488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.914 [2024-11-28 08:22:53.357494] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:17.914 [2024-11-28 08:22:53.357499] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:17.914 [2024-11-28 08:22:53.357504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:49352 len:8 PRP1 0x0 PRP2 0x0 00:24:17.914 [2024-11-28 08:22:53.357511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.914 [2024-11-28 08:22:53.357517] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:17.914 [2024-11-28 08:22:53.357522] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:17.914 [2024-11-28 08:22:53.357528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:50240 len:8 PRP1 0x0 PRP2 0x0 00:24:17.914 [2024-11-28 08:22:53.357534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.914 [2024-11-28 08:22:53.357540] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:17.914 [2024-11-28 08:22:53.357545] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: 
*NOTICE*: Command completed manually: 00:24:17.914 [2024-11-28 08:22:53.357551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:49360 len:8 PRP1 0x0 PRP2 0x0 00:24:17.914 [2024-11-28 08:22:53.357557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.914 [2024-11-28 08:22:53.357564] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:17.914 [2024-11-28 08:22:53.357568] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:17.914 [2024-11-28 08:22:53.357574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:49368 len:8 PRP1 0x0 PRP2 0x0 00:24:17.914 [2024-11-28 08:22:53.357583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.914 [2024-11-28 08:22:53.357591] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:17.914 [2024-11-28 08:22:53.357597] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:17.914 [2024-11-28 08:22:53.357602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:49376 len:8 PRP1 0x0 PRP2 0x0 00:24:17.914 [2024-11-28 08:22:53.357609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.914 [2024-11-28 08:22:53.357616] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:17.914 [2024-11-28 08:22:53.357621] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:17.914 [2024-11-28 08:22:53.357626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:49384 len:8 PRP1 0x0 PRP2 0x0 00:24:17.914 [2024-11-28 08:22:53.357633] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.914 [2024-11-28 08:22:53.357639] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:17.914 [2024-11-28 08:22:53.357644] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:17.914 [2024-11-28 08:22:53.357649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:49392 len:8 PRP1 0x0 PRP2 0x0 00:24:17.914 [2024-11-28 08:22:53.357656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.914 [2024-11-28 08:22:53.357663] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:17.914 [2024-11-28 08:22:53.357668] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:17.914 [2024-11-28 08:22:53.357673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:49400 len:8 PRP1 0x0 PRP2 0x0 00:24:17.914 [2024-11-28 08:22:53.357680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.914 [2024-11-28 08:22:53.357686] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:17.914 [2024-11-28 08:22:53.357691] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:17.914 [2024-11-28 08:22:53.357697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:49408 len:8 PRP1 0x0 PRP2 0x0 00:24:17.914 [2024-11-28 08:22:53.357703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.914 [2024-11-28 08:22:53.357710] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 
00:24:17.914 [2024-11-28 08:22:53.357714] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:17.914 [2024-11-28 08:22:53.357720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:49416 len:8 PRP1 0x0 PRP2 0x0 00:24:17.914 [2024-11-28 08:22:53.357726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.914 [2024-11-28 08:22:53.357733] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:17.914 [2024-11-28 08:22:53.357738] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:17.914 [2024-11-28 08:22:53.357743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:49424 len:8 PRP1 0x0 PRP2 0x0 00:24:17.914 [2024-11-28 08:22:53.357749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.914 [2024-11-28 08:22:53.357756] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:17.914 [2024-11-28 08:22:53.357761] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:17.914 [2024-11-28 08:22:53.357767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:49432 len:8 PRP1 0x0 PRP2 0x0 00:24:17.914 [2024-11-28 08:22:53.357776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.914 [2024-11-28 08:22:53.357783] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:17.914 [2024-11-28 08:22:53.357788] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:17.914 [2024-11-28 08:22:53.357793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:0 nsid:1 lba:49440 len:8 PRP1 0x0 PRP2 0x0 00:24:17.914 [2024-11-28 08:22:53.357799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.914 [2024-11-28 08:22:53.357806] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:17.914 [2024-11-28 08:22:53.357811] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:17.914 [2024-11-28 08:22:53.357816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:49448 len:8 PRP1 0x0 PRP2 0x0 00:24:17.914 [2024-11-28 08:22:53.357823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.914 [2024-11-28 08:22:53.357830] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:17.914 [2024-11-28 08:22:53.357835] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:17.914 [2024-11-28 08:22:53.357840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:49456 len:8 PRP1 0x0 PRP2 0x0 00:24:17.914 [2024-11-28 08:22:53.357846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.914 [2024-11-28 08:22:53.357853] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:17.914 [2024-11-28 08:22:53.357858] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:17.914 [2024-11-28 08:22:53.357863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:49464 len:8 PRP1 0x0 PRP2 0x0 00:24:17.914 [2024-11-28 08:22:53.357869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.914 [2024-11-28 
08:22:53.357878] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:17.914 [2024-11-28 08:22:53.357883] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:17.914 [2024-11-28 08:22:53.357888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:49472 len:8 PRP1 0x0 PRP2 0x0 00:24:17.914 [2024-11-28 08:22:53.357894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.914 [2024-11-28 08:22:53.357901] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:17.914 [2024-11-28 08:22:53.357906] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:17.914 [2024-11-28 08:22:53.357911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:49480 len:8 PRP1 0x0 PRP2 0x0 00:24:17.914 [2024-11-28 08:22:53.357918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.914 [2024-11-28 08:22:53.357925] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:17.914 [2024-11-28 08:22:53.357930] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:17.914 [2024-11-28 08:22:53.357935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:49488 len:8 PRP1 0x0 PRP2 0x0 00:24:17.915 [2024-11-28 08:22:53.357941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.915 [2024-11-28 08:22:53.357951] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:17.915 [2024-11-28 08:22:53.357956] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:17.915 
[2024-11-28 08:22:53.357964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:49496 len:8 PRP1 0x0 PRP2 0x0 00:24:17.915 [2024-11-28 08:22:53.357972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.915 [2024-11-28 08:22:53.357978] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:17.915 [2024-11-28 08:22:53.357984] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:17.915 [2024-11-28 08:22:53.357989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:49504 len:8 PRP1 0x0 PRP2 0x0 00:24:17.915 [2024-11-28 08:22:53.357995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.915 [2024-11-28 08:22:53.358002] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:17.915 [2024-11-28 08:22:53.358007] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:17.915 [2024-11-28 08:22:53.358013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:49512 len:8 PRP1 0x0 PRP2 0x0 00:24:17.915 [2024-11-28 08:22:53.358019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.915 [2024-11-28 08:22:53.358026] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:17.915 [2024-11-28 08:22:53.358031] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:17.915 [2024-11-28 08:22:53.358037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:49520 len:8 PRP1 0x0 PRP2 0x0 00:24:17.915 [2024-11-28 08:22:53.358044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.915 [2024-11-28 08:22:53.362514] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:17.915 [2024-11-28 08:22:53.362524] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:17.915 [2024-11-28 08:22:53.362531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:49528 len:8 PRP1 0x0 PRP2 0x0 00:24:17.915 [2024-11-28 08:22:53.362539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.915 [2024-11-28 08:22:53.362545] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:17.915 [2024-11-28 08:22:53.362551] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:17.915 [2024-11-28 08:22:53.362557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:49536 len:8 PRP1 0x0 PRP2 0x0 00:24:17.915 [2024-11-28 08:22:53.362563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.915 [2024-11-28 08:22:53.362569] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:17.915 [2024-11-28 08:22:53.362574] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:17.915 [2024-11-28 08:22:53.362580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:49544 len:8 PRP1 0x0 PRP2 0x0 00:24:17.915 [2024-11-28 08:22:53.362587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.915 [2024-11-28 08:22:53.362593] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:17.915 [2024-11-28 08:22:53.362599] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:17.915 [2024-11-28 08:22:53.362605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:49552 len:8 PRP1 0x0 PRP2 0x0 00:24:17.915 [2024-11-28 08:22:53.362611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.915 [2024-11-28 08:22:53.362618] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:17.915 [2024-11-28 08:22:53.362624] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:17.915 [2024-11-28 08:22:53.362630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:49560 len:8 PRP1 0x0 PRP2 0x0 00:24:17.915 [2024-11-28 08:22:53.362637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.915 [2024-11-28 08:22:53.362644] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:17.915 [2024-11-28 08:22:53.362649] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:17.915 [2024-11-28 08:22:53.362655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:49568 len:8 PRP1 0x0 PRP2 0x0 00:24:17.915 [2024-11-28 08:22:53.362662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.915 [2024-11-28 08:22:53.362668] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:17.915 [2024-11-28 08:22:53.362673] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:17.915 [2024-11-28 08:22:53.362679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:49576 len:8 PRP1 0x0 PRP2 0x0 00:24:17.915 
[2024-11-28 08:22:53.362685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.915 [2024-11-28 08:22:53.362692] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:17.915 [2024-11-28 08:22:53.362697] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:17.915 [2024-11-28 08:22:53.362702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:49584 len:8 PRP1 0x0 PRP2 0x0 00:24:17.915 [2024-11-28 08:22:53.362708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.915 [2024-11-28 08:22:53.362715] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:17.915 [2024-11-28 08:22:53.362720] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:17.915 [2024-11-28 08:22:53.362725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:49592 len:8 PRP1 0x0 PRP2 0x0 00:24:17.915 [2024-11-28 08:22:53.362732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.915 [2024-11-28 08:22:53.362738] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:17.915 [2024-11-28 08:22:53.362744] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:17.915 [2024-11-28 08:22:53.362749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:49600 len:8 PRP1 0x0 PRP2 0x0 00:24:17.915 [2024-11-28 08:22:53.362755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.915 [2024-11-28 08:22:53.362762] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: 
*ERROR*: aborting queued i/o 00:24:17.915 [2024-11-28 08:22:53.362767] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:17.915 [2024-11-28 08:22:53.362773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:49608 len:8 PRP1 0x0 PRP2 0x0 00:24:17.915 [2024-11-28 08:22:53.362779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.915 [2024-11-28 08:22:53.362785] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:17.915 [2024-11-28 08:22:53.362790] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:17.915 [2024-11-28 08:22:53.362795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:49616 len:8 PRP1 0x0 PRP2 0x0 00:24:17.915 [2024-11-28 08:22:53.362802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.915 [2024-11-28 08:22:53.362810] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:17.915 [2024-11-28 08:22:53.362816] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:17.915 [2024-11-28 08:22:53.362821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:49624 len:8 PRP1 0x0 PRP2 0x0 00:24:17.915 [2024-11-28 08:22:53.362828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.915 [2024-11-28 08:22:53.362835] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:17.915 [2024-11-28 08:22:53.362839] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:17.915 [2024-11-28 08:22:53.362845] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:49632 len:8 PRP1 0x0 PRP2 0x0 00:24:17.915 [2024-11-28 08:22:53.362851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.915 [2024-11-28 08:22:53.362857] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:17.915 [2024-11-28 08:22:53.362862] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:17.915 [2024-11-28 08:22:53.362868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:49640 len:8 PRP1 0x0 PRP2 0x0 00:24:17.915 [2024-11-28 08:22:53.362874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.915 [2024-11-28 08:22:53.362881] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:17.915 [2024-11-28 08:22:53.362885] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:17.915 [2024-11-28 08:22:53.362891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:49648 len:8 PRP1 0x0 PRP2 0x0 00:24:17.915 [2024-11-28 08:22:53.362897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.915 [2024-11-28 08:22:53.362904] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:17.915 [2024-11-28 08:22:53.362909] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:17.915 [2024-11-28 08:22:53.362914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:49656 len:8 PRP1 0x0 PRP2 0x0 00:24:17.915 [2024-11-28 08:22:53.362921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:24:17.915 [2024-11-28 08:22:53.362927] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:17.915 [2024-11-28 08:22:53.362932] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:17.915 [2024-11-28 08:22:53.362938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:49664 len:8 PRP1 0x0 PRP2 0x0 00:24:17.915 [2024-11-28 08:22:53.362944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.915 [2024-11-28 08:22:53.362953] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:17.915 [2024-11-28 08:22:53.362959] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:17.915 [2024-11-28 08:22:53.362964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:49672 len:8 PRP1 0x0 PRP2 0x0 00:24:17.916 [2024-11-28 08:22:53.362970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.916 [2024-11-28 08:22:53.362977] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:17.916 [2024-11-28 08:22:53.362982] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:17.916 [2024-11-28 08:22:53.362987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:49680 len:8 PRP1 0x0 PRP2 0x0 00:24:17.916 [2024-11-28 08:22:53.362995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.916 [2024-11-28 08:22:53.363001] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:17.916 [2024-11-28 08:22:53.363006] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: 
Command completed manually: 00:24:17.916 [2024-11-28 08:22:53.363011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:49688 len:8 PRP1 0x0 PRP2 0x0 00:24:17.916 [2024-11-28 08:22:53.363018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.916 [2024-11-28 08:22:53.363025] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:17.916 [2024-11-28 08:22:53.363029] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:17.916 [2024-11-28 08:22:53.363035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:49696 len:8 PRP1 0x0 PRP2 0x0 00:24:17.916 [2024-11-28 08:22:53.363041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.916 [2024-11-28 08:22:53.363047] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:17.916 [2024-11-28 08:22:53.363052] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:17.916 [2024-11-28 08:22:53.363057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:49704 len:8 PRP1 0x0 PRP2 0x0 00:24:17.916 [2024-11-28 08:22:53.363064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.916 [2024-11-28 08:22:53.363071] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:17.916 [2024-11-28 08:22:53.363076] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:17.916 [2024-11-28 08:22:53.363081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:49712 len:8 PRP1 0x0 PRP2 0x0 00:24:17.916 [2024-11-28 08:22:53.363087] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.916 [2024-11-28 08:22:53.363094] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:17.916 [2024-11-28 08:22:53.363099] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:17.916 [2024-11-28 08:22:53.363104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:49720 len:8 PRP1 0x0 PRP2 0x0 00:24:17.916 [2024-11-28 08:22:53.363111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.916 [2024-11-28 08:22:53.363117] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:17.916 [2024-11-28 08:22:53.363122] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:17.916 [2024-11-28 08:22:53.363127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:49728 len:8 PRP1 0x0 PRP2 0x0 00:24:17.916 [2024-11-28 08:22:53.363133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.916 [2024-11-28 08:22:53.363140] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:17.916 [2024-11-28 08:22:53.363145] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:17.916 [2024-11-28 08:22:53.363150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:49736 len:8 PRP1 0x0 PRP2 0x0 00:24:17.916 [2024-11-28 08:22:53.363157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.916 [2024-11-28 08:22:53.363163] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:17.916 
[2024-11-28 08:22:53.363168] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:17.916 [2024-11-28 08:22:53.363175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:49744 len:8 PRP1 0x0 PRP2 0x0 00:24:17.916 [2024-11-28 08:22:53.363181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.916 [2024-11-28 08:22:53.363187] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:17.916 [2024-11-28 08:22:53.363192] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:17.916 [2024-11-28 08:22:53.363198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:49752 len:8 PRP1 0x0 PRP2 0x0 00:24:17.916 [2024-11-28 08:22:53.363204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.916 [2024-11-28 08:22:53.363211] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:17.916 [2024-11-28 08:22:53.363216] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:17.916 [2024-11-28 08:22:53.363221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:49760 len:8 PRP1 0x0 PRP2 0x0 00:24:17.916 [2024-11-28 08:22:53.363227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.916 [2024-11-28 08:22:53.363234] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:17.916 [2024-11-28 08:22:53.363239] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:17.916 [2024-11-28 08:22:53.363244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 
lba:49768 len:8 PRP1 0x0 PRP2 0x0 00:24:17.916 [2024-11-28 08:22:53.363251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.916 [2024-11-28 08:22:53.363257] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:17.916 [2024-11-28 08:22:53.363262] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:17.916 [2024-11-28 08:22:53.363267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:49776 len:8 PRP1 0x0 PRP2 0x0 00:24:17.916 [2024-11-28 08:22:53.363273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.916 [2024-11-28 08:22:53.363280] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:17.916 [2024-11-28 08:22:53.363285] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:17.916 [2024-11-28 08:22:53.363290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:49784 len:8 PRP1 0x0 PRP2 0x0 00:24:17.916 [2024-11-28 08:22:53.363296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.916 [2024-11-28 08:22:53.363303] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:17.916 [2024-11-28 08:22:53.363308] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:17.916 [2024-11-28 08:22:53.363313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:49792 len:8 PRP1 0x0 PRP2 0x0 00:24:17.916 [2024-11-28 08:22:53.363320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.916 [2024-11-28 08:22:53.363327] 
nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:17.916 [2024-11-28 08:22:53.363332] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:17.916 [2024-11-28 08:22:53.363337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:49800 len:8 PRP1 0x0 PRP2 0x0 00:24:17.916 [2024-11-28 08:22:53.363343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.916 [2024-11-28 08:22:53.363351] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:17.916 [2024-11-28 08:22:53.363356] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:17.916 [2024-11-28 08:22:53.363361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:49808 len:8 PRP1 0x0 PRP2 0x0 00:24:17.916 [2024-11-28 08:22:53.363368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.916 [2024-11-28 08:22:53.363374] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:17.916 [2024-11-28 08:22:53.363379] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:17.916 [2024-11-28 08:22:53.363385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:49816 len:8 PRP1 0x0 PRP2 0x0 00:24:17.916 [2024-11-28 08:22:53.363392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.916 [2024-11-28 08:22:53.363398] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:17.916 [2024-11-28 08:22:53.363403] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:17.916 [2024-11-28 
08:22:53.363408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:49824 len:8 PRP1 0x0 PRP2 0x0 00:24:17.916 [2024-11-28 08:22:53.363415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.916 [2024-11-28 08:22:53.363421] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:17.916 [2024-11-28 08:22:53.363426] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:17.916 [2024-11-28 08:22:53.363432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:49832 len:8 PRP1 0x0 PRP2 0x0 00:24:17.916 [2024-11-28 08:22:53.363438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.916 [2024-11-28 08:22:53.363444] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:17.916 [2024-11-28 08:22:53.363449] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:17.916 [2024-11-28 08:22:53.363455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:49840 len:8 PRP1 0x0 PRP2 0x0 00:24:17.916 [2024-11-28 08:22:53.363461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.916 [2024-11-28 08:22:53.363467] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:17.916 [2024-11-28 08:22:53.363472] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:17.916 [2024-11-28 08:22:53.363478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:49848 len:8 PRP1 0x0 PRP2 0x0 00:24:17.916 [2024-11-28 08:22:53.363484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.916 [2024-11-28 08:22:53.363491] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:17.916 [2024-11-28 08:22:53.363496] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:17.916 [2024-11-28 08:22:53.363501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:49856 len:8 PRP1 0x0 PRP2 0x0 00:24:17.916 [2024-11-28 08:22:53.363507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.916 [2024-11-28 08:22:53.363514] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:17.917 [2024-11-28 08:22:53.363519] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:17.917 [2024-11-28 08:22:53.363524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:49864 len:8 PRP1 0x0 PRP2 0x0 00:24:17.917 [2024-11-28 08:22:53.363532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.917 [2024-11-28 08:22:53.363539] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:17.917 [2024-11-28 08:22:53.363544] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:17.917 [2024-11-28 08:22:53.363549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:49872 len:8 PRP1 0x0 PRP2 0x0 00:24:17.917 [2024-11-28 08:22:53.363555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.917 [2024-11-28 08:22:53.363561] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:17.917 [2024-11-28 08:22:53.363566] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:17.917 [2024-11-28 08:22:53.363571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:49880 len:8 PRP1 0x0 PRP2 0x0 00:24:17.917 [2024-11-28 08:22:53.363579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.917 [2024-11-28 08:22:53.363586] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:17.917 [2024-11-28 08:22:53.363591] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:17.917 [2024-11-28 08:22:53.363596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:49888 len:8 PRP1 0x0 PRP2 0x0 00:24:17.917 [2024-11-28 08:22:53.363602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.917 [2024-11-28 08:22:53.363609] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:17.917 [2024-11-28 08:22:53.363614] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:17.917 [2024-11-28 08:22:53.363619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:49896 len:8 PRP1 0x0 PRP2 0x0 00:24:17.917 [2024-11-28 08:22:53.363626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.917 [2024-11-28 08:22:53.363633] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:17.917 [2024-11-28 08:22:53.363637] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:17.917 [2024-11-28 08:22:53.363643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:49904 len:8 PRP1 0x0 PRP2 0x0 00:24:17.917 
[2024-11-28 08:22:53.363649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.917 [2024-11-28 08:22:53.363655] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:17.917 [2024-11-28 08:22:53.363660] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:17.917 [2024-11-28 08:22:53.363666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:49912 len:8 PRP1 0x0 PRP2 0x0 00:24:17.917 [2024-11-28 08:22:53.363672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.917 [2024-11-28 08:22:53.363679] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:17.917 [2024-11-28 08:22:53.363684] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:17.917 [2024-11-28 08:22:53.363689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:49920 len:8 PRP1 0x0 PRP2 0x0 00:24:17.917 [2024-11-28 08:22:53.363696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.917 [2024-11-28 08:22:53.363702] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:17.917 [2024-11-28 08:22:53.363707] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:17.917 [2024-11-28 08:22:53.363716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:49928 len:8 PRP1 0x0 PRP2 0x0 00:24:17.917 [2024-11-28 08:22:53.363722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.917 [2024-11-28 08:22:53.363729] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: 
*ERROR*: aborting queued i/o 00:24:17.917 [2024-11-28 08:22:53.363734] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:17.917 [2024-11-28 08:22:53.363739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:49224 len:8 PRP1 0x0 PRP2 0x0 00:24:17.917 [2024-11-28 08:22:53.363745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.917 [2024-11-28 08:22:53.363752] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:17.917 [2024-11-28 08:22:53.363757] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:17.917 [2024-11-28 08:22:53.363762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:49232 len:8 PRP1 0x0 PRP2 0x0 00:24:17.917 [2024-11-28 08:22:53.363770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.917 [2024-11-28 08:22:53.363777] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:17.917 [2024-11-28 08:22:53.363783] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:17.917 [2024-11-28 08:22:53.363788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:49936 len:8 PRP1 0x0 PRP2 0x0 00:24:17.917 [2024-11-28 08:22:53.363794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.917 [2024-11-28 08:22:53.363801] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:17.917 [2024-11-28 08:22:53.363806] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:17.917 [2024-11-28 08:22:53.363812] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:49944 len:8 PRP1 0x0 PRP2 0x0 00:24:17.917 [2024-11-28 08:22:53.363818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.917 [2024-11-28 08:22:53.363825] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:17.917 [2024-11-28 08:22:53.363829] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:17.917 [2024-11-28 08:22:53.363835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:49952 len:8 PRP1 0x0 PRP2 0x0 00:24:17.917 [2024-11-28 08:22:53.363841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.917 [2024-11-28 08:22:53.363848] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:17.917 [2024-11-28 08:22:53.363853] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:17.917 [2024-11-28 08:22:53.363858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:49960 len:8 PRP1 0x0 PRP2 0x0 00:24:17.917 [2024-11-28 08:22:53.363864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.917 [2024-11-28 08:22:53.363871] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:17.917 [2024-11-28 08:22:53.363876] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:17.917 [2024-11-28 08:22:53.363881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:49968 len:8 PRP1 0x0 PRP2 0x0 00:24:17.917 [2024-11-28 08:22:53.363888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:24:17.917 [2024-11-28 08:22:53.363894] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:17.917 [2024-11-28 08:22:53.363900] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:17.917 [2024-11-28 08:22:53.363906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:49976 len:8 PRP1 0x0 PRP2 0x0 00:24:17.917 [2024-11-28 08:22:53.363912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.917 [2024-11-28 08:22:53.363919] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:17.917 [2024-11-28 08:22:53.363924] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:17.917 [2024-11-28 08:22:53.363929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:49984 len:8 PRP1 0x0 PRP2 0x0 00:24:17.917 [2024-11-28 08:22:53.363936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.917 [2024-11-28 08:22:53.363942] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:17.917 [2024-11-28 08:22:53.363951] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:17.917 [2024-11-28 08:22:53.363957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:49992 len:8 PRP1 0x0 PRP2 0x0 00:24:17.917 [2024-11-28 08:22:53.363964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.917 [2024-11-28 08:22:53.363971] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:17.917 [2024-11-28 08:22:53.363976] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed 
manually: 00:24:17.917 [2024-11-28 08:22:53.363981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:50000 len:8 PRP1 0x0 PRP2 0x0 00:24:17.917 [2024-11-28 08:22:53.363988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.917 [2024-11-28 08:22:53.363994] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:17.917 [2024-11-28 08:22:53.363999] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:17.917 [2024-11-28 08:22:53.364005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:50008 len:8 PRP1 0x0 PRP2 0x0 00:24:17.917 [2024-11-28 08:22:53.364011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.917 [2024-11-28 08:22:53.364018] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:17.917 [2024-11-28 08:22:53.364023] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:17.917 [2024-11-28 08:22:53.364028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:50016 len:8 PRP1 0x0 PRP2 0x0 00:24:17.917 [2024-11-28 08:22:53.364035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.917 [2024-11-28 08:22:53.364041] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:17.917 [2024-11-28 08:22:53.364046] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:17.917 [2024-11-28 08:22:53.364052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:50024 len:8 PRP1 0x0 PRP2 0x0 00:24:17.917 [2024-11-28 08:22:53.364058] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.917 [2024-11-28 08:22:53.364064] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:17.917 [2024-11-28 08:22:53.364069] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:17.918 [2024-11-28 08:22:53.364075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:50032 len:8 PRP1 0x0 PRP2 0x0 00:24:17.918 [2024-11-28 08:22:53.364081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.918 [2024-11-28 08:22:53.364089] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:17.918 [2024-11-28 08:22:53.364094] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:17.918 [2024-11-28 08:22:53.364099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:50040 len:8 PRP1 0x0 PRP2 0x0 00:24:17.918 [2024-11-28 08:22:53.364106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.918 [2024-11-28 08:22:53.364112] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:17.918 [2024-11-28 08:22:53.364117] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:17.918 [2024-11-28 08:22:53.364123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:50048 len:8 PRP1 0x0 PRP2 0x0 00:24:17.918 [2024-11-28 08:22:53.364129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.918 [2024-11-28 08:22:53.364136] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:17.918 
[2024-11-28 08:22:53.364141] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:17.918 [2024-11-28 08:22:53.364146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:50056 len:8 PRP1 0x0 PRP2 0x0 00:24:17.918 [2024-11-28 08:22:53.364154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.918 [2024-11-28 08:22:53.364161] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:17.918 [2024-11-28 08:22:53.364166] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:17.918 [2024-11-28 08:22:53.364178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:50064 len:8 PRP1 0x0 PRP2 0x0 00:24:17.918 [2024-11-28 08:22:53.364185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.918 [2024-11-28 08:22:53.364191] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:17.918 [2024-11-28 08:22:53.364196] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:17.918 [2024-11-28 08:22:53.364202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:50072 len:8 PRP1 0x0 PRP2 0x0 00:24:17.918 [2024-11-28 08:22:53.364208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.918 [2024-11-28 08:22:53.364215] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:17.918 [2024-11-28 08:22:53.364220] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:17.918 [2024-11-28 08:22:53.364225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 
lba:50080 len:8 PRP1 0x0 PRP2 0x0 00:24:17.918 [2024-11-28 08:22:53.368672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.918 [2024-11-28 08:22:53.368681] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:17.918 [2024-11-28 08:22:53.368687] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:17.918 [2024-11-28 08:22:53.368694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:50088 len:8 PRP1 0x0 PRP2 0x0 00:24:17.918 [2024-11-28 08:22:53.368701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.918 [2024-11-28 08:22:53.368707] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:17.918 [2024-11-28 08:22:53.368712] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:17.918 [2024-11-28 08:22:53.368717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:50096 len:8 PRP1 0x0 PRP2 0x0 00:24:17.918 [2024-11-28 08:22:53.368725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.918 [2024-11-28 08:22:53.368732] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:17.918 [2024-11-28 08:22:53.368737] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:17.918 [2024-11-28 08:22:53.368743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:50104 len:8 PRP1 0x0 PRP2 0x0 00:24:17.918 [2024-11-28 08:22:53.368749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.918 [2024-11-28 08:22:53.368756] 
nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:24:17.918 [2024-11-28 08:22:53.368760] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:24:17.918 [2024-11-28 08:22:53.368766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:50112 len:8 PRP1 0x0 PRP2 0x0
00:24:17.918 [2024-11-28 08:22:53.368772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:17.918 [2024-11-28 08:22:53.368779] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:24:17.918 [2024-11-28 08:22:53.368784] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:24:17.918 [2024-11-28 08:22:53.368789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:50120 len:8 PRP1 0x0 PRP2 0x0
00:24:17.918 [2024-11-28 08:22:53.368796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:17.918 [2024-11-28 08:22:53.368803] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:24:17.918 [2024-11-28 08:22:53.368807] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:24:17.918 [2024-11-28 08:22:53.368813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:50128 len:8 PRP1 0x0 PRP2 0x0
00:24:17.918 [2024-11-28 08:22:53.368819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:17.918 [2024-11-28 08:22:53.368863] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Start failover from 10.0.0.2:4422 to 10.0.0.2:4420
00:24:17.918 [2024-11-28 08:22:53.368872] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state.
00:24:17.918 [2024-11-28 08:22:53.368904] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1716370 (9): Bad file descriptor
00:24:17.918 [2024-11-28 08:22:53.372655] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller
00:24:17.918 [2024-11-28 08:22:53.441762] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 6] Resetting controller successful.
00:24:17.918 10527.70 IOPS, 41.12 MiB/s [2024-11-28T07:23:00.187Z] 10567.27 IOPS, 41.28 MiB/s [2024-11-28T07:23:00.187Z] 10590.50 IOPS, 41.37 MiB/s [2024-11-28T07:23:00.187Z] 10617.62 IOPS, 41.48 MiB/s [2024-11-28T07:23:00.187Z] 10631.79 IOPS, 41.53 MiB/s [2024-11-28T07:23:00.187Z] 10639.13 IOPS, 41.56 MiB/s
00:24:17.918 Latency(us)
00:24:17.918 [2024-11-28T07:23:00.187Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:24:17.918 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:24:17.918 Verification LBA range: start 0x0 length 0x4000
00:24:17.918 NVMe0n1 : 15.01 10639.86 41.56 768.31 0.00 11197.57 448.78 18919.96
00:24:17.918 [2024-11-28T07:23:00.187Z] ===================================================================================================================
00:24:17.918 [2024-11-28T07:23:00.187Z] Total : 10639.86 41.56 768.31 0.00 11197.57 448.78 18919.96
00:24:17.918 Received shutdown signal, test time was about 15.000000 seconds
00:24:17.918
00:24:17.918 Latency(us)
00:24:17.918 [2024-11-28T07:23:00.187Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:24:17.918 [2024-11-28T07:23:00.187Z] ===================================================================================================================
00:24:17.918 [2024-11-28T07:23:00.187Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:24:17.918 08:22:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful'
00:24:17.918 08:22:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3
00:24:17.918 08:22:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 ))
00:24:17.918 08:22:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=1459952
00:24:17.918 08:22:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f
00:24:17.918 08:22:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 1459952 /var/tmp/bdevperf.sock
00:24:17.918 08:22:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 1459952 ']'
00:24:17.918 08:22:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:24:17.918 08:22:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100
00:24:17.918 08:22:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:24:17.918 08:22:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable
00:24:17.918 08:22:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x
00:24:17.918 08:22:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:24:17.918 08:22:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0
00:24:17.918 08:22:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
00:24:17.918 [2024-11-28 08:22:59.977823] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 ***
00:24:17.918 08:23:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
00:24:18.176 [2024-11-28 08:23:00.182459] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 ***
00:24:18.176 08:23:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
00:24:18.434 NVMe0n1
00:24:18.434 08:23:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
00:24:18.693
00:24:18.693 08:23:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f
ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
00:24:18.951
00:24:18.951 08:23:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:24:19.209 08:23:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0
00:24:19.209 08:23:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:24:19.468 08:23:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3
00:24:22.752 08:23:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:24:22.752 08:23:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0
00:24:22.752 08:23:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:24:22.752 08:23:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=1460798
00:24:22.752 08:23:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 1460798
00:24:23.716 {
00:24:23.716   "results": [
00:24:23.716     {
00:24:23.716       "job": "NVMe0n1",
00:24:23.716       "core_mask": "0x1",
00:24:23.716       "workload": "verify",
00:24:23.716       "status": "finished",
00:24:23.716       "verify_range": {
00:24:23.716         "start": 0,
00:24:23.716         "length": 16384
00:24:23.716       },
00:24:23.716       "queue_depth": 128,
00:24:23.716       "io_size": 4096,
00:24:23.716       "runtime": 1.007988,
00:24:23.716       "iops": 10715.40534212709,
00:24:23.716       "mibps": 41.85705211768394,
00:24:23.716       "io_failed": 0,
00:24:23.716       "io_timeout": 0,
00:24:23.716       "avg_latency_us": 11898.53878070871,
00:24:23.716       "min_latency_us": 2678.4278260869564,
00:24:23.716       "max_latency_us": 9118.052173913044
00:24:23.716     }
00:24:23.716   ],
00:24:23.716   "core_count": 1
00:24:23.716 }
00:24:23.716 08:23:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:24:23.716 [2024-11-28 08:22:59.605134] Starting SPDK v25.01-pre git sha1 27aaaa748 / DPDK 24.03.0 initialization...
00:24:23.716 [2024-11-28 08:22:59.605186] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1459952 ]
00:24:23.716 [2024-11-28 08:22:59.669358] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:24:23.716 [2024-11-28 08:22:59.707103] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:24:23.716 [2024-11-28 08:23:01.567910] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] Start failover from 10.0.0.2:4420 to 10.0.0.2:4421
00:24:23.716 [2024-11-28 08:23:01.567959] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:24:23.716 [2024-11-28 08:23:01.567971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:23.716 [2024-11-28 08:23:01.567979] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:24:23.716 [2024-11-28 08:23:01.567987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:23.716 [2024-11-28 08:23:01.567995] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:24:23.716 [2024-11-28 08:23:01.568002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:23.716 [2024-11-28 08:23:01.568010] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:24:23.716 [2024-11-28 08:23:01.568017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:23.716 [2024-11-28 08:23:01.568024] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 7] in failed state.
00:24:23.716 [2024-11-28 08:23:01.568049] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] resetting controller
00:24:23.716 [2024-11-28 08:23:01.568062] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11e8370 (9): Bad file descriptor
00:24:23.716 [2024-11-28 08:23:01.620106] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 10] Resetting controller successful.
00:24:23.716 Running I/O for 1 seconds...
00:24:23.716 10673.00 IOPS, 41.69 MiB/s
00:24:23.716 Latency(us)
00:24:23.716 [2024-11-28T07:23:05.985Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:24:23.716 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:24:23.716 Verification LBA range: start 0x0 length 0x4000
00:24:23.716 NVMe0n1 : 1.01 10715.41 41.86 0.00 0.00 11898.54 2678.43 9118.05
00:24:23.716 [2024-11-28T07:23:05.985Z] ===================================================================================================================
00:24:23.716 [2024-11-28T07:23:05.985Z] Total : 10715.41 41.86 0.00 0.00 11898.54 2678.43 9118.05
00:24:23.716 08:23:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:24:23.716 08:23:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0
00:24:23.974 08:23:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:24:24.233 08:23:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:24:24.233 08:23:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0
00:24:24.491 08:23:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:24:24.491 08:23:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3
00:24:27.773 08:23:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- #
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:24:27.773 08:23:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0
00:24:27.773 08:23:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 1459952
00:24:27.773 08:23:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 1459952 ']'
00:24:27.773 08:23:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 1459952
00:24:27.773 08:23:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname
00:24:27.773 08:23:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:24:27.773 08:23:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1459952
00:24:27.773 08:23:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:24:27.773 08:23:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:24:27.773 08:23:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1459952'
killing process with pid 1459952
00:24:27.773 08:23:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 1459952
00:24:27.773 08:23:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 1459952
00:24:28.032 08:23:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync
00:24:28.032 08:23:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:24:28.291 08:23:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT
00:24:28.291 08:23:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:24:28.291 08:23:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini
00:24:28.291 08:23:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@516 -- # nvmfcleanup
00:24:28.291 08:23:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # sync
00:24:28.291 08:23:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:24:28.291 08:23:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set +e
00:24:28.291 08:23:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # for i in {1..20}
00:24:28.291 08:23:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
00:24:28.291 08:23:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:24:28.291 08:23:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@128 -- # set -e
00:24:28.291 08:23:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@129 -- # return 0
00:24:28.291 08:23:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@517 -- # '[' -n 1457077 ']'
00:24:28.291 08:23:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@518 -- # killprocess 1457077
00:24:28.291 08:23:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 1457077 ']'
00:24:28.291 08:23:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 1457077
00:24:28.291 08:23:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname
00:24:28.291 08:23:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:24:28.291 08:23:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1457077
00:24:28.291 08:23:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:24:28.291 08:23:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:24:28.291 08:23:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1457077'
killing process with pid 1457077
00:24:28.291 08:23:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 1457077
00:24:28.291 08:23:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 1457077
00:24:28.551 08:23:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:24:28.551 08:23:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:24:28.551 08:23:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:24:28.551 08:23:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # iptr
00:24:28.551 08:23:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-save
00:24:28.551 08:23:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:24:28.551 08:23:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-restore
00:24:28.551 08:23:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:24:28.551 08:23:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@302 -- # remove_spdk_ns
00:24:28.551 08:23:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:24:28.551 08:23:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:24:28.551 08:23:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:24:31.088 08:23:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:24:31.088
00:24:31.088 real 0m36.549s
00:24:31.088 user 1m57.837s
00:24:31.088 sys 0m7.464s
00:24:31.088 08:23:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1130 -- # xtrace_disable
00:24:31.088 08:23:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x
00:24:31.088 ************************************
00:24:31.088 END TEST nvmf_failover
00:24:31.088 ************************************
00:24:31.088 08:23:12 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp
00:24:31.088 08:23:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:24:31.088 08:23:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable
00:24:31.088 08:23:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:24:31.088 ************************************
00:24:31.088 START TEST nvmf_host_discovery
00:24:31.088 ************************************
00:24:31.088 08:23:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp
00:24:31.088 * Looking for test storage...
00:24:31.088 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:31.088 08:23:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:24:31.088 08:23:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1693 -- # lcov --version 00:24:31.088 08:23:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:24:31.088 08:23:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:24:31.088 08:23:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:31.088 08:23:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:31.088 08:23:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:31.088 08:23:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:24:31.088 08:23:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:24:31.088 08:23:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:24:31.088 08:23:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:24:31.088 08:23:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:24:31.088 08:23:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:24:31.088 08:23:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:24:31.088 08:23:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:31.088 08:23:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@344 -- # case "$op" in 00:24:31.088 08:23:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@345 -- # : 1 00:24:31.088 08:23:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
scripts/common.sh@364 -- # (( v = 0 )) 00:24:31.088 08:23:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:31.088 08:23:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # decimal 1 00:24:31.088 08:23:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=1 00:24:31.088 08:23:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:31.088 08:23:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 1 00:24:31.088 08:23:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:24:31.088 08:23:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # decimal 2 00:24:31.088 08:23:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=2 00:24:31.088 08:23:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:31.088 08:23:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 2 00:24:31.088 08:23:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:24:31.088 08:23:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:31.088 08:23:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:31.088 08:23:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # return 0 00:24:31.088 08:23:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:31.088 08:23:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:24:31.088 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:31.088 --rc genhtml_branch_coverage=1 00:24:31.088 --rc genhtml_function_coverage=1 00:24:31.088 --rc 
genhtml_legend=1 00:24:31.088 --rc geninfo_all_blocks=1 00:24:31.088 --rc geninfo_unexecuted_blocks=1 00:24:31.088 00:24:31.088 ' 00:24:31.088 08:23:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:24:31.088 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:31.088 --rc genhtml_branch_coverage=1 00:24:31.088 --rc genhtml_function_coverage=1 00:24:31.088 --rc genhtml_legend=1 00:24:31.088 --rc geninfo_all_blocks=1 00:24:31.088 --rc geninfo_unexecuted_blocks=1 00:24:31.088 00:24:31.088 ' 00:24:31.088 08:23:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:24:31.088 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:31.088 --rc genhtml_branch_coverage=1 00:24:31.088 --rc genhtml_function_coverage=1 00:24:31.088 --rc genhtml_legend=1 00:24:31.088 --rc geninfo_all_blocks=1 00:24:31.088 --rc geninfo_unexecuted_blocks=1 00:24:31.088 00:24:31.088 ' 00:24:31.088 08:23:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:24:31.088 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:31.088 --rc genhtml_branch_coverage=1 00:24:31.088 --rc genhtml_function_coverage=1 00:24:31.088 --rc genhtml_legend=1 00:24:31.088 --rc geninfo_all_blocks=1 00:24:31.088 --rc geninfo_unexecuted_blocks=1 00:24:31.088 00:24:31.088 ' 00:24:31.088 08:23:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:31.088 08:23:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:24:31.088 08:23:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:31.088 08:23:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:31.088 08:23:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:31.088 08:23:12 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:31.088 08:23:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:31.088 08:23:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:31.088 08:23:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:31.088 08:23:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:31.088 08:23:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:31.088 08:23:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:31.088 08:23:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:24:31.088 08:23:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:24:31.088 08:23:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:31.088 08:23:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:31.088 08:23:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:31.088 08:23:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:31.088 08:23:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:31.088 08:23:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:24:31.088 08:23:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:31.088 08:23:12 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:31.088 08:23:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:31.088 08:23:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:31.088 08:23:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:31.088 08:23:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:31.088 08:23:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:24:31.089 08:23:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:31.089 08:23:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # : 0 00:24:31.089 08:23:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:31.089 08:23:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:31.089 08:23:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:31.089 08:23:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:31.089 08:23:12 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:31.089 08:23:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:31.089 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:31.089 08:23:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:31.089 08:23:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:31.089 08:23:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:31.089 08:23:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:24:31.089 08:23:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:24:31.089 08:23:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:24:31.089 08:23:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:24:31.089 08:23:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:24:31.089 08:23:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:24:31.089 08:23:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:24:31.089 08:23:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:31.089 08:23:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:31.089 08:23:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:31.089 08:23:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:31.089 08:23:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 
00:24:31.089 08:23:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:31.089 08:23:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:31.089 08:23:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:31.089 08:23:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:31.089 08:23:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:31.089 08:23:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:24:31.089 08:23:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:36.363 08:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:36.363 08:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:24:36.363 08:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:36.363 08:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:36.363 08:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:36.363 08:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:36.363 08:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:36.363 08:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:24:36.363 08:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:36.363 08:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # e810=() 00:24:36.363 08:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:24:36.363 
08:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # x722=() 00:24:36.363 08:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:24:36.363 08:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # mlx=() 00:24:36.363 08:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:24:36.363 08:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:36.363 08:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:36.363 08:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:36.363 08:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:36.363 08:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:36.363 08:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:36.363 08:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:36.363 08:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:36.363 08:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:36.363 08:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:36.363 08:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:36.363 08:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:36.363 08:23:17 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:36.363 08:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:36.363 08:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:36.363 08:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:36.363 08:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:36.363 08:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:36.363 08:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:36.363 08:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:24:36.363 Found 0000:86:00.0 (0x8086 - 0x159b) 00:24:36.363 08:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:36.363 08:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:36.363 08:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:36.363 08:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:36.363 08:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:36.363 08:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:36.363 08:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:24:36.363 Found 0000:86:00.1 (0x8086 - 0x159b) 00:24:36.363 08:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:36.363 08:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 
00:24:36.363 08:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:36.363 08:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:36.363 08:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:36.363 08:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:36.363 08:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:36.363 08:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:36.363 08:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:36.363 08:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:36.363 08:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:36.363 08:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:36.363 08:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:36.363 08:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:36.363 08:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:36.363 08:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:24:36.363 Found net devices under 0000:86:00.0: cvl_0_0 00:24:36.363 08:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:36.363 08:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:36.363 08:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:36.363 08:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:36.363 08:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:36.363 08:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:36.363 08:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:36.363 08:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:36.363 08:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:24:36.363 Found net devices under 0000:86:00.1: cvl_0_1 00:24:36.363 08:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:36.363 08:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:36.363 08:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # is_hw=yes 00:24:36.363 08:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:36.363 08:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:36.363 08:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:36.363 08:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:36.363 08:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:36.363 08:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:36.363 08:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:36.363 08:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:36.364 08:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:36.364 08:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:36.364 08:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:36.364 08:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:36.364 08:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:36.364 08:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:36.364 08:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:36.364 08:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:36.364 08:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:36.364 08:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:36.364 08:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:36.364 08:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:36.364 08:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:36.364 08:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:36.364 08:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:36.364 08:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@287 -- 
# ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:36.364 08:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:36.364 08:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:36.364 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:36.364 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.462 ms 00:24:36.364 00:24:36.364 --- 10.0.0.2 ping statistics --- 00:24:36.364 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:36.364 rtt min/avg/max/mdev = 0.462/0.462/0.462/0.000 ms 00:24:36.364 08:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:36.364 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:36.364 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.206 ms 00:24:36.364 00:24:36.364 --- 10.0.0.1 ping statistics --- 00:24:36.364 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:36.364 rtt min/avg/max/mdev = 0.206/0.206/0.206/0.000 ms 00:24:36.364 08:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:36.364 08:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@450 -- # return 0 00:24:36.364 08:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:36.364 08:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:36.364 08:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:36.364 08:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:36.364 08:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:36.364 
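The `nvmf_tcp_init` steps logged above split the two detected `ice` ports between a fresh network namespace (target side) and the default namespace (initiator side), then verify connectivity with ping in both directions. A dry-run sketch of that sequence, using the interface names and addresses from this particular run; `run` only prints the commands, so no root privileges or real NICs are assumed:

```shell
# Dry-run sketch of the netns setup performed by nvmf_tcp_init in
# test/nvmf/common.sh, per the log above. run() echoes instead of executing.
run() { echo "+ $*"; }

TARGET_IF=cvl_0_0      # NIC moved into the target namespace
INITIATOR_IF=cvl_0_1   # NIC left in the default (initiator) namespace
NS=cvl_0_0_ns_spdk

run ip netns add "$NS"
run ip link set "$TARGET_IF" netns "$NS"
run ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"
run ip link set "$INITIATOR_IF" up
run ip netns exec "$NS" ip link set "$TARGET_IF" up
run ip netns exec "$NS" ip link set lo up
# Allow NVMe/TCP traffic to the target port before the connectivity check.
run iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT
run ping -c 1 10.0.0.2                        # initiator -> target
run ip netns exec "$NS" ping -c 1 10.0.0.1    # target -> initiator
```

Running the target inside its own namespace lets a single machine act as both NVMe-oF target and initiator over real hardware, which matches the `ip netns exec cvl_0_0_ns_spdk` prefix on the `nvmf_tgt` launch later in the log.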
08:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:36.364 08:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:36.364 08:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:24:36.364 08:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:36.364 08:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:36.364 08:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:36.364 08:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@509 -- # nvmfpid=1465157 00:24:36.364 08:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@510 -- # waitforlisten 1465157 00:24:36.364 08:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:24:36.364 08:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # '[' -z 1465157 ']' 00:24:36.364 08:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:36.364 08:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:36.364 08:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:36.364 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:24:36.364 08:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:36.364 08:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:36.364 [2024-11-28 08:23:18.131517] Starting SPDK v25.01-pre git sha1 27aaaa748 / DPDK 24.03.0 initialization... 00:24:36.364 [2024-11-28 08:23:18.131562] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:36.364 [2024-11-28 08:23:18.198233] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:36.364 [2024-11-28 08:23:18.239503] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:36.364 [2024-11-28 08:23:18.239543] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:36.364 [2024-11-28 08:23:18.239550] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:36.364 [2024-11-28 08:23:18.239556] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:36.364 [2024-11-28 08:23:18.239561] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:24:36.364 [2024-11-28 08:23:18.240164] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:36.364 08:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:36.364 08:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:24:36.364 08:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:36.364 08:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:36.364 08:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:36.364 08:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:36.364 08:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:36.364 08:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:36.364 08:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:36.364 [2024-11-28 08:23:18.375818] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:36.364 08:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:36.364 08:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:24:36.364 08:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:36.364 08:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:36.364 [2024-11-28 08:23:18.388003] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:24:36.364 08:23:18 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:36.364 08:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:24:36.364 08:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:36.364 08:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:36.364 null0 00:24:36.364 08:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:36.364 08:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:24:36.364 08:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:36.364 08:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:36.364 null1 00:24:36.365 08:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:36.365 08:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:24:36.365 08:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:36.365 08:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:36.365 08:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:36.365 08:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=1465252 00:24:36.365 08:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:24:36.365 08:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 1465252 /tmp/host.sock 00:24:36.365 08:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@835 -- # '[' -z 1465252 ']' 00:24:36.365 08:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:24:36.365 08:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:36.365 08:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:24:36.365 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:24:36.365 08:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:36.365 08:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:36.365 [2024-11-28 08:23:18.464063] Starting SPDK v25.01-pre git sha1 27aaaa748 / DPDK 24.03.0 initialization... 00:24:36.365 [2024-11-28 08:23:18.464106] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1465252 ] 00:24:36.365 [2024-11-28 08:23:18.525252] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:36.365 [2024-11-28 08:23:18.568709] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:36.624 08:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:36.624 08:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:24:36.625 08:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:36.625 08:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:24:36.625 
08:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:36.625 08:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:36.625 08:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:36.625 08:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:24:36.625 08:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:36.625 08:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:36.625 08:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:36.625 08:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:24:36.625 08:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:24:36.625 08:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:36.625 08:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:24:36.625 08:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:36.625 08:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:36.625 08:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:24:36.625 08:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:36.625 08:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:36.625 08:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:24:36.625 08:23:18 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:24:36.625 08:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:36.625 08:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:36.625 08:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:36.625 08:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:36.625 08:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:36.625 08:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:36.625 08:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:36.625 08:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:24:36.625 08:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:24:36.625 08:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:36.625 08:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:36.625 08:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:36.625 08:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:24:36.625 08:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:36.625 08:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:36.625 08:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:36.625 08:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 
00:24:36.625 08:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:36.625 08:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:24:36.625 08:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:36.625 08:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:24:36.625 08:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:24:36.625 08:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:36.625 08:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:36.625 08:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:36.625 08:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:36.625 08:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:36.625 08:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:36.625 08:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:36.625 08:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:24:36.625 08:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:24:36.625 08:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:36.625 08:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:36.625 08:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:36.625 08:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:24:36.625 
08:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:36.625 08:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:24:36.625 08:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:24:36.625 08:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:36.625 08:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:36.625 08:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:36.885 08:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:36.885 08:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:24:36.885 08:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:24:36.885 08:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:36.885 08:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:36.885 08:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:36.885 08:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:36.885 08:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:36.885 08:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:36.885 08:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:36.885 08:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:24:36.885 08:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 
10.0.0.2 -s 4420 00:24:36.885 08:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:36.885 08:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:36.885 [2024-11-28 08:23:18.977506] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:36.885 08:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:36.885 08:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:24:36.885 08:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:36.885 08:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:36.885 08:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:36.885 08:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:24:36.885 08:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:36.885 08:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:24:36.885 08:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:36.885 08:23:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:24:36.885 08:23:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:24:36.885 08:23:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:36.885 08:23:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:36.885 08:23:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:36.885 08:23:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@55 -- # sort 00:24:36.885 08:23:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:36.885 08:23:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:36.885 08:23:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:36.885 08:23:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:24:36.885 08:23:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:24:36.885 08:23:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:24:36.885 08:23:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:24:36.885 08:23:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:24:36.885 08:23:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:24:36.885 08:23:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:24:36.885 08:23:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:24:36.885 08:23:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:24:36.885 08:23:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:24:36.885 08:23:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:24:36.885 08:23:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:36.885 08:23:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:36.885 08:23:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:36.885 08:23:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:24:36.885 08:23:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:24:36.885 08:23:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:24:36.885 08:23:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:24:36.885 08:23:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:24:36.885 08:23:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:36.885 08:23:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:36.885 08:23:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:36.885 08:23:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:24:36.885 08:23:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:24:36.885 08:23:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:24:36.885 08:23:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:24:36.885 08:23:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 
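The `waitforcondition` / `eval` / `sleep 1` pattern traced above (autotest_common.sh@918-924) is a bounded polling loop: re-evaluate a condition string once a second for up to ten tries. A minimal standalone sketch mirroring the names in the trace (the authoritative helper lives in SPDK's test/common/autotest_common.sh):

```shell
# Bounded polling helper, as suggested by the xtrace output above:
# re-evaluate a condition string once a second, up to 10 tries.
waitforcondition() {
	local cond=$1
	local max=10                 # the trace shows "local max=10"
	while ((max--)); do
		if eval "$cond"; then
			return 0     # condition held within the retry budget
		fi
		sleep 1              # matches the "sleep 1" between retries
	done
	return 1                     # gave up after ~10 seconds
}

# Immediately-true condition: returns without sleeping
waitforcondition '[ -d / ]' && echo "condition met"
```

In the trace the condition strings compare `get_subsystem_names` / `get_bdev_list` output against expected values, so a freshly attached controller only has to show up within the roughly ten-second retry budget.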
00:24:36.885 08:23:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:24:36.885 08:23:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:36.885 08:23:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:36.885 08:23:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:36.885 08:23:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:24:36.885 08:23:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:36.885 08:23:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:24:37.144 08:23:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:37.144 08:23:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == \n\v\m\e\0 ]] 00:24:37.144 08:23:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:24:37.712 [2024-11-28 08:23:19.693266] bdev_nvme.c:7484:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:24:37.712 [2024-11-28 08:23:19.693286] bdev_nvme.c:7570:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:24:37.712 [2024-11-28 08:23:19.693297] bdev_nvme.c:7447:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:24:37.712 [2024-11-28 08:23:19.779552] bdev_nvme.c:7413:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:24:37.712 [2024-11-28 08:23:19.834178] bdev_nvme.c:5636:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.2:4420 00:24:37.712 [2024-11-28 08:23:19.834972] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 
1] Connecting qpair 0x1eb6e30:1 started. 00:24:37.712 [2024-11-28 08:23:19.836363] bdev_nvme.c:7303:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:24:37.712 [2024-11-28 08:23:19.836379] bdev_nvme.c:7262:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:24:37.712 [2024-11-28 08:23:19.842355] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x1eb6e30 was disconnected and freed. delete nvme_qpair. 00:24:38.000 08:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:24:38.000 08:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:24:38.000 08:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:24:38.000 08:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:38.000 08:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:38.000 08:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:38.000 08:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:24:38.000 08:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:38.000 08:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:24:38.000 08:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:38.000 08:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:38.000 08:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:24:38.000 08:23:20 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:24:38.000 08:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:24:38.000 08:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:24:38.000 08:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:24:38.000 08:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:24:38.000 08:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:24:38.000 08:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:38.000 08:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:38.000 08:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:38.000 08:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:38.000 08:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:38.000 08:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:38.000 08:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:38.259 08:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:24:38.259 08:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:24:38.259 08:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:24:38.259 08:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:24:38.259 08:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:24:38.259 08:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:24:38.259 08:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:24:38.259 08:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:24:38.259 08:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:24:38.259 08:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:24:38.259 08:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:38.259 08:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:24:38.259 08:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:38.259 08:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:24:38.259 08:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:38.259 08:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 == \4\4\2\0 ]] 00:24:38.259 08:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:24:38.259 08:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:24:38.259 08:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:24:38.259 08:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 
'get_notification_count && ((notification_count == expected_count))' 00:24:38.259 08:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:24:38.259 08:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:24:38.259 08:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:24:38.259 08:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:24:38.260 08:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:24:38.260 08:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:24:38.260 08:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:24:38.260 08:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:38.260 08:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:38.260 08:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:38.260 08:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:24:38.260 08:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:24:38.260 08:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:24:38.260 08:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:24:38.260 08:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:24:38.260 08:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:38.260 08:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:38.260 08:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:38.260 08:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:24:38.260 08:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:24:38.260 08:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:24:38.260 08:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:24:38.260 08:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:24:38.260 
08:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:24:38.260 08:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:38.260 08:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:38.260 08:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:38.260 08:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:38.260 08:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:38.260 08:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:38.519 [2024-11-28 08:23:20.641775] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x1eb72f0:1 started. 00:24:38.519 [2024-11-28 08:23:20.644310] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x1eb72f0 was disconnected and freed. delete nvme_qpair. 
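The repeated `get_bdev_list` calls above pipe `rpc_cmd bdev_get_bdevs` through `jq -r '.[].name' | sort | xargs` to produce a stable, space-joined name list. A sketch of just that normalization step, with a hard-coded name list standing in for live RPC output (no running SPDK target is assumed):

```shell
# How host/discovery.sh's get_bdev_list normalizes bdev names: sort for a
# deterministic order, xargs to join onto one space-separated line.
# $names stands in for `rpc_cmd ... bdev_get_bdevs | jq -r '.[].name'`.
names=$'nvme0n2\nnvme0n1'
bdev_list=$(printf '%s\n' "$names" | sort | xargs)
echo "$bdev_list"   # -> nvme0n1 nvme0n2
```

This is why the trace can use a plain string comparison like `[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]` regardless of the order in which the RPC happens to return the bdevs.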
00:24:38.519 08:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:38.519 08:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:24:38.519 08:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:24:38.519 08:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:24:38.519 08:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:24:38.519 08:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:24:38.519 08:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:24:38.519 08:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:24:38.519 08:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:24:38.519 08:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:24:38.519 08:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:24:38.519 08:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:24:38.519 08:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:24:38.519 08:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:38.519 08:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:38.519 08:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:38.519 08:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:24:38.519 08:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:24:38.519 08:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:24:38.519 08:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:24:38.519 08:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:24:38.519 08:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:38.519 08:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:38.519 [2024-11-28 08:23:20.710228] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:24:38.519 [2024-11-28 08:23:20.711133] bdev_nvme.c:7466:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:24:38.519 [2024-11-28 08:23:20.711153] bdev_nvme.c:7447:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:24:38.519 08:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:38.519 08:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:24:38.519 08:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ 
"$(get_subsystem_names)" == "nvme0" ]]' 00:24:38.519 08:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:24:38.519 08:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:24:38.519 08:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:24:38.519 08:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:24:38.519 08:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:24:38.519 08:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:38.519 08:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:38.519 08:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:24:38.519 08:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:38.519 08:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:38.519 08:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:38.519 08:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:38.519 08:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:24:38.519 08:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:24:38.519 08:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:24:38.519 08:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:24:38.519 08:23:20 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:24:38.519 08:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:24:38.519 08:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:24:38.519 08:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:38.519 08:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:38.519 08:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:38.519 08:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:38.519 08:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:38.519 08:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:38.778 08:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:38.778 08:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:24:38.778 08:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:24:38.778 08:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:24:38.778 08:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:24:38.778 08:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:24:38.778 08:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:24:38.778 08:23:20 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:24:38.778 08:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:24:38.778 08:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:24:38.778 08:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:24:38.778 08:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:24:38.778 08:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:24:38.778 08:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:38.778 08:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:38.778 08:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:38.778 [2024-11-28 08:23:20.837878] bdev_nvme.c:7408:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:24:38.778 08:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:24:38.778 08:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:24:38.778 [2024-11-28 08:23:20.979742] bdev_nvme.c:5636:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4421 00:24:38.778 [2024-11-28 08:23:20.979779] bdev_nvme.c:7303:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:24:38.778 [2024-11-28 08:23:20.979788] bdev_nvme.c:7262:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 
00:24:38.778 [2024-11-28 08:23:20.979792] bdev_nvme.c:7262:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:24:39.716 08:23:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:24:39.716 08:23:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:24:39.716 08:23:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:24:39.716 08:23:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:24:39.716 08:23:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:24:39.716 08:23:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:39.716 08:23:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:24:39.716 08:23:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:39.716 08:23:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:24:39.716 08:23:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:39.716 08:23:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:24:39.716 08:23:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:24:39.716 08:23:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:24:39.716 08:23:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:24:39.716 08:23:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 
'get_notification_count && ((notification_count == expected_count))' 00:24:39.716 08:23:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:24:39.716 08:23:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:24:39.716 08:23:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:24:39.716 08:23:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:24:39.716 08:23:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:24:39.716 08:23:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:24:39.716 08:23:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:24:39.716 08:23:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:39.716 08:23:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:39.716 08:23:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:39.716 08:23:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:24:39.716 08:23:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:24:39.716 08:23:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:24:39.716 08:23:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:24:39.716 08:23:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:24:39.716 08:23:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:39.716 08:23:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:39.716 [2024-11-28 08:23:21.962518] bdev_nvme.c:7466:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:24:39.716 [2024-11-28 08:23:21.962540] bdev_nvme.c:7447:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:24:39.716 [2024-11-28 08:23:21.966221] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:39.716 [2024-11-28 08:23:21.966239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.716 [2024-11-28 08:23:21.966248] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 
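The notification bookkeeping visible above (host/discovery.sh@74-75) counts events past a cursor and then advances it: `notify_id` moves 0 → 1 → 2 across the checks, then holds at 2 once no new events arrive. A standalone sketch of that cursor logic, where `fake_notify` (with made-up event ids) stands in for `rpc_cmd -s /tmp/host.sock notify_get_notifications -i $notify_id`:

```shell
# Cursor-based notification counting, as in host/discovery.sh's
# get_notification_count: count events with id > notify_id, then
# advance notify_id so the same events are never counted twice.
notify_id=0
fake_notify() { printf '%s\n' 1 2 | awk -v i="$1" '$1 > i'; }  # hypothetical event ids
get_notification_count() {
	notification_count=$(( $(fake_notify "$notify_id" | wc -l) ))
	notify_id=$(( notify_id + notification_count ))
}

get_notification_count   # both events are past the cursor
echo "$notification_count $notify_id"   # -> 2 2
get_notification_count   # cursor caught up: nothing new
echo "$notification_count $notify_id"   # -> 0 2
```

The same mechanism explains the `is_notification_count_eq 0` check passing after the listener changes above: the earlier checks already advanced the cursor past the bdev-add events.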
00:24:39.716 [2024-11-28 08:23:21.966260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.716 [2024-11-28 08:23:21.966268] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:39.716 [2024-11-28 08:23:21.966275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.716 [2024-11-28 08:23:21.966283] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:39.716 [2024-11-28 08:23:21.966289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.716 [2024-11-28 08:23:21.966297] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e87390 is same with the state(6) to be set 00:24:39.716 08:23:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:39.716 08:23:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:24:39.716 08:23:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:24:39.716 08:23:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:24:39.716 08:23:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:24:39.717 08:23:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:24:39.717 08:23:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:24:39.717 08:23:21 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:39.717 08:23:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:39.717 08:23:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:24:39.717 08:23:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:39.717 08:23:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:24:39.717 08:23:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:39.717 [2024-11-28 08:23:21.976235] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e87390 (9): Bad file descriptor 00:24:39.978 08:23:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:39.978 [2024-11-28 08:23:21.986269] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:24:39.978 [2024-11-28 08:23:21.986280] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:24:39.978 [2024-11-28 08:23:21.986285] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:24:39.978 [2024-11-28 08:23:21.986291] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:24:39.978 [2024-11-28 08:23:21.986307] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:24:39.978 [2024-11-28 08:23:21.986438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:39.978 [2024-11-28 08:23:21.986453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e87390 with addr=10.0.0.2, port=4420 00:24:39.978 [2024-11-28 08:23:21.986461] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e87390 is same with the state(6) to be set 00:24:39.978 [2024-11-28 08:23:21.986473] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e87390 (9): Bad file descriptor 00:24:39.978 [2024-11-28 08:23:21.986484] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:24:39.978 [2024-11-28 08:23:21.986491] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:24:39.978 [2024-11-28 08:23:21.986501] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:24:39.978 [2024-11-28 08:23:21.986508] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:24:39.978 [2024-11-28 08:23:21.986513] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:24:39.978 [2024-11-28 08:23:21.986518] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:24:39.978 [2024-11-28 08:23:21.996338] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:24:39.978 [2024-11-28 08:23:21.996350] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 
00:24:39.978 [2024-11-28 08:23:21.996354] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:24:39.978 [2024-11-28 08:23:21.996358] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:24:39.978 [2024-11-28 08:23:21.996371] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:24:39.978 [2024-11-28 08:23:21.996527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:39.978 [2024-11-28 08:23:21.996539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e87390 with addr=10.0.0.2, port=4420 00:24:39.978 [2024-11-28 08:23:21.996546] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e87390 is same with the state(6) to be set 00:24:39.978 [2024-11-28 08:23:21.996557] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e87390 (9): Bad file descriptor 00:24:39.978 [2024-11-28 08:23:21.996567] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:24:39.978 [2024-11-28 08:23:21.996573] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:24:39.978 [2024-11-28 08:23:21.996580] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:24:39.978 [2024-11-28 08:23:21.996585] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:24:39.978 [2024-11-28 08:23:21.996590] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:24:39.978 [2024-11-28 08:23:21.996594] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
00:24:39.978 [2024-11-28 08:23:22.006402] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:24:39.978 [2024-11-28 08:23:22.006415] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:24:39.978 [2024-11-28 08:23:22.006419] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:24:39.978 [2024-11-28 08:23:22.006423] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:24:39.978 [2024-11-28 08:23:22.006437] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:24:39.978 [2024-11-28 08:23:22.006692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:39.978 [2024-11-28 08:23:22.006705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e87390 with addr=10.0.0.2, port=4420 00:24:39.978 [2024-11-28 08:23:22.006712] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e87390 is same with the state(6) to be set 00:24:39.978 [2024-11-28 08:23:22.006723] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e87390 (9): Bad file descriptor 00:24:39.978 [2024-11-28 08:23:22.006773] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:24:39.978 [2024-11-28 08:23:22.006786] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:24:39.978 [2024-11-28 08:23:22.006793] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:24:39.978 [2024-11-28 08:23:22.006799] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 
00:24:39.978 [2024-11-28 08:23:22.006803] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:24:39.978 [2024-11-28 08:23:22.006807] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:24:39.978 08:23:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:39.978 08:23:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:24:39.978 08:23:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:24:39.978 08:23:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:24:39.978 08:23:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:24:39.978 08:23:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:24:39.978 08:23:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:24:39.978 [2024-11-28 08:23:22.016469] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:24:39.978 [2024-11-28 08:23:22.016484] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:24:39.978 [2024-11-28 08:23:22.016488] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 
00:24:39.978 [2024-11-28 08:23:22.016492] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:24:39.978 [2024-11-28 08:23:22.016505] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:24:39.978 [2024-11-28 08:23:22.016723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:39.978 [2024-11-28 08:23:22.016735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e87390 with addr=10.0.0.2, port=4420 00:24:39.978 [2024-11-28 08:23:22.016743] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e87390 is same with the state(6) to be set 00:24:39.978 [2024-11-28 08:23:22.016754] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e87390 (9): Bad file descriptor 00:24:39.978 [2024-11-28 08:23:22.016764] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:24:39.978 [2024-11-28 08:23:22.016770] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:24:39.978 [2024-11-28 08:23:22.016777] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:24:39.978 [2024-11-28 08:23:22.016783] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:24:39.978 [2024-11-28 08:23:22.016787] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:24:39.978 [2024-11-28 08:23:22.016791] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
00:24:39.978 08:23:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:24:39.978 08:23:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:39.978 08:23:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:39.978 08:23:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:39.978 08:23:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:39.978 08:23:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:39.978 08:23:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:39.978 [2024-11-28 08:23:22.026537] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:24:39.978 [2024-11-28 08:23:22.026551] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:24:39.978 [2024-11-28 08:23:22.026556] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:24:39.978 [2024-11-28 08:23:22.026561] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:24:39.978 [2024-11-28 08:23:22.026576] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:24:39.978 [2024-11-28 08:23:22.026833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:39.978 [2024-11-28 08:23:22.026846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e87390 with addr=10.0.0.2, port=4420 00:24:39.978 [2024-11-28 08:23:22.026855] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e87390 is same with the state(6) to be set 00:24:39.978 [2024-11-28 08:23:22.026866] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e87390 (9): Bad file descriptor 00:24:39.978 [2024-11-28 08:23:22.026892] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:24:39.979 [2024-11-28 08:23:22.026899] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:24:39.979 [2024-11-28 08:23:22.026907] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:24:39.979 [2024-11-28 08:23:22.026912] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:24:39.979 [2024-11-28 08:23:22.026917] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:24:39.979 [2024-11-28 08:23:22.026921] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:24:39.979 [2024-11-28 08:23:22.036606] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:24:39.979 [2024-11-28 08:23:22.036616] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 
00:24:39.979 [2024-11-28 08:23:22.036620] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:24:39.979 [2024-11-28 08:23:22.036624] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:24:39.979 [2024-11-28 08:23:22.036637] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:24:39.979 [2024-11-28 08:23:22.036747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:39.979 [2024-11-28 08:23:22.036759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e87390 with addr=10.0.0.2, port=4420 00:24:39.979 [2024-11-28 08:23:22.036766] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e87390 is same with the state(6) to be set 00:24:39.979 [2024-11-28 08:23:22.036776] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e87390 (9): Bad file descriptor 00:24:39.979 [2024-11-28 08:23:22.036786] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:24:39.979 [2024-11-28 08:23:22.036792] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:24:39.979 [2024-11-28 08:23:22.036798] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:24:39.979 [2024-11-28 08:23:22.036807] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:24:39.979 [2024-11-28 08:23:22.036812] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:24:39.979 [2024-11-28 08:23:22.036815] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
00:24:39.979 [2024-11-28 08:23:22.046668] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:24:39.979 [2024-11-28 08:23:22.046682] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:24:39.979 [2024-11-28 08:23:22.046687] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:24:39.979 [2024-11-28 08:23:22.046691] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:24:39.979 [2024-11-28 08:23:22.046706] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:24:39.979 [2024-11-28 08:23:22.046888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:39.979 [2024-11-28 08:23:22.046901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e87390 with addr=10.0.0.2, port=4420 00:24:39.979 [2024-11-28 08:23:22.046909] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e87390 is same with the state(6) to be set 00:24:39.979 [2024-11-28 08:23:22.046919] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e87390 (9): Bad file descriptor 00:24:39.979 [2024-11-28 08:23:22.046929] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:24:39.979 [2024-11-28 08:23:22.046935] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:24:39.979 [2024-11-28 08:23:22.046943] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:24:39.979 [2024-11-28 08:23:22.046953] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 
00:24:39.979 [2024-11-28 08:23:22.046958] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:24:39.979 [2024-11-28 08:23:22.046962] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:24:39.979 [2024-11-28 08:23:22.048506] bdev_nvme.c:7271:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:24:39.979 [2024-11-28 08:23:22.048520] bdev_nvme.c:7262:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:24:39.979 08:23:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:39.979 08:23:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:24:39.979 08:23:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:24:39.979 08:23:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:24:39.979 08:23:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:24:39.979 08:23:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:24:39.979 08:23:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:24:39.979 08:23:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:24:39.979 08:23:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:24:39.979 08:23:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # 
jq -r '.[].ctrlrs[].trid.trsvcid' 00:24:39.979 08:23:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:24:39.979 08:23:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:39.979 08:23:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:24:39.979 08:23:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:24:39.979 08:23:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:39.979 08:23:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:39.979 08:23:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4421 == \4\4\2\1 ]] 00:24:39.979 08:23:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:24:39.979 08:23:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:24:39.979 08:23:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:24:39.979 08:23:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:24:39.979 08:23:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:24:39.979 08:23:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:24:39.979 08:23:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:24:39.979 08:23:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:24:39.979 08:23:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@921 -- # get_notification_count 00:24:39.979 08:23:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:24:39.979 08:23:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:39.979 08:23:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:39.979 08:23:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:24:39.979 08:23:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:39.979 08:23:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:24:39.979 08:23:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:24:39.979 08:23:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:24:39.979 08:23:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:24:39.979 08:23:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:24:39.979 08:23:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:39.979 08:23:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:39.979 08:23:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:39.979 08:23:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:24:39.979 08:23:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:24:39.979 08:23:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 
00:24:39.979 08:23:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:24:39.979 08:23:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:24:39.979 08:23:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:24:39.979 08:23:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:39.979 08:23:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:39.979 08:23:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:39.979 08:23:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:24:39.979 08:23:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:39.979 08:23:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:24:39.979 08:23:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:39.979 08:23:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:24:39.979 08:23:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:24:39.979 08:23:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:24:39.979 08:23:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:24:39.979 08:23:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:24:39.979 08:23:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:24:39.979 08:23:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' 
'"$(get_bdev_list)"' == '""' ']]' 00:24:39.979 08:23:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:24:39.979 08:23:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:39.980 08:23:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:39.980 08:23:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:39.980 08:23:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:39.980 08:23:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:39.980 08:23:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:39.980 08:23:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:40.238 08:23:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:24:40.238 08:23:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:24:40.238 08:23:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:24:40.238 08:23:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:24:40.238 08:23:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:24:40.238 08:23:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:24:40.238 08:23:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:24:40.238 08:23:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:24:40.238 08:23:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:24:40.238 08:23:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:24:40.238 08:23:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:24:40.238 08:23:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:24:40.238 08:23:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:40.238 08:23:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:40.238 08:23:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:40.238 08:23:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:24:40.238 08:23:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:24:40.238 08:23:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:24:40.238 08:23:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:24:40.238 08:23:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:24:40.238 08:23:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:40.238 08:23:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:41.174 [2024-11-28 08:23:23.353099] bdev_nvme.c:7484:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:24:41.174 [2024-11-28 08:23:23.353116] bdev_nvme.c:7570:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 
00:24:41.174 [2024-11-28 08:23:23.353127] bdev_nvme.c:7447:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:24:41.433 [2024-11-28 08:23:23.441396] bdev_nvme.c:7413:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:24:41.433 [2024-11-28 08:23:23.506042] bdev_nvme.c:5636:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] ctrlr was created to 10.0.0.2:4421 00:24:41.433 [2024-11-28 08:23:23.506631] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] Connecting qpair 0x1ec1570:1 started. 00:24:41.433 [2024-11-28 08:23:23.508217] bdev_nvme.c:7303:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:24:41.433 [2024-11-28 08:23:23.508242] bdev_nvme.c:7262:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:24:41.433 08:23:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:41.433 08:23:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:24:41.433 08:23:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:24:41.433 08:23:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:24:41.433 08:23:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:24:41.433 08:23:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:41.433 08:23:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # 
type -t rpc_cmd 00:24:41.433 08:23:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:41.433 08:23:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:24:41.433 08:23:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:41.433 08:23:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:41.433 request: 00:24:41.433 { 00:24:41.433 "name": "nvme", 00:24:41.433 "trtype": "tcp", 00:24:41.433 "traddr": "10.0.0.2", 00:24:41.433 "adrfam": "ipv4", 00:24:41.433 "trsvcid": "8009", 00:24:41.433 "hostnqn": "nqn.2021-12.io.spdk:test", 00:24:41.433 "wait_for_attach": true, 00:24:41.433 "method": "bdev_nvme_start_discovery", 00:24:41.433 "req_id": 1 00:24:41.433 } 00:24:41.433 Got JSON-RPC error response 00:24:41.433 response: 00:24:41.433 { 00:24:41.433 "code": -17, 00:24:41.433 "message": "File exists" 00:24:41.433 } 00:24:41.433 08:23:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:24:41.433 08:23:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:24:41.433 08:23:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:41.433 08:23:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:24:41.433 08:23:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:24:41.433 08:23:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:24:41.433 08:23:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:24:41.433 08:23:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@67 -- # jq -r '.[].name' 00:24:41.433 08:23:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:41.433 08:23:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:24:41.433 08:23:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:41.433 08:23:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:24:41.433 08:23:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:41.433 [2024-11-28 08:23:23.552589] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] qpair 0x1ec1570 was disconnected and freed. delete nvme_qpair. 00:24:41.433 08:23:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:24:41.433 08:23:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:24:41.433 08:23:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:41.433 08:23:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:41.433 08:23:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:41.433 08:23:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:41.433 08:23:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:41.433 08:23:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:41.433 08:23:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:41.433 08:23:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:24:41.433 08:23:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s 
/tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:24:41.433 08:23:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:24:41.433 08:23:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:24:41.433 08:23:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:24:41.433 08:23:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:41.433 08:23:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:24:41.433 08:23:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:41.433 08:23:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:24:41.433 08:23:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:41.433 08:23:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:41.433 request: 00:24:41.433 { 00:24:41.433 "name": "nvme_second", 00:24:41.433 "trtype": "tcp", 00:24:41.433 "traddr": "10.0.0.2", 00:24:41.433 "adrfam": "ipv4", 00:24:41.433 "trsvcid": "8009", 00:24:41.433 "hostnqn": "nqn.2021-12.io.spdk:test", 00:24:41.433 "wait_for_attach": true, 00:24:41.433 "method": "bdev_nvme_start_discovery", 00:24:41.433 "req_id": 1 00:24:41.433 } 00:24:41.433 Got JSON-RPC error response 00:24:41.433 response: 00:24:41.433 { 00:24:41.433 "code": -17, 00:24:41.433 "message": "File exists" 00:24:41.433 } 00:24:41.433 08:23:23 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:24:41.433 08:23:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:24:41.433 08:23:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:41.433 08:23:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:24:41.433 08:23:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:24:41.433 08:23:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:24:41.433 08:23:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:24:41.433 08:23:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:41.433 08:23:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:41.433 08:23:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:24:41.433 08:23:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:24:41.433 08:23:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:24:41.433 08:23:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:41.433 08:23:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:24:41.433 08:23:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:24:41.433 08:23:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:41.433 08:23:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:41.433 08:23:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:41.433 08:23:23 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:41.434 08:23:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:41.434 08:23:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:41.692 08:23:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:41.692 08:23:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:24:41.692 08:23:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:24:41.692 08:23:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:24:41.692 08:23:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:24:41.692 08:23:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:24:41.692 08:23:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:41.692 08:23:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:24:41.692 08:23:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:41.692 08:23:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:24:41.692 08:23:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:41.692 
08:23:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:42.629 [2024-11-28 08:23:24.747935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:42.629 [2024-11-28 08:23:24.747965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e860a0 with addr=10.0.0.2, port=8010 00:24:42.629 [2024-11-28 08:23:24.747994] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:24:42.629 [2024-11-28 08:23:24.748000] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:24:42.629 [2024-11-28 08:23:24.748006] bdev_nvme.c:7552:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:24:43.564 [2024-11-28 08:23:25.750370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.564 [2024-11-28 08:23:25.750395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e860a0 with addr=10.0.0.2, port=8010 00:24:43.564 [2024-11-28 08:23:25.750409] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:24:43.564 [2024-11-28 08:23:25.750415] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:24:43.564 [2024-11-28 08:23:25.750421] bdev_nvme.c:7552:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:24:44.501 [2024-11-28 08:23:26.752559] bdev_nvme.c:7527:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:24:44.501 request: 00:24:44.501 { 00:24:44.501 "name": "nvme_second", 00:24:44.501 "trtype": "tcp", 00:24:44.501 "traddr": "10.0.0.2", 00:24:44.501 "adrfam": "ipv4", 00:24:44.501 "trsvcid": "8010", 00:24:44.501 "hostnqn": "nqn.2021-12.io.spdk:test", 00:24:44.501 "wait_for_attach": false, 00:24:44.501 "attach_timeout_ms": 3000, 00:24:44.501 "method": "bdev_nvme_start_discovery", 00:24:44.501 "req_id": 1 00:24:44.501 } 00:24:44.501 Got 
JSON-RPC error response 00:24:44.501 response: 00:24:44.501 { 00:24:44.501 "code": -110, 00:24:44.501 "message": "Connection timed out" 00:24:44.501 } 00:24:44.501 08:23:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:24:44.501 08:23:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:24:44.501 08:23:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:44.501 08:23:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:24:44.501 08:23:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:24:44.501 08:23:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:24:44.501 08:23:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:24:44.501 08:23:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:24:44.501 08:23:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:44.501 08:23:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:24:44.501 08:23:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:44.501 08:23:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:24:44.760 08:23:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:44.761 08:23:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:24:44.761 08:23:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:24:44.761 08:23:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 1465252 00:24:44.761 08:23:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@162 -- # nvmftestfini 00:24:44.761 08:23:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:44.761 08:23:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@121 -- # sync 00:24:44.761 08:23:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:44.761 08:23:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set +e 00:24:44.761 08:23:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:44.761 08:23:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:44.761 rmmod nvme_tcp 00:24:44.761 rmmod nvme_fabrics 00:24:44.761 rmmod nvme_keyring 00:24:44.761 08:23:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:44.761 08:23:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@128 -- # set -e 00:24:44.761 08:23:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@129 -- # return 0 00:24:44.761 08:23:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@517 -- # '[' -n 1465157 ']' 00:24:44.761 08:23:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@518 -- # killprocess 1465157 00:24:44.761 08:23:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@954 -- # '[' -z 1465157 ']' 00:24:44.761 08:23:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@958 -- # kill -0 1465157 00:24:44.761 08:23:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # uname 00:24:44.761 08:23:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:44.761 08:23:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1465157 00:24:44.761 08:23:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:24:44.761 
08:23:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:24:44.761 08:23:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1465157' 00:24:44.761 killing process with pid 1465157 00:24:44.761 08:23:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@973 -- # kill 1465157 00:24:44.761 08:23:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@978 -- # wait 1465157 00:24:45.021 08:23:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:45.021 08:23:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:45.021 08:23:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:45.021 08:23:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # iptr 00:24:45.021 08:23:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-save 00:24:45.021 08:23:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:45.021 08:23:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:24:45.021 08:23:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:45.021 08:23:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:45.021 08:23:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:45.021 08:23:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:45.021 08:23:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:46.926 08:23:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:46.926 
00:24:46.926 real 0m16.372s 00:24:46.926 user 0m20.183s 00:24:46.926 sys 0m5.267s 00:24:46.926 08:23:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:46.926 08:23:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:46.926 ************************************ 00:24:46.926 END TEST nvmf_host_discovery 00:24:46.926 ************************************ 00:24:47.186 08:23:29 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:24:47.186 08:23:29 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:47.186 08:23:29 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:47.186 08:23:29 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:47.186 ************************************ 00:24:47.186 START TEST nvmf_host_multipath_status 00:24:47.186 ************************************ 00:24:47.186 08:23:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:24:47.186 * Looking for test storage... 
00:24:47.186 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:47.186 08:23:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:24:47.186 08:23:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1693 -- # lcov --version 00:24:47.186 08:23:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:24:47.186 08:23:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:24:47.186 08:23:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:47.186 08:23:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:47.186 08:23:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:47.186 08:23:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # IFS=.-: 00:24:47.186 08:23:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # read -ra ver1 00:24:47.186 08:23:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # IFS=.-: 00:24:47.186 08:23:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # read -ra ver2 00:24:47.186 08:23:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@338 -- # local 'op=<' 00:24:47.187 08:23:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@340 -- # ver1_l=2 00:24:47.187 08:23:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@341 -- # ver2_l=1 00:24:47.187 08:23:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:47.187 08:23:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@344 -- # case "$op" in 00:24:47.187 08:23:29 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@345 -- # : 1 00:24:47.187 08:23:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:47.187 08:23:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:47.187 08:23:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # decimal 1 00:24:47.187 08:23:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=1 00:24:47.187 08:23:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:47.187 08:23:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 1 00:24:47.187 08:23:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # ver1[v]=1 00:24:47.187 08:23:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # decimal 2 00:24:47.187 08:23:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=2 00:24:47.187 08:23:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:47.187 08:23:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 2 00:24:47.187 08:23:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # ver2[v]=2 00:24:47.187 08:23:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:47.187 08:23:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:47.187 08:23:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # return 0 00:24:47.187 08:23:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:47.187 08:23:29 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:24:47.187 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:47.187 --rc genhtml_branch_coverage=1 00:24:47.187 --rc genhtml_function_coverage=1 00:24:47.187 --rc genhtml_legend=1 00:24:47.187 --rc geninfo_all_blocks=1 00:24:47.187 --rc geninfo_unexecuted_blocks=1 00:24:47.187 00:24:47.187 ' 00:24:47.187 08:23:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:24:47.187 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:47.187 --rc genhtml_branch_coverage=1 00:24:47.187 --rc genhtml_function_coverage=1 00:24:47.187 --rc genhtml_legend=1 00:24:47.187 --rc geninfo_all_blocks=1 00:24:47.187 --rc geninfo_unexecuted_blocks=1 00:24:47.187 00:24:47.187 ' 00:24:47.187 08:23:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:24:47.187 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:47.187 --rc genhtml_branch_coverage=1 00:24:47.187 --rc genhtml_function_coverage=1 00:24:47.187 --rc genhtml_legend=1 00:24:47.187 --rc geninfo_all_blocks=1 00:24:47.187 --rc geninfo_unexecuted_blocks=1 00:24:47.187 00:24:47.187 ' 00:24:47.187 08:23:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:24:47.187 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:47.187 --rc genhtml_branch_coverage=1 00:24:47.187 --rc genhtml_function_coverage=1 00:24:47.187 --rc genhtml_legend=1 00:24:47.187 --rc geninfo_all_blocks=1 00:24:47.187 --rc geninfo_unexecuted_blocks=1 00:24:47.187 00:24:47.187 ' 00:24:47.187 08:23:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:47.187 08:23:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:24:47.187 
08:23:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:47.187 08:23:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:47.187 08:23:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:47.187 08:23:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:47.187 08:23:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:47.187 08:23:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:47.187 08:23:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:47.187 08:23:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:47.187 08:23:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:47.187 08:23:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:47.187 08:23:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:24:47.187 08:23:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:24:47.187 08:23:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:47.187 08:23:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:47.187 08:23:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:47.187 08:23:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 
00:24:47.187 08:23:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:47.187 08:23:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@15 -- # shopt -s extglob 00:24:47.187 08:23:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:47.187 08:23:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:47.187 08:23:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:47.187 08:23:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:47.187 08:23:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:47.187 08:23:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:47.187 08:23:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:24:47.187 08:23:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:47.187 08:23:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # : 0 00:24:47.187 08:23:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:47.187 08:23:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:47.187 08:23:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:47.187 08:23:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:47.187 08:23:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:47.187 08:23:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:47.187 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:47.187 08:23:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:47.187 08:23:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:47.188 08:23:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:47.188 08:23:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 
00:24:47.188 08:23:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:24:47.188 08:23:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:24:47.188 08:23:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:24:47.188 08:23:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:47.188 08:23:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:24:47.188 08:23:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:24:47.188 08:23:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:47.188 08:23:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:47.188 08:23:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:47.188 08:23:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:47.188 08:23:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:47.188 08:23:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:47.188 08:23:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:47.188 08:23:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:47.448 08:23:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:47.448 08:23:29 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:47.448 08:23:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@309 -- # xtrace_disable 00:24:47.448 08:23:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:24:52.722 08:23:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:52.722 08:23:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # pci_devs=() 00:24:52.722 08:23:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:52.722 08:23:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:52.722 08:23:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:52.722 08:23:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:52.722 08:23:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:52.723 08:23:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # net_devs=() 00:24:52.723 08:23:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:52.723 08:23:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # e810=() 00:24:52.723 08:23:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # local -ga e810 00:24:52.723 08:23:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # x722=() 00:24:52.723 08:23:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # local -ga x722 00:24:52.723 08:23:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # mlx=() 00:24:52.723 08:23:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # local -ga mlx 
00:24:52.723 08:23:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:52.723 08:23:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:52.723 08:23:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:52.723 08:23:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:52.723 08:23:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:52.723 08:23:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:52.723 08:23:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:52.723 08:23:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:52.723 08:23:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:52.723 08:23:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:52.723 08:23:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:52.723 08:23:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:52.723 08:23:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:52.723 08:23:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:52.723 08:23:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@353 -- # [[ e810 
== mlx5 ]] 00:24:52.723 08:23:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:52.723 08:23:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:52.723 08:23:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:52.723 08:23:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:52.723 08:23:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:24:52.723 Found 0000:86:00.0 (0x8086 - 0x159b) 00:24:52.723 08:23:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:52.723 08:23:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:52.723 08:23:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:52.723 08:23:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:52.723 08:23:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:52.723 08:23:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:52.723 08:23:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:24:52.723 Found 0000:86:00.1 (0x8086 - 0x159b) 00:24:52.723 08:23:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:52.723 08:23:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:52.723 08:23:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:52.723 08:23:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:52.723 08:23:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:52.723 08:23:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:52.723 08:23:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:52.723 08:23:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:52.723 08:23:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:52.723 08:23:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:52.723 08:23:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:52.723 08:23:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:52.723 08:23:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:52.723 08:23:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:52.723 08:23:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:52.723 08:23:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:24:52.723 Found net devices under 0000:86:00.0: cvl_0_0 00:24:52.723 08:23:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:52.723 08:23:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:52.723 08:23:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:52.723 08:23:34 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:52.723 08:23:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:52.723 08:23:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:52.723 08:23:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:52.723 08:23:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:52.723 08:23:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:24:52.723 Found net devices under 0000:86:00.1: cvl_0_1 00:24:52.723 08:23:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:52.723 08:23:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:52.723 08:23:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # is_hw=yes 00:24:52.723 08:23:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:52.723 08:23:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:52.723 08:23:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:52.723 08:23:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:52.723 08:23:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:52.723 08:23:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:52.723 08:23:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:52.723 08:23:34 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:52.723 08:23:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:52.723 08:23:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:52.723 08:23:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:52.723 08:23:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:52.723 08:23:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:52.723 08:23:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:52.723 08:23:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:52.723 08:23:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:52.723 08:23:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:52.723 08:23:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:52.723 08:23:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:52.723 08:23:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:52.723 08:23:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:52.723 08:23:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:52.723 08:23:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:52.723 08:23:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:52.723 08:23:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:52.723 08:23:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:52.723 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:52.723 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.332 ms 00:24:52.723 00:24:52.723 --- 10.0.0.2 ping statistics --- 00:24:52.723 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:52.723 rtt min/avg/max/mdev = 0.332/0.332/0.332/0.000 ms 00:24:52.723 08:23:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:52.723 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:52.723 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.080 ms 00:24:52.723 00:24:52.723 --- 10.0.0.1 ping statistics --- 00:24:52.723 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:52.723 rtt min/avg/max/mdev = 0.080/0.080/0.080/0.000 ms 00:24:52.723 08:23:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:52.723 08:23:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # return 0 00:24:52.723 08:23:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:52.723 08:23:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:52.724 08:23:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:52.724 08:23:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:52.724 08:23:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:52.724 08:23:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:52.724 08:23:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:52.724 08:23:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:24:52.724 08:23:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:52.724 08:23:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:52.724 08:23:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:24:52.724 08:23:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@509 -- # nvmfpid=1470123 00:24:52.724 08:23:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
-- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:24:52.724 08:23:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@510 -- # waitforlisten 1470123 00:24:52.724 08:23:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 1470123 ']' 00:24:52.724 08:23:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:52.724 08:23:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:52.724 08:23:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:52.724 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:52.724 08:23:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:52.724 08:23:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:24:52.724 [2024-11-28 08:23:34.940310] Starting SPDK v25.01-pre git sha1 27aaaa748 / DPDK 24.03.0 initialization... 00:24:52.724 [2024-11-28 08:23:34.940355] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:52.994 [2024-11-28 08:23:35.007738] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:24:52.994 [2024-11-28 08:23:35.047585] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:52.994 [2024-11-28 08:23:35.047627] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:24:52.994 [2024-11-28 08:23:35.047635] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:52.994 [2024-11-28 08:23:35.047642] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:52.994 [2024-11-28 08:23:35.047648] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:52.994 [2024-11-28 08:23:35.048845] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:52.994 [2024-11-28 08:23:35.048849] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:52.994 08:23:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:52.994 08:23:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:24:52.994 08:23:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:52.994 08:23:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:52.994 08:23:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:24:52.995 08:23:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:52.995 08:23:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=1470123 00:24:52.995 08:23:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:24:53.256 [2024-11-28 08:23:35.350480] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:53.256 08:23:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_malloc_create 64 512 -b Malloc0 00:24:53.514 Malloc0 00:24:53.514 08:23:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:24:53.514 08:23:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:53.773 08:23:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:54.031 [2024-11-28 08:23:36.107664] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:54.031 08:23:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:24:54.031 [2024-11-28 08:23:36.292087] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:24:54.290 08:23:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:24:54.290 08:23:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=1470388 00:24:54.290 08:23:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:54.290 08:23:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 1470388 /var/tmp/bdevperf.sock 00:24:54.290 08:23:36 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 1470388 ']' 00:24:54.290 08:23:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:54.290 08:23:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:54.290 08:23:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:54.290 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:54.290 08:23:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:54.290 08:23:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:24:54.290 08:23:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:54.290 08:23:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:24:54.290 08:23:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:24:54.550 08:23:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:24:55.117 Nvme0n1 00:24:55.117 08:23:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n 
nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:24:55.685 Nvme0n1 00:24:55.685 08:23:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:24:55.685 08:23:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:24:57.592 08:23:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:24:57.592 08:23:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:24:57.852 08:23:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:24:57.852 08:23:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:24:59.231 08:23:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:24:59.231 08:23:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:24:59.231 08:23:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:59.231 08:23:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:59.231 08:23:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:59.231 08:23:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:24:59.231 08:23:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:59.231 08:23:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:59.490 08:23:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:59.490 08:23:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:59.490 08:23:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:59.490 08:23:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:59.490 08:23:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:59.490 08:23:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:59.490 08:23:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:59.490 08:23:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:59.749 08:23:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
-- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:59.749 08:23:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:59.749 08:23:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:59.749 08:23:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:00.008 08:23:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:00.008 08:23:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:25:00.008 08:23:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:00.008 08:23:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:00.267 08:23:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:00.267 08:23:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:25:00.267 08:23:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:25:00.526 08:23:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized
00:25:00.526 08:23:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1
00:25:01.901 08:23:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true
00:25:01.901 08:23:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false
00:25:01.901 08:23:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:25:01.901 08:23:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:25:01.901 08:23:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:25:01.901 08:23:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true
00:25:01.901 08:23:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:25:01.901 08:23:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:25:02.160 08:23:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:25:02.160 08:23:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:25:02.160 08:23:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:25:02.160 08:23:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:25:02.160 08:23:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:25:02.160 08:23:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:25:02.160 08:23:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:25:02.160 08:23:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:25:02.419 08:23:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:25:02.419 08:23:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true
00:25:02.419 08:23:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:25:02.419 08:23:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:25:02.678 08:23:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:25:02.678 08:23:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true
00:25:02.678 08:23:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:25:02.678 08:23:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:25:02.937 08:23:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:25:02.937 08:23:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized
00:25:02.937 08:23:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized
00:25:02.937 08:23:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized
00:25:03.194 08:23:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1
00:25:04.572 08:23:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true
00:25:04.572 08:23:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true
00:25:04.572 08:23:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:25:04.572 08:23:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:25:04.572 08:23:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:25:04.572 08:23:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false
00:25:04.572 08:23:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:25:04.572 08:23:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:25:04.572 08:23:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:25:04.572 08:23:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:25:04.572 08:23:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:25:04.572 08:23:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:25:04.831 08:23:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:25:04.831 08:23:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:25:04.831 08:23:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:25:04.831 08:23:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:25:05.091 08:23:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:25:05.091 08:23:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true
00:25:05.091 08:23:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:25:05.091 08:23:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:25:05.350 08:23:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:25:05.350 08:23:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true
00:25:05.350 08:23:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:25:05.350 08:23:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:25:05.609 08:23:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:25:05.609 08:23:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible
00:25:05.609 08:23:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized
00:25:05.609 08:23:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible
00:25:05.868 08:23:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1
00:25:07.247 08:23:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false
00:25:07.247 08:23:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true
00:25:07.247 08:23:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:25:07.247 08:23:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:25:07.247 08:23:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:25:07.247 08:23:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false
00:25:07.247 08:23:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:25:07.247 08:23:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:25:07.247 08:23:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:25:07.247 08:23:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:25:07.247 08:23:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:25:07.247 08:23:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:25:07.506 08:23:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:25:07.506 08:23:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:25:07.506 08:23:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:25:07.506 08:23:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:25:07.765 08:23:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:25:07.765 08:23:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true
00:25:07.765 08:23:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:25:07.765 08:23:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:25:08.028 08:23:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:25:08.028 08:23:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false
00:25:08.028 08:23:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:25:08.028 08:23:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:25:08.301 08:23:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:25:08.301 08:23:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible
00:25:08.301 08:23:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible
00:25:08.301 08:23:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible
00:25:08.570 08:23:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1
00:25:09.556 08:23:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false
00:25:09.556 08:23:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false
00:25:09.556 08:23:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:25:09.556 08:23:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:25:09.820 08:23:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:25:09.820 08:23:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false
00:25:09.820 08:23:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:25:09.820 08:23:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:25:10.092 08:23:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:25:10.092 08:23:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:25:10.092 08:23:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:25:10.092 08:23:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:25:10.376 08:23:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:25:10.376 08:23:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:25:10.377 08:23:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:25:10.377 08:23:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:25:10.377 08:23:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:25:10.377 08:23:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false
00:25:10.377 08:23:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:25:10.377 08:23:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:25:10.652 08:23:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:25:10.652 08:23:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false
00:25:10.652 08:23:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:25:10.652 08:23:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:25:10.927 08:23:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:25:10.927 08:23:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized
00:25:10.927 08:23:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible
00:25:10.927 08:23:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized
00:25:11.239 08:23:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1
00:25:12.255 08:23:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true
00:25:12.255 08:23:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false
00:25:12.255 08:23:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:25:12.255 08:23:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:25:12.514 08:23:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:25:12.514 08:23:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true
00:25:12.514 08:23:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:25:12.514 08:23:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:25:12.514 08:23:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:25:12.514 08:23:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:25:12.514 08:23:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:25:12.514 08:23:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:25:12.774 08:23:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:25:12.774 08:23:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:25:12.774 08:23:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:25:12.774 08:23:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:25:13.041 08:23:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:25:13.041 08:23:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false
00:25:13.041 08:23:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:25:13.041 08:23:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:25:13.299 08:23:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:25:13.299 08:23:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true
00:25:13.299 08:23:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:25:13.299 08:23:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:25:13.558 08:23:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:25:13.558 08:23:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active
00:25:13.558 08:23:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized
00:25:13.558 08:23:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized
00:25:13.816 08:23:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized
00:25:14.075 08:23:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1
00:25:15.012 08:23:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true
00:25:15.013 08:23:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true
00:25:15.013 08:23:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:25:15.013 08:23:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:25:15.272 08:23:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:25:15.272 08:23:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true
00:25:15.272 08:23:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:25:15.272 08:23:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:25:15.531 08:23:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:25:15.531 08:23:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:25:15.531 08:23:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:25:15.531 08:23:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:25:15.791 08:23:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:25:15.791 08:23:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:25:15.791 08:23:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:25:15.791 08:23:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:25:15.791 08:23:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:25:15.791 08:23:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true
00:25:15.791 08:23:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:25:15.791 08:23:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:25:16.050 08:23:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:25:16.050 08:23:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true
00:25:16.050 08:23:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:25:16.050 08:23:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:25:16.309 08:23:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:25:16.309 08:23:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized
00:25:16.309 08:23:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized
00:25:16.568 08:23:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized
00:25:16.827 08:23:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1
00:25:17.765 08:23:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true
00:25:17.765 08:23:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false
00:25:17.765 08:23:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:25:17.765 08:23:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:25:18.024 08:24:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:25:18.024 08:24:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true
00:25:18.024 08:24:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:25:18.024 08:24:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:25:18.024 08:24:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:25:18.024 08:24:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:25:18.024 08:24:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:25:18.024 08:24:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:25:18.283 08:24:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:25:18.283 08:24:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:25:18.283 08:24:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:25:18.283 08:24:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:25:18.543 08:24:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:25:18.543 08:24:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true
00:25:18.543 08:24:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:25:18.543 08:24:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:25:18.802 08:24:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:25:18.802 08:24:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true
00:25:18.802 08:24:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:25:18.802 08:24:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:25:19.062 08:24:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:25:19.062 08:24:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized
00:25:19.062 08:24:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized
00:25:19.062 08:24:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized
00:25:19.320 08:24:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1
00:25:20.257 08:24:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true
00:25:20.257 08:24:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true
00:25:20.257 08:24:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:25:20.257 08:24:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:25:20.515 08:24:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:25:20.516 08:24:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true
00:25:20.516 08:24:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:25:20.516 08:24:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:25:20.775 08:24:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:25:20.775 08:24:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:25:20.775 08:24:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:25:20.775 08:24:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:25:21.035 08:24:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:25:21.035 08:24:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:25:21.035 08:24:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:25:21.035 08:24:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:25:21.294 08:24:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:25:21.294 08:24:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true
00:25:21.294 08:24:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:25:21.294 08:24:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:25:21.294 08:24:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:25:21.294 08:24:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true
00:25:21.294 08:24:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:25:21.294 08:24:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:25:21.553 08:24:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:25:21.553 08:24:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible
00:25:21.553 08:24:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state
nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:25:21.813 08:24:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:25:22.071 08:24:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:25:23.005 08:24:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:25:23.005 08:24:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:25:23.005 08:24:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:23.005 08:24:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:23.262 08:24:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:23.262 08:24:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:25:23.262 08:24:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:23.262 08:24:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:23.521 08:24:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:23.521 08:24:05 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:23.521 08:24:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:23.521 08:24:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:23.521 08:24:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:23.521 08:24:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:23.521 08:24:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:23.521 08:24:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:23.780 08:24:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:23.780 08:24:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:23.780 08:24:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:23.780 08:24:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:24.038 08:24:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:24.038 
08:24:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:25:24.038 08:24:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:24.038 08:24:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:24.297 08:24:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:24.297 08:24:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 1470388 00:25:24.297 08:24:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 1470388 ']' 00:25:24.297 08:24:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 1470388 00:25:24.297 08:24:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname 00:25:24.297 08:24:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:24.297 08:24:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1470388 00:25:24.297 08:24:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:25:24.297 08:24:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:25:24.297 08:24:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1470388' 00:25:24.297 killing process with pid 1470388 00:25:24.297 08:24:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 1470388 00:25:24.297 
08:24:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 1470388
00:25:24.297 {
00:25:24.297   "results": [
00:25:24.297     {
00:25:24.297       "job": "Nvme0n1",
00:25:24.297       "core_mask": "0x4",
00:25:24.297       "workload": "verify",
00:25:24.297       "status": "terminated",
00:25:24.297       "verify_range": {
00:25:24.297         "start": 0,
00:25:24.297         "length": 16384
00:25:24.297       },
00:25:24.297       "queue_depth": 128,
00:25:24.297       "io_size": 4096,
00:25:24.297       "runtime": 28.629251,
00:25:24.297       "iops": 10212.457182341235,
00:25:24.297       "mibps": 39.89241086852045,
00:25:24.297       "io_failed": 0,
00:25:24.297       "io_timeout": 0,
00:25:24.297       "avg_latency_us": 12512.494154567727,
00:25:24.297       "min_latency_us": 566.3165217391304,
00:25:24.297       "max_latency_us": 3078254.4139130437
00:25:24.297     }
00:25:24.297   ],
00:25:24.297   "core_count": 1
00:25:24.297 }
00:25:24.578 08:24:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 1470388
00:25:24.578 08:24:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:25:24.578 [2024-11-28 08:23:36.344977] Starting SPDK v25.01-pre git sha1 27aaaa748 / DPDK 24.03.0 initialization...
00:25:24.578 [2024-11-28 08:23:36.345032] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1470388 ]
00:25:24.578 [2024-11-28 08:23:36.404247] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:25:24.578 [2024-11-28 08:23:36.445298] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:25:24.578 Running I/O for 90 seconds...
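Editor's note: the trace above repeatedly checks path state by piping `bdev_nvme_get_io_paths` output through a jq filter (`select(.transport.trsvcid=="...")`) and comparing the result against the expected `true`/`false`. A minimal Python sketch of that same check, run against a hypothetical canned response (the JSON shape is inferred from the jq filters in the trace; the values are made up for illustration, not taken from this run), and a cross-check of the bdevperf summary arithmetic (MiB/s = IOPS * io_size / 2^20):

```python
import json

# Hypothetical bdev_nvme_get_io_paths-style response; shape inferred from the
# jq filters above, values invented for illustration.
raw = json.dumps({"poll_groups": [{"io_paths": [
    {"transport": {"trsvcid": "4420"},
     "current": True, "connected": True, "accessible": True},
    {"transport": {"trsvcid": "4421"},
     "current": False, "connected": True, "accessible": False},
]}]})

def port_status(doc: str, port: str, field: str) -> bool:
    # Equivalent of:
    #   jq -r '.poll_groups[].io_paths[]
    #          | select(.transport.trsvcid=="<port>").<field>'
    for group in json.loads(doc)["poll_groups"]:
        for path in group["io_paths"]:
            if path["transport"]["trsvcid"] == port:
                return path[field]
    raise KeyError(port)

print(port_status(raw, "4420", "connected"))   # → True
print(port_status(raw, "4421", "accessible"))  # → False

# Cross-check the bdevperf summary: mibps = iops * io_size / 2**20.
mibps = 10212.457182341235 * 4096 / 2**20
print(round(mibps, 2))  # → 39.89, matching "mibps" in the results JSON
```

With `io_size` fixed at 4096, the MiB/s figure is simply IOPS/256, which is why the two summary fields track each other exactly.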
00:25:24.578 11083.00 IOPS, 43.29 MiB/s [2024-11-28T07:24:06.847Z] 11135.50 IOPS, 43.50 MiB/s [2024-11-28T07:24:06.847Z] 11082.00 IOPS, 43.29 MiB/s [2024-11-28T07:24:06.847Z] 11127.25 IOPS, 43.47 MiB/s [2024-11-28T07:24:06.847Z] 11135.20 IOPS, 43.50 MiB/s [2024-11-28T07:24:06.847Z] 11124.67 IOPS, 43.46 MiB/s [2024-11-28T07:24:06.847Z] 11115.14 IOPS, 43.42 MiB/s [2024-11-28T07:24:06.847Z] 11097.00 IOPS, 43.35 MiB/s [2024-11-28T07:24:06.847Z] 11082.56 IOPS, 43.29 MiB/s [2024-11-28T07:24:06.847Z] 11057.40 IOPS, 43.19 MiB/s [2024-11-28T07:24:06.847Z] 11052.27 IOPS, 43.17 MiB/s [2024-11-28T07:24:06.847Z] 11042.75 IOPS, 43.14 MiB/s [2024-11-28T07:24:06.847Z] [2024-11-28 08:23:50.502369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:71992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.578 [2024-11-28 08:23:50.502406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:24.578 [2024-11-28 08:23:50.502432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:72000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.579 [2024-11-28 08:23:50.502442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:24.579 [2024-11-28 08:23:50.502457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:72008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.579 [2024-11-28 08:23:50.502466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.579 [2024-11-28 08:23:50.502482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:72016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.579 [2024-11-28 08:23:50.502492] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:24.579 [2024-11-28 08:23:50.502508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:72024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.579 [2024-11-28 08:23:50.502518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:24.579 [2024-11-28 08:23:50.502535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:72032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.579 [2024-11-28 08:23:50.502546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:25:24.579 [2024-11-28 08:23:50.502562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:72040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.579 [2024-11-28 08:23:50.502573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:25:24.579 [2024-11-28 08:23:50.502590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:72048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.579 [2024-11-28 08:23:50.502601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:25:24.579 [2024-11-28 08:23:50.502618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:72056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.579 [2024-11-28 08:23:50.502629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:25:24.579 [2024-11-28 08:23:50.502645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:4 nsid:1 lba:72064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.579 [2024-11-28 08:23:50.502661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:25:24.579 [2024-11-28 08:23:50.502677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:72072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.579 [2024-11-28 08:23:50.502687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:25:24.579 [2024-11-28 08:23:50.502702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:72080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.579 [2024-11-28 08:23:50.502712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:25:24.579 [2024-11-28 08:23:50.502728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:72088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.579 [2024-11-28 08:23:50.502738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:25:24.579 [2024-11-28 08:23:50.502754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:72096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.579 [2024-11-28 08:23:50.502764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:25:24.579 [2024-11-28 08:23:50.502780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:72104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.579 [2024-11-28 08:23:50.502789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:12 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:25:24.579 [2024-11-28 08:23:50.502804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:72112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.579 [2024-11-28 08:23:50.502813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:25:24.579 [2024-11-28 08:23:50.502828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:72120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.579 [2024-11-28 08:23:50.502837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:25:24.579 [2024-11-28 08:23:50.502853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:72128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.579 [2024-11-28 08:23:50.502863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:24.579 [2024-11-28 08:23:50.502880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:72136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.579 [2024-11-28 08:23:50.502891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:24.579 [2024-11-28 08:23:50.502908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:72144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.579 [2024-11-28 08:23:50.502920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:24.579 [2024-11-28 08:23:50.502937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:72152 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.579 [2024-11-28 08:23:50.502955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:25:24.579 [2024-11-28 08:23:50.502971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.579 [2024-11-28 08:23:50.502986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:25:24.579 [2024-11-28 08:23:50.503003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:72168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.579 [2024-11-28 08:23:50.503015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:25:24.579 [2024-11-28 08:23:50.503032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:72176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.579 [2024-11-28 08:23:50.503044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:25:24.579 [2024-11-28 08:23:50.503062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:72184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.579 [2024-11-28 08:23:50.503074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:25:24.579 [2024-11-28 08:23:50.503091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:72192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.579 [2024-11-28 08:23:50.503104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 
sqhd:0017 p:0 m:0 dnr:0 00:25:24.579 [2024-11-28 08:23:50.503121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:72200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.579 [2024-11-28 08:23:50.503133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:25:24.579 [2024-11-28 08:23:50.503150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:72208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.579 [2024-11-28 08:23:50.503162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:25:24.579 [2024-11-28 08:23:50.503180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:72216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.579 [2024-11-28 08:23:50.503191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:25:24.579 [2024-11-28 08:23:50.503209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:72224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.579 [2024-11-28 08:23:50.503220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:25:24.579 [2024-11-28 08:23:50.503238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:72232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.579 [2024-11-28 08:23:50.503250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:25:24.579 [2024-11-28 08:23:50.503268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:72240 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:25:24.579 [2024-11-28 08:23:50.503279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:25:24.579 [2024-11-28 08:23:50.503297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:72248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.579 [2024-11-28 08:23:50.503309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:25:24.579 [2024-11-28 08:23:50.503328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:72256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.579 [2024-11-28 08:23:50.503339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:25:24.579 [2024-11-28 08:23:50.503359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:72264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.579 [2024-11-28 08:23:50.503371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:25:24.579 [2024-11-28 08:23:50.503389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:72272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.579 [2024-11-28 08:23:50.503401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:24.579 [2024-11-28 08:23:50.503418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:72280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.579 [2024-11-28 08:23:50.503430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 
00:25:24.579 [2024-11-28 08:23:50.503447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:72288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.579 [2024-11-28 08:23:50.503459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:25:24.579 [2024-11-28 08:23:50.503476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:72296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.579 [2024-11-28 08:23:50.503488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:25:24.579 [2024-11-28 08:23:50.503505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:72304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.580 [2024-11-28 08:23:50.503517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:25:24.580 [2024-11-28 08:23:50.503534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:72312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.580 [2024-11-28 08:23:50.503547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:25:24.580 [2024-11-28 08:23:50.503565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:72320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.580 [2024-11-28 08:23:50.503576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:25:24.580 [2024-11-28 08:23:50.503594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:72328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.580 
[2024-11-28 08:23:50.503606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:25:24.580 [2024-11-28 08:23:50.503623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:72336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.580 [2024-11-28 08:23:50.503636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:25:24.580 [2024-11-28 08:23:50.503654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:72344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.580 [2024-11-28 08:23:50.503666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:25:24.580 [2024-11-28 08:23:50.503684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:72352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.580 [2024-11-28 08:23:50.503695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:25:24.580 [2024-11-28 08:23:50.503715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:72360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.580 [2024-11-28 08:23:50.503727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:25:24.580 [2024-11-28 08:23:50.503745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:72368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.580 [2024-11-28 08:23:50.503757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:25:24.580 [2024-11-28 
08:23:50.503774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:72376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.580 [2024-11-28 08:23:50.503786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:25:24.580 [2024-11-28 08:23:50.503803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:72384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.580 [2024-11-28 08:23:50.503816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:25:24.580 [2024-11-28 08:23:50.503833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:72392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.580 [2024-11-28 08:23:50.503846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:25:24.580 [2024-11-28 08:23:50.503864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:72400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.580 [2024-11-28 08:23:50.503876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:25:24.580 [2024-11-28 08:23:50.503894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:72408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.580 [2024-11-28 08:23:50.503905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:25:24.580 [2024-11-28 08:23:50.503922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:72416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.580 [2024-11-28 08:23:50.503934] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:25:24.580 [2024-11-28 08:23:50.503959 .. 08:23:50.508266] nvme_qpair.c: (repeated command/completion notice pairs elided for brevity: WRITE sqid:1 nsid:1 lba:72424-72864 and lba:71992-72224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000, and READ sqid:1 nsid:1 lba:71848-71984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each followed by spdk_nvme_print_completion *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cdw0:0 sqhd:0034-007f wrapping to 0000-001b p:0 m:0 dnr:0) 00:25:24.583 [2024-11-28 08:23:50.508283] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:72232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.583 [2024-11-28 08:23:50.508295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:25:24.583 [2024-11-28 08:23:50.508312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:72240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.583 [2024-11-28 08:23:50.508324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:25:24.583 [2024-11-28 08:23:50.508341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:72248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.583 [2024-11-28 08:23:50.508352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:25:24.583 [2024-11-28 08:23:50.508371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:72256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.583 [2024-11-28 08:23:50.508383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:25:24.583 [2024-11-28 08:23:50.508399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:72264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.583 [2024-11-28 08:23:50.508412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:25:24.583 [2024-11-28 08:23:50.508429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:72272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.583 [2024-11-28 08:23:50.508440] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:24.583 [2024-11-28 08:23:50.508458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:72280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.583 [2024-11-28 08:23:50.508469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:24.583 [2024-11-28 08:23:50.508487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:72288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.583 [2024-11-28 08:23:50.508498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:25:24.583 [2024-11-28 08:23:50.508515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:72296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.583 [2024-11-28 08:23:50.508527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:25:24.583 [2024-11-28 08:23:50.508544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:72304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.583 [2024-11-28 08:23:50.508556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:25:24.583 [2024-11-28 08:23:50.508573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:72312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.583 [2024-11-28 08:23:50.508586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:25:24.583 [2024-11-28 08:23:50.508604] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.583 [2024-11-28 08:23:50.508616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:25:24.583 [2024-11-28 08:23:50.508633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:72328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.583 [2024-11-28 08:23:50.508645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:25:24.583 [2024-11-28 08:23:50.508662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:72336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.583 [2024-11-28 08:23:50.508673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:25:24.583 [2024-11-28 08:23:50.508690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:72344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.583 [2024-11-28 08:23:50.508703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:25:24.583 [2024-11-28 08:23:50.508720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:72352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.583 [2024-11-28 08:23:50.508732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:25:24.583 [2024-11-28 08:23:50.508749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:72360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.583 [2024-11-28 08:23:50.508760] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:25:24.583 [2024-11-28 08:23:50.508778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:72368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.583 [2024-11-28 08:23:50.508789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:25:24.583 [2024-11-28 08:23:50.508806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:72376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.583 [2024-11-28 08:23:50.508818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:25:24.583 [2024-11-28 08:23:50.508838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:72384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.583 [2024-11-28 08:23:50.508849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:25:24.583 [2024-11-28 08:23:50.508867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:72392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.583 [2024-11-28 08:23:50.508878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:25:24.583 [2024-11-28 08:23:50.508896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:72400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.583 [2024-11-28 08:23:50.508907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:25:24.583 [2024-11-28 08:23:50.508924] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:72408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.583 [2024-11-28 08:23:50.508936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:25:24.583 [2024-11-28 08:23:50.508960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:72416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.583 [2024-11-28 08:23:50.508972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:25:24.583 [2024-11-28 08:23:50.508989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:72424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.583 [2024-11-28 08:23:50.509001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:25:24.583 [2024-11-28 08:23:50.509018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:72432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.583 [2024-11-28 08:23:50.509031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:25:24.583 [2024-11-28 08:23:50.509048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:72440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.583 [2024-11-28 08:23:50.509060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:25:24.584 [2024-11-28 08:23:50.509077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:72448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.584 [2024-11-28 08:23:50.509088] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:25:24.584 [2024-11-28 08:23:50.509105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:72456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.584 [2024-11-28 08:23:50.509117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:25:24.584 [2024-11-28 08:23:50.509134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:72464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.584 [2024-11-28 08:23:50.509146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:25:24.584 [2024-11-28 08:23:50.509163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:72472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.584 [2024-11-28 08:23:50.509176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:25:24.584 [2024-11-28 08:23:50.509193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:72480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.584 [2024-11-28 08:23:50.509204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:25:24.584 [2024-11-28 08:23:50.509222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:72488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.584 [2024-11-28 08:23:50.509233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:25:24.584 [2024-11-28 08:23:50.509873] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:72496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.584 [2024-11-28 08:23:50.509895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:25:24.584 [2024-11-28 08:23:50.509917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:72504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.584 [2024-11-28 08:23:50.509929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:25:24.584 [2024-11-28 08:23:50.509953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:72512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.584 [2024-11-28 08:23:50.509970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:25:24.584 [2024-11-28 08:23:50.509987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:72520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.584 [2024-11-28 08:23:50.509998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:25:24.584 [2024-11-28 08:23:50.510016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:72528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.584 [2024-11-28 08:23:50.510028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:24.584 [2024-11-28 08:23:50.510045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:72536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.584 [2024-11-28 08:23:50.510057] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:24.584 [2024-11-28 08:23:50.510073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:72544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.584 [2024-11-28 08:23:50.510085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:25:24.584 [2024-11-28 08:23:50.510102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:72552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.584 [2024-11-28 08:23:50.510113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:25:24.584 [2024-11-28 08:23:50.510130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:71848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.584 [2024-11-28 08:23:50.510142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:25:24.584 [2024-11-28 08:23:50.510159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:71856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.584 [2024-11-28 08:23:50.510171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:25:24.584 [2024-11-28 08:23:50.510188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:72560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.584 [2024-11-28 08:23:50.510199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:25:24.584 [2024-11-28 08:23:50.510217] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:72568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.584 [2024-11-28 08:23:50.510228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:25:24.584 [2024-11-28 08:23:50.510245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:72576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.584 [2024-11-28 08:23:50.510257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:25:24.584 [2024-11-28 08:23:50.510274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:72584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.584 [2024-11-28 08:23:50.510285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:25:24.584 [2024-11-28 08:23:50.510302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:72592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.584 [2024-11-28 08:23:50.510316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:25:24.584 [2024-11-28 08:23:50.510334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:72600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.584 [2024-11-28 08:23:50.510346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:25:24.584 [2024-11-28 08:23:50.510363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:72608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.584 [2024-11-28 08:23:50.510374] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:25:24.584 [2024-11-28 08:23:50.510391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:72616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.584 [2024-11-28 08:23:50.510403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:25:24.584 [2024-11-28 08:23:50.510420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:72624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.584 [2024-11-28 08:23:50.510431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:25:24.584 [2024-11-28 08:23:50.510449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:72632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.584 [2024-11-28 08:23:50.510460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:25:24.584 [2024-11-28 08:23:50.510478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:72640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.584 [2024-11-28 08:23:50.510489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:25:24.584 [2024-11-28 08:23:50.510506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:72648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.584 [2024-11-28 08:23:50.510517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:25:24.584 [2024-11-28 08:23:50.510535] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:72656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.584 [2024-11-28 08:23:50.510546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:25:24.584 [2024-11-28 08:23:50.510563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:72664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.584 [2024-11-28 08:23:50.510574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:25:24.584 [2024-11-28 08:23:50.510591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:72672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.584 [2024-11-28 08:23:50.510602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:25:24.584 [2024-11-28 08:23:50.510619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:72680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.584 [2024-11-28 08:23:50.510630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:25:24.584 [2024-11-28 08:23:50.510647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:72688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.584 [2024-11-28 08:23:50.510658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:25:24.584 [2024-11-28 08:23:50.510677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:72696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.584 [2024-11-28 08:23:50.510687] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:25:24.584 [2024-11-28 08:23:50.510705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:72704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.584 [2024-11-28 08:23:50.510715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:25:24.584 [2024-11-28 08:23:50.510732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:72712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.584 [2024-11-28 08:23:50.510743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:25:24.584 [2024-11-28 08:23:50.510760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:72720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.584 [2024-11-28 08:23:50.510770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:25:24.584 [2024-11-28 08:23:50.510788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:72728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.584 [2024-11-28 08:23:50.510799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:25:24.584 [2024-11-28 08:23:50.510816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:72736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.584 [2024-11-28 08:23:50.510827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:25:24.585 [2024-11-28 08:23:50.510844] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:72744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.585 [2024-11-28 08:23:50.510855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:25:24.585 [2024-11-28 08:23:50.510872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:72752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.585 [2024-11-28 08:23:50.510883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:25:24.585 [2024-11-28 08:23:50.510900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:72760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.585 [2024-11-28 08:23:50.510911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:24.585 [2024-11-28 08:23:50.510927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:72768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.585 [2024-11-28 08:23:50.510940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:24.585 [2024-11-28 08:23:50.510961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:72776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.585 [2024-11-28 08:23:50.510973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:24.585 [2024-11-28 08:23:50.510990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:72784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.585 [2024-11-28 08:23:50.511001] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:25:24.585 [2024-11-28 08:23:50.511025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:72792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.585 [2024-11-28 08:23:50.511036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:25:24.585 [2024-11-28 08:23:50.511053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:72800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.585 [2024-11-28 08:23:50.511064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:25:24.585 [2024-11-28 08:23:50.511081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:72808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.585 [2024-11-28 08:23:50.511093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:25:24.585 [2024-11-28 08:23:50.511110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:72816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.585 [2024-11-28 08:23:50.511121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:24.585 [2024-11-28 08:23:50.511138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:71864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.585 [2024-11-28 08:23:50.511149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:25:24.585 [2024-11-28 08:23:50.511166] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:71872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.585 [2024-11-28 08:23:50.511177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:25:24.585 [2024-11-28 08:23:50.511197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:71880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.585 [2024-11-28 08:23:50.511209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:25:24.585 [2024-11-28 08:23:50.511226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:71888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.585 [2024-11-28 08:23:50.511237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:25:24.585 [2024-11-28 08:23:50.511254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:71896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.585 [2024-11-28 08:23:50.511266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:25:24.585 [2024-11-28 08:23:50.511282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:71904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.585 [2024-11-28 08:23:50.511294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:25:24.585 [2024-11-28 08:23:50.511311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:71912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.585 [2024-11-28 08:23:50.511322] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:25:24.585 [2024-11-28 08:23:50.511339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:71920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.585 [2024-11-28 08:23:50.511351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:24.585 [2024-11-28 08:23:50.511370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:71928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.585 [2024-11-28 08:23:50.511382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:25:24.585 [2024-11-28 08:23:50.511399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:71936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.585 [2024-11-28 08:23:50.511411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:24.585 [2024-11-28 08:23:50.511428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:71944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.585 [2024-11-28 08:23:50.511439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:25:24.585 [2024-11-28 08:23:50.511456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:71952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.585 [2024-11-28 08:23:50.511467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:25:24.585 [2024-11-28 08:23:50.511484] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:71960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.585 [2024-11-28 08:23:50.511496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:24.585 [2024-11-28 08:23:50.511513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:71968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.585 [2024-11-28 08:23:50.511524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:25:24.585 [2024-11-28 08:23:50.511541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:71976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.585 [2024-11-28 08:23:50.511553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:25:24.585 [2024-11-28 08:23:50.511571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:71984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.585 [2024-11-28 08:23:50.511582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:24.585 [2024-11-28 08:23:50.511599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:72824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.585 [2024-11-28 08:23:50.511610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:25:24.585 [2024-11-28 08:23:50.511627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:72832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.585 [2024-11-28 08:23:50.511638] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:25:24.585 [2024-11-28 08:23:50.511657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:72840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.585 [2024-11-28 08:23:50.511668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:25:24.585 [2024-11-28 08:23:50.511686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:72848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.585 [2024-11-28 08:23:50.511697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:25:24.585 [2024-11-28 08:23:50.512312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:72856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.585 [2024-11-28 08:23:50.512335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:24.585 [2024-11-28 08:23:50.512357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:72864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.585 [2024-11-28 08:23:50.512367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:25:24.585 [2024-11-28 08:23:50.512385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:71992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.585 [2024-11-28 08:23:50.512395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:24.585 [2024-11-28 08:23:50.512414] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:72000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.585 [2024-11-28 08:23:50.512425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:24.585 [2024-11-28 08:23:50.512443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:72008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.585 [2024-11-28 08:23:50.512454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.585 [2024-11-28 08:23:50.512471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:72016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.585 [2024-11-28 08:23:50.512482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:24.585 [2024-11-28 08:23:50.512499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:72024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.585 [2024-11-28 08:23:50.512510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:24.585 [2024-11-28 08:23:50.512527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:72032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.585 [2024-11-28 08:23:50.512538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:25:24.585 [2024-11-28 08:23:50.512554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:72040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.586 [2024-11-28 08:23:50.512565] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:25:24.586 [2024-11-28 08:23:50.512582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:72048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.586 [2024-11-28 08:23:50.512593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:25:24.586 [2024-11-28 08:23:50.512611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:72056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.586 [2024-11-28 08:23:50.512623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:25:24.586 [2024-11-28 08:23:50.512640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:72064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.586 [2024-11-28 08:23:50.512651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:25:24.586 [2024-11-28 08:23:50.512669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:72072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.586 [2024-11-28 08:23:50.512683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:25:24.586 [2024-11-28 08:23:50.512700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:72080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.586 [2024-11-28 08:23:50.512712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:25:24.586 [2024-11-28 08:23:50.512729] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:72088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.586 [2024-11-28 08:23:50.512740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:25:24.586 [2024-11-28 08:23:50.512757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:72096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.586 [2024-11-28 08:23:50.512769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:25:24.586 [2024-11-28 08:23:50.512786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:72104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.586 [2024-11-28 08:23:50.512797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:25:24.586 [2024-11-28 08:23:50.512814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:72112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.586 [2024-11-28 08:23:50.512825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:25:24.586 [2024-11-28 08:23:50.512842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:72120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.586 [2024-11-28 08:23:50.512853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:25:24.586 [2024-11-28 08:23:50.512872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:72128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.586 [2024-11-28 08:23:50.512883] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:24.586 [2024-11-28 08:23:50.512900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:72136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.586 [2024-11-28 08:23:50.512913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:24.586 [2024-11-28 08:23:50.512930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.586 [2024-11-28 08:23:50.512942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:24.586 [2024-11-28 08:23:50.512966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:72152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.586 [2024-11-28 08:23:50.512977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:25:24.586 [2024-11-28 08:23:50.512995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:72160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.586 [2024-11-28 08:23:50.513006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:25:24.586 [2024-11-28 08:23:50.513024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:72168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.586 [2024-11-28 08:23:50.513035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:25:24.586 [2024-11-28 08:23:50.513054] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:72176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.586 [2024-11-28 08:23:50.513065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:25:24.586 [2024-11-28 08:23:50.513084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:72184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.586 [2024-11-28 08:23:50.513094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:25:24.586 [2024-11-28 08:23:50.513112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:72192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.586 [2024-11-28 08:23:50.513122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:25:24.586 [2024-11-28 08:23:50.513139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:72200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.586 [2024-11-28 08:23:50.513150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:25:24.586 [2024-11-28 08:23:50.513167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:72208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.586 [2024-11-28 08:23:50.513178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:25:24.586 [2024-11-28 08:23:50.513194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:72216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.586 [2024-11-28 08:23:50.513206] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:25:24.586 [2024-11-28 08:23:50.513223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:72224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.586 [2024-11-28 08:23:50.513234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:25:24.586 [2024-11-28 08:23:50.513251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:72232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.586 [2024-11-28 08:23:50.513262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:25:24.586 [2024-11-28 08:23:50.513279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:72240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.586 [2024-11-28 08:23:50.513290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:25:24.586 [2024-11-28 08:23:50.513307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:72248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.586 [2024-11-28 08:23:50.513319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:25:24.586 [2024-11-28 08:23:50.513337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:72256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.586 [2024-11-28 08:23:50.513348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:25:24.586 [2024-11-28 08:23:50.513365] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:72264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.586 [2024-11-28 08:23:50.513377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:25:24.586 [2024-11-28 08:23:50.513399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:72272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.586 [2024-11-28 08:23:50.513411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:24.586 [2024-11-28 08:23:50.513428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:72280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.586 [2024-11-28 08:23:50.513439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:24.586 [2024-11-28 08:23:50.513456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:72288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.586 [2024-11-28 08:23:50.513467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:25:24.586 [2024-11-28 08:23:50.513485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:72296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.586 [2024-11-28 08:23:50.513496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:25:24.586 [2024-11-28 08:23:50.513513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:72304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.586 [2024-11-28 08:23:50.513524] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:25:24.586 [2024-11-28 08:23:50.513541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:72312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.586 [2024-11-28 08:23:50.513552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:25:24.586 [2024-11-28 08:23:50.513569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:72320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.586 [2024-11-28 08:23:50.513580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:25:24.586 [2024-11-28 08:23:50.513597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:72328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.586 [2024-11-28 08:23:50.513607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:25:24.586 [2024-11-28 08:23:50.513624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:72336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.586 [2024-11-28 08:23:50.513634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:25:24.586 [2024-11-28 08:23:50.513652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:72344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.586 [2024-11-28 08:23:50.513663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:25:24.587 [2024-11-28 08:23:50.513679] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:72352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.587 [2024-11-28 08:23:50.513690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:25:24.587 [2024-11-28 08:23:50.513706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:72360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.587 [2024-11-28 08:23:50.513717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:25:24.587 [2024-11-28 08:23:50.513733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:72368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.587 [2024-11-28 08:23:50.513746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:25:24.587 [2024-11-28 08:23:50.513763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:72376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.587 [2024-11-28 08:23:50.513774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:25:24.587 [2024-11-28 08:23:50.513791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:72384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.587 [2024-11-28 08:23:50.513803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:25:24.587 [2024-11-28 08:23:50.513819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:72392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.587 [2024-11-28 08:23:50.513830] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:25:24.587 [2024-11-28 08:23:50.513847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:72400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.587 [2024-11-28 08:23:50.513858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:25:24.587 [2024-11-28 08:23:50.513875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:72408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.587 [2024-11-28 08:23:50.513886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:25:24.587 [2024-11-28 08:23:50.513903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:72416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.587 [2024-11-28 08:23:50.513914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:25:24.587 [2024-11-28 08:23:50.513931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:72424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.587 [2024-11-28 08:23:50.513943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:25:24.587 [2024-11-28 08:23:50.513965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:72432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.587 [2024-11-28 08:23:50.513977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:25:24.587 [2024-11-28 08:23:50.513996] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:72440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.587 [2024-11-28 08:23:50.514007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:25:24.587 [2024-11-28 08:23:50.514024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:72448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.587 [2024-11-28 08:23:50.514036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:25:24.587 [2024-11-28 08:23:50.514053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:72456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.587 [2024-11-28 08:23:50.514064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:25:24.587 [2024-11-28 08:23:50.514081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:72464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.587 [2024-11-28 08:23:50.514095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:25:24.587 [2024-11-28 08:23:50.514112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:72472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.587 [2024-11-28 08:23:50.514123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:25:24.587 [2024-11-28 08:23:50.514141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:72480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.587 [2024-11-28 08:23:50.514152] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:25:24.587 [2024-11-28 08:23:50.514817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:72488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.587 [2024-11-28 08:23:50.514837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:25:24.587 [2024-11-28 08:23:50.514859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:72496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.587 [2024-11-28 08:23:50.514871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:25:24.587 [2024-11-28 08:23:50.514887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:72504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.587 [2024-11-28 08:23:50.514899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:25:24.587 [2024-11-28 08:23:50.514917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:72512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.587 [2024-11-28 08:23:50.514928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:25:24.587 [2024-11-28 08:23:50.514946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:72520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.587 [2024-11-28 08:23:50.514964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:25:24.587 [2024-11-28 08:23:50.514981] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:72528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.587 [2024-11-28 08:23:50.514993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:24.587 [2024-11-28 08:23:50.515011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:72536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.587 [2024-11-28 08:23:50.515023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:24.587 [2024-11-28 08:23:50.515040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:72544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.587 [2024-11-28 08:23:50.515052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:25:24.587 [2024-11-28 08:23:50.515069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:72552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.587 [2024-11-28 08:23:50.515081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:25:24.587 [2024-11-28 08:23:50.515097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:71848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.587 [2024-11-28 08:23:50.515110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:25:24.587 [2024-11-28 08:23:50.515131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:71856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.587 [2024-11-28 08:23:50.515143] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:25:24.587 [2024-11-28 08:23:50.515160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:72560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.587 [2024-11-28 08:23:50.515172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0047 p:0 m:0 dnr:0
[... repetitive NOTICE output condensed: nvme_io_qpair_print_command / spdk_nvme_print_completion pairs repeat for WRITE (lba:71992-72864, SGL DATA BLOCK OFFSET) and READ (lba:71864-71984, SGL TRANSPORT DATA BLOCK TRANSPORT) commands on sqid:1, various cids, every completion reporting ASYMMETRIC ACCESS INACCESSIBLE (03/02), timestamps 2024-11-28 08:23:50.515160 through 08:23:50.522477 ...]
00:25:24.590 [2024-11-28 08:23:50.522466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:72464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.590 [2024-11-28 08:23:50.522477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:25:24.590 [2024-11-28 08:23:50.522494] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:72472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.590 [2024-11-28 08:23:50.522505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:25:24.590 [2024-11-28 08:23:50.523145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:72480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.590 [2024-11-28 08:23:50.523165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:25:24.590 [2024-11-28 08:23:50.523186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:72488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.591 [2024-11-28 08:23:50.523197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:25:24.591 [2024-11-28 08:23:50.523214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:72496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.591 [2024-11-28 08:23:50.523226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:25:24.591 [2024-11-28 08:23:50.523242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:72504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.591 [2024-11-28 08:23:50.523254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:25:24.591 [2024-11-28 08:23:50.523271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:72512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.591 [2024-11-28 08:23:50.523283] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:25:24.591 [2024-11-28 08:23:50.523300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:72520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.591 [2024-11-28 08:23:50.523314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:25:24.591 [2024-11-28 08:23:50.523331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:72528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.591 [2024-11-28 08:23:50.523342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:24.591 [2024-11-28 08:23:50.523358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:72536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.591 [2024-11-28 08:23:50.523370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:24.591 [2024-11-28 08:23:50.523386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:72544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.591 [2024-11-28 08:23:50.523397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:25:24.591 [2024-11-28 08:23:50.523414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:72552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.591 [2024-11-28 08:23:50.523426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:25:24.591 [2024-11-28 08:23:50.523443] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:71848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.591 [2024-11-28 08:23:50.523454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:25:24.591 [2024-11-28 08:23:50.523471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:71856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.591 [2024-11-28 08:23:50.523483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:25:24.591 [2024-11-28 08:23:50.523502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:72560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.591 [2024-11-28 08:23:50.523513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:25:24.591 [2024-11-28 08:23:50.523530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:72568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.591 [2024-11-28 08:23:50.523541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:25:24.591 [2024-11-28 08:23:50.523558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:72576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.591 [2024-11-28 08:23:50.523570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:25:24.591 [2024-11-28 08:23:50.523587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:72584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.591 [2024-11-28 08:23:50.523599] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:25:24.591 [2024-11-28 08:23:50.523616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:72592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.591 [2024-11-28 08:23:50.523627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:25:24.591 [2024-11-28 08:23:50.523644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:72600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.591 [2024-11-28 08:23:50.523659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:25:24.591 [2024-11-28 08:23:50.523676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:72608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.591 [2024-11-28 08:23:50.523687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:25:24.591 [2024-11-28 08:23:50.523704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:72616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.591 [2024-11-28 08:23:50.523716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:25:24.591 [2024-11-28 08:23:50.523733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:72624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.591 [2024-11-28 08:23:50.523745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:25:24.591 [2024-11-28 08:23:50.523762] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:72632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.591 [2024-11-28 08:23:50.523773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:25:24.591 [2024-11-28 08:23:50.523790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:72640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.591 [2024-11-28 08:23:50.523801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:25:24.591 [2024-11-28 08:23:50.523818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:72648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.591 [2024-11-28 08:23:50.523829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:25:24.591 [2024-11-28 08:23:50.523846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:72656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.591 [2024-11-28 08:23:50.523857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:25:24.591 [2024-11-28 08:23:50.523874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:72664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.591 [2024-11-28 08:23:50.523886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:25:24.591 [2024-11-28 08:23:50.523903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:72672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.591 [2024-11-28 08:23:50.523914] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:25:24.591 [2024-11-28 08:23:50.523931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:72680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.591 [2024-11-28 08:23:50.523942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:25:24.591 [2024-11-28 08:23:50.523967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:72688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.591 [2024-11-28 08:23:50.523978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:25:24.591 [2024-11-28 08:23:50.523995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:72696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.591 [2024-11-28 08:23:50.524006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:25:24.591 [2024-11-28 08:23:50.524028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:72704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.591 [2024-11-28 08:23:50.524040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:25:24.591 [2024-11-28 08:23:50.524057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:72712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.591 [2024-11-28 08:23:50.524068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:25:24.591 [2024-11-28 08:23:50.524085] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:72720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.591 [2024-11-28 08:23:50.524096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:25:24.591 [2024-11-28 08:23:50.524114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:72728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.591 [2024-11-28 08:23:50.524125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:25:24.591 [2024-11-28 08:23:50.524142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:72736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.591 [2024-11-28 08:23:50.524153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:25:24.591 [2024-11-28 08:23:50.524170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:72744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.591 [2024-11-28 08:23:50.524181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:25:24.591 [2024-11-28 08:23:50.524199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:72752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.591 [2024-11-28 08:23:50.524210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:25:24.591 [2024-11-28 08:23:50.524227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:72760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.591 [2024-11-28 08:23:50.524239] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:24.591 [2024-11-28 08:23:50.524256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:72768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.591 [2024-11-28 08:23:50.524267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:24.591 [2024-11-28 08:23:50.524284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:72776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.591 [2024-11-28 08:23:50.524295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:24.592 [2024-11-28 08:23:50.524312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:72784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.592 [2024-11-28 08:23:50.524324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:25:24.592 [2024-11-28 08:23:50.524341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:72792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.592 [2024-11-28 08:23:50.524352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:25:24.592 [2024-11-28 08:23:50.524371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:72800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.592 [2024-11-28 08:23:50.524383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:25:24.592 [2024-11-28 08:23:50.524400] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:72808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.592 [2024-11-28 08:23:50.524412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:25:24.592 [2024-11-28 08:23:50.524429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:72816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.592 [2024-11-28 08:23:50.524440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:24.592 [2024-11-28 08:23:50.524457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:71864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.592 [2024-11-28 08:23:50.524468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:25:24.592 [2024-11-28 08:23:50.524486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:71872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.592 [2024-11-28 08:23:50.524497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:25:24.592 [2024-11-28 08:23:50.524514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:71880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.592 [2024-11-28 08:23:50.524526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:25:24.592 [2024-11-28 08:23:50.524544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:71888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.592 [2024-11-28 08:23:50.524555] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:25:24.592 [2024-11-28 08:23:50.524573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:71896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.592 [2024-11-28 08:23:50.524584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:25:24.592 [2024-11-28 08:23:50.524601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:71904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.592 [2024-11-28 08:23:50.524612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:25:24.592 [2024-11-28 08:23:50.524629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:71912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.592 [2024-11-28 08:23:50.524640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:25:24.592 [2024-11-28 08:23:50.524657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:71920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.592 [2024-11-28 08:23:50.524669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:24.592 [2024-11-28 08:23:50.524686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:71928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.592 [2024-11-28 08:23:50.524698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:25:24.592 [2024-11-28 08:23:50.524716] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:71936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.592 [2024-11-28 08:23:50.524729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:24.592 [2024-11-28 08:23:50.524746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:71944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.592 [2024-11-28 08:23:50.524758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:25:24.592 [2024-11-28 08:23:50.524775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:71952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.592 [2024-11-28 08:23:50.524786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:25:24.592 [2024-11-28 08:23:50.524803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:71960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.592 [2024-11-28 08:23:50.524814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:24.592 [2024-11-28 08:23:50.524831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:71968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.592 [2024-11-28 08:23:50.524843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:25:24.592 [2024-11-28 08:23:50.524860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:71976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.592 [2024-11-28 08:23:50.524872] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:25:24.592 [2024-11-28 08:23:50.524890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:71984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.592 [2024-11-28 08:23:50.524901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:24.592 [2024-11-28 08:23:50.524918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:72824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.592 [2024-11-28 08:23:50.524929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:25:24.592 [2024-11-28 08:23:50.524952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:72832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.592 [2024-11-28 08:23:50.524965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:25:24.592 [2024-11-28 08:23:50.525607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:72840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.592 [2024-11-28 08:23:50.525626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:25:24.592 [2024-11-28 08:23:50.525647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:72848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.592 [2024-11-28 08:23:50.525658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:25:24.592 [2024-11-28 08:23:50.525677] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:72856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.592 [2024-11-28 08:23:50.525687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:24.592 [2024-11-28 08:23:50.525704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:72864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.592 [2024-11-28 08:23:50.525718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:25:24.592 [2024-11-28 08:23:50.525736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:71992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.592 [2024-11-28 08:23:50.525746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:24.592 [2024-11-28 08:23:50.525765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:72000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.592 [2024-11-28 08:23:50.525775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:24.592 [2024-11-28 08:23:50.525792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:72008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.592 [2024-11-28 08:23:50.525803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.592 [2024-11-28 08:23:50.525821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:72016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.592 [2024-11-28 08:23:50.525831] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:24.592 [2024-11-28 08:23:50.525849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:72024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:24.592 [2024-11-28 08:23:50.525859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:25:24.592 [2024-11-28 08:23:50.525877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:72032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:24.593 [2024-11-28 08:23:50.525888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0003 p:0 m:0 dnr:0
00:25:24.593 [2024-11-28 08:23:50.525905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:72040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:24.593 [2024-11-28 08:23:50.525915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0004 p:0 m:0 dnr:0
00:25:24.593 [2024-11-28 08:23:50.525933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:72048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:24.593 [2024-11-28 08:23:50.525944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0005 p:0 m:0 dnr:0
00:25:24.593 [2024-11-28 08:23:50.525970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:72056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:24.593 [2024-11-28 08:23:50.525982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0006 p:0 m:0 dnr:0
00:25:24.593 [2024-11-28 08:23:50.525999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:72064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:24.593 [2024-11-28 08:23:50.526010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0007 p:0 m:0 dnr:0
00:25:24.593 [2024-11-28 08:23:50.526027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:72072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:24.593 [2024-11-28 08:23:50.526038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0008 p:0 m:0 dnr:0
00:25:24.593 [2024-11-28 08:23:50.526055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:72080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:24.593 [2024-11-28 08:23:50.526066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0009 p:0 m:0 dnr:0
00:25:24.593 [2024-11-28 08:23:50.526086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:72088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:24.593 [2024-11-28 08:23:50.526097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:000a p:0 m:0 dnr:0
00:25:24.593 [2024-11-28 08:23:50.526114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:72096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:24.593 [2024-11-28 08:23:50.526125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:000b p:0 m:0 dnr:0
00:25:24.593 [2024-11-28 08:23:50.526143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:72104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:24.593 [2024-11-28 08:23:50.526154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:000c p:0 m:0 dnr:0
00:25:24.593 [2024-11-28 08:23:50.526171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:72112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:24.593 [2024-11-28 08:23:50.526182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:000d p:0 m:0 dnr:0
00:25:24.593 [2024-11-28 08:23:50.526199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:72120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:24.593 [2024-11-28 08:23:50.526210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:000e p:0 m:0 dnr:0
00:25:24.593 [2024-11-28 08:23:50.526227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:24.593 [2024-11-28 08:23:50.526239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:25:24.593 [2024-11-28 08:23:50.526256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:72136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:24.593 [2024-11-28 08:23:50.526267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:25:24.593 [2024-11-28 08:23:50.526285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:72144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:24.593 [2024-11-28 08:23:50.526297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
00:25:24.593 [2024-11-28 08:23:50.526313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:72152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:24.593 [2024-11-28 08:23:50.526324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0012 p:0 m:0 dnr:0
00:25:24.593 [2024-11-28 08:23:50.526341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:72160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:24.593 [2024-11-28 08:23:50.526353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0013 p:0 m:0 dnr:0
00:25:24.593 [2024-11-28 08:23:50.526369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:72168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:24.593 [2024-11-28 08:23:50.526380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0014 p:0 m:0 dnr:0
00:25:24.593 [2024-11-28 08:23:50.526397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:72176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:24.593 [2024-11-28 08:23:50.526408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0015 p:0 m:0 dnr:0
00:25:24.593 [2024-11-28 08:23:50.526427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:72184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:24.593 [2024-11-28 08:23:50.526440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0016 p:0 m:0 dnr:0
00:25:24.593 [2024-11-28 08:23:50.526457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:72192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:24.593 [2024-11-28 08:23:50.526469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0017 p:0 m:0 dnr:0
00:25:24.593 [2024-11-28 08:23:50.526486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:72200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:24.593 [2024-11-28 08:23:50.526497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0018 p:0 m:0 dnr:0
00:25:24.593 [2024-11-28 08:23:50.526514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:72208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:24.593 [2024-11-28 08:23:50.526525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0019 p:0 m:0 dnr:0
00:25:24.593 [2024-11-28 08:23:50.526542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:72216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:24.593 [2024-11-28 08:23:50.526553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:001a p:0 m:0 dnr:0
00:25:24.593 [2024-11-28 08:23:50.526570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:72224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:24.593 [2024-11-28 08:23:50.526583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:001b p:0 m:0 dnr:0
00:25:24.593 [2024-11-28 08:23:50.526600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:72232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:24.593 [2024-11-28 08:23:50.526611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:001c p:0 m:0 dnr:0
00:25:24.593 [2024-11-28 08:23:50.526628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:72240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:24.593 [2024-11-28 08:23:50.526640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:001d p:0 m:0 dnr:0
00:25:24.593 [2024-11-28 08:23:50.526657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:72248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:24.593 [2024-11-28 08:23:50.526669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:001e p:0 m:0 dnr:0
00:25:24.593 [2024-11-28 08:23:50.526686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:72256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:24.593 [2024-11-28 08:23:50.526697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:001f p:0 m:0 dnr:0
00:25:24.593 [2024-11-28 08:23:50.526714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:72264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:24.593 [2024-11-28 08:23:50.526726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0020 p:0 m:0 dnr:0
00:25:24.593 [2024-11-28 08:23:50.526743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:72272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:24.593 [2024-11-28 08:23:50.526755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:25:24.593 [2024-11-28 08:23:50.526772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:72280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:24.593 [2024-11-28 08:23:50.526786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:25:24.593 [2024-11-28 08:23:50.526802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:72288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:24.593 [2024-11-28 08:23:50.526814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0023 p:0 m:0 dnr:0
00:25:24.593 [2024-11-28 08:23:50.526831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:72296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:24.593 [2024-11-28 08:23:50.526842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0024 p:0 m:0 dnr:0
00:25:24.593 [2024-11-28 08:23:50.526859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:72304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:24.593 [2024-11-28 08:23:50.526870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0025 p:0 m:0 dnr:0
00:25:24.593 [2024-11-28 08:23:50.526887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:72312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:24.593 [2024-11-28 08:23:50.526900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0026 p:0 m:0 dnr:0
00:25:24.593 [2024-11-28 08:23:50.526917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:72320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:24.593 [2024-11-28 08:23:50.526928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0027 p:0 m:0 dnr:0
00:25:24.594 [2024-11-28 08:23:50.526945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:72328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:24.594 [2024-11-28 08:23:50.526963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0028 p:0 m:0 dnr:0
00:25:24.594 [2024-11-28 08:23:50.526980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:72336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:24.594 [2024-11-28 08:23:50.526992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0029 p:0 m:0 dnr:0
00:25:24.594 [2024-11-28 08:23:50.527009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:72344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:24.594 [2024-11-28 08:23:50.527020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:002a p:0 m:0 dnr:0
00:25:24.594 [2024-11-28 08:23:50.527037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:72352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:24.594 [2024-11-28 08:23:50.527050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:002b p:0 m:0 dnr:0
00:25:24.594 [2024-11-28 08:23:50.527067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:72360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:24.594 [2024-11-28 08:23:50.527078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:002c p:0 m:0 dnr:0
00:25:24.594 [2024-11-28 08:23:50.527095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:72368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:24.594 [2024-11-28 08:23:50.527106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:002d p:0 m:0 dnr:0
00:25:24.594 [2024-11-28 08:23:50.527123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:72376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:24.594 [2024-11-28 08:23:50.527137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:002e p:0 m:0 dnr:0
00:25:24.594 [2024-11-28 08:23:50.527154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:72384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:24.594 [2024-11-28 08:23:50.527166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:002f p:0 m:0 dnr:0
00:25:24.594 [2024-11-28 08:23:50.527183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:72392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:24.594 [2024-11-28 08:23:50.527196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0030 p:0 m:0 dnr:0
00:25:24.594 [2024-11-28 08:23:50.527213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:72400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:24.594 [2024-11-28 08:23:50.527225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0031 p:0 m:0 dnr:0
00:25:24.594 [2024-11-28 08:23:50.527242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:72408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:24.594 [2024-11-28 08:23:50.527253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0032 p:0 m:0 dnr:0
00:25:24.594 [2024-11-28 08:23:50.527271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:72416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:24.594 [2024-11-28 08:23:50.527282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0033 p:0 m:0 dnr:0
00:25:24.594 [2024-11-28 08:23:50.527299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:72424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:24.594 [2024-11-28 08:23:50.527310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0034 p:0 m:0 dnr:0
00:25:24.594 [2024-11-28 08:23:50.527327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:72432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:24.594 [2024-11-28 08:23:50.527339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0035 p:0 m:0 dnr:0
00:25:24.594 [2024-11-28 08:23:50.527356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:72440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:24.594 [2024-11-28 08:23:50.527368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0036 p:0 m:0 dnr:0
00:25:24.594 [2024-11-28 08:23:50.527384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:72448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:24.594 [2024-11-28 08:23:50.527396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0037 p:0 m:0 dnr:0
00:25:24.594 [2024-11-28 08:23:50.527413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:72456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:24.594 [2024-11-28 08:23:50.527424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0038 p:0 m:0 dnr:0
00:25:24.594 [2024-11-28 08:23:50.527441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:72464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:24.594 [2024-11-28 08:23:50.527453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0039 p:0 m:0 dnr:0
00:25:24.594 [2024-11-28 08:23:50.528099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:72472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:24.594 [2024-11-28 08:23:50.528118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:003a p:0 m:0 dnr:0
00:25:24.594 [2024-11-28 08:23:50.528144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:72480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:24.594 [2024-11-28 08:23:50.528155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:003b p:0 m:0 dnr:0
00:25:24.594 [2024-11-28 08:23:50.528173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:72488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:24.594 [2024-11-28 08:23:50.528186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:003c p:0 m:0 dnr:0
00:25:24.594 [2024-11-28 08:23:50.528202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:72496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:24.594 [2024-11-28 08:23:50.528213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:003d p:0 m:0 dnr:0
00:25:24.594 [2024-11-28 08:23:50.528231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:72504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:24.594 [2024-11-28 08:23:50.528242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:003e p:0 m:0 dnr:0
00:25:24.594 [2024-11-28 08:23:50.528259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:72512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:24.594 [2024-11-28 08:23:50.528270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:003f p:0 m:0 dnr:0
00:25:24.594 [2024-11-28 08:23:50.528287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:72520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:24.594 [2024-11-28 08:23:50.528298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0040 p:0 m:0 dnr:0
00:25:24.594 [2024-11-28 08:23:50.528315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:72528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:24.594 [2024-11-28 08:23:50.528325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:25:24.594 [2024-11-28 08:23:50.528344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:72536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:24.594 [2024-11-28 08:23:50.528355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:25:24.594 [2024-11-28 08:23:50.528372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:72544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:24.594 [2024-11-28 08:23:50.528383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0043 p:0 m:0 dnr:0
00:25:24.594 [2024-11-28 08:23:50.528400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:72552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:24.594 [2024-11-28 08:23:50.528410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0044 p:0 m:0 dnr:0
00:25:24.594 [2024-11-28 08:23:50.528428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:71848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:24.594 [2024-11-28 08:23:50.528439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0045 p:0 m:0 dnr:0
00:25:24.594 [2024-11-28 08:23:50.528456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:71856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:24.594 [2024-11-28 08:23:50.528467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0046 p:0 m:0 dnr:0
00:25:24.594 [2024-11-28 08:23:50.528487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:72560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:24.594 [2024-11-28 08:23:50.528498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0047 p:0 m:0 dnr:0
00:25:24.594 [2024-11-28 08:23:50.528518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:72568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:24.594 [2024-11-28 08:23:50.528529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0048 p:0 m:0 dnr:0
00:25:24.594 [2024-11-28 08:23:50.528546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:72576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:24.594 [2024-11-28 08:23:50.528558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0049 p:0 m:0 dnr:0
00:25:24.594 [2024-11-28 08:23:50.528575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:72584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:24.594 [2024-11-28 08:23:50.528586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:004a p:0 m:0 dnr:0
00:25:24.594 [2024-11-28 08:23:50.528603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:72592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:24.594 [2024-11-28 08:23:50.528614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:004b p:0 m:0 dnr:0
00:25:24.594 [2024-11-28 08:23:50.528632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:72600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:24.594 [2024-11-28 08:23:50.528644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:004c p:0 m:0 dnr:0
00:25:24.595 [2024-11-28 08:23:50.528661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:72608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:24.595 [2024-11-28 08:23:50.528672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:004d p:0 m:0 dnr:0
00:25:24.595 [2024-11-28 08:23:50.528689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:72616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:24.595 [2024-11-28 08:23:50.528700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:004e p:0 m:0 dnr:0
00:25:24.595 [2024-11-28 08:23:50.528719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:72624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:24.595 [2024-11-28 08:23:50.528730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:004f p:0 m:0 dnr:0
00:25:24.595 [2024-11-28 08:23:50.528747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:72632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:24.595 [2024-11-28 08:23:50.528758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0050 p:0 m:0 dnr:0
00:25:24.595 [2024-11-28 08:23:50.528775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:72640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:24.595 [2024-11-28 08:23:50.528784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0051 p:0 m:0 dnr:0
00:25:24.595 [2024-11-28 08:23:50.528802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:72648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:24.595 [2024-11-28 08:23:50.528813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0052 p:0 m:0 dnr:0
00:25:24.595 [2024-11-28 08:23:50.528831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:72656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:24.595 [2024-11-28 08:23:50.528843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0053 p:0 m:0 dnr:0
00:25:24.595 [2024-11-28 08:23:50.528861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:72664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:24.595 [2024-11-28 08:23:50.528872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0054 p:0 m:0 dnr:0
00:25:24.595 [2024-11-28 08:23:50.528890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:72672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:24.595 [2024-11-28 08:23:50.528901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0055 p:0 m:0 dnr:0
00:25:24.595 [2024-11-28 08:23:50.528918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:72680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:24.595 [2024-11-28 08:23:50.528929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0056 p:0 m:0 dnr:0
00:25:24.595 [2024-11-28 08:23:50.528953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:72688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:24.595 [2024-11-28 08:23:50.528965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0057 p:0 m:0 dnr:0
00:25:24.595 [2024-11-28 08:23:50.528984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:72696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:24.595 [2024-11-28 08:23:50.528994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0058 p:0 m:0 dnr:0
00:25:24.595 [2024-11-28 08:23:50.529012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:72704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:24.595 [2024-11-28 08:23:50.529022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0059 p:0 m:0 dnr:0
00:25:24.595 [2024-11-28 08:23:50.529040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:72712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:24.595 [2024-11-28 08:23:50.529051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:005a p:0 m:0 dnr:0
00:25:24.595 [2024-11-28 08:23:50.529069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:72720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:24.595 [2024-11-28 08:23:50.529079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:005b p:0 m:0 dnr:0
00:25:24.595 [2024-11-28 08:23:50.529097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:72728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:24.595 [2024-11-28 08:23:50.529108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:005c p:0 m:0 dnr:0
00:25:24.595 [2024-11-28 08:23:50.529126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:72736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:24.595 [2024-11-28 08:23:50.529136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:005d p:0 m:0 dnr:0
00:25:24.595 [2024-11-28 08:23:50.529154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:72744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:24.595 [2024-11-28 08:23:50.529165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:005e p:0 m:0 dnr:0
00:25:24.595 [2024-11-28 08:23:50.529183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:72752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:24.595 [2024-11-28 08:23:50.529195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:005f p:0 m:0 dnr:0
00:25:24.595 [2024-11-28 08:23:50.529213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:72760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:24.595 [2024-11-28 08:23:50.529224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0060 p:0 m:0 dnr:0
00:25:24.595 [2024-11-28 08:23:50.529241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:72768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:24.595 [2024-11-28 08:23:50.529251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:25:24.595 [2024-11-28 08:23:50.529268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:72776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:24.595 [2024-11-28 08:23:50.529278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:25:24.595 [2024-11-28 08:23:50.529296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:72784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:24.595 [2024-11-28 08:23:50.529306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0063 p:0 m:0 dnr:0
00:25:24.595 [2024-11-28 08:23:50.529322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:72792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:24.595 [2024-11-28 08:23:50.529333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0064 p:0 m:0 dnr:0
00:25:24.595 [2024-11-28 08:23:50.529351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:72800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:24.595 [2024-11-28 08:23:50.529362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0065 p:0 m:0 dnr:0
00:25:24.595 [2024-11-28 08:23:50.529380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:72808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:24.595 [2024-11-28 08:23:50.529391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0066 p:0 m:0 dnr:0
00:25:24.595 [2024-11-28 08:23:50.529408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:72816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:24.595 [2024-11-28 08:23:50.529416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0067 p:0 m:0 dnr:0
00:25:24.595 [2024-11-28 08:23:50.529433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:71864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:24.595 [2024-11-28 08:23:50.529444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0068 p:0 m:0 dnr:0
00:25:24.595 [2024-11-28 08:23:50.529462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:71872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:24.595 [2024-11-28 08:23:50.529473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0069 p:0 m:0 dnr:0
00:25:24.595 [2024-11-28 08:23:50.529491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:71880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:24.595 [2024-11-28 08:23:50.529502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:006a p:0 m:0 dnr:0
00:25:24.595 [2024-11-28 08:23:50.529520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:71888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:24.595 [2024-11-28 08:23:50.529531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:006b p:0 m:0 dnr:0
00:25:24.595 [2024-11-28 08:23:50.529551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:71896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:24.595 [2024-11-28 08:23:50.529562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:006c p:0 m:0 dnr:0
00:25:24.595 [2024-11-28 08:23:50.529580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:71904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:24.595 [2024-11-28 08:23:50.529591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:006d p:0 m:0 dnr:0
00:25:24.595 [2024-11-28 08:23:50.529609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:71912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:24.595 [2024-11-28 08:23:50.529618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:006e p:0 m:0 dnr:0
00:25:24.595 [2024-11-28 08:23:50.529633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:71920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:24.595 [2024-11-28 08:23:50.529641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:006f p:0 m:0 dnr:0
00:25:24.595 [2024-11-28 08:23:50.529656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:71928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:24.595 [2024-11-28 08:23:50.529665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0070 p:0 m:0 dnr:0
00:25:24.595 [2024-11-28 08:23:50.529684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:71936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:24.595 [2024-11-28 08:23:50.529695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0071 p:0 m:0 dnr:0
00:25:24.595 [2024-11-28 08:23:50.529713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:71944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:24.596 [2024-11-28 08:23:50.529724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0072 p:0 m:0 dnr:0
00:25:24.596 [2024-11-28 08:23:50.529742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:71952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:24.596 [2024-11-28 08:23:50.529753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0073 p:0 m:0 dnr:0
00:25:24.596 [2024-11-28 08:23:50.529771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:71960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:24.596 [2024-11-28 08:23:50.529782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0074 p:0 m:0 dnr:0
00:25:24.596 [2024-11-28 08:23:50.529800] nvme_qpair.c:
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:71968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.596 [2024-11-28 08:23:50.529810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:25:24.596 [2024-11-28 08:23:50.529829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:71976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.596 [2024-11-28 08:23:50.529839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:25:24.596 [2024-11-28 08:23:50.529858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:71984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.596 [2024-11-28 08:23:50.529869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:24.596 [2024-11-28 08:23:50.529890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:72824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.596 [2024-11-28 08:23:50.529901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:25:24.596 [2024-11-28 08:23:50.530548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:72832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.596 [2024-11-28 08:23:50.530568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:25:24.596 [2024-11-28 08:23:50.530589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:72840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.596 [2024-11-28 08:23:50.530600] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:25:24.596 [2024-11-28 08:23:50.530617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:72848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.596 [2024-11-28 08:23:50.530629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:25:24.596 [2024-11-28 08:23:50.534536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:72856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.596 [2024-11-28 08:23:50.534555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:24.596 [2024-11-28 08:23:50.534575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:72864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.596 [2024-11-28 08:23:50.534587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:25:24.596 [2024-11-28 08:23:50.534606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:71992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.596 [2024-11-28 08:23:50.534617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:24.596 [2024-11-28 08:23:50.534635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:72000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.596 [2024-11-28 08:23:50.534647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:24.596 [2024-11-28 08:23:50.534666] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:72008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.596 [2024-11-28 08:23:50.534677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.596 [2024-11-28 08:23:50.534694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:72016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.596 [2024-11-28 08:23:50.534704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:24.596 [2024-11-28 08:23:50.534722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:72024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.596 [2024-11-28 08:23:50.534734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:24.596 [2024-11-28 08:23:50.534751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:72032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.596 [2024-11-28 08:23:50.534762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:25:24.596 [2024-11-28 08:23:50.534780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:72040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.596 [2024-11-28 08:23:50.534796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:25:24.596 [2024-11-28 08:23:50.534815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:72048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.596 [2024-11-28 08:23:50.534826] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:25:24.596 [2024-11-28 08:23:50.534843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:72056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.596 [2024-11-28 08:23:50.534854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:25:24.596 [2024-11-28 08:23:50.534870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:72064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.596 [2024-11-28 08:23:50.534883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:25:24.596 [2024-11-28 08:23:50.534902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:72072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.596 [2024-11-28 08:23:50.534913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:25:24.596 [2024-11-28 08:23:50.534930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:72080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.596 [2024-11-28 08:23:50.534943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:25:24.596 [2024-11-28 08:23:50.534966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:72088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.596 [2024-11-28 08:23:50.534978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:25:24.596 [2024-11-28 08:23:50.534995] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:72096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.596 [2024-11-28 08:23:50.535007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:25:24.596 [2024-11-28 08:23:50.535024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:72104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.596 [2024-11-28 08:23:50.535037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:25:24.596 [2024-11-28 08:23:50.535055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:72112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.596 [2024-11-28 08:23:50.535067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:25:24.596 [2024-11-28 08:23:50.535083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:72120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.596 [2024-11-28 08:23:50.535095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:25:24.596 [2024-11-28 08:23:50.535113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:72128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.596 [2024-11-28 08:23:50.535125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:24.596 [2024-11-28 08:23:50.535141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:72136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.596 [2024-11-28 08:23:50.535157] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:24.596 [2024-11-28 08:23:50.535175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:72144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.596 [2024-11-28 08:23:50.535187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:24.596 [2024-11-28 08:23:50.535204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:72152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.596 [2024-11-28 08:23:50.535215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:25:24.596 [2024-11-28 08:23:50.535231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:72160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.596 [2024-11-28 08:23:50.535244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:25:24.596 [2024-11-28 08:23:50.535261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:72168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.596 [2024-11-28 08:23:50.535273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:25:24.596 [2024-11-28 08:23:50.535289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:72176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.596 [2024-11-28 08:23:50.535302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:25:24.596 [2024-11-28 08:23:50.535321] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:72184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.596 [2024-11-28 08:23:50.535334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:25:24.596 [2024-11-28 08:23:50.535350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:72192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.596 [2024-11-28 08:23:50.535362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:25:24.596 [2024-11-28 08:23:50.535381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:72200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.596 [2024-11-28 08:23:50.535393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:25:24.597 [2024-11-28 08:23:50.535412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:72208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.597 [2024-11-28 08:23:50.535422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:25:24.597 [2024-11-28 08:23:50.535439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:72216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.597 [2024-11-28 08:23:50.535451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:25:24.597 [2024-11-28 08:23:50.535470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:72224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.597 [2024-11-28 08:23:50.535480] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:25:24.597 [2024-11-28 08:23:50.535497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:72232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.597 [2024-11-28 08:23:50.535507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:25:24.597 [2024-11-28 08:23:50.535526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:72240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.597 [2024-11-28 08:23:50.535538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:25:24.597 [2024-11-28 08:23:50.535556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:72248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.597 [2024-11-28 08:23:50.535568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:25:24.597 [2024-11-28 08:23:50.535585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:72256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.597 [2024-11-28 08:23:50.535597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:25:24.597 [2024-11-28 08:23:50.535616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:72264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.597 [2024-11-28 08:23:50.535627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:25:24.597 [2024-11-28 08:23:50.535644] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:72272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.597 [2024-11-28 08:23:50.535655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:24.597 [2024-11-28 08:23:50.535672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:72280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.597 [2024-11-28 08:23:50.535685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:24.597 [2024-11-28 08:23:50.535703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.597 [2024-11-28 08:23:50.535714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:25:24.597 [2024-11-28 08:23:50.535730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:72296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.597 [2024-11-28 08:23:50.535741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:25:24.597 [2024-11-28 08:23:50.535759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:72304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.597 [2024-11-28 08:23:50.535771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:25:24.597 [2024-11-28 08:23:50.535788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:72312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.597 [2024-11-28 08:23:50.535799] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:25:24.597 [2024-11-28 08:23:50.535815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:72320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.597 [2024-11-28 08:23:50.535827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:25:24.597 [2024-11-28 08:23:50.535845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:72328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.597 [2024-11-28 08:23:50.535856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:25:24.597 [2024-11-28 08:23:50.535875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:72336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.597 [2024-11-28 08:23:50.535886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:25:24.597 [2024-11-28 08:23:50.535905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:72344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.597 [2024-11-28 08:23:50.535916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:25:24.597 [2024-11-28 08:23:50.535933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:72352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.597 [2024-11-28 08:23:50.535943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:25:24.597 [2024-11-28 08:23:50.535966] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:72360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.597 [2024-11-28 08:23:50.535977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:25:24.597 [2024-11-28 08:23:50.535995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:72368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.597 [2024-11-28 08:23:50.536007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:25:24.597 [2024-11-28 08:23:50.536023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:72376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.597 [2024-11-28 08:23:50.536035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:25:24.597 [2024-11-28 08:23:50.536053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:72384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.597 [2024-11-28 08:23:50.536064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:25:24.597 [2024-11-28 08:23:50.536080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:72392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.597 [2024-11-28 08:23:50.536091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:25:24.597 [2024-11-28 08:23:50.536110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:72400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.597 [2024-11-28 08:23:50.536122] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:25:24.597 [2024-11-28 08:23:50.536139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:72408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.597 [2024-11-28 08:23:50.536149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:25:24.597 [2024-11-28 08:23:50.536166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:72416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.597 [2024-11-28 08:23:50.536178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:25:24.597 [2024-11-28 08:23:50.536196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:72424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.597 [2024-11-28 08:23:50.536206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:25:24.597 [2024-11-28 08:23:50.536222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:72432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.597 [2024-11-28 08:23:50.536236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:25:24.597 [2024-11-28 08:23:50.536255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:72440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.597 [2024-11-28 08:23:50.536267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:25:24.597 [2024-11-28 08:23:50.536283] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:72448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.597 [2024-11-28 08:23:50.536295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:25:24.597 [2024-11-28 08:23:50.536311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:72456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.597 [2024-11-28 08:23:50.536322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:25:24.597 [2024-11-28 08:23:50.537001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:72464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.598 [2024-11-28 08:23:50.537021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:25:24.598 [2024-11-28 08:23:50.537043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:72472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.598 [2024-11-28 08:23:50.537056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:25:24.598 [2024-11-28 08:23:50.537074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:72480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.598 [2024-11-28 08:23:50.537085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:25:24.598 [2024-11-28 08:23:50.537103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:72488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.598 [2024-11-28 08:23:50.537115] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:003c p:0 m:0 dnr:0
00:25:24.598 [2024-11-28 08:23:50.537131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:72496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:24.598 [2024-11-28 08:23:50.537143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:003d p:0 m:0 dnr:0
00:25:24.598 [2024-11-28 08:23:50.537363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:71848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:24.598 [2024-11-28 08:23:50.537374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0045 p:0 m:0 dnr:0
[~150 further WRITE/READ command+completion notice pairs elided: all on qid:1, lba range 71848-72864, every completion failing with ASYMMETRIC ACCESS INACCESSIBLE (03/02), timestamps 2024-11-28 08:23:50.537-08:23:50.541]
00:25:24.601 [2024-11-28 08:23:50.541062] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:72392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.601 [2024-11-28 08:23:50.541074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:25:24.601 [2024-11-28 08:23:50.541091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:72400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.601 [2024-11-28 08:23:50.541103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:25:24.601 [2024-11-28 08:23:50.541119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:72408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.601 [2024-11-28 08:23:50.541130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:25:24.601 [2024-11-28 08:23:50.541147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:72416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.601 [2024-11-28 08:23:50.541158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:25:24.601 [2024-11-28 08:23:50.541176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:72424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.601 [2024-11-28 08:23:50.541186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:25:24.601 [2024-11-28 08:23:50.541203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:72432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.601 [2024-11-28 08:23:50.541215] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:25:24.601 [2024-11-28 08:23:50.541233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:72440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.601 [2024-11-28 08:23:50.541244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:25:24.601 [2024-11-28 08:23:50.541262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:72448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.601 [2024-11-28 08:23:50.541273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:25:24.601 [2024-11-28 08:23:50.541924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:72456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.601 [2024-11-28 08:23:50.541943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:25:24.601 [2024-11-28 08:23:50.541969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:72464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.601 [2024-11-28 08:23:50.541982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:25:24.601 [2024-11-28 08:23:50.542000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:72472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.601 [2024-11-28 08:23:50.542012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:25:24.601 [2024-11-28 08:23:50.542029] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:72480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.601 [2024-11-28 08:23:50.542040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:25:24.601 [2024-11-28 08:23:50.542059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:72488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.601 [2024-11-28 08:23:50.542072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:25:24.601 [2024-11-28 08:23:50.542090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:72496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.601 [2024-11-28 08:23:50.542101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:25:24.601 [2024-11-28 08:23:50.542118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:72504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.601 [2024-11-28 08:23:50.542131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:25:24.601 [2024-11-28 08:23:50.542149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:72512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.601 [2024-11-28 08:23:50.542160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:25:24.601 [2024-11-28 08:23:50.542177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:72520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.601 [2024-11-28 08:23:50.542189] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:25:24.601 [2024-11-28 08:23:50.542206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:72528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.601 [2024-11-28 08:23:50.542218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:24.601 [2024-11-28 08:23:50.542236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:72536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.601 [2024-11-28 08:23:50.542248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:24.601 [2024-11-28 08:23:50.542265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:72544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.601 [2024-11-28 08:23:50.542276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:25:24.601 [2024-11-28 08:23:50.542293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:72552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.601 [2024-11-28 08:23:50.542305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:25:24.601 [2024-11-28 08:23:50.542322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:71848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.601 [2024-11-28 08:23:50.542333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:25:24.601 [2024-11-28 08:23:50.542351] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:71856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.601 [2024-11-28 08:23:50.542362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:25:24.601 [2024-11-28 08:23:50.542381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:72560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.601 [2024-11-28 08:23:50.542391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:25:24.601 [2024-11-28 08:23:50.542408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:72568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.601 [2024-11-28 08:23:50.542424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:25:24.601 [2024-11-28 08:23:50.542446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:72576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.601 [2024-11-28 08:23:50.542458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:25:24.601 [2024-11-28 08:23:50.542475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:72584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.601 [2024-11-28 08:23:50.542485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:25:24.601 [2024-11-28 08:23:50.542503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:72592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.601 [2024-11-28 08:23:50.542515] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:25:24.601 [2024-11-28 08:23:50.542532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:72600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.601 [2024-11-28 08:23:50.542544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:25:24.601 [2024-11-28 08:23:50.542560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:72608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.602 [2024-11-28 08:23:50.542573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:25:24.602 [2024-11-28 08:23:50.542591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:72616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.602 [2024-11-28 08:23:50.542602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:25:24.602 [2024-11-28 08:23:50.542619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:72624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.602 [2024-11-28 08:23:50.542633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:25:24.602 [2024-11-28 08:23:50.542651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:72632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.602 [2024-11-28 08:23:50.542663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:25:24.602 [2024-11-28 08:23:50.542679] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:72640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.602 [2024-11-28 08:23:50.542691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:25:24.602 [2024-11-28 08:23:50.542709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:72648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.602 [2024-11-28 08:23:50.542722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:25:24.602 [2024-11-28 08:23:50.542737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:72656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.602 [2024-11-28 08:23:50.542749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:25:24.602 [2024-11-28 08:23:50.542767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:72664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.602 [2024-11-28 08:23:50.542781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:25:24.602 [2024-11-28 08:23:50.542798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:72672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.602 [2024-11-28 08:23:50.542809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:25:24.602 [2024-11-28 08:23:50.542827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:72680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.602 [2024-11-28 08:23:50.542839] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:25:24.602 [2024-11-28 08:23:50.542857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:72688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.602 [2024-11-28 08:23:50.542867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:25:24.602 [2024-11-28 08:23:50.542884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:72696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.602 [2024-11-28 08:23:50.542894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:25:24.602 [2024-11-28 08:23:50.542911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:72704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.602 [2024-11-28 08:23:50.542924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:25:24.602 [2024-11-28 08:23:50.542942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:72712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.602 [2024-11-28 08:23:50.542959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:25:24.602 [2024-11-28 08:23:50.542976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:72720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.602 [2024-11-28 08:23:50.542987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:25:24.602 [2024-11-28 08:23:50.543004] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:72728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.602 [2024-11-28 08:23:50.543016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:25:24.602 [2024-11-28 08:23:50.543035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:72736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.602 [2024-11-28 08:23:50.543046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:25:24.602 [2024-11-28 08:23:50.543063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:72744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.602 [2024-11-28 08:23:50.543074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:25:24.602 [2024-11-28 08:23:50.543096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:72752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.602 [2024-11-28 08:23:50.543109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:25:24.602 [2024-11-28 08:23:50.543126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:72760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.602 [2024-11-28 08:23:50.543137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:24.602 [2024-11-28 08:23:50.543156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:72768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.602 [2024-11-28 08:23:50.543169] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:24.602 [2024-11-28 08:23:50.543187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:72776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.602 [2024-11-28 08:23:50.543198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:24.602 [2024-11-28 08:23:50.543215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:72784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.602 [2024-11-28 08:23:50.543227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:25:24.602 [2024-11-28 08:23:50.543246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:72792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.602 [2024-11-28 08:23:50.543258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:25:24.602 [2024-11-28 08:23:50.543274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:72800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.602 [2024-11-28 08:23:50.543285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:25:24.602 [2024-11-28 08:23:50.543303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:72808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.602 [2024-11-28 08:23:50.543315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:25:24.602 [2024-11-28 08:23:50.543332] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:72816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.602 [2024-11-28 08:23:50.543342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:24.602 [2024-11-28 08:23:50.543360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:71864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.602 [2024-11-28 08:23:50.543372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:25:24.602 [2024-11-28 08:23:50.543391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:71872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.602 [2024-11-28 08:23:50.543401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:25:24.602 [2024-11-28 08:23:50.543417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:71880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.602 [2024-11-28 08:23:50.543429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:25:24.602 [2024-11-28 08:23:50.543447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:71888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.602 [2024-11-28 08:23:50.543458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:25:24.602 [2024-11-28 08:23:50.543475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:71896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.602 [2024-11-28 08:23:50.543486] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:25:24.602 [2024-11-28 08:23:50.543507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:71904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.602 [2024-11-28 08:23:50.543520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:25:24.602 [2024-11-28 08:23:50.543536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:71912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.602 [2024-11-28 08:23:50.543548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:25:24.602 [2024-11-28 08:23:50.543566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:71920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.602 [2024-11-28 08:23:50.543579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:24.602 [2024-11-28 08:23:50.543596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:71928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.602 [2024-11-28 08:23:50.543608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:25:24.602 [2024-11-28 08:23:50.543624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:71936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.602 [2024-11-28 08:23:50.543635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:24.602 [2024-11-28 08:23:50.543652] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:71944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.602 [2024-11-28 08:23:50.543663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:25:24.602 [2024-11-28 08:23:50.543682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:71952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.602 [2024-11-28 08:23:50.543693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:25:24.602 [2024-11-28 08:23:50.543710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:71960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.603 [2024-11-28 08:23:50.543720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:24.603 [2024-11-28 08:23:50.543737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:71968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.603 [2024-11-28 08:23:50.543750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:25:24.603 [2024-11-28 08:23:50.543768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:71976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.603 [2024-11-28 08:23:50.543779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:25:24.603 [2024-11-28 08:23:50.543796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:71984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.603 [2024-11-28 08:23:50.543808] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
[... repeated nvme_qpair.c notices elided: WRITE and READ commands (sqid:1, nsid:1, lba:71848-72864, len:8) each completing with ASYMMETRIC ACCESS INACCESSIBLE (03/02), timestamps 2024-11-28 08:23:50.543-08:23:50.552, elapsed 00:25:24.603-00:25:24.606 ...]
00:25:24.606 [2024-11-28 08:23:50.552166] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:71888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.606 [2024-11-28 08:23:50.552177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:25:24.606 [2024-11-28 08:23:50.552194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:71896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.606 [2024-11-28 08:23:50.552206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:25:24.606 [2024-11-28 08:23:50.552223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:71904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.606 [2024-11-28 08:23:50.552234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:25:24.606 [2024-11-28 08:23:50.552251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:71912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.606 [2024-11-28 08:23:50.552261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:25:24.606 [2024-11-28 08:23:50.552278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:71920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.606 [2024-11-28 08:23:50.552291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:24.606 [2024-11-28 08:23:50.552309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:71928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.606 [2024-11-28 08:23:50.552320] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:25:24.606 [2024-11-28 08:23:50.552337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:71936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.606 [2024-11-28 08:23:50.552348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:24.606 [2024-11-28 08:23:50.552364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:71944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.606 [2024-11-28 08:23:50.552377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:25:24.606 [2024-11-28 08:23:50.552394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:71952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.606 [2024-11-28 08:23:50.552405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:25:24.606 [2024-11-28 08:23:50.552424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:71960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.606 [2024-11-28 08:23:50.552437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:24.606 [2024-11-28 08:23:50.552456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:71968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.606 [2024-11-28 08:23:50.552467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:25:24.606 [2024-11-28 08:23:50.552483] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:71976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.606 [2024-11-28 08:23:50.552494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:25:24.606 [2024-11-28 08:23:50.552512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:71984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.606 [2024-11-28 08:23:50.552524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:24.606 [2024-11-28 08:23:50.552541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:72824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.606 [2024-11-28 08:23:50.552554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:25:24.606 [2024-11-28 08:23:50.552572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:72832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.606 [2024-11-28 08:23:50.552584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:25:24.606 [2024-11-28 08:23:50.552601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:72840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.606 [2024-11-28 08:23:50.552613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:25:24.606 [2024-11-28 08:23:50.552629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:72848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.606 [2024-11-28 08:23:50.552642] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:25:24.606 [2024-11-28 08:23:50.552659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:72856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.606 [2024-11-28 08:23:50.552670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:24.606 [2024-11-28 08:23:50.552687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:72864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.606 [2024-11-28 08:23:50.552698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:25:24.606 [2024-11-28 08:23:50.553381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:71992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.606 [2024-11-28 08:23:50.553400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:24.606 [2024-11-28 08:23:50.553420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:72000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.606 [2024-11-28 08:23:50.553433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:24.606 [2024-11-28 08:23:50.553457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:72008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.606 [2024-11-28 08:23:50.553468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.606 [2024-11-28 08:23:50.553485] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:72016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.606 [2024-11-28 08:23:50.553498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:24.606 [2024-11-28 08:23:50.553517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:72024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.606 [2024-11-28 08:23:50.553527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:24.606 [2024-11-28 08:23:50.553544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:72032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.606 [2024-11-28 08:23:50.553557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:25:24.606 [2024-11-28 08:23:50.553576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:72040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.606 [2024-11-28 08:23:50.553587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:25:24.606 [2024-11-28 08:23:50.553604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:72048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.606 [2024-11-28 08:23:50.553616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:25:24.606 [2024-11-28 08:23:50.553634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:72056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.606 [2024-11-28 08:23:50.553645] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:25:24.606 [2024-11-28 08:23:50.553662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:72064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.606 [2024-11-28 08:23:50.553673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:25:24.606 [2024-11-28 08:23:50.553690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:72072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.606 [2024-11-28 08:23:50.553702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:25:24.606 [2024-11-28 08:23:50.553720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:72080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.606 [2024-11-28 08:23:50.553733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:25:24.606 [2024-11-28 08:23:50.553750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:72088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.606 [2024-11-28 08:23:50.553761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:25:24.607 [2024-11-28 08:23:50.553778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:72096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.607 [2024-11-28 08:23:50.553791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:25:24.607 [2024-11-28 08:23:50.553809] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:72104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.607 [2024-11-28 08:23:50.553822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:25:24.607 [2024-11-28 08:23:50.553839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:72112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.607 [2024-11-28 08:23:50.553849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:25:24.607 [2024-11-28 08:23:50.553867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:72120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.607 [2024-11-28 08:23:50.553879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:25:24.607 [2024-11-28 08:23:50.553896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:72128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.607 [2024-11-28 08:23:50.553906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:24.607 [2024-11-28 08:23:50.553924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:72136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.607 [2024-11-28 08:23:50.553935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:24.607 [2024-11-28 08:23:50.553960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:72144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.607 [2024-11-28 08:23:50.553971] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:24.607 [2024-11-28 08:23:50.553987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:72152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.607 [2024-11-28 08:23:50.554000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:25:24.607 [2024-11-28 08:23:50.554019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.607 [2024-11-28 08:23:50.554030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:25:24.607 [2024-11-28 08:23:50.554046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:72168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.607 [2024-11-28 08:23:50.554057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:25:24.607 [2024-11-28 08:23:50.554074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:72176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.607 [2024-11-28 08:23:50.554087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:25:24.607 [2024-11-28 08:23:50.554105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:72184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.607 [2024-11-28 08:23:50.554116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:25:24.607 [2024-11-28 08:23:50.554132] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:72192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.607 [2024-11-28 08:23:50.554144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:25:24.607 [2024-11-28 08:23:50.554162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:72200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.607 [2024-11-28 08:23:50.554175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:25:24.607 [2024-11-28 08:23:50.554193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:72208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.607 [2024-11-28 08:23:50.554203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:25:24.607 [2024-11-28 08:23:50.554221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:72216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.607 [2024-11-28 08:23:50.554233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:25:24.607 [2024-11-28 08:23:50.554250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:72224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.607 [2024-11-28 08:23:50.554260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:25:24.607 [2024-11-28 08:23:50.554277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:72232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.607 [2024-11-28 08:23:50.554290] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:25:24.607 [2024-11-28 08:23:50.554307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:72240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.607 [2024-11-28 08:23:50.554318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:25:24.607 [2024-11-28 08:23:50.554335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:72248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.607 [2024-11-28 08:23:50.554347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:25:24.607 [2024-11-28 08:23:50.554366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:72256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.607 [2024-11-28 08:23:50.554377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:25:24.607 [2024-11-28 08:23:50.554394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:72264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.607 [2024-11-28 08:23:50.554405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:25:24.607 [2024-11-28 08:23:50.554422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:72272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.607 [2024-11-28 08:23:50.554434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:24.607 [2024-11-28 08:23:50.554452] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:72280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.607 [2024-11-28 08:23:50.554465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:24.607 [2024-11-28 08:23:50.554481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:72288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.607 [2024-11-28 08:23:50.554493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:25:24.607 [2024-11-28 08:23:50.554510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:72296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.607 [2024-11-28 08:23:50.554521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:25:24.607 [2024-11-28 08:23:50.554542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:72304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.607 [2024-11-28 08:23:50.554554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:25:24.607 [2024-11-28 08:23:50.554571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:72312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.607 [2024-11-28 08:23:50.554583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:25:24.607 [2024-11-28 08:23:50.554600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:72320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.607 [2024-11-28 08:23:50.554613] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:25:24.607 [2024-11-28 08:23:50.554630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:72328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.607 [2024-11-28 08:23:50.554642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:25:24.607 [2024-11-28 08:23:50.554659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:72336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.607 [2024-11-28 08:23:50.554671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:25:24.607 [2024-11-28 08:23:50.554689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:72344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.607 [2024-11-28 08:23:50.554700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:25:24.607 [2024-11-28 08:23:50.554716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:72352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.607 [2024-11-28 08:23:50.554727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:25:24.607 [2024-11-28 08:23:50.554745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:72360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.607 [2024-11-28 08:23:50.554756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:25:24.607 [2024-11-28 08:23:50.554773] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:72368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.607 [2024-11-28 08:23:50.554783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:25:24.607 [2024-11-28 08:23:50.554802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:72376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.607 [2024-11-28 08:23:50.554814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:25:24.607 [2024-11-28 08:23:50.554832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:72384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.607 [2024-11-28 08:23:50.554842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:25:24.607 [2024-11-28 08:23:50.554859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:72392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.607 [2024-11-28 08:23:50.554871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:25:24.607 [2024-11-28 08:23:50.554892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:72400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.607 [2024-11-28 08:23:50.554902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:25:24.608 [2024-11-28 08:23:50.554919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:72408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.608 [2024-11-28 08:23:50.554931] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:25:24.608 [2024-11-28 08:23:50.554956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:72416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.608 [2024-11-28 08:23:50.554967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0033 p:0 m:0 dnr:0
[... further nvme_qpair.c command/completion pairs elided: repeated WRITE (lba 71992-72864) and READ (lba 71848-71984) commands on sqid:1, each completing with ASYMMETRIC ACCESS INACCESSIBLE (03/02), sqhd 0034 through 0024, timestamps 08:23:50.554985 through 08:23:50.563236 ...]
00:25:24.611 [2024-11-28 08:23:50.563263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:72304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.611 [2024-11-28 08:23:50.563275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:25:24.611 [2024-11-28 08:23:50.563292] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:72312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.611 [2024-11-28 08:23:50.563304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:25:24.611 [2024-11-28 08:23:50.563321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:72320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.611 [2024-11-28 08:23:50.563334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:25:24.611 [2024-11-28 08:23:50.563352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:72328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.611 [2024-11-28 08:23:50.563363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:25:24.611 [2024-11-28 08:23:50.563380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:72336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.611 [2024-11-28 08:23:50.563392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:25:24.611 [2024-11-28 08:23:50.563409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:72344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.611 [2024-11-28 08:23:50.563422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:25:24.611 [2024-11-28 08:23:50.563439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:72352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.611 [2024-11-28 08:23:50.563454] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:25:24.611 [2024-11-28 08:23:50.563470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:72360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.611 [2024-11-28 08:23:50.563483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:25:24.611 [2024-11-28 08:23:50.563501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:72368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.611 [2024-11-28 08:23:50.563513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:25:24.611 [2024-11-28 08:23:50.563529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:72376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.611 [2024-11-28 08:23:50.563540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:25:24.611 [2024-11-28 08:23:50.563559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:72384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.611 [2024-11-28 08:23:50.563571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:25:24.611 [2024-11-28 08:23:50.563587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:72392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.611 [2024-11-28 08:23:50.563597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:25:24.611 [2024-11-28 08:23:50.563615] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:72400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.611 [2024-11-28 08:23:50.563626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:25:24.611 [2024-11-28 08:23:50.563644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:72408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.611 [2024-11-28 08:23:50.563654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:25:24.611 [2024-11-28 08:23:50.563671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:72416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.611 [2024-11-28 08:23:50.563683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:25:24.611 [2024-11-28 08:23:50.563701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:72424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.611 [2024-11-28 08:23:50.563712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:25:24.611 [2024-11-28 08:23:50.563914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:72432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.611 [2024-11-28 08:23:50.563929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:25:24.611 [2024-11-28 08:23:50.563974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:72440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.611 [2024-11-28 08:23:50.563987] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:25:24.611 [2024-11-28 08:23:50.564007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:72448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.611 [2024-11-28 08:23:50.564018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:25:24.611 [2024-11-28 08:23:50.564045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:72456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.611 [2024-11-28 08:23:50.564057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:25:24.611 [2024-11-28 08:23:50.564078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:72464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.611 [2024-11-28 08:23:50.564089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:25:24.611 [2024-11-28 08:23:50.564109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:72472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.611 [2024-11-28 08:23:50.564121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:25:24.611 [2024-11-28 08:23:50.564142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:72480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.611 [2024-11-28 08:23:50.564153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:25:24.611 [2024-11-28 08:23:50.564173] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:72488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.611 [2024-11-28 08:23:50.564183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:25:24.611 [2024-11-28 08:23:50.564204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:72496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.611 [2024-11-28 08:23:50.564216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:25:24.611 [2024-11-28 08:23:50.564235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:72504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.611 [2024-11-28 08:23:50.564245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:25:24.611 [2024-11-28 08:23:50.564266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:72512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.611 [2024-11-28 08:23:50.564278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:25:24.611 [2024-11-28 08:23:50.564299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:72520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.611 [2024-11-28 08:23:50.564310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:25:24.611 [2024-11-28 08:23:50.564330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:72528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.611 [2024-11-28 08:23:50.564342] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:24.611 [2024-11-28 08:23:50.564362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:72536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.611 [2024-11-28 08:23:50.564372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:24.611 [2024-11-28 08:23:50.564392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:72544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.611 [2024-11-28 08:23:50.564403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:25:24.611 [2024-11-28 08:23:50.564428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:72552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.611 [2024-11-28 08:23:50.564438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:25:24.611 [2024-11-28 08:23:50.564458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:71848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.611 [2024-11-28 08:23:50.564470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:25:24.611 [2024-11-28 08:23:50.564492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:71856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.611 [2024-11-28 08:23:50.564503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:25:24.611 [2024-11-28 08:23:50.564523] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:72560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.611 [2024-11-28 08:23:50.564534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:25:24.611 [2024-11-28 08:23:50.564556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:72568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.611 [2024-11-28 08:23:50.564567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:25:24.612 [2024-11-28 08:23:50.564586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:72576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.612 [2024-11-28 08:23:50.564597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:25:24.612 [2024-11-28 08:23:50.564618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:72584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.612 [2024-11-28 08:23:50.564629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:25:24.612 [2024-11-28 08:23:50.564650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:72592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.612 [2024-11-28 08:23:50.564660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:25:24.612 [2024-11-28 08:23:50.564681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:72600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.612 [2024-11-28 08:23:50.564692] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:25:24.612 [2024-11-28 08:23:50.564712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:72608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.612 [2024-11-28 08:23:50.564722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:25:24.612 [2024-11-28 08:23:50.564743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:72616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.612 [2024-11-28 08:23:50.564754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:25:24.612 [2024-11-28 08:23:50.564775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:72624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.612 [2024-11-28 08:23:50.564787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:25:24.612 [2024-11-28 08:23:50.564807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:72632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.612 [2024-11-28 08:23:50.564821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:25:24.612 [2024-11-28 08:23:50.564841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:72640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.612 [2024-11-28 08:23:50.564854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:25:24.612 [2024-11-28 08:23:50.564874] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:72648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.612 [2024-11-28 08:23:50.564886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:25:24.612 [2024-11-28 08:23:50.564906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:72656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.612 [2024-11-28 08:23:50.564918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:25:24.612 [2024-11-28 08:23:50.564939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:72664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.612 [2024-11-28 08:23:50.564956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:25:24.612 [2024-11-28 08:23:50.564976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:72672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.612 [2024-11-28 08:23:50.564988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:25:24.612 [2024-11-28 08:23:50.565009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:72680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.612 [2024-11-28 08:23:50.565021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:25:24.612 [2024-11-28 08:23:50.565041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:72688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.612 [2024-11-28 08:23:50.565052] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:25:24.612 [2024-11-28 08:23:50.565073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:72696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.612 [2024-11-28 08:23:50.565085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:25:24.612 [2024-11-28 08:23:50.565106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:72704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.612 [2024-11-28 08:23:50.565117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:25:24.612 [2024-11-28 08:23:50.565137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:72712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.612 [2024-11-28 08:23:50.565150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:25:24.612 [2024-11-28 08:23:50.565171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:72720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.612 [2024-11-28 08:23:50.565182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:25:24.612 [2024-11-28 08:23:50.565201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:72728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.612 [2024-11-28 08:23:50.565216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:25:24.612 [2024-11-28 08:23:50.565237] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:72736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.612 [2024-11-28 08:23:50.565247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:25:24.612 [2024-11-28 08:23:50.565268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:72744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.612 [2024-11-28 08:23:50.565280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:25:24.612 [2024-11-28 08:23:50.565302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:72752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.612 [2024-11-28 08:23:50.565312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:25:24.612 [2024-11-28 08:23:50.565333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:72760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.612 [2024-11-28 08:23:50.565345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:24.612 [2024-11-28 08:23:50.565366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:72768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.612 [2024-11-28 08:23:50.565377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:24.612 [2024-11-28 08:23:50.565398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:72776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.612 [2024-11-28 08:23:50.565409] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:24.612 [2024-11-28 08:23:50.565431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:72784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.612 [2024-11-28 08:23:50.565442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:25:24.612 [2024-11-28 08:23:50.565462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:72792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.612 [2024-11-28 08:23:50.565472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:25:24.612 [2024-11-28 08:23:50.565492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:72800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.612 [2024-11-28 08:23:50.565505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:25:24.612 [2024-11-28 08:23:50.565526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:72808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.612 [2024-11-28 08:23:50.565538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:25:24.612 [2024-11-28 08:23:50.565558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:72816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.612 [2024-11-28 08:23:50.565570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:24.612 [2024-11-28 08:23:50.565593] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:71864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.612 [2024-11-28 08:23:50.565604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:25:24.612 [2024-11-28 08:23:50.565627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:71872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.612 [2024-11-28 08:23:50.565639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:25:24.612 [2024-11-28 08:23:50.565661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:71880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.612 [2024-11-28 08:23:50.565672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:25:24.612 [2024-11-28 08:23:50.565692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:71888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.612 [2024-11-28 08:23:50.565704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:25:24.612 [2024-11-28 08:23:50.565725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:71896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.612 [2024-11-28 08:23:50.565736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:25:24.612 [2024-11-28 08:23:50.565755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:71904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.612 [2024-11-28 08:23:50.565767] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:25:24.613 [2024-11-28 08:23:50.565789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:71912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.613 [2024-11-28 08:23:50.565800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:25:24.613 [2024-11-28 08:23:50.565821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:71920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.613 [2024-11-28 08:23:50.565832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:24.613 [2024-11-28 08:23:50.565853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:71928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.613 [2024-11-28 08:23:50.565865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:25:24.613 [2024-11-28 08:23:50.565886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:71936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.613 [2024-11-28 08:23:50.565896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:24.613 [2024-11-28 08:23:50.565916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:71944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.613 [2024-11-28 08:23:50.565929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:25:24.613 [2024-11-28 08:23:50.565954] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:71952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.613 [2024-11-28 08:23:50.565965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:25:24.613 [2024-11-28 08:23:50.565986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:71960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.613 [2024-11-28 08:23:50.565998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:24.613 [2024-11-28 08:23:50.566022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:71968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.613 [2024-11-28 08:23:50.566033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:25:24.613 [2024-11-28 08:23:50.566053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:71976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.613 [2024-11-28 08:23:50.566065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:25:24.613 [2024-11-28 08:23:50.566085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:71984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.613 [2024-11-28 08:23:50.566096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:24.613 [2024-11-28 08:23:50.566117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:72824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.613 [2024-11-28 08:23:50.566128] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:25:24.613 [2024-11-28 08:23:50.566150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:72832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.613 [2024-11-28 08:23:50.566160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:25:24.613 [2024-11-28 08:23:50.566180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:72840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.613 [2024-11-28 08:23:50.566191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:25:24.613 [2024-11-28 08:23:50.566212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:72848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.613 [2024-11-28 08:23:50.566224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:25:24.613 [2024-11-28 08:23:50.566383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:72856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.613 [2024-11-28 08:23:50.566396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:24.613 10773.31 IOPS, 42.08 MiB/s [2024-11-28T07:24:06.882Z] 10003.79 IOPS, 39.08 MiB/s [2024-11-28T07:24:06.882Z] 9336.87 IOPS, 36.47 MiB/s [2024-11-28T07:24:06.882Z] 8918.19 IOPS, 34.84 MiB/s [2024-11-28T07:24:06.882Z] 9040.12 IOPS, 35.31 MiB/s [2024-11-28T07:24:06.882Z] 9141.61 IOPS, 35.71 MiB/s [2024-11-28T07:24:06.882Z] 9330.21 IOPS, 36.45 MiB/s [2024-11-28T07:24:06.882Z] 9517.35 IOPS, 37.18 MiB/s 
[2024-11-28T07:24:06.882Z] 9687.76 IOPS, 37.84 MiB/s [2024-11-28T07:24:06.882Z] 9747.23 IOPS, 38.08 MiB/s [2024-11-28T07:24:06.882Z] 9805.83 IOPS, 38.30 MiB/s [2024-11-28T07:24:06.882Z] 9878.08 IOPS, 38.59 MiB/s [2024-11-28T07:24:06.882Z] 10002.12 IOPS, 39.07 MiB/s [2024-11-28T07:24:06.882Z] 10117.69 IOPS, 39.52 MiB/s [2024-11-28T07:24:06.882Z] [2024-11-28 08:24:04.144681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:38968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.613 [2024-11-28 08:24:04.144723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:24.613 [2024-11-28 08:24:04.144763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:38984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.613 [2024-11-28 08:24:04.144774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:25:24.613 [2024-11-28 08:24:04.144790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:39000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.613 [2024-11-28 08:24:04.144800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:25:24.613 [2024-11-28 08:24:04.144821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:39016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.613 [2024-11-28 08:24:04.144831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:25:24.613 [2024-11-28 08:24:04.144848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:39032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.613 [2024-11-28 08:24:04.144858] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:25:24.613 [2024-11-28 08:24:04.144874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:39048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.613 [2024-11-28 08:24:04.144883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:24.613 [2024-11-28 08:24:04.144900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:39064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.613 [2024-11-28 08:24:04.144911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:25:24.613 [2024-11-28 08:24:04.144928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:39080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.613 [2024-11-28 08:24:04.144939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:25:24.613 [2024-11-28 08:24:04.144961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:39096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.613 [2024-11-28 08:24:04.144972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:25:24.613 [2024-11-28 08:24:04.144988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:39112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.613 [2024-11-28 08:24:04.144999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:25:24.613 [2024-11-28 08:24:04.145016] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:38952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.613 [2024-11-28 08:24:04.145027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:25:24.613 [2024-11-28 08:24:04.145043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:39136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.613 [2024-11-28 08:24:04.145053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:25:24.613 [2024-11-28 08:24:04.145069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:39152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.613 [2024-11-28 08:24:04.145078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:25:24.613 [2024-11-28 08:24:04.145093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:39168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.613 [2024-11-28 08:24:04.145103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:24.613 [2024-11-28 08:24:04.145119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:39184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.613 [2024-11-28 08:24:04.145128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:25:24.613 [2024-11-28 08:24:04.145147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:39200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.613 [2024-11-28 08:24:04.145156] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:24.613 [2024-11-28 08:24:04.145173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:39216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.613 [2024-11-28 08:24:04.145184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:25:24.613 [2024-11-28 08:24:04.145201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:39232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.613 [2024-11-28 08:24:04.145213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:25:24.613 [2024-11-28 08:24:04.145231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:39248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.613 [2024-11-28 08:24:04.145243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:24.613 [2024-11-28 08:24:04.145261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:39264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.613 [2024-11-28 08:24:04.145272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:25:24.613 [2024-11-28 08:24:04.145290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:39280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.613 [2024-11-28 08:24:04.145302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:25:24.613 [2024-11-28 08:24:04.145320] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:39296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.614 [2024-11-28 08:24:04.145332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:24.614 [2024-11-28 08:24:04.145349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:39312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.614 [2024-11-28 08:24:04.145361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:25:24.614 [2024-11-28 08:24:04.145378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:39328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.614 [2024-11-28 08:24:04.145390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:25:24.614 [2024-11-28 08:24:04.145407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:39344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.614 [2024-11-28 08:24:04.145419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:25:24.614 [2024-11-28 08:24:04.145436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:39360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.614 [2024-11-28 08:24:04.145448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:25:24.614 [2024-11-28 08:24:04.145466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:39376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.614 [2024-11-28 08:24:04.145478] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:24.614 [2024-11-28 08:24:04.145495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:39392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.614 [2024-11-28 08:24:04.145509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:25:24.614 [2024-11-28 08:24:04.145526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:39408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.614 [2024-11-28 08:24:04.145538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:24.614 [2024-11-28 08:24:04.145555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:39424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.614 [2024-11-28 08:24:04.145566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:24.614 [2024-11-28 08:24:04.145583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:39440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.614 [2024-11-28 08:24:04.145594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.614 [2024-11-28 08:24:04.145611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:39456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.614 [2024-11-28 08:24:04.145623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:24.614 [2024-11-28 08:24:04.145641] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:39472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.614 [2024-11-28 08:24:04.145652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:24.614 [2024-11-28 08:24:04.145670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:39488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.614 [2024-11-28 08:24:04.145681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:25:24.614 [2024-11-28 08:24:04.145699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:39504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.614 [2024-11-28 08:24:04.145710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:25:24.614 [2024-11-28 08:24:04.145728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:39520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.614 [2024-11-28 08:24:04.145739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:25:24.614 [2024-11-28 08:24:04.145757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:39536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.614 [2024-11-28 08:24:04.145768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:25:24.614 [2024-11-28 08:24:04.147339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:39552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.614 [2024-11-28 08:24:04.147363] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:25:24.614 [2024-11-28 08:24:04.147386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:39568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.614 [2024-11-28 08:24:04.147398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:25:24.614 [2024-11-28 08:24:04.147415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:39584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.614 [2024-11-28 08:24:04.147431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:25:24.614 [2024-11-28 08:24:04.147449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:39600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.614 [2024-11-28 08:24:04.147460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:25:24.614 [2024-11-28 08:24:04.147477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:39616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.614 [2024-11-28 08:24:04.147489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:25:24.614 [2024-11-28 08:24:04.147506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:39632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.614 [2024-11-28 08:24:04.147517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:25:24.614 [2024-11-28 08:24:04.147534] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:39648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.614 [2024-11-28 08:24:04.147546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:25:24.614 [2024-11-28 08:24:04.147563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:39664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.614 [2024-11-28 08:24:04.147575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:25:24.614 [2024-11-28 08:24:04.147592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:39680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.614 [2024-11-28 08:24:04.147604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:24.614 [2024-11-28 08:24:04.147621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:39696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.614 [2024-11-28 08:24:04.147633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:24.614 [2024-11-28 08:24:04.147650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:39712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.614 [2024-11-28 08:24:04.147662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:24.614 [2024-11-28 08:24:04.147679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:39728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.614 [2024-11-28 08:24:04.147691] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:25:24.614 [2024-11-28 08:24:04.147708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:39744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.614 [2024-11-28 08:24:04.147720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:25:24.614 [2024-11-28 08:24:04.147737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:38976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.614 [2024-11-28 08:24:04.147749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:25:24.614 [2024-11-28 08:24:04.147766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:39008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.614 [2024-11-28 08:24:04.147777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:25:24.614 [2024-11-28 08:24:04.147797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:39040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.614 [2024-11-28 08:24:04.147809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:25:24.614 [2024-11-28 08:24:04.147826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:39072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.614 [2024-11-28 08:24:04.147838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:25:24.614 [2024-11-28 08:24:04.147855] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:39104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.614 [2024-11-28 08:24:04.147866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:25:24.614 [2024-11-28 08:24:04.147885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:39128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.614 [2024-11-28 08:24:04.147897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:25:24.614 [2024-11-28 08:24:04.147915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:39160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.614 [2024-11-28 08:24:04.147926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:25:24.614 [2024-11-28 08:24:04.147943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:39192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.614 [2024-11-28 08:24:04.147962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:25:24.614 [2024-11-28 08:24:04.147980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:39224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.614 [2024-11-28 08:24:04.147991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:25:24.614 [2024-11-28 08:24:04.148009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:39256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.614 [2024-11-28 08:24:04.148021] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:25:24.615 [2024-11-28 08:24:04.148288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:39752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.615 [2024-11-28 08:24:04.148308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:25:24.615 [2024-11-28 08:24:04.148327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:39768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.615 [2024-11-28 08:24:04.148339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:25:24.615 [2024-11-28 08:24:04.148356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:39784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.615 [2024-11-28 08:24:04.148367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:25:24.615 [2024-11-28 08:24:04.148385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:39800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.615 [2024-11-28 08:24:04.148395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:24.615 [2024-11-28 08:24:04.148416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:39816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.615 [2024-11-28 08:24:04.148427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:24.615 [2024-11-28 08:24:04.148444] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:39832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.615 [2024-11-28 08:24:04.148456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:25:24.615 [2024-11-28 08:24:04.148473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:39848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.615 [2024-11-28 08:24:04.148483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:25:24.615 [2024-11-28 08:24:04.148501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:39864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.615 [2024-11-28 08:24:04.148511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:25:24.615 [2024-11-28 08:24:04.148529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:39880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.615 [2024-11-28 08:24:04.148540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:25:24.615 [2024-11-28 08:24:04.148557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:39896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.615 [2024-11-28 08:24:04.148568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:25:24.615 [2024-11-28 08:24:04.148585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:39912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.615 [2024-11-28 08:24:04.148596] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:25:24.615 [2024-11-28 08:24:04.148613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:39928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.615 [2024-11-28 08:24:04.148624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:25:24.615 [2024-11-28 08:24:04.148641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:39944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.615 [2024-11-28 08:24:04.148652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:25:24.615 [2024-11-28 08:24:04.148669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:39960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.615 [2024-11-28 08:24:04.148680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:25:24.615 10175.89 IOPS, 39.75 MiB/s [2024-11-28T07:24:06.884Z] 10197.54 IOPS, 39.83 MiB/s [2024-11-28T07:24:06.884Z] Received shutdown signal, test time was about 28.629920 seconds 00:25:24.615 00:25:24.615 Latency(us) 00:25:24.615 [2024-11-28T07:24:06.884Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:24.615 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:25:24.615 Verification LBA range: start 0x0 length 0x4000 00:25:24.615 Nvme0n1 : 28.63 10212.46 39.89 0.00 0.00 12512.49 566.32 3078254.41 00:25:24.615 [2024-11-28T07:24:06.884Z] =================================================================================================================== 00:25:24.615 [2024-11-28T07:24:06.884Z] Total : 10212.46 39.89 0.00 0.00 
12512.49 566.32 3078254.41 00:25:24.615 08:24:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:24.874 08:24:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:25:24.874 08:24:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:25:24.874 08:24:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:25:24.874 08:24:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:24.874 08:24:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # sync 00:25:24.874 08:24:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:24.874 08:24:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set +e 00:25:24.874 08:24:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:24.874 08:24:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:24.874 rmmod nvme_tcp 00:25:24.874 rmmod nvme_fabrics 00:25:24.874 rmmod nvme_keyring 00:25:24.874 08:24:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:24.874 08:24:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@128 -- # set -e 00:25:24.874 08:24:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@129 -- # return 0 00:25:24.874 08:24:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@517 -- # '[' -n 1470123 ']' 00:25:24.874 08:24:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@518 -- # killprocess 1470123 00:25:24.874 
08:24:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 1470123 ']' 00:25:24.874 08:24:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 1470123 00:25:24.874 08:24:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname 00:25:24.874 08:24:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:24.874 08:24:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1470123 00:25:24.874 08:24:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:24.874 08:24:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:24.874 08:24:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1470123' 00:25:24.874 killing process with pid 1470123 00:25:24.874 08:24:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 1470123 00:25:24.874 08:24:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 1470123 00:25:24.874 08:24:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:24.874 08:24:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:24.874 08:24:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:24.874 08:24:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # iptr 00:25:24.874 08:24:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-save 00:25:24.874 08:24:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:24.874 08:24:07 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-restore 00:25:25.133 08:24:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:25.133 08:24:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:25.133 08:24:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:25.133 08:24:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:25.133 08:24:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:27.039 08:24:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:27.039 00:25:27.039 real 0m39.971s 00:25:27.039 user 1m49.317s 00:25:27.039 sys 0m11.176s 00:25:27.039 08:24:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:27.039 08:24:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:25:27.039 ************************************ 00:25:27.039 END TEST nvmf_host_multipath_status 00:25:27.039 ************************************ 00:25:27.039 08:24:09 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:25:27.039 08:24:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:25:27.039 08:24:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:27.039 08:24:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:27.039 ************************************ 00:25:27.039 START TEST nvmf_discovery_remove_ifc 00:25:27.039 ************************************ 00:25:27.039 
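The teardown above (nvmftestfini, ending in the `iptr` step from nvmf/common.sh) removes SPDK's firewall rules by round-tripping the ruleset: `iptables-save | grep -v SPDK_NVMF | iptables-restore`, so every rule tagged with the `SPDK_NVMF` comment is dropped and everything else is restored untouched. A minimal sketch of that filter step, run here against a canned ruleset (the rule text is illustrative) rather than a live firewall, which would require root:

```shell
#!/usr/bin/env bash
# Sketch of the iptr cleanup pattern: drop every rule carrying the
# SPDK_NVMF comment tag, keep the rest. On a live host this is
#   iptables-save | grep -v SPDK_NVMF | iptables-restore
# (root required); here we filter a hypothetical saved dump instead.
rules='-A INPUT -i lo -j ACCEPT
-A INPUT -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment SPDK_NVMF:4420
-A INPUT -p icmp -j ACCEPT'

# Only the SPDK-tagged rule disappears; unrelated rules survive.
kept=$(printf '%s\n' "$rules" | grep -v SPDK_NVMF)
printf '%s\n' "$kept"
```

Tagging every inserted rule with a fixed comment is what makes this safe: the cleanup never has to remember which rules it added, it just filters on the marker.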
08:24:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:25:27.299 * Looking for test storage... 00:25:27.299 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:27.299 08:24:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:25:27.299 08:24:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1693 -- # lcov --version 00:25:27.299 08:24:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:25:27.299 08:24:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:25:27.299 08:24:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:27.299 08:24:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:27.299 08:24:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:27.299 08:24:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # IFS=.-: 00:25:27.299 08:24:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # read -ra ver1 00:25:27.299 08:24:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # IFS=.-: 00:25:27.299 08:24:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # read -ra ver2 00:25:27.299 08:24:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@338 -- # local 'op=<' 00:25:27.299 08:24:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@340 -- # ver1_l=2 00:25:27.299 08:24:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@341 -- # ver2_l=1 00:25:27.299 08:24:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:27.299 08:24:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@344 -- # case "$op" in 00:25:27.299 08:24:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@345 -- # : 1 00:25:27.299 08:24:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:27.299 08:24:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:25:27.299 08:24:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # decimal 1 00:25:27.299 08:24:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=1 00:25:27.299 08:24:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:27.299 08:24:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 1 00:25:27.299 08:24:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # ver1[v]=1 00:25:27.299 08:24:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # decimal 2 00:25:27.299 08:24:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=2 00:25:27.299 08:24:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:27.299 08:24:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 2 00:25:27.299 08:24:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # ver2[v]=2 00:25:27.299 08:24:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:27.299 08:24:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:27.299 08:24:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # return 0 00:25:27.299 08:24:09 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:27.299 08:24:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:25:27.299 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:27.299 --rc genhtml_branch_coverage=1 00:25:27.299 --rc genhtml_function_coverage=1 00:25:27.299 --rc genhtml_legend=1 00:25:27.299 --rc geninfo_all_blocks=1 00:25:27.299 --rc geninfo_unexecuted_blocks=1 00:25:27.299 00:25:27.299 ' 00:25:27.299 08:24:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:25:27.299 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:27.299 --rc genhtml_branch_coverage=1 00:25:27.299 --rc genhtml_function_coverage=1 00:25:27.299 --rc genhtml_legend=1 00:25:27.299 --rc geninfo_all_blocks=1 00:25:27.299 --rc geninfo_unexecuted_blocks=1 00:25:27.299 00:25:27.299 ' 00:25:27.299 08:24:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:25:27.299 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:27.299 --rc genhtml_branch_coverage=1 00:25:27.299 --rc genhtml_function_coverage=1 00:25:27.299 --rc genhtml_legend=1 00:25:27.299 --rc geninfo_all_blocks=1 00:25:27.299 --rc geninfo_unexecuted_blocks=1 00:25:27.299 00:25:27.299 ' 00:25:27.299 08:24:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:25:27.299 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:27.299 --rc genhtml_branch_coverage=1 00:25:27.299 --rc genhtml_function_coverage=1 00:25:27.299 --rc genhtml_legend=1 00:25:27.299 --rc geninfo_all_blocks=1 00:25:27.299 --rc geninfo_unexecuted_blocks=1 00:25:27.299 00:25:27.299 ' 00:25:27.299 08:24:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:27.299 08:24:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:25:27.299 08:24:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:27.299 08:24:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:27.299 08:24:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:27.299 08:24:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:27.299 08:24:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:27.299 08:24:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:27.299 08:24:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:27.299 08:24:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:27.299 08:24:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:27.299 08:24:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:27.299 08:24:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:25:27.299 08:24:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:25:27.299 08:24:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:27.299 08:24:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:27.299 08:24:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:27.299 08:24:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:27.299 08:24:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:27.299 08:24:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@15 -- # shopt -s extglob 00:25:27.299 08:24:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:27.299 08:24:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:27.299 08:24:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:27.299 08:24:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:27.299 08:24:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:27.300 08:24:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:27.300 08:24:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:25:27.300 08:24:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:27.300 08:24:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # : 0 00:25:27.300 08:24:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:27.300 08:24:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:27.300 08:24:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:27.300 08:24:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:27.300 08:24:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:27.300 08:24:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:27.300 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:27.300 08:24:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:27.300 08:24:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:27.300 08:24:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:27.300 08:24:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:25:27.300 
08:24:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:25:27.300 08:24:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:25:27.300 08:24:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:25:27.300 08:24:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:25:27.300 08:24:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:25:27.300 08:24:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:25:27.300 08:24:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:27.300 08:24:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:27.300 08:24:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:27.300 08:24:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:27.300 08:24:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:27.300 08:24:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:27.300 08:24:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:27.300 08:24:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:27.300 08:24:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:27.300 08:24:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:25:27.300 08:24:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@309 -- # xtrace_disable 00:25:27.300 08:24:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:32.567 08:24:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:32.567 08:24:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # pci_devs=() 00:25:32.567 08:24:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:32.567 08:24:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:32.567 08:24:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:32.567 08:24:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:32.567 08:24:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:32.567 08:24:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # net_devs=() 00:25:32.567 08:24:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:32.567 08:24:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # e810=() 00:25:32.567 08:24:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # local -ga e810 00:25:32.567 08:24:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # x722=() 00:25:32.567 08:24:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # local -ga x722 00:25:32.567 08:24:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # mlx=() 00:25:32.567 08:24:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # local -ga mlx 00:25:32.567 08:24:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@325 
-- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:32.567 08:24:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:32.567 08:24:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:32.567 08:24:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:32.567 08:24:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:32.567 08:24:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:32.567 08:24:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:32.567 08:24:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:32.567 08:24:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:32.567 08:24:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:32.567 08:24:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:32.567 08:24:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:32.567 08:24:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:32.567 08:24:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:32.567 08:24:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:32.567 08:24:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@355 -- # 
[[ e810 == e810 ]] 00:25:32.567 08:24:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:32.567 08:24:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:32.567 08:24:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:32.567 08:24:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:25:32.567 Found 0000:86:00.0 (0x8086 - 0x159b) 00:25:32.567 08:24:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:32.567 08:24:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:32.567 08:24:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:32.567 08:24:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:32.567 08:24:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:32.567 08:24:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:32.567 08:24:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:25:32.567 Found 0000:86:00.1 (0x8086 - 0x159b) 00:25:32.567 08:24:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:32.567 08:24:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:32.567 08:24:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:32.567 08:24:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:32.567 08:24:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:32.567 08:24:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:32.567 08:24:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:32.567 08:24:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:32.567 08:24:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:32.567 08:24:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:32.567 08:24:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:32.567 08:24:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:32.567 08:24:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:32.567 08:24:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:32.567 08:24:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:32.567 08:24:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:25:32.567 Found net devices under 0000:86:00.0: cvl_0_0 00:25:32.567 08:24:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:32.567 08:24:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:32.567 08:24:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:32.567 08:24:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:32.567 08:24:14 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:32.567 08:24:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:32.567 08:24:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:32.567 08:24:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:32.567 08:24:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:25:32.567 Found net devices under 0000:86:00.1: cvl_0_1 00:25:32.567 08:24:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:32.568 08:24:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:32.568 08:24:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # is_hw=yes 00:25:32.568 08:24:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:32.568 08:24:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:32.568 08:24:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:32.568 08:24:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:32.568 08:24:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:32.568 08:24:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:32.568 08:24:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:32.568 08:24:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:32.568 08:24:14 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:32.568 08:24:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:32.568 08:24:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:32.568 08:24:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:32.568 08:24:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:32.568 08:24:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:32.568 08:24:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:32.568 08:24:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:32.568 08:24:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:32.568 08:24:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:32.568 08:24:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:32.568 08:24:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:32.568 08:24:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:32.568 08:24:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:32.568 08:24:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:32.568 08:24:14 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:32.568 08:24:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:32.568 08:24:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:32.568 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:32.568 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.293 ms 00:25:32.568 00:25:32.568 --- 10.0.0.2 ping statistics --- 00:25:32.568 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:32.568 rtt min/avg/max/mdev = 0.293/0.293/0.293/0.000 ms 00:25:32.568 08:24:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:32.568 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:32.568 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.151 ms 00:25:32.568 00:25:32.568 --- 10.0.0.1 ping statistics --- 00:25:32.568 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:32.568 rtt min/avg/max/mdev = 0.151/0.151/0.151/0.000 ms 00:25:32.568 08:24:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:32.568 08:24:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # return 0 00:25:32.568 08:24:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:32.568 08:24:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:32.568 08:24:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:32.568 08:24:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:32.568 08:24:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:32.568 08:24:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:32.568 08:24:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:32.568 08:24:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:25:32.568 08:24:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:32.568 08:24:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:32.568 08:24:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:32.568 08:24:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@509 -- # nvmfpid=1479570 00:25:32.568 08:24:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@510 -- # waitforlisten 1479570 00:25:32.568 08:24:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:25:32.568 08:24:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 1479570 ']' 00:25:32.568 08:24:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:32.568 08:24:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:32.568 08:24:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:32.568 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:32.568 08:24:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:32.568 08:24:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:32.827 [2024-11-28 08:24:14.877321] Starting SPDK v25.01-pre git sha1 27aaaa748 / DPDK 24.03.0 initialization... 00:25:32.827 [2024-11-28 08:24:14.877369] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:32.827 [2024-11-28 08:24:14.941435] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:32.827 [2024-11-28 08:24:14.982659] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:32.827 [2024-11-28 08:24:14.982696] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:25:32.827 [2024-11-28 08:24:14.982704] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:32.827 [2024-11-28 08:24:14.982710] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:32.827 [2024-11-28 08:24:14.982715] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:32.827 [2024-11-28 08:24:14.983333] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:32.827 08:24:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:32.827 08:24:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:25:32.827 08:24:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:32.827 08:24:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:32.827 08:24:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:33.086 08:24:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:33.086 08:24:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:25:33.086 08:24:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:33.086 08:24:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:33.086 [2024-11-28 08:24:15.122875] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:33.086 [2024-11-28 08:24:15.131053] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:25:33.086 null0 00:25:33.086 [2024-11-28 08:24:15.163039] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 
4420 *** 00:25:33.086 08:24:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:33.086 08:24:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=1479671 00:25:33.086 08:24:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:25:33.086 08:24:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 1479671 /tmp/host.sock 00:25:33.086 08:24:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 1479671 ']' 00:25:33.086 08:24:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:25:33.086 08:24:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:33.086 08:24:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:25:33.086 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:25:33.086 08:24:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:33.086 08:24:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:33.086 [2024-11-28 08:24:15.233211] Starting SPDK v25.01-pre git sha1 27aaaa748 / DPDK 24.03.0 initialization... 
00:25:33.086 [2024-11-28 08:24:15.233254] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1479671 ] 00:25:33.086 [2024-11-28 08:24:15.295034] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:33.086 [2024-11-28 08:24:15.338025] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:33.345 08:24:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:33.345 08:24:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:25:33.345 08:24:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:33.345 08:24:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:25:33.345 08:24:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:33.345 08:24:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:33.345 08:24:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:33.345 08:24:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:25:33.345 08:24:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:33.345 08:24:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:33.345 08:24:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:33.345 08:24:15 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:25:33.345 08:24:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:33.345 08:24:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:34.281 [2024-11-28 08:24:16.529468] bdev_nvme.c:7484:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:25:34.281 [2024-11-28 08:24:16.529487] bdev_nvme.c:7570:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:25:34.281 [2024-11-28 08:24:16.529502] bdev_nvme.c:7447:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:34.540 [2024-11-28 08:24:16.655903] bdev_nvme.c:7413:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:25:34.540 [2024-11-28 08:24:16.750613] bdev_nvme.c:5636:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.2:4420 00:25:34.540 [2024-11-28 08:24:16.751406] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x1c48a50:1 started. 
00:25:34.540 [2024-11-28 08:24:16.752747] bdev_nvme.c:8280:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:25:34.540 [2024-11-28 08:24:16.752788] bdev_nvme.c:8280:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:25:34.540 [2024-11-28 08:24:16.752807] bdev_nvme.c:8280:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:25:34.540 [2024-11-28 08:24:16.752819] bdev_nvme.c:7303:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:25:34.540 [2024-11-28 08:24:16.752836] bdev_nvme.c:7262:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:25:34.540 08:24:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:34.540 08:24:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:25:34.540 08:24:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:34.540 [2024-11-28 08:24:16.758384] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x1c48a50 was disconnected and freed. delete nvme_qpair. 
00:25:34.540 08:24:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:34.540 08:24:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:34.540 08:24:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:34.540 08:24:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:34.540 08:24:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:34.540 08:24:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:34.540 08:24:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:34.540 08:24:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:25:34.540 08:24:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:25:34.799 08:24:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:25:34.799 08:24:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:25:34.799 08:24:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:34.799 08:24:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:34.799 08:24:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:34.799 08:24:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:34.799 08:24:16 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:34.799 08:24:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:34.799 08:24:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:34.799 08:24:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:34.799 08:24:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:25:34.799 08:24:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:35.734 08:24:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:35.734 08:24:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:35.734 08:24:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:35.734 08:24:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:35.734 08:24:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:35.734 08:24:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:35.734 08:24:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:35.734 08:24:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:35.734 08:24:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:25:35.734 08:24:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:37.114 08:24:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # 
get_bdev_list 00:25:37.114 08:24:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:37.114 08:24:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:37.114 08:24:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:37.114 08:24:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:37.114 08:24:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:37.114 08:24:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:37.114 08:24:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:37.114 08:24:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:25:37.114 08:24:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:38.051 08:24:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:38.051 08:24:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:38.051 08:24:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:38.051 08:24:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:38.051 08:24:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:38.051 08:24:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:38.051 08:24:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:38.051 08:24:20 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:38.051 08:24:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:25:38.051 08:24:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:38.988 08:24:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:38.988 08:24:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:38.988 08:24:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:38.988 08:24:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:38.988 08:24:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:38.988 08:24:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:38.988 08:24:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:38.988 08:24:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:38.988 08:24:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:25:38.988 08:24:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:39.924 08:24:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:39.924 08:24:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:39.924 08:24:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:39.924 08:24:22 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:39.924 08:24:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:39.924 08:24:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:39.924 08:24:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:39.924 08:24:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:40.183 [2024-11-28 08:24:22.194329] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:25:40.183 [2024-11-28 08:24:22.194369] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:40.183 [2024-11-28 08:24:22.194381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.183 [2024-11-28 08:24:22.194391] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:40.183 [2024-11-28 08:24:22.194400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.183 [2024-11-28 08:24:22.194409] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:40.183 [2024-11-28 08:24:22.194418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.183 [2024-11-28 08:24:22.194428] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:40.183 
[2024-11-28 08:24:22.194435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.183 [2024-11-28 08:24:22.194444] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:25:40.183 [2024-11-28 08:24:22.194452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.183 [2024-11-28 08:24:22.194461] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c25240 is same with the state(6) to be set 00:25:40.183 08:24:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:25:40.183 08:24:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:40.183 [2024-11-28 08:24:22.204352] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c25240 (9): Bad file descriptor 00:25:40.183 [2024-11-28 08:24:22.214385] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:25:40.183 [2024-11-28 08:24:22.214395] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:25:40.183 [2024-11-28 08:24:22.214400] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:25:40.183 [2024-11-28 08:24:22.214404] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:25:40.183 [2024-11-28 08:24:22.214426] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:25:41.119 08:24:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:41.119 08:24:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:41.119 08:24:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:41.119 08:24:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:41.119 08:24:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:41.120 08:24:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:41.120 08:24:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:41.120 [2024-11-28 08:24:23.279979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:25:41.120 [2024-11-28 08:24:23.280024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c25240 with addr=10.0.0.2, port=4420 00:25:41.120 [2024-11-28 08:24:23.280040] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c25240 is same with the state(6) to be set 00:25:41.120 [2024-11-28 08:24:23.280070] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c25240 (9): Bad file descriptor 00:25:41.120 [2024-11-28 08:24:23.280484] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] Unable to perform failover, already in progress. 
00:25:41.120 [2024-11-28 08:24:23.280513] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:25:41.120 [2024-11-28 08:24:23.280523] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:25:41.120 [2024-11-28 08:24:23.280534] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:25:41.120 [2024-11-28 08:24:23.280544] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:25:41.120 [2024-11-28 08:24:23.280551] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:25:41.120 [2024-11-28 08:24:23.280558] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:25:41.120 [2024-11-28 08:24:23.280568] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:25:41.120 [2024-11-28 08:24:23.280574] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:25:41.120 08:24:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:41.120 08:24:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:25:41.120 08:24:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:42.057 [2024-11-28 08:24:24.283053] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:25:42.057 [2024-11-28 08:24:24.283075] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 
00:25:42.057 [2024-11-28 08:24:24.283086] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:25:42.057 [2024-11-28 08:24:24.283093] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:25:42.057 [2024-11-28 08:24:24.283100] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] already in failed state 00:25:42.057 [2024-11-28 08:24:24.283106] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:25:42.057 [2024-11-28 08:24:24.283111] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:25:42.057 [2024-11-28 08:24:24.283116] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:25:42.057 [2024-11-28 08:24:24.283135] bdev_nvme.c:7235:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:25:42.057 [2024-11-28 08:24:24.283156] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:42.057 [2024-11-28 08:24:24.283165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.057 [2024-11-28 08:24:24.283175] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:42.057 [2024-11-28 08:24:24.283187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.057 [2024-11-28 08:24:24.283194] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 
00:25:42.057 [2024-11-28 08:24:24.283201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.057 [2024-11-28 08:24:24.283208] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:42.057 [2024-11-28 08:24:24.283215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.057 [2024-11-28 08:24:24.283222] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:25:42.057 [2024-11-28 08:24:24.283228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.057 [2024-11-28 08:24:24.283234] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] in failed state. 00:25:42.057 [2024-11-28 08:24:24.283375] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c14910 (9): Bad file descriptor 00:25:42.057 [2024-11-28 08:24:24.284385] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:25:42.057 [2024-11-28 08:24:24.284395] nvme_ctrlr.c:1217:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] Failed to read the CC register 00:25:42.057 08:24:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:42.057 08:24:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:42.057 08:24:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:42.057 08:24:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 
00:25:42.057 08:24:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:42.057 08:24:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:42.057 08:24:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:42.057 08:24:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:42.317 08:24:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:25:42.317 08:24:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:42.317 08:24:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:42.317 08:24:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:25:42.317 08:24:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:42.317 08:24:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:42.317 08:24:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:42.317 08:24:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:42.317 08:24:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:42.317 08:24:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:42.317 08:24:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:42.317 08:24:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:25:42.317 08:24:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:25:42.317 08:24:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:43.254 08:24:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:43.254 08:24:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:43.254 08:24:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:43.254 08:24:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:43.254 08:24:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:43.254 08:24:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:43.254 08:24:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:43.254 08:24:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:43.512 08:24:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:25:43.512 08:24:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:44.079 [2024-11-28 08:24:26.335099] bdev_nvme.c:7484:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:25:44.079 [2024-11-28 08:24:26.335116] bdev_nvme.c:7570:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:25:44.079 [2024-11-28 08:24:26.335131] bdev_nvme.c:7447:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:44.338 [2024-11-28 08:24:26.422404] bdev_nvme.c:7413:discovery_log_page_cb: *INFO*: 
Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:25:44.338 [2024-11-28 08:24:26.525090] bdev_nvme.c:5636:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4420 00:25:44.338 [2024-11-28 08:24:26.525708] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] Connecting qpair 0x1c524a0:1 started. 00:25:44.338 [2024-11-28 08:24:26.526721] bdev_nvme.c:8280:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:25:44.338 [2024-11-28 08:24:26.526750] bdev_nvme.c:8280:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:25:44.338 [2024-11-28 08:24:26.526767] bdev_nvme.c:8280:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:25:44.338 [2024-11-28 08:24:26.526779] bdev_nvme.c:7303:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:25:44.338 [2024-11-28 08:24:26.526785] bdev_nvme.c:7262:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:25:44.338 08:24:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:44.338 08:24:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:44.338 08:24:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:44.338 08:24:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:44.338 08:24:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:44.338 08:24:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:44.338 08:24:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:44.338 [2024-11-28 08:24:26.533636] 
bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] qpair 0x1c524a0 was disconnected and freed. delete nvme_qpair. 00:25:44.338 08:24:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:44.338 08:24:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:25:44.338 08:24:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:25:44.338 08:24:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 1479671 00:25:44.338 08:24:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 1479671 ']' 00:25:44.338 08:24:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 1479671 00:25:44.338 08:24:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:25:44.338 08:24:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:44.338 08:24:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1479671 00:25:44.596 08:24:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:44.596 08:24:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:44.596 08:24:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1479671' 00:25:44.596 killing process with pid 1479671 00:25:44.596 08:24:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 1479671 00:25:44.596 08:24:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 1479671 00:25:44.596 08:24:26 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:25:44.596 08:24:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:44.596 08:24:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # sync 00:25:44.596 08:24:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:44.596 08:24:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set +e 00:25:44.596 08:24:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:44.596 08:24:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:44.596 rmmod nvme_tcp 00:25:44.596 rmmod nvme_fabrics 00:25:44.596 rmmod nvme_keyring 00:25:44.596 08:24:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:44.596 08:24:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@128 -- # set -e 00:25:44.596 08:24:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@129 -- # return 0 00:25:44.596 08:24:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@517 -- # '[' -n 1479570 ']' 00:25:44.596 08:24:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@518 -- # killprocess 1479570 00:25:44.596 08:24:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 1479570 ']' 00:25:44.596 08:24:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 1479570 00:25:44.596 08:24:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:25:44.596 08:24:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:44.596 08:24:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o 
comm= 1479570 00:25:44.855 08:24:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:25:44.855 08:24:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:25:44.855 08:24:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1479570' 00:25:44.855 killing process with pid 1479570 00:25:44.855 08:24:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 1479570 00:25:44.855 08:24:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 1479570 00:25:44.855 08:24:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:44.855 08:24:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:44.855 08:24:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:44.855 08:24:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # iptr 00:25:44.855 08:24:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:44.856 08:24:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-save 00:25:44.856 08:24:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-restore 00:25:44.856 08:24:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:44.856 08:24:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:44.856 08:24:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:44.856 08:24:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 
00:25:44.856 08:24:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:47.390 08:24:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:47.390 00:25:47.390 real 0m19.858s 00:25:47.390 user 0m24.492s 00:25:47.390 sys 0m5.374s 00:25:47.390 08:24:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:47.390 08:24:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:47.390 ************************************ 00:25:47.390 END TEST nvmf_discovery_remove_ifc 00:25:47.390 ************************************ 00:25:47.390 08:24:29 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:25:47.390 08:24:29 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:25:47.390 08:24:29 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:47.390 08:24:29 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:47.390 ************************************ 00:25:47.390 START TEST nvmf_identify_kernel_target 00:25:47.390 ************************************ 00:25:47.390 08:24:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:25:47.390 * Looking for test storage... 
00:25:47.390 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:47.390 08:24:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:25:47.390 08:24:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1693 -- # lcov --version 00:25:47.390 08:24:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:25:47.390 08:24:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:25:47.390 08:24:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:47.390 08:24:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:47.391 08:24:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:47.391 08:24:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # IFS=.-: 00:25:47.391 08:24:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # read -ra ver1 00:25:47.391 08:24:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # IFS=.-: 00:25:47.391 08:24:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # read -ra ver2 00:25:47.391 08:24:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@338 -- # local 'op=<' 00:25:47.391 08:24:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@340 -- # ver1_l=2 00:25:47.391 08:24:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@341 -- # ver2_l=1 00:25:47.391 08:24:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:47.391 08:24:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@344 -- # case "$op" in 00:25:47.391 08:24:29 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@345 -- # : 1 00:25:47.391 08:24:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:47.391 08:24:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:25:47.391 08:24:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # decimal 1 00:25:47.391 08:24:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=1 00:25:47.391 08:24:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:47.391 08:24:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 1 00:25:47.391 08:24:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # ver1[v]=1 00:25:47.391 08:24:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # decimal 2 00:25:47.391 08:24:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=2 00:25:47.391 08:24:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:47.391 08:24:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 2 00:25:47.391 08:24:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # ver2[v]=2 00:25:47.391 08:24:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:47.391 08:24:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:47.391 08:24:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # return 0 00:25:47.391 08:24:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:47.391 08:24:29 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:25:47.391 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:47.391 --rc genhtml_branch_coverage=1 00:25:47.391 --rc genhtml_function_coverage=1 00:25:47.391 --rc genhtml_legend=1 00:25:47.391 --rc geninfo_all_blocks=1 00:25:47.391 --rc geninfo_unexecuted_blocks=1 00:25:47.391 00:25:47.391 ' 00:25:47.391 08:24:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:25:47.391 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:47.391 --rc genhtml_branch_coverage=1 00:25:47.391 --rc genhtml_function_coverage=1 00:25:47.391 --rc genhtml_legend=1 00:25:47.391 --rc geninfo_all_blocks=1 00:25:47.391 --rc geninfo_unexecuted_blocks=1 00:25:47.391 00:25:47.391 ' 00:25:47.391 08:24:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:25:47.391 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:47.391 --rc genhtml_branch_coverage=1 00:25:47.391 --rc genhtml_function_coverage=1 00:25:47.391 --rc genhtml_legend=1 00:25:47.391 --rc geninfo_all_blocks=1 00:25:47.391 --rc geninfo_unexecuted_blocks=1 00:25:47.391 00:25:47.391 ' 00:25:47.391 08:24:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:25:47.391 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:47.391 --rc genhtml_branch_coverage=1 00:25:47.391 --rc genhtml_function_coverage=1 00:25:47.391 --rc genhtml_legend=1 00:25:47.391 --rc geninfo_all_blocks=1 00:25:47.391 --rc geninfo_unexecuted_blocks=1 00:25:47.391 00:25:47.391 ' 00:25:47.391 08:24:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:47.391 08:24:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 
00:25:47.391 08:24:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:47.391 08:24:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:47.391 08:24:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:47.391 08:24:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:47.391 08:24:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:47.391 08:24:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:47.391 08:24:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:47.391 08:24:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:47.391 08:24:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:47.391 08:24:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:47.391 08:24:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:25:47.391 08:24:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:25:47.391 08:24:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:47.391 08:24:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:47.391 08:24:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:47.391 08:24:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:47.391 08:24:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:47.391 08:24:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@15 -- # shopt -s extglob 00:25:47.391 08:24:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:47.391 08:24:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:47.391 08:24:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:47.391 08:24:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:47.391 08:24:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:47.391 08:24:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:47.391 08:24:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:25:47.391 08:24:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:47.391 08:24:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # : 0 00:25:47.391 08:24:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:47.391 08:24:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:47.391 08:24:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:47.391 08:24:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:47.391 08:24:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:47.391 08:24:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:47.391 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:47.391 08:24:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:47.391 08:24:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:47.391 08:24:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:47.391 08:24:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 
00:25:47.391 08:24:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:47.391 08:24:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:47.391 08:24:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:47.391 08:24:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:47.391 08:24:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:47.391 08:24:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:47.392 08:24:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:47.392 08:24:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:47.392 08:24:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:47.392 08:24:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:25:47.392 08:24:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@309 -- # xtrace_disable 00:25:47.392 08:24:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:25:52.664 08:24:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:52.664 08:24:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # pci_devs=() 00:25:52.664 08:24:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:52.664 08:24:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:52.664 08:24:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:52.664 08:24:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:52.664 08:24:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:52.664 08:24:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # net_devs=() 00:25:52.664 08:24:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:52.664 08:24:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # e810=() 00:25:52.664 08:24:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # local -ga e810 00:25:52.664 08:24:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # x722=() 00:25:52.664 08:24:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # local -ga x722 00:25:52.664 08:24:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # mlx=() 00:25:52.664 08:24:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # local -ga mlx 00:25:52.664 08:24:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:52.664 08:24:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:52.664 08:24:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:52.664 08:24:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:52.664 08:24:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:52.664 08:24:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:52.664 08:24:34 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:52.664 08:24:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:52.664 08:24:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:52.664 08:24:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:52.664 08:24:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:52.664 08:24:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:52.664 08:24:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:52.664 08:24:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:52.664 08:24:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:52.664 08:24:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:52.664 08:24:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:52.664 08:24:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:52.664 08:24:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:52.664 08:24:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:25:52.664 Found 0000:86:00.0 (0x8086 - 0x159b) 00:25:52.664 08:24:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:52.664 08:24:34 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:52.664 08:24:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:52.664 08:24:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:52.664 08:24:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:52.664 08:24:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:52.664 08:24:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:25:52.664 Found 0000:86:00.1 (0x8086 - 0x159b) 00:25:52.664 08:24:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:52.664 08:24:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:52.664 08:24:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:52.664 08:24:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:52.664 08:24:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:52.664 08:24:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:52.664 08:24:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:52.664 08:24:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:52.664 08:24:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:52.664 08:24:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:52.664 08:24:34 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:52.664 08:24:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:52.664 08:24:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:52.664 08:24:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:52.664 08:24:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:52.664 08:24:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:25:52.664 Found net devices under 0000:86:00.0: cvl_0_0 00:25:52.664 08:24:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:52.664 08:24:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:52.664 08:24:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:52.664 08:24:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:52.664 08:24:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:52.664 08:24:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:52.664 08:24:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:52.664 08:24:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:52.664 08:24:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:25:52.664 Found net devices under 0000:86:00.1: cvl_0_1 
00:25:52.664 08:24:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:52.664 08:24:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:52.664 08:24:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # is_hw=yes 00:25:52.664 08:24:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:52.664 08:24:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:52.665 08:24:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:52.665 08:24:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:52.665 08:24:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:52.665 08:24:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:52.665 08:24:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:52.665 08:24:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:52.665 08:24:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:52.665 08:24:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:52.665 08:24:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:52.665 08:24:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:52.665 08:24:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:52.665 08:24:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target 
-- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:52.665 08:24:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:52.665 08:24:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:52.665 08:24:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:52.665 08:24:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:52.665 08:24:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:52.665 08:24:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:52.665 08:24:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:52.665 08:24:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:52.665 08:24:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:52.665 08:24:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:52.665 08:24:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:52.665 08:24:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:52.665 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:25:52.665 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.468 ms 00:25:52.665 00:25:52.665 --- 10.0.0.2 ping statistics --- 00:25:52.665 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:52.665 rtt min/avg/max/mdev = 0.468/0.468/0.468/0.000 ms 00:25:52.665 08:24:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:52.665 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:52.665 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.223 ms 00:25:52.665 00:25:52.665 --- 10.0.0.1 ping statistics --- 00:25:52.665 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:52.665 rtt min/avg/max/mdev = 0.223/0.223/0.223/0.000 ms 00:25:52.665 08:24:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:52.665 08:24:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # return 0 00:25:52.665 08:24:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:52.665 08:24:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:52.665 08:24:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:52.665 08:24:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:52.665 08:24:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:52.665 08:24:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:52.665 08:24:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:52.665 08:24:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:25:52.665 
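The `nvmf_tcp_init` trace above moves one NIC into a network namespace, assigns the 10.0.0.0/24 addresses, opens TCP port 4420, and pings in both directions. A minimal sketch of that sequence, assuming the interface names (`cvl_0_0`/`cvl_0_1`) and addressing from the log; the `plan`/`PLAN` structure is illustrative and the commands are only printed, since actually running them needs root and the test hardware:

```shell
#!/usr/bin/env bash
set -euo pipefail

NS=cvl_0_0_ns_spdk
plan() {
  echo "ip netns add $NS"
  echo "ip link set cvl_0_0 netns $NS"                 # target NIC moves into the namespace
  echo "ip addr add 10.0.0.1/24 dev cvl_0_1"           # initiator IP stays in the root namespace
  echo "ip netns exec $NS ip addr add 10.0.0.2/24 dev cvl_0_0"
  echo "ip link set cvl_0_1 up"
  echo "ip netns exec $NS ip link set cvl_0_0 up"
  echo "ip netns exec $NS ip link set lo up"
  echo "iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT"
  echo "ping -c 1 10.0.0.2"                            # verify reachability in both directions
  echo "ip netns exec $NS ping -c 1 10.0.0.1"
}
PLAN="$(plan)"
printf '%s\n' "$PLAN"
```

Splitting target and initiator across namespaces lets a single host exercise real TCP traffic between two interfaces without a second machine.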
08:24:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:25:52.665 08:24:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@769 -- # local ip 00:25:52.665 08:24:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:52.665 08:24:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:52.665 08:24:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:52.665 08:24:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:52.665 08:24:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:52.665 08:24:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:52.665 08:24:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:52.665 08:24:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:52.665 08:24:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:52.665 08:24:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:25:52.665 08:24:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:25:52.665 08:24:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:25:52.665 08:24:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:25:52.665 08:24:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@663 -- # 
kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:25:52.665 08:24:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:25:52.665 08:24:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:25:52.665 08:24:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # local block nvme 00:25:52.665 08:24:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]] 00:25:52.665 08:24:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@670 -- # modprobe nvmet 00:25:52.924 08:24:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:25:52.924 08:24:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:25:54.826 Waiting for block devices as requested 00:25:55.084 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:25:55.084 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:25:55.084 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:25:55.343 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:25:55.343 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:25:55.343 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:25:55.343 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:25:55.602 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:25:55.602 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:25:55.602 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:25:55.861 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:25:55.861 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:25:55.861 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:25:55.862 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:25:56.126 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 
00:25:56.126 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:25:56.126 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:25:56.385 08:24:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:25:56.385 08:24:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:25:56.385 08:24:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:25:56.385 08:24:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:25:56.385 08:24:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:25:56.385 08:24:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:25:56.385 08:24:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:25:56.385 08:24:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:25:56.385 08:24:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:25:56.385 No valid GPT data, bailing 00:25:56.385 08:24:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:25:56.385 08:24:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:25:56.385 08:24:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:25:56.385 08:24:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:25:56.385 08:24:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:25:56.385 08:24:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:25:56.385 08:24:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:25:56.385 08:24:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:25:56.385 08:24:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:25:56.385 08:24:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # echo 1 00:25:56.385 08:24:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:25:56.385 08:24:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@697 -- # echo 1 00:25:56.385 08:24:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:25:56.385 08:24:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@700 -- # echo tcp 00:25:56.385 08:24:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@701 -- # echo 4420 00:25:56.385 08:24:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@702 -- # echo ipv4 00:25:56.385 08:24:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:25:56.385 08:24:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420 00:25:56.385 00:25:56.385 Discovery Log Number of Records 2, Generation counter 2 00:25:56.385 =====Discovery Log Entry 0====== 00:25:56.385 trtype: tcp 00:25:56.385 adrfam: ipv4 00:25:56.385 subtype: current discovery subsystem 
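The `configure_kernel_target` trace above builds the kernel NVMe-oF target through configfs: create the subsystem, namespace, and port directories, write the attributes, then symlink the subsystem into the port to expose it. A sketch of that sequence, assuming the NQN, port values, and `/dev/nvme0n1` backing device from the log; which attribute each bare `echo` in the trace targets is inferred from the standard nvmet configfs layout, and the commands are only printed here rather than executed:

```shell
#!/usr/bin/env bash
set -euo pipefail

NQN=nqn.2016-06.io.spdk:testnqn
NVMET=/sys/kernel/config/nvmet
SUBSYS=$NVMET/subsystems/$NQN
PORT=$NVMET/ports/1
plan() {
  echo "mkdir -p $SUBSYS/namespaces/1 $PORT"
  echo "echo 1            > $SUBSYS/attr_allow_any_host"      # assumed target of the first bare 'echo 1'
  echo "echo /dev/nvme0n1 > $SUBSYS/namespaces/1/device_path"
  echo "echo 1            > $SUBSYS/namespaces/1/enable"
  echo "echo 10.0.0.1     > $PORT/addr_traddr"
  echo "echo tcp          > $PORT/addr_trtype"
  echo "echo 4420         > $PORT/addr_trsvcid"
  echo "echo ipv4         > $PORT/addr_adrfam"
  echo "ln -s $SUBSYS $PORT/subsystems/"                      # exposes the subsystem on the port
}
PLAN="$(plan)"
printf '%s\n' "$PLAN"
```

Once the symlink lands, `nvme discover -a 10.0.0.1 -t tcp -s 4420` should list both the discovery subsystem and the test NQN, as the log shows next.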
00:25:56.385 treq: not specified, sq flow control disable supported 00:25:56.385 portid: 1 00:25:56.385 trsvcid: 4420 00:25:56.385 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:25:56.385 traddr: 10.0.0.1 00:25:56.385 eflags: none 00:25:56.385 sectype: none 00:25:56.385 =====Discovery Log Entry 1====== 00:25:56.385 trtype: tcp 00:25:56.385 adrfam: ipv4 00:25:56.385 subtype: nvme subsystem 00:25:56.385 treq: not specified, sq flow control disable supported 00:25:56.385 portid: 1 00:25:56.385 trsvcid: 4420 00:25:56.385 subnqn: nqn.2016-06.io.spdk:testnqn 00:25:56.385 traddr: 10.0.0.1 00:25:56.385 eflags: none 00:25:56.385 sectype: none 00:25:56.385 08:24:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:25:56.385 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:25:56.645 ===================================================== 00:25:56.645 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:25:56.645 ===================================================== 00:25:56.645 Controller Capabilities/Features 00:25:56.645 ================================ 00:25:56.645 Vendor ID: 0000 00:25:56.645 Subsystem Vendor ID: 0000 00:25:56.645 Serial Number: ee9af47a47f66349f04f 00:25:56.645 Model Number: Linux 00:25:56.645 Firmware Version: 6.8.9-20 00:25:56.645 Recommended Arb Burst: 0 00:25:56.645 IEEE OUI Identifier: 00 00 00 00:25:56.645 Multi-path I/O 00:25:56.645 May have multiple subsystem ports: No 00:25:56.645 May have multiple controllers: No 00:25:56.645 Associated with SR-IOV VF: No 00:25:56.645 Max Data Transfer Size: Unlimited 00:25:56.645 Max Number of Namespaces: 0 00:25:56.645 Max Number of I/O Queues: 1024 00:25:56.645 NVMe Specification Version (VS): 1.3 00:25:56.645 NVMe Specification Version (Identify): 1.3 00:25:56.645 Maximum Queue Entries: 1024 
00:25:56.645 Contiguous Queues Required: No 00:25:56.645 Arbitration Mechanisms Supported 00:25:56.645 Weighted Round Robin: Not Supported 00:25:56.645 Vendor Specific: Not Supported 00:25:56.645 Reset Timeout: 7500 ms 00:25:56.645 Doorbell Stride: 4 bytes 00:25:56.645 NVM Subsystem Reset: Not Supported 00:25:56.645 Command Sets Supported 00:25:56.645 NVM Command Set: Supported 00:25:56.645 Boot Partition: Not Supported 00:25:56.645 Memory Page Size Minimum: 4096 bytes 00:25:56.645 Memory Page Size Maximum: 4096 bytes 00:25:56.645 Persistent Memory Region: Not Supported 00:25:56.645 Optional Asynchronous Events Supported 00:25:56.645 Namespace Attribute Notices: Not Supported 00:25:56.645 Firmware Activation Notices: Not Supported 00:25:56.645 ANA Change Notices: Not Supported 00:25:56.645 PLE Aggregate Log Change Notices: Not Supported 00:25:56.645 LBA Status Info Alert Notices: Not Supported 00:25:56.645 EGE Aggregate Log Change Notices: Not Supported 00:25:56.645 Normal NVM Subsystem Shutdown event: Not Supported 00:25:56.645 Zone Descriptor Change Notices: Not Supported 00:25:56.645 Discovery Log Change Notices: Supported 00:25:56.645 Controller Attributes 00:25:56.645 128-bit Host Identifier: Not Supported 00:25:56.645 Non-Operational Permissive Mode: Not Supported 00:25:56.645 NVM Sets: Not Supported 00:25:56.645 Read Recovery Levels: Not Supported 00:25:56.645 Endurance Groups: Not Supported 00:25:56.645 Predictable Latency Mode: Not Supported 00:25:56.645 Traffic Based Keep ALive: Not Supported 00:25:56.645 Namespace Granularity: Not Supported 00:25:56.645 SQ Associations: Not Supported 00:25:56.645 UUID List: Not Supported 00:25:56.645 Multi-Domain Subsystem: Not Supported 00:25:56.645 Fixed Capacity Management: Not Supported 00:25:56.645 Variable Capacity Management: Not Supported 00:25:56.645 Delete Endurance Group: Not Supported 00:25:56.645 Delete NVM Set: Not Supported 00:25:56.645 Extended LBA Formats Supported: Not Supported 00:25:56.645 Flexible 
Data Placement Supported: Not Supported 00:25:56.645 00:25:56.645 Controller Memory Buffer Support 00:25:56.645 ================================ 00:25:56.645 Supported: No 00:25:56.645 00:25:56.645 Persistent Memory Region Support 00:25:56.645 ================================ 00:25:56.645 Supported: No 00:25:56.645 00:25:56.645 Admin Command Set Attributes 00:25:56.645 ============================ 00:25:56.645 Security Send/Receive: Not Supported 00:25:56.645 Format NVM: Not Supported 00:25:56.645 Firmware Activate/Download: Not Supported 00:25:56.645 Namespace Management: Not Supported 00:25:56.645 Device Self-Test: Not Supported 00:25:56.645 Directives: Not Supported 00:25:56.645 NVMe-MI: Not Supported 00:25:56.645 Virtualization Management: Not Supported 00:25:56.645 Doorbell Buffer Config: Not Supported 00:25:56.645 Get LBA Status Capability: Not Supported 00:25:56.645 Command & Feature Lockdown Capability: Not Supported 00:25:56.645 Abort Command Limit: 1 00:25:56.645 Async Event Request Limit: 1 00:25:56.645 Number of Firmware Slots: N/A 00:25:56.645 Firmware Slot 1 Read-Only: N/A 00:25:56.645 Firmware Activation Without Reset: N/A 00:25:56.645 Multiple Update Detection Support: N/A 00:25:56.645 Firmware Update Granularity: No Information Provided 00:25:56.645 Per-Namespace SMART Log: No 00:25:56.645 Asymmetric Namespace Access Log Page: Not Supported 00:25:56.645 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:25:56.645 Command Effects Log Page: Not Supported 00:25:56.645 Get Log Page Extended Data: Supported 00:25:56.645 Telemetry Log Pages: Not Supported 00:25:56.645 Persistent Event Log Pages: Not Supported 00:25:56.645 Supported Log Pages Log Page: May Support 00:25:56.645 Commands Supported & Effects Log Page: Not Supported 00:25:56.645 Feature Identifiers & Effects Log Page:May Support 00:25:56.645 NVMe-MI Commands & Effects Log Page: May Support 00:25:56.645 Data Area 4 for Telemetry Log: Not Supported 00:25:56.645 Error Log Page Entries 
Supported: 1 00:25:56.645 Keep Alive: Not Supported 00:25:56.645 00:25:56.645 NVM Command Set Attributes 00:25:56.645 ========================== 00:25:56.645 Submission Queue Entry Size 00:25:56.645 Max: 1 00:25:56.645 Min: 1 00:25:56.645 Completion Queue Entry Size 00:25:56.645 Max: 1 00:25:56.645 Min: 1 00:25:56.645 Number of Namespaces: 0 00:25:56.645 Compare Command: Not Supported 00:25:56.646 Write Uncorrectable Command: Not Supported 00:25:56.646 Dataset Management Command: Not Supported 00:25:56.646 Write Zeroes Command: Not Supported 00:25:56.646 Set Features Save Field: Not Supported 00:25:56.646 Reservations: Not Supported 00:25:56.646 Timestamp: Not Supported 00:25:56.646 Copy: Not Supported 00:25:56.646 Volatile Write Cache: Not Present 00:25:56.646 Atomic Write Unit (Normal): 1 00:25:56.646 Atomic Write Unit (PFail): 1 00:25:56.646 Atomic Compare & Write Unit: 1 00:25:56.646 Fused Compare & Write: Not Supported 00:25:56.646 Scatter-Gather List 00:25:56.646 SGL Command Set: Supported 00:25:56.646 SGL Keyed: Not Supported 00:25:56.646 SGL Bit Bucket Descriptor: Not Supported 00:25:56.646 SGL Metadata Pointer: Not Supported 00:25:56.646 Oversized SGL: Not Supported 00:25:56.646 SGL Metadata Address: Not Supported 00:25:56.646 SGL Offset: Supported 00:25:56.646 Transport SGL Data Block: Not Supported 00:25:56.646 Replay Protected Memory Block: Not Supported 00:25:56.646 00:25:56.646 Firmware Slot Information 00:25:56.646 ========================= 00:25:56.646 Active slot: 0 00:25:56.646 00:25:56.646 00:25:56.646 Error Log 00:25:56.646 ========= 00:25:56.646 00:25:56.646 Active Namespaces 00:25:56.646 ================= 00:25:56.646 Discovery Log Page 00:25:56.646 ================== 00:25:56.646 Generation Counter: 2 00:25:56.646 Number of Records: 2 00:25:56.646 Record Format: 0 00:25:56.646 00:25:56.646 Discovery Log Entry 0 00:25:56.646 ---------------------- 00:25:56.646 Transport Type: 3 (TCP) 00:25:56.646 Address Family: 1 (IPv4) 00:25:56.646 Subsystem 
Type: 3 (Current Discovery Subsystem) 00:25:56.646 Entry Flags: 00:25:56.646 Duplicate Returned Information: 0 00:25:56.646 Explicit Persistent Connection Support for Discovery: 0 00:25:56.646 Transport Requirements: 00:25:56.646 Secure Channel: Not Specified 00:25:56.646 Port ID: 1 (0x0001) 00:25:56.646 Controller ID: 65535 (0xffff) 00:25:56.646 Admin Max SQ Size: 32 00:25:56.646 Transport Service Identifier: 4420 00:25:56.646 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:25:56.646 Transport Address: 10.0.0.1 00:25:56.646 Discovery Log Entry 1 00:25:56.646 ---------------------- 00:25:56.646 Transport Type: 3 (TCP) 00:25:56.646 Address Family: 1 (IPv4) 00:25:56.646 Subsystem Type: 2 (NVM Subsystem) 00:25:56.646 Entry Flags: 00:25:56.646 Duplicate Returned Information: 0 00:25:56.646 Explicit Persistent Connection Support for Discovery: 0 00:25:56.646 Transport Requirements: 00:25:56.646 Secure Channel: Not Specified 00:25:56.646 Port ID: 1 (0x0001) 00:25:56.646 Controller ID: 65535 (0xffff) 00:25:56.646 Admin Max SQ Size: 32 00:25:56.646 Transport Service Identifier: 4420 00:25:56.646 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:25:56.646 Transport Address: 10.0.0.1 00:25:56.646 08:24:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:25:56.646 get_feature(0x01) failed 00:25:56.646 get_feature(0x02) failed 00:25:56.646 get_feature(0x04) failed 00:25:56.646 ===================================================== 00:25:56.646 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:25:56.646 ===================================================== 00:25:56.646 Controller Capabilities/Features 00:25:56.646 ================================ 00:25:56.646 Vendor ID: 0000 00:25:56.646 Subsystem Vendor ID: 
0000 00:25:56.646 Serial Number: 2685b7fa5e8f87ec3822 00:25:56.646 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:25:56.646 Firmware Version: 6.8.9-20 00:25:56.646 Recommended Arb Burst: 6 00:25:56.646 IEEE OUI Identifier: 00 00 00 00:25:56.646 Multi-path I/O 00:25:56.646 May have multiple subsystem ports: Yes 00:25:56.646 May have multiple controllers: Yes 00:25:56.646 Associated with SR-IOV VF: No 00:25:56.646 Max Data Transfer Size: Unlimited 00:25:56.646 Max Number of Namespaces: 1024 00:25:56.646 Max Number of I/O Queues: 128 00:25:56.646 NVMe Specification Version (VS): 1.3 00:25:56.646 NVMe Specification Version (Identify): 1.3 00:25:56.646 Maximum Queue Entries: 1024 00:25:56.646 Contiguous Queues Required: No 00:25:56.646 Arbitration Mechanisms Supported 00:25:56.646 Weighted Round Robin: Not Supported 00:25:56.646 Vendor Specific: Not Supported 00:25:56.646 Reset Timeout: 7500 ms 00:25:56.646 Doorbell Stride: 4 bytes 00:25:56.646 NVM Subsystem Reset: Not Supported 00:25:56.646 Command Sets Supported 00:25:56.646 NVM Command Set: Supported 00:25:56.646 Boot Partition: Not Supported 00:25:56.646 Memory Page Size Minimum: 4096 bytes 00:25:56.646 Memory Page Size Maximum: 4096 bytes 00:25:56.646 Persistent Memory Region: Not Supported 00:25:56.646 Optional Asynchronous Events Supported 00:25:56.646 Namespace Attribute Notices: Supported 00:25:56.646 Firmware Activation Notices: Not Supported 00:25:56.646 ANA Change Notices: Supported 00:25:56.646 PLE Aggregate Log Change Notices: Not Supported 00:25:56.646 LBA Status Info Alert Notices: Not Supported 00:25:56.646 EGE Aggregate Log Change Notices: Not Supported 00:25:56.646 Normal NVM Subsystem Shutdown event: Not Supported 00:25:56.646 Zone Descriptor Change Notices: Not Supported 00:25:56.646 Discovery Log Change Notices: Not Supported 00:25:56.646 Controller Attributes 00:25:56.646 128-bit Host Identifier: Supported 00:25:56.646 Non-Operational Permissive Mode: Not Supported 00:25:56.646 NVM Sets: Not 
Supported 00:25:56.646 Read Recovery Levels: Not Supported 00:25:56.646 Endurance Groups: Not Supported 00:25:56.646 Predictable Latency Mode: Not Supported 00:25:56.646 Traffic Based Keep ALive: Supported 00:25:56.646 Namespace Granularity: Not Supported 00:25:56.646 SQ Associations: Not Supported 00:25:56.646 UUID List: Not Supported 00:25:56.646 Multi-Domain Subsystem: Not Supported 00:25:56.646 Fixed Capacity Management: Not Supported 00:25:56.646 Variable Capacity Management: Not Supported 00:25:56.646 Delete Endurance Group: Not Supported 00:25:56.646 Delete NVM Set: Not Supported 00:25:56.646 Extended LBA Formats Supported: Not Supported 00:25:56.646 Flexible Data Placement Supported: Not Supported 00:25:56.646 00:25:56.646 Controller Memory Buffer Support 00:25:56.647 ================================ 00:25:56.647 Supported: No 00:25:56.647 00:25:56.647 Persistent Memory Region Support 00:25:56.647 ================================ 00:25:56.647 Supported: No 00:25:56.647 00:25:56.647 Admin Command Set Attributes 00:25:56.647 ============================ 00:25:56.647 Security Send/Receive: Not Supported 00:25:56.647 Format NVM: Not Supported 00:25:56.647 Firmware Activate/Download: Not Supported 00:25:56.647 Namespace Management: Not Supported 00:25:56.647 Device Self-Test: Not Supported 00:25:56.647 Directives: Not Supported 00:25:56.647 NVMe-MI: Not Supported 00:25:56.647 Virtualization Management: Not Supported 00:25:56.647 Doorbell Buffer Config: Not Supported 00:25:56.647 Get LBA Status Capability: Not Supported 00:25:56.647 Command & Feature Lockdown Capability: Not Supported 00:25:56.647 Abort Command Limit: 4 00:25:56.647 Async Event Request Limit: 4 00:25:56.647 Number of Firmware Slots: N/A 00:25:56.647 Firmware Slot 1 Read-Only: N/A 00:25:56.647 Firmware Activation Without Reset: N/A 00:25:56.647 Multiple Update Detection Support: N/A 00:25:56.647 Firmware Update Granularity: No Information Provided 00:25:56.647 Per-Namespace SMART Log: Yes 
00:25:56.647 Asymmetric Namespace Access Log Page: Supported 00:25:56.647 ANA Transition Time : 10 sec 00:25:56.647 00:25:56.647 Asymmetric Namespace Access Capabilities 00:25:56.647 ANA Optimized State : Supported 00:25:56.647 ANA Non-Optimized State : Supported 00:25:56.647 ANA Inaccessible State : Supported 00:25:56.647 ANA Persistent Loss State : Supported 00:25:56.647 ANA Change State : Supported 00:25:56.647 ANAGRPID is not changed : No 00:25:56.647 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:25:56.647 00:25:56.647 ANA Group Identifier Maximum : 128 00:25:56.647 Number of ANA Group Identifiers : 128 00:25:56.647 Max Number of Allowed Namespaces : 1024 00:25:56.647 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:25:56.647 Command Effects Log Page: Supported 00:25:56.647 Get Log Page Extended Data: Supported 00:25:56.647 Telemetry Log Pages: Not Supported 00:25:56.647 Persistent Event Log Pages: Not Supported 00:25:56.647 Supported Log Pages Log Page: May Support 00:25:56.647 Commands Supported & Effects Log Page: Not Supported 00:25:56.647 Feature Identifiers & Effects Log Page:May Support 00:25:56.647 NVMe-MI Commands & Effects Log Page: May Support 00:25:56.647 Data Area 4 for Telemetry Log: Not Supported 00:25:56.647 Error Log Page Entries Supported: 128 00:25:56.647 Keep Alive: Supported 00:25:56.647 Keep Alive Granularity: 1000 ms 00:25:56.647 00:25:56.647 NVM Command Set Attributes 00:25:56.647 ========================== 00:25:56.647 Submission Queue Entry Size 00:25:56.647 Max: 64 00:25:56.647 Min: 64 00:25:56.647 Completion Queue Entry Size 00:25:56.647 Max: 16 00:25:56.647 Min: 16 00:25:56.647 Number of Namespaces: 1024 00:25:56.647 Compare Command: Not Supported 00:25:56.647 Write Uncorrectable Command: Not Supported 00:25:56.647 Dataset Management Command: Supported 00:25:56.647 Write Zeroes Command: Supported 00:25:56.647 Set Features Save Field: Not Supported 00:25:56.647 Reservations: Not Supported 00:25:56.647 Timestamp: Not Supported 
00:25:56.647 Copy: Not Supported 00:25:56.647 Volatile Write Cache: Present 00:25:56.647 Atomic Write Unit (Normal): 1 00:25:56.647 Atomic Write Unit (PFail): 1 00:25:56.647 Atomic Compare & Write Unit: 1 00:25:56.647 Fused Compare & Write: Not Supported 00:25:56.647 Scatter-Gather List 00:25:56.647 SGL Command Set: Supported 00:25:56.647 SGL Keyed: Not Supported 00:25:56.647 SGL Bit Bucket Descriptor: Not Supported 00:25:56.647 SGL Metadata Pointer: Not Supported 00:25:56.647 Oversized SGL: Not Supported 00:25:56.647 SGL Metadata Address: Not Supported 00:25:56.647 SGL Offset: Supported 00:25:56.647 Transport SGL Data Block: Not Supported 00:25:56.647 Replay Protected Memory Block: Not Supported 00:25:56.647 00:25:56.647 Firmware Slot Information 00:25:56.647 ========================= 00:25:56.647 Active slot: 0 00:25:56.647 00:25:56.647 Asymmetric Namespace Access 00:25:56.647 =========================== 00:25:56.647 Change Count : 0 00:25:56.647 Number of ANA Group Descriptors : 1 00:25:56.647 ANA Group Descriptor : 0 00:25:56.647 ANA Group ID : 1 00:25:56.647 Number of NSID Values : 1 00:25:56.647 Change Count : 0 00:25:56.647 ANA State : 1 00:25:56.647 Namespace Identifier : 1 00:25:56.647 00:25:56.647 Commands Supported and Effects 00:25:56.647 ============================== 00:25:56.647 Admin Commands 00:25:56.647 -------------- 00:25:56.647 Get Log Page (02h): Supported 00:25:56.647 Identify (06h): Supported 00:25:56.647 Abort (08h): Supported 00:25:56.647 Set Features (09h): Supported 00:25:56.647 Get Features (0Ah): Supported 00:25:56.647 Asynchronous Event Request (0Ch): Supported 00:25:56.647 Keep Alive (18h): Supported 00:25:56.647 I/O Commands 00:25:56.647 ------------ 00:25:56.647 Flush (00h): Supported 00:25:56.647 Write (01h): Supported LBA-Change 00:25:56.647 Read (02h): Supported 00:25:56.647 Write Zeroes (08h): Supported LBA-Change 00:25:56.647 Dataset Management (09h): Supported 00:25:56.647 00:25:56.647 Error Log 00:25:56.647 ========= 
00:25:56.647 Entry: 0 00:25:56.647 Error Count: 0x3 00:25:56.647 Submission Queue Id: 0x0 00:25:56.647 Command Id: 0x5 00:25:56.647 Phase Bit: 0 00:25:56.647 Status Code: 0x2 00:25:56.647 Status Code Type: 0x0 00:25:56.647 Do Not Retry: 1 00:25:56.647 Error Location: 0x28 00:25:56.647 LBA: 0x0 00:25:56.647 Namespace: 0x0 00:25:56.647 Vendor Log Page: 0x0 00:25:56.647 ----------- 00:25:56.647 Entry: 1 00:25:56.647 Error Count: 0x2 00:25:56.647 Submission Queue Id: 0x0 00:25:56.647 Command Id: 0x5 00:25:56.647 Phase Bit: 0 00:25:56.647 Status Code: 0x2 00:25:56.647 Status Code Type: 0x0 00:25:56.647 Do Not Retry: 1 00:25:56.647 Error Location: 0x28 00:25:56.647 LBA: 0x0 00:25:56.648 Namespace: 0x0 00:25:56.648 Vendor Log Page: 0x0 00:25:56.648 ----------- 00:25:56.648 Entry: 2 00:25:56.648 Error Count: 0x1 00:25:56.648 Submission Queue Id: 0x0 00:25:56.648 Command Id: 0x4 00:25:56.648 Phase Bit: 0 00:25:56.648 Status Code: 0x2 00:25:56.648 Status Code Type: 0x0 00:25:56.648 Do Not Retry: 1 00:25:56.648 Error Location: 0x28 00:25:56.648 LBA: 0x0 00:25:56.648 Namespace: 0x0 00:25:56.648 Vendor Log Page: 0x0 00:25:56.648 00:25:56.648 Number of Queues 00:25:56.648 ================ 00:25:56.648 Number of I/O Submission Queues: 128 00:25:56.648 Number of I/O Completion Queues: 128 00:25:56.648 00:25:56.648 ZNS Specific Controller Data 00:25:56.648 ============================ 00:25:56.648 Zone Append Size Limit: 0 00:25:56.648 00:25:56.648 00:25:56.648 Active Namespaces 00:25:56.648 ================= 00:25:56.648 get_feature(0x05) failed 00:25:56.648 Namespace ID:1 00:25:56.648 Command Set Identifier: NVM (00h) 00:25:56.648 Deallocate: Supported 00:25:56.648 Deallocated/Unwritten Error: Not Supported 00:25:56.648 Deallocated Read Value: Unknown 00:25:56.648 Deallocate in Write Zeroes: Not Supported 00:25:56.648 Deallocated Guard Field: 0xFFFF 00:25:56.648 Flush: Supported 00:25:56.648 Reservation: Not Supported 00:25:56.648 Namespace Sharing Capabilities: Multiple 
Controllers 00:25:56.648 Size (in LBAs): 1953525168 (931GiB) 00:25:56.648 Capacity (in LBAs): 1953525168 (931GiB) 00:25:56.648 Utilization (in LBAs): 1953525168 (931GiB) 00:25:56.648 UUID: 4ddcdc2a-1290-4e4c-ba25-a69e419c56b1 00:25:56.648 Thin Provisioning: Not Supported 00:25:56.648 Per-NS Atomic Units: Yes 00:25:56.648 Atomic Boundary Size (Normal): 0 00:25:56.648 Atomic Boundary Size (PFail): 0 00:25:56.648 Atomic Boundary Offset: 0 00:25:56.648 NGUID/EUI64 Never Reused: No 00:25:56.648 ANA group ID: 1 00:25:56.648 Namespace Write Protected: No 00:25:56.648 Number of LBA Formats: 1 00:25:56.648 Current LBA Format: LBA Format #00 00:25:56.648 LBA Format #00: Data Size: 512 Metadata Size: 0 00:25:56.648 00:25:56.648 08:24:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:25:56.648 08:24:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:56.648 08:24:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # sync 00:25:56.648 08:24:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:56.648 08:24:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set +e 00:25:56.648 08:24:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:56.648 08:24:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:56.648 rmmod nvme_tcp 00:25:56.648 rmmod nvme_fabrics 00:25:56.648 08:24:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:56.648 08:24:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@128 -- # set -e 00:25:56.648 08:24:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@129 -- # return 0 00:25:56.648 08:24:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@517 -- # '[' -n '' ']' 
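The `nvmftestfini` trace above disables `set -e` and loops over `modprobe -v -r` for `nvme-tcp` and `nvme-fabrics`: the modules can still be referenced briefly after disconnect, so the harness retries removal rather than failing on the first busy error. A sketch of that retry, where the `try_rmmod` wrapper name is illustrative and the dry run pretends the first attempt succeeds:

```shell
#!/usr/bin/env bash
set -euo pipefail

try_rmmod() {
  local mod=$1 i
  for i in $(seq 1 20); do
    # in the real harness: modprobe -v -r "$mod" && break, then retry on failure
    echo "attempt $i: modprobe -v -r $mod"
    break   # dry run: assume the first attempt succeeds
  done
}
PLAN="$(try_rmmod nvme-tcp; try_rmmod nvme-fabrics)"
printf '%s\n' "$PLAN"
```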
00:25:56.648 08:24:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:56.648 08:24:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:56.648 08:24:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:56.648 08:24:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # iptr 00:25:56.648 08:24:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-save 00:25:56.648 08:24:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:56.648 08:24:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-restore 00:25:56.648 08:24:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:56.648 08:24:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:56.648 08:24:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:56.648 08:24:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:56.648 08:24:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:59.189 08:24:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:59.189 08:24:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:25:59.189 08:24:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:25:59.189 08:24:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@714 -- # echo 0 00:25:59.189 08:24:40 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:25:59.189 08:24:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:25:59.189 08:24:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:25:59.190 08:24:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:25:59.190 08:24:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:25:59.190 08:24:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:25:59.190 08:24:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:26:01.723 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:26:01.723 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:26:01.723 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:26:01.723 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:26:01.723 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:26:01.723 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:26:01.723 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:26:01.723 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:26:01.723 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:26:01.723 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:26:01.723 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:26:01.723 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:26:01.723 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:26:01.723 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:26:01.723 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:26:01.723 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 
00:26:02.660 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:26:02.660 00:26:02.660 real 0m15.523s 00:26:02.660 user 0m3.870s 00:26:02.660 sys 0m8.043s 00:26:02.660 08:24:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:02.660 08:24:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:26:02.660 ************************************ 00:26:02.660 END TEST nvmf_identify_kernel_target 00:26:02.660 ************************************ 00:26:02.661 08:24:44 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:26:02.661 08:24:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:26:02.661 08:24:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:02.661 08:24:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:02.661 ************************************ 00:26:02.661 START TEST nvmf_auth_host 00:26:02.661 ************************************ 00:26:02.661 08:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:26:02.661 * Looking for test storage... 
00:26:02.661 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:02.661 08:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:26:02.661 08:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1693 -- # lcov --version 00:26:02.661 08:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:26:02.920 08:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:26:02.920 08:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:02.921 08:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:02.921 08:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:02.921 08:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # IFS=.-: 00:26:02.921 08:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # read -ra ver1 00:26:02.921 08:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # IFS=.-: 00:26:02.921 08:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # read -ra ver2 00:26:02.921 08:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@338 -- # local 'op=<' 00:26:02.921 08:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@340 -- # ver1_l=2 00:26:02.921 08:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@341 -- # ver2_l=1 00:26:02.921 08:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:02.921 08:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@344 -- # case "$op" in 00:26:02.921 08:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@345 -- # : 1 00:26:02.921 08:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:02.921 08:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:26:02.921 08:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # decimal 1 00:26:02.921 08:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=1 00:26:02.921 08:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:02.921 08:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 1 00:26:02.921 08:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # ver1[v]=1 00:26:02.921 08:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # decimal 2 00:26:02.921 08:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=2 00:26:02.921 08:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:02.921 08:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 2 00:26:02.921 08:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # ver2[v]=2 00:26:02.921 08:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:02.921 08:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:02.921 08:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # return 0 00:26:02.921 08:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:02.921 08:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:26:02.921 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:02.921 --rc genhtml_branch_coverage=1 00:26:02.921 --rc genhtml_function_coverage=1 00:26:02.921 --rc genhtml_legend=1 00:26:02.921 --rc geninfo_all_blocks=1 00:26:02.921 --rc geninfo_unexecuted_blocks=1 00:26:02.921 00:26:02.921 ' 00:26:02.921 08:24:44 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:26:02.921 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:02.921 --rc genhtml_branch_coverage=1 00:26:02.921 --rc genhtml_function_coverage=1 00:26:02.921 --rc genhtml_legend=1 00:26:02.921 --rc geninfo_all_blocks=1 00:26:02.921 --rc geninfo_unexecuted_blocks=1 00:26:02.921 00:26:02.921 ' 00:26:02.921 08:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:26:02.921 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:02.921 --rc genhtml_branch_coverage=1 00:26:02.921 --rc genhtml_function_coverage=1 00:26:02.921 --rc genhtml_legend=1 00:26:02.921 --rc geninfo_all_blocks=1 00:26:02.921 --rc geninfo_unexecuted_blocks=1 00:26:02.921 00:26:02.921 ' 00:26:02.921 08:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:26:02.921 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:02.921 --rc genhtml_branch_coverage=1 00:26:02.921 --rc genhtml_function_coverage=1 00:26:02.921 --rc genhtml_legend=1 00:26:02.921 --rc geninfo_all_blocks=1 00:26:02.921 --rc geninfo_unexecuted_blocks=1 00:26:02.921 00:26:02.921 ' 00:26:02.921 08:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:02.921 08:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:26:02.921 08:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:02.921 08:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:02.921 08:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:02.921 08:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:02.921 08:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 
00:26:02.921 08:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:02.921 08:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:02.921 08:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:02.921 08:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:02.921 08:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:02.921 08:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:26:02.921 08:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:26:02.921 08:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:02.921 08:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:02.921 08:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:02.921 08:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:02.921 08:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:02.921 08:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@15 -- # shopt -s extglob 00:26:02.921 08:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:02.921 08:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:02.921 08:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:02.921 08:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:02.921 08:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:02.921 08:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:02.921 08:24:44 
nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:26:02.921 08:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:02.921 08:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # : 0 00:26:02.921 08:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:02.921 08:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:02.921 08:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:02.921 08:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:02.921 08:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:02.921 08:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:02.921 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:02.921 08:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:02.921 08:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:02.921 08:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:02.921 08:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # 
digests=("sha256" "sha384" "sha512") 00:26:02.921 08:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:26:02.921 08:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:26:02.921 08:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:26:02.922 08:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:26:02.922 08:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:26:02.922 08:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:26:02.922 08:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:26:02.922 08:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:26:02.922 08:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:02.922 08:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:02.922 08:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:02.922 08:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:02.922 08:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:02.922 08:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:02.922 08:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:02.922 08:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:02.922 08:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:26:02.922 08:24:45 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:26:02.922 08:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@309 -- # xtrace_disable 00:26:02.922 08:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:08.194 08:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:08.194 08:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # pci_devs=() 00:26:08.194 08:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:08.194 08:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:08.194 08:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:08.194 08:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:08.194 08:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:08.194 08:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # net_devs=() 00:26:08.194 08:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:08.194 08:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # e810=() 00:26:08.194 08:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # local -ga e810 00:26:08.194 08:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # x722=() 00:26:08.194 08:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # local -ga x722 00:26:08.194 08:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # mlx=() 00:26:08.194 08:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # local -ga mlx 00:26:08.194 08:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:08.194 08:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:08.194 08:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:08.194 08:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:08.194 08:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:08.194 08:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:08.194 08:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:08.195 08:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:08.195 08:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:08.195 08:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:08.195 08:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:08.195 08:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:08.195 08:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:08.195 08:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:08.195 08:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:08.195 08:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:08.195 08:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:08.195 08:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:08.195 08:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:08.195 08:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:26:08.195 Found 0000:86:00.0 (0x8086 - 0x159b) 00:26:08.195 08:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:08.195 08:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:08.195 08:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:08.195 08:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:08.195 08:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:08.195 08:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:08.195 08:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:26:08.195 Found 0000:86:00.1 (0x8086 - 0x159b) 00:26:08.195 08:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:08.195 08:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:08.195 08:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:08.195 08:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:08.195 08:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:08.195 08:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:08.195 08:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:08.195 08:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:08.195 08:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 
00:26:08.195 08:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:08.195 08:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:08.195 08:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:08.195 08:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:08.195 08:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:08.195 08:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:08.195 08:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:26:08.195 Found net devices under 0000:86:00.0: cvl_0_0 00:26:08.195 08:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:08.195 08:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:08.195 08:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:08.195 08:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:08.195 08:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:08.195 08:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:08.195 08:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:08.195 08:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:08.195 08:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:26:08.195 Found net devices under 0000:86:00.1: cvl_0_1 00:26:08.195 08:24:50 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:08.195 08:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:26:08.195 08:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # is_hw=yes 00:26:08.195 08:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:26:08.195 08:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:26:08.195 08:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:26:08.195 08:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:08.195 08:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:08.195 08:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:08.195 08:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:08.195 08:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:08.195 08:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:08.195 08:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:08.195 08:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:08.195 08:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:08.195 08:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:08.195 08:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:08.195 08:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:08.195 08:24:50 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:08.195 08:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:08.195 08:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:08.195 08:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:08.195 08:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:08.195 08:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:08.195 08:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:08.195 08:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:08.195 08:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:08.195 08:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:08.195 08:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:08.195 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:08.195 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.359 ms 00:26:08.195 00:26:08.195 --- 10.0.0.2 ping statistics --- 00:26:08.195 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:08.195 rtt min/avg/max/mdev = 0.359/0.359/0.359/0.000 ms 00:26:08.195 08:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:08.455 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:08.455 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.190 ms 00:26:08.455 00:26:08.455 --- 10.0.0.1 ping statistics --- 00:26:08.455 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:08.455 rtt min/avg/max/mdev = 0.190/0.190/0.190/0.000 ms 00:26:08.455 08:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:08.455 08:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@450 -- # return 0 00:26:08.455 08:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:08.455 08:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:08.455 08:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:08.455 08:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:08.455 08:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:08.455 08:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:08.455 08:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:08.455 08:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:26:08.455 08:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:08.455 08:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:08.455 08:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:08.455 08:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@509 -- # nvmfpid=1491418 00:26:08.455 08:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@510 -- # waitforlisten 1491418 00:26:08.455 08:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:26:08.455 08:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 1491418 ']' 00:26:08.455 08:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:08.455 08:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:08.455 08:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:08.455 08:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:08.455 08:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:08.715 08:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:08.716 08:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:26:08.716 08:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:08.716 08:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:08.716 08:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:08.716 08:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:08.716 08:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:26:08.716 08:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:26:08.716 08:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:26:08.716 08:24:50 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:08.716 08:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:26:08.716 08:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:26:08.716 08:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:26:08.716 08:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:26:08.716 08:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=d3dcb420ed0e673795df801463de95e2 00:26:08.716 08:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:26:08.716 08:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.QEq 00:26:08.716 08:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key d3dcb420ed0e673795df801463de95e2 0 00:26:08.716 08:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 d3dcb420ed0e673795df801463de95e2 0 00:26:08.716 08:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:26:08.716 08:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:26:08.716 08:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=d3dcb420ed0e673795df801463de95e2 00:26:08.716 08:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:26:08.716 08:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:26:08.716 08:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.QEq 00:26:08.716 08:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.QEq 00:26:08.716 08:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.QEq 
00:26:08.716 08:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:26:08.716 08:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:26:08.716 08:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:08.716 08:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:26:08.716 08:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:26:08.716 08:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:26:08.716 08:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:26:08.716 08:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=b64e6cf42a23a09812836ce222a48eaa91e9c9c6c4f2fc68c339b35124fb0ff5 00:26:08.716 08:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:26:08.716 08:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.ubW 00:26:08.716 08:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key b64e6cf42a23a09812836ce222a48eaa91e9c9c6c4f2fc68c339b35124fb0ff5 3 00:26:08.716 08:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 b64e6cf42a23a09812836ce222a48eaa91e9c9c6c4f2fc68c339b35124fb0ff5 3 00:26:08.716 08:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:26:08.716 08:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:26:08.716 08:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=b64e6cf42a23a09812836ce222a48eaa91e9c9c6c4f2fc68c339b35124fb0ff5 00:26:08.716 08:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:26:08.716 08:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@733 -- # python - 00:26:08.716 08:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.ubW 00:26:08.716 08:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.ubW 00:26:08.716 08:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.ubW 00:26:08.716 08:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:26:08.716 08:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:26:08.716 08:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:08.716 08:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:26:08.716 08:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:26:08.716 08:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:26:08.716 08:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:26:08.716 08:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=c4b47d26af9e3bcce62d37ae7d6a2627b0ab77850b703f0f 00:26:08.716 08:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:26:08.716 08:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.oMd 00:26:08.716 08:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key c4b47d26af9e3bcce62d37ae7d6a2627b0ab77850b703f0f 0 00:26:08.716 08:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 c4b47d26af9e3bcce62d37ae7d6a2627b0ab77850b703f0f 0 00:26:08.716 08:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:26:08.716 08:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # 
prefix=DHHC-1 00:26:08.716 08:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=c4b47d26af9e3bcce62d37ae7d6a2627b0ab77850b703f0f 00:26:08.716 08:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:26:08.716 08:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:26:08.716 08:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.oMd 00:26:08.716 08:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.oMd 00:26:08.716 08:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.oMd 00:26:08.716 08:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:26:08.716 08:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:26:08.716 08:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:08.716 08:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:26:08.716 08:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:26:08.716 08:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:26:08.716 08:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:26:08.716 08:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=a0aca9df8d8da5256a25941f9b6bbd866eabe9faf30730d9 00:26:08.716 08:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:26:08.716 08:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.5OJ 00:26:08.716 08:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key a0aca9df8d8da5256a25941f9b6bbd866eabe9faf30730d9 2 00:26:08.716 08:24:50 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 a0aca9df8d8da5256a25941f9b6bbd866eabe9faf30730d9 2 00:26:08.716 08:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:26:08.716 08:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:26:08.716 08:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=a0aca9df8d8da5256a25941f9b6bbd866eabe9faf30730d9 00:26:08.716 08:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:26:08.716 08:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:26:08.976 08:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.5OJ 00:26:08.976 08:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.5OJ 00:26:08.976 08:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.5OJ 00:26:08.976 08:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:26:08.976 08:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:26:08.976 08:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:08.976 08:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:26:08.976 08:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:26:08.976 08:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:26:08.976 08:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:26:08.976 08:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=c77aefc38e1b408cb40003056a5268dd 00:26:08.976 08:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 
00:26:08.976 08:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.vOa 00:26:08.976 08:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key c77aefc38e1b408cb40003056a5268dd 1 00:26:08.976 08:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 c77aefc38e1b408cb40003056a5268dd 1 00:26:08.976 08:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:26:08.976 08:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:26:08.976 08:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=c77aefc38e1b408cb40003056a5268dd 00:26:08.976 08:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:26:08.976 08:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:26:08.976 08:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.vOa 00:26:08.976 08:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.vOa 00:26:08.976 08:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.vOa 00:26:08.976 08:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:26:08.976 08:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:26:08.976 08:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:08.976 08:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:26:08.976 08:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:26:08.976 08:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:26:08.976 08:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 
/dev/urandom 00:26:08.976 08:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=232ab112172890544b1ea00d4f1d21fd 00:26:08.976 08:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:26:08.976 08:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.HtI 00:26:08.976 08:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 232ab112172890544b1ea00d4f1d21fd 1 00:26:08.976 08:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 232ab112172890544b1ea00d4f1d21fd 1 00:26:08.976 08:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:26:08.976 08:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:26:08.976 08:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=232ab112172890544b1ea00d4f1d21fd 00:26:08.976 08:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:26:08.976 08:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:26:08.976 08:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.HtI 00:26:08.976 08:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.HtI 00:26:08.976 08:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.HtI 00:26:08.976 08:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:26:08.976 08:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:26:08.976 08:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:08.976 08:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:26:08.976 08:24:51 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:26:08.976 08:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:26:08.976 08:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:26:08.976 08:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=20b3957f3509c7811e4f1ea2d323db353855ef20ffc3373d 00:26:08.976 08:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:26:08.977 08:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.rrd 00:26:08.977 08:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 20b3957f3509c7811e4f1ea2d323db353855ef20ffc3373d 2 00:26:08.977 08:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 20b3957f3509c7811e4f1ea2d323db353855ef20ffc3373d 2 00:26:08.977 08:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:26:08.977 08:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:26:08.977 08:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=20b3957f3509c7811e4f1ea2d323db353855ef20ffc3373d 00:26:08.977 08:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:26:08.977 08:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:26:08.977 08:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.rrd 00:26:08.977 08:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.rrd 00:26:08.977 08:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.rrd 00:26:08.977 08:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:26:08.977 08:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # 
local digest len file key 00:26:08.977 08:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:08.977 08:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:26:08.977 08:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:26:08.977 08:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:26:08.977 08:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:26:08.977 08:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=48eafff79368fa5fb98d54fd94229f54 00:26:08.977 08:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:26:08.977 08:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.nOE 00:26:08.977 08:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 48eafff79368fa5fb98d54fd94229f54 0 00:26:08.977 08:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 48eafff79368fa5fb98d54fd94229f54 0 00:26:08.977 08:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:26:08.977 08:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:26:08.977 08:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=48eafff79368fa5fb98d54fd94229f54 00:26:08.977 08:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:26:08.977 08:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:26:09.236 08:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.nOE 00:26:09.236 08:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.nOE 00:26:09.236 08:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 
-- # ckeys[3]=/tmp/spdk.key-null.nOE 00:26:09.236 08:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:26:09.236 08:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:26:09.236 08:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:09.236 08:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:26:09.236 08:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:26:09.236 08:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:26:09.236 08:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:26:09.236 08:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=f92bffd0ce28fbed3a57342279b1efd81c9b0947a5efd96a868478c732d7f121 00:26:09.236 08:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:26:09.236 08:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.H1a 00:26:09.236 08:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key f92bffd0ce28fbed3a57342279b1efd81c9b0947a5efd96a868478c732d7f121 3 00:26:09.236 08:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 f92bffd0ce28fbed3a57342279b1efd81c9b0947a5efd96a868478c732d7f121 3 00:26:09.236 08:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:26:09.236 08:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:26:09.236 08:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=f92bffd0ce28fbed3a57342279b1efd81c9b0947a5efd96a868478c732d7f121 00:26:09.236 08:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:26:09.236 08:24:51 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:26:09.236 08:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.H1a 00:26:09.236 08:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.H1a 00:26:09.236 08:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.H1a 00:26:09.236 08:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:26:09.236 08:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 1491418 00:26:09.236 08:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 1491418 ']' 00:26:09.236 08:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:09.236 08:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:09.236 08:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:09.236 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
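Each `gen_dhchap_key <digest> <len>` call in the trace above draws `len/2` random bytes with `xxd -p`, producing a `len`-character ASCII hex secret, and `format_dhchap_key`/`format_key` then wraps it into the `DHHC-1:<dd>:<base64>:` secret representation via the inline `python -` step visible in the log. A hedged re-creation of that formatting step (the CRC-32 suffix and field layout follow the DH-HMAC-CHAP secret format; the exact helper bodies in nvmf/common.sh may differ in detail):

```shell
# Sketch of format_key as invoked by gen_dhchap_key: base64 of the ASCII
# secret plus a little-endian CRC-32, framed as DHHC-1:<digest>:<b64>:
format_key() {
    local prefix=$1 key=$2 digest=$3
    python3 - "$prefix" "$key" "$digest" <<'EOF'
import base64, sys, zlib
prefix, key, digest = sys.argv[1], sys.argv[2], int(sys.argv[3])
crc = zlib.crc32(key.encode()).to_bytes(4, "little")   # integrity suffix
b64 = base64.b64encode(key.encode() + crc).decode()
print(f"{prefix}:{digest:02x}:{b64}:", end="")
EOF
}

key=$(xxd -p -c0 -l 16 /dev/urandom)   # 32 hex chars, as for "gen_dhchap_key null 32"
format_key DHHC-1 "$key" 0             # digest index 0 = null, per the digests map
```

The digest index is the value from the log's `digests` map (`null`=0, `sha256`=1, `sha384`=2, `sha512`=3), which is why the `ckeys` generated with sha512 are formatted with digest `3`.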
00:26:09.236 08:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:09.236 08:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:09.495 08:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:09.495 08:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:26:09.495 08:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:26:09.495 08:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.QEq 00:26:09.495 08:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:09.495 08:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:09.495 08:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:09.495 08:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.ubW ]] 00:26:09.495 08:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.ubW 00:26:09.495 08:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:09.495 08:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:09.495 08:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:09.495 08:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:26:09.495 08:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.oMd 00:26:09.495 08:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:09.495 08:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:26:09.495 08:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:09.495 08:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.5OJ ]] 00:26:09.495 08:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.5OJ 00:26:09.495 08:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:09.495 08:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:09.495 08:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:09.495 08:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:26:09.495 08:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.vOa 00:26:09.495 08:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:09.495 08:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:09.495 08:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:09.495 08:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.HtI ]] 00:26:09.495 08:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.HtI 00:26:09.495 08:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:09.495 08:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:09.495 08:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:09.495 08:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:26:09.495 08:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd 
keyring_file_add_key key3 /tmp/spdk.key-sha384.rrd 00:26:09.495 08:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:09.495 08:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:09.495 08:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:09.495 08:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.nOE ]] 00:26:09.495 08:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.nOE 00:26:09.495 08:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:09.495 08:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:09.495 08:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:09.495 08:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:26:09.495 08:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.H1a 00:26:09.495 08:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:09.495 08:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:09.495 08:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:09.495 08:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:26:09.495 08:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:26:09.495 08:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:26:09.495 08:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:09.495 08:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:09.495 08:24:51 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:09.495 08:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:09.495 08:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:09.496 08:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:09.496 08:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:09.496 08:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:09.496 08:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:09.496 08:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:09.496 08:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:26:09.496 08:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:26:09.496 08:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:26:09.496 08:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:26:09.496 08:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:26:09.496 08:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:26:09.496 08:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # local block nvme 00:26:09.496 08:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]] 00:26:09.496 08:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@670 -- # modprobe nvmet 00:26:09.496 08:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:26:09.496 08:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:26:12.030 Waiting for block devices as requested 00:26:12.030 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:26:12.289 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:26:12.289 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:26:12.289 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:26:12.547 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:26:12.547 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:26:12.547 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:26:12.547 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:26:12.805 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:26:12.805 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:26:12.805 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:26:12.805 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:26:13.063 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:26:13.063 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:26:13.063 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:26:13.321 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:26:13.321 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:26:13.888 08:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:26:13.888 08:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:26:13.888 08:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:26:13.888 08:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:26:13.888 08:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e 
/sys/block/nvme0n1/queue/zoned ]] 00:26:13.888 08:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:26:13.888 08:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:26:13.888 08:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:26:13.888 08:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:26:13.888 No valid GPT data, bailing 00:26:13.888 08:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:26:13.888 08:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:26:13.888 08:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:26:13.888 08:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:26:13.888 08:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:26:13.888 08:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:26:13.888 08:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:26:13.888 08:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:26:13.888 08:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:26:13.888 08:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # echo 1 00:26:13.888 08:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:26:13.888 08:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@697 -- # echo 1 00:26:13.888 08:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@699 
-- # echo 10.0.0.1 00:26:13.888 08:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@700 -- # echo tcp 00:26:13.888 08:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@701 -- # echo 4420 00:26:13.888 08:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # echo ipv4 00:26:13.888 08:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:26:13.888 08:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420 00:26:14.147 00:26:14.147 Discovery Log Number of Records 2, Generation counter 2 00:26:14.147 =====Discovery Log Entry 0====== 00:26:14.147 trtype: tcp 00:26:14.147 adrfam: ipv4 00:26:14.147 subtype: current discovery subsystem 00:26:14.147 treq: not specified, sq flow control disable supported 00:26:14.147 portid: 1 00:26:14.147 trsvcid: 4420 00:26:14.147 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:26:14.147 traddr: 10.0.0.1 00:26:14.147 eflags: none 00:26:14.147 sectype: none 00:26:14.147 =====Discovery Log Entry 1====== 00:26:14.147 trtype: tcp 00:26:14.147 adrfam: ipv4 00:26:14.147 subtype: nvme subsystem 00:26:14.147 treq: not specified, sq flow control disable supported 00:26:14.147 portid: 1 00:26:14.147 trsvcid: 4420 00:26:14.147 subnqn: nqn.2024-02.io.spdk:cnode0 00:26:14.148 traddr: 10.0.0.1 00:26:14.148 eflags: none 00:26:14.148 sectype: none 00:26:14.148 08:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:26:14.148 08:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:26:14.148 08:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 
/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:26:14.148 08:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:26:14.148 08:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:14.148 08:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:14.148 08:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:14.148 08:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:14.148 08:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzRiNDdkMjZhZjllM2JjY2U2MmQzN2FlN2Q2YTI2MjdiMGFiNzc4NTBiNzAzZjBm4eFRHA==: 00:26:14.148 08:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTBhY2E5ZGY4ZDhkYTUyNTZhMjU5NDFmOWI2YmJkODY2ZWFiZTlmYWYzMDczMGQ50Tsv9w==: 00:26:14.148 08:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:14.148 08:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:14.148 08:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzRiNDdkMjZhZjllM2JjY2U2MmQzN2FlN2Q2YTI2MjdiMGFiNzc4NTBiNzAzZjBm4eFRHA==: 00:26:14.148 08:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTBhY2E5ZGY4ZDhkYTUyNTZhMjU5NDFmOWI2YmJkODY2ZWFiZTlmYWYzMDczMGQ50Tsv9w==: ]] 00:26:14.148 08:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTBhY2E5ZGY4ZDhkYTUyNTZhMjU5NDFmOWI2YmJkODY2ZWFiZTlmYWYzMDczMGQ50Tsv9w==: 00:26:14.148 08:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:26:14.148 08:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:26:14.148 08:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:26:14.148 08:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:26:14.148 08:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:26:14.148 08:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:14.148 08:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:26:14.148 08:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:26:14.148 08:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:14.148 08:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:14.148 08:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:26:14.148 08:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:14.148 08:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:14.148 08:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:14.148 08:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:14.148 08:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:14.148 08:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:14.148 08:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:14.148 08:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:14.148 08:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:14.148 08:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:14.148 08:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:14.148 08:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:14.148 08:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:14.148 08:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:14.148 08:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:14.148 08:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:14.148 08:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:14.148 nvme0n1 00:26:14.148 08:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:14.148 08:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:14.148 08:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:14.148 08:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:14.148 08:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:14.148 08:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:14.148 08:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:14.148 08:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:14.148 08:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:26:14.148 08:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:14.148 08:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:14.148 08:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:26:14.148 08:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:14.148 08:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:14.148 08:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:26:14.148 08:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:14.148 08:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:14.148 08:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:14.148 08:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:14.148 08:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDNkY2I0MjBlZDBlNjczNzk1ZGY4MDE0NjNkZTk1ZTKuldDr: 00:26:14.148 08:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YjY0ZTZjZjQyYTIzYTA5ODEyODM2Y2UyMjJhNDhlYWE5MWU5YzljNmM0ZjJmYzY4YzMzOWIzNTEyNGZiMGZmNZtX/Zo=: 00:26:14.148 08:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:14.148 08:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:14.148 08:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDNkY2I0MjBlZDBlNjczNzk1ZGY4MDE0NjNkZTk1ZTKuldDr: 00:26:14.148 08:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YjY0ZTZjZjQyYTIzYTA5ODEyODM2Y2UyMjJhNDhlYWE5MWU5YzljNmM0ZjJmYzY4YzMzOWIzNTEyNGZiMGZmNZtX/Zo=: ]] 00:26:14.148 08:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:YjY0ZTZjZjQyYTIzYTA5ODEyODM2Y2UyMjJhNDhlYWE5MWU5YzljNmM0ZjJmYzY4YzMzOWIzNTEyNGZiMGZmNZtX/Zo=: 00:26:14.148 08:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:26:14.148 08:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:14.148 08:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:14.148 08:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:14.148 08:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:14.148 08:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:14.148 08:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:26:14.148 08:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:14.148 08:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:14.408 08:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:14.408 08:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:14.408 08:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:14.408 08:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:14.408 08:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:14.408 08:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:14.408 08:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:14.408 08:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 
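The configfs sequence that `configure_kernel_target` drives above (the `mkdir`/`echo`/`ln -s` run between `nvmf/common.sh@686` and `@705`) can be sketched as a standalone script. This is a hedged reconstruction, not SPDK's `nvmf/common.sh` itself: the attribute file names (`attr_model`, `device_path`, `addr_traddr`, etc.) are the standard kernel nvmet configfs entries assumed to be the targets of the bare `echo` lines in the trace, and the `run` wrapper only prints commands so the sketch is readable without root or the `nvmet` module loaded.

```shell
#!/bin/sh
# Hedged sketch of the kernel NVMe-oF target setup seen in the trace.
# run() only prints the command, so this is safe to read/execute
# without root or the nvmet module.
run() { echo "+ $*"; }

kernel_name="nqn.2024-02.io.spdk:cnode0"
nvmet=/sys/kernel/config/nvmet
subsys="$nvmet/subsystems/$kernel_name"
ns="$subsys/namespaces/1"
port="$nvmet/ports/1"

run mkdir "$subsys" "$ns" "$port"
# Assumed mapping of the trace's bare `echo` lines onto the standard
# nvmet configfs attributes:
run "echo SPDK-$kernel_name > $subsys/attr_model"
run "echo /dev/nvme0n1 > $ns/device_path"
run "echo 1 > $ns/enable"
run "echo 10.0.0.1 > $port/addr_traddr"
run "echo tcp   > $port/addr_trtype"
run "echo 4420  > $port/addr_trsvcid"
run "echo ipv4  > $port/addr_adrfam"
# Expose the subsystem on the port, then (as at host/auth.sh@36-38)
# the test registers host0 and links it under allowed_hosts.
run ln -s "$subsys" "$port/subsystems/"
```

After this setup, the `nvme discover ... -a 10.0.0.1 -t tcp -s 4420` call in the trace is expected to report two records: the discovery subsystem and `nqn.2024-02.io.spdk:cnode0`, exactly as the Discovery Log above shows.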
00:26:14.408 08:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:14.408 08:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:14.408 08:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:14.408 08:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:14.408 08:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:14.408 08:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:14.408 08:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:14.408 nvme0n1 00:26:14.408 08:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:14.408 08:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:14.408 08:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:14.408 08:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:14.408 08:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:14.408 08:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:14.408 08:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:14.408 08:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:14.408 08:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:14.408 08:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:14.408 08:24:56 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:14.408 08:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:14.408 08:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:26:14.408 08:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:14.408 08:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:14.408 08:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:14.408 08:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:14.408 08:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzRiNDdkMjZhZjllM2JjY2U2MmQzN2FlN2Q2YTI2MjdiMGFiNzc4NTBiNzAzZjBm4eFRHA==: 00:26:14.408 08:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTBhY2E5ZGY4ZDhkYTUyNTZhMjU5NDFmOWI2YmJkODY2ZWFiZTlmYWYzMDczMGQ50Tsv9w==: 00:26:14.408 08:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:14.408 08:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:14.408 08:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzRiNDdkMjZhZjllM2JjY2U2MmQzN2FlN2Q2YTI2MjdiMGFiNzc4NTBiNzAzZjBm4eFRHA==: 00:26:14.408 08:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTBhY2E5ZGY4ZDhkYTUyNTZhMjU5NDFmOWI2YmJkODY2ZWFiZTlmYWYzMDczMGQ50Tsv9w==: ]] 00:26:14.408 08:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTBhY2E5ZGY4ZDhkYTUyNTZhMjU5NDFmOWI2YmJkODY2ZWFiZTlmYWYzMDczMGQ50Tsv9w==: 00:26:14.408 08:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:26:14.408 08:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:14.408 
08:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:14.408 08:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:14.408 08:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:14.408 08:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:14.408 08:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:26:14.408 08:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:14.408 08:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:14.408 08:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:14.408 08:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:14.408 08:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:14.408 08:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:14.409 08:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:14.409 08:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:14.409 08:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:14.409 08:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:14.409 08:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:14.409 08:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:14.409 08:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:14.409 08:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:14.409 08:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:14.409 08:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:14.409 08:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:14.668 nvme0n1 00:26:14.668 08:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:14.669 08:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:14.669 08:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:14.669 08:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:14.669 08:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:14.669 08:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:14.669 08:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:14.669 08:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:14.669 08:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:14.669 08:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:14.669 08:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:14.669 08:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:14.669 08:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:26:14.669 08:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:14.669 08:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:14.669 08:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:14.669 08:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:14.669 08:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Yzc3YWVmYzM4ZTFiNDA4Y2I0MDAwMzA1NmE1MjY4ZGSMAh8w: 00:26:14.669 08:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjMyYWIxMTIxNzI4OTA1NDRiMWVhMDBkNGYxZDIxZmShtCSF: 00:26:14.669 08:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:14.669 08:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:14.669 08:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Yzc3YWVmYzM4ZTFiNDA4Y2I0MDAwMzA1NmE1MjY4ZGSMAh8w: 00:26:14.669 08:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjMyYWIxMTIxNzI4OTA1NDRiMWVhMDBkNGYxZDIxZmShtCSF: ]] 00:26:14.669 08:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MjMyYWIxMTIxNzI4OTA1NDRiMWVhMDBkNGYxZDIxZmShtCSF: 00:26:14.669 08:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:26:14.669 08:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:14.669 08:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:14.669 08:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:14.669 08:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:14.669 08:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:14.669 08:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd 
bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:26:14.669 08:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:14.669 08:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:14.669 08:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:14.669 08:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:14.669 08:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:14.669 08:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:14.669 08:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:14.669 08:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:14.669 08:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:14.669 08:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:14.669 08:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:14.669 08:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:14.669 08:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:14.669 08:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:14.669 08:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:14.669 08:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:14.669 08:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 
-- # set +x 00:26:14.928 nvme0n1 00:26:14.928 08:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:14.928 08:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:14.928 08:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:14.928 08:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:14.928 08:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:14.928 08:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:14.928 08:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:14.928 08:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:14.928 08:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:14.928 08:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:14.928 08:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:14.928 08:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:14.928 08:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:26:14.928 08:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:14.928 08:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:14.928 08:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:14.928 08:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:14.928 08:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:MjBiMzk1N2YzNTA5Yzc4MTFlNGYxZWEyZDMyM2RiMzUzODU1ZWYyMGZmYzMzNzNklbKHTQ==: 00:26:14.928 08:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDhlYWZmZjc5MzY4ZmE1ZmI5OGQ1NGZkOTQyMjlmNTSNukVc: 00:26:14.928 08:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:14.928 08:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:14.928 08:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MjBiMzk1N2YzNTA5Yzc4MTFlNGYxZWEyZDMyM2RiMzUzODU1ZWYyMGZmYzMzNzNklbKHTQ==: 00:26:14.928 08:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDhlYWZmZjc5MzY4ZmE1ZmI5OGQ1NGZkOTQyMjlmNTSNukVc: ]] 00:26:14.929 08:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDhlYWZmZjc5MzY4ZmE1ZmI5OGQ1NGZkOTQyMjlmNTSNukVc: 00:26:14.929 08:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:26:14.929 08:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:14.929 08:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:14.929 08:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:14.929 08:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:14.929 08:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:14.929 08:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:26:14.929 08:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:14.929 08:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:14.929 08:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:14.929 08:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:14.929 08:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:14.929 08:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:14.929 08:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:14.929 08:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:14.929 08:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:14.929 08:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:14.929 08:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:14.929 08:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:14.929 08:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:14.929 08:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:14.929 08:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:14.929 08:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:14.929 08:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:15.188 nvme0n1 00:26:15.188 08:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:15.188 08:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:15.188 08:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r 
'.[].name' 00:26:15.188 08:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:15.188 08:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:15.188 08:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:15.188 08:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:15.188 08:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:15.188 08:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:15.188 08:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:15.188 08:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:15.188 08:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:15.188 08:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:26:15.188 08:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:15.188 08:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:15.188 08:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:15.188 08:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:15.188 08:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZjkyYmZmZDBjZTI4ZmJlZDNhNTczNDIyNzliMWVmZDgxYzliMDk0N2E1ZWZkOTZhODY4NDc4YzczMmQ3ZjEyMcwMTok=: 00:26:15.188 08:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:15.188 08:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:15.188 08:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:15.188 08:24:57 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZjkyYmZmZDBjZTI4ZmJlZDNhNTczNDIyNzliMWVmZDgxYzliMDk0N2E1ZWZkOTZhODY4NDc4YzczMmQ3ZjEyMcwMTok=: 00:26:15.188 08:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:15.188 08:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:26:15.188 08:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:15.188 08:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:15.188 08:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:15.188 08:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:15.188 08:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:15.188 08:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:26:15.188 08:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:15.188 08:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:15.188 08:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:15.188 08:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:15.188 08:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:15.188 08:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:15.188 08:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:15.188 08:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:15.188 08:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:15.188 08:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:15.188 08:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:15.188 08:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:15.188 08:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:15.188 08:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:15.188 08:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:15.188 08:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:15.188 08:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:15.188 nvme0n1 00:26:15.188 08:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:15.188 08:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:15.188 08:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:15.188 08:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:15.188 08:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:15.188 08:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:15.448 08:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:15.448 08:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:15.448 08:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:15.448 
08:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:15.448 08:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:15.448 08:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:15.448 08:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:15.448 08:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:26:15.448 08:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:15.448 08:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:15.448 08:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:15.448 08:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:15.448 08:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDNkY2I0MjBlZDBlNjczNzk1ZGY4MDE0NjNkZTk1ZTKuldDr: 00:26:15.448 08:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YjY0ZTZjZjQyYTIzYTA5ODEyODM2Y2UyMjJhNDhlYWE5MWU5YzljNmM0ZjJmYzY4YzMzOWIzNTEyNGZiMGZmNZtX/Zo=: 00:26:15.448 08:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:15.448 08:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:15.448 08:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDNkY2I0MjBlZDBlNjczNzk1ZGY4MDE0NjNkZTk1ZTKuldDr: 00:26:15.448 08:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YjY0ZTZjZjQyYTIzYTA5ODEyODM2Y2UyMjJhNDhlYWE5MWU5YzljNmM0ZjJmYzY4YzMzOWIzNTEyNGZiMGZmNZtX/Zo=: ]] 00:26:15.448 08:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YjY0ZTZjZjQyYTIzYTA5ODEyODM2Y2UyMjJhNDhlYWE5MWU5YzljNmM0ZjJmYzY4YzMzOWIzNTEyNGZiMGZmNZtX/Zo=: 00:26:15.448 
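The `key=` / `ckey=` values echoed above are DH-HMAC-CHAP secrets in the `DHHC-1:NN:<base64>:` wire format. A minimal parsing sketch follows; the split into prefix, hash indicator, and payload is visible in the trace itself, but the assumption that the base64 payload is the raw key followed by a little-endian CRC-32 (as produced by `nvme gen-dhchap-key`) is mine, so the CRC result is printed rather than trusted:

```python
import base64
import zlib


def parse_dhchap_secret(secret: str):
    """Split a 'DHHC-1:NN:<base64>:' secret like the ones in the trace.

    Assumption (not shown in the log): the decoded payload is the raw key
    plus a trailing 4-byte little-endian CRC-32 of the key.
    """
    prefix, hash_id, b64, trailer = secret.split(":")
    if prefix != "DHHC-1" or trailer != "":
        raise ValueError("not a DHHC-1 secret")
    blob = base64.b64decode(b64)
    key, crc = blob[:-4], blob[-4:]
    crc_ok = crc == zlib.crc32(key).to_bytes(4, "little")
    return hash_id, key, crc_ok


# keyid 0 from the trace; hash indicator 00, so a 32-byte untransformed key
hid, key, ok = parse_dhchap_secret(
    "DHHC-1:00:ZDNkY2I0MjBlZDBlNjczNzk1ZGY4MDE0NjNkZTk1ZTKuldDr:")
print(hid, len(key), ok)
```

The `01`/`02`/`03` indicators seen on other keys in the trace select longer keys for the SHA-256/384/512 transforms.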
08:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:26:15.448 08:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:15.448 08:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:15.448 08:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:15.448 08:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:15.448 08:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:15.448 08:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:26:15.448 08:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:15.448 08:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:15.448 08:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:15.448 08:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:15.448 08:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:15.448 08:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:15.448 08:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:15.448 08:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:15.448 08:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:15.448 08:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:15.448 08:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:15.448 08:24:57 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:15.448 08:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:15.448 08:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:15.448 08:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:15.448 08:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:15.448 08:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:15.448 nvme0n1 00:26:15.448 08:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:15.448 08:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:15.448 08:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:15.448 08:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:15.448 08:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:15.448 08:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:15.448 08:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:15.448 08:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:15.448 08:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:15.448 08:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:15.708 08:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:15.708 08:24:57 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:15.708 08:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:26:15.708 08:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:15.708 08:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:15.708 08:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:15.708 08:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:15.708 08:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzRiNDdkMjZhZjllM2JjY2U2MmQzN2FlN2Q2YTI2MjdiMGFiNzc4NTBiNzAzZjBm4eFRHA==: 00:26:15.708 08:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTBhY2E5ZGY4ZDhkYTUyNTZhMjU5NDFmOWI2YmJkODY2ZWFiZTlmYWYzMDczMGQ50Tsv9w==: 00:26:15.708 08:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:15.708 08:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:15.708 08:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzRiNDdkMjZhZjllM2JjY2U2MmQzN2FlN2Q2YTI2MjdiMGFiNzc4NTBiNzAzZjBm4eFRHA==: 00:26:15.708 08:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTBhY2E5ZGY4ZDhkYTUyNTZhMjU5NDFmOWI2YmJkODY2ZWFiZTlmYWYzMDczMGQ50Tsv9w==: ]] 00:26:15.708 08:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTBhY2E5ZGY4ZDhkYTUyNTZhMjU5NDFmOWI2YmJkODY2ZWFiZTlmYWYzMDczMGQ50Tsv9w==: 00:26:15.708 08:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:26:15.708 08:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:15.708 08:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:15.708 08:24:57 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:15.708 08:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:15.708 08:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:15.708 08:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:26:15.708 08:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:15.708 08:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:15.708 08:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:15.708 08:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:15.708 08:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:15.708 08:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:15.708 08:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:15.708 08:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:15.709 08:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:15.709 08:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:15.709 08:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:15.709 08:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:15.709 08:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:15.709 08:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:15.709 08:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
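The repeated `get_main_ns_ip` block in the trace (`ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP`, `ip_candidates["tcp"]=NVMF_INITIATOR_IP`, then `echo 10.0.0.1`) picks an environment-variable *name* per transport and dereferences it, which bash does with `${!ip}`. A sketch of that indirection, with the mapping taken from the trace and the value a stand-in for what the test environment exports:

```python
import os


def get_main_ns_ip(transport: str = "tcp") -> str:
    """Mirror of the trace's get_main_ns_ip helper: map the transport to
    an env-var name, then dereference it (bash's ${!ip})."""
    ip_candidates = {
        "rdma": "NVMF_FIRST_TARGET_IP",  # names copied from the trace
        "tcp": "NVMF_INITIATOR_IP",
    }
    var = ip_candidates[transport]
    ip = os.environ.get(var)
    if not ip:  # the trace's [[ -z ... ]] guards
        raise RuntimeError(f"{var} is unset")
    return ip


os.environ["NVMF_INITIATOR_IP"] = "10.0.0.1"  # value echoed in the trace
print(get_main_ns_ip("tcp"))
```

This is why the trace shows the literal variable name (`ip=NVMF_INITIATOR_IP`) before the resolved address is echoed.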
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:15.709 08:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:15.709 08:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:15.709 nvme0n1 00:26:15.709 08:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:15.709 08:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:15.709 08:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:15.709 08:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:15.709 08:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:15.709 08:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:15.709 08:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:15.709 08:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:15.709 08:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:15.709 08:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:15.709 08:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:15.709 08:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:15.709 08:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:26:15.709 08:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:15.709 08:24:57 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:15.709 08:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:15.709 08:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:15.709 08:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Yzc3YWVmYzM4ZTFiNDA4Y2I0MDAwMzA1NmE1MjY4ZGSMAh8w: 00:26:15.709 08:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjMyYWIxMTIxNzI4OTA1NDRiMWVhMDBkNGYxZDIxZmShtCSF: 00:26:15.709 08:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:15.709 08:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:15.709 08:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Yzc3YWVmYzM4ZTFiNDA4Y2I0MDAwMzA1NmE1MjY4ZGSMAh8w: 00:26:15.709 08:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjMyYWIxMTIxNzI4OTA1NDRiMWVhMDBkNGYxZDIxZmShtCSF: ]] 00:26:15.709 08:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MjMyYWIxMTIxNzI4OTA1NDRiMWVhMDBkNGYxZDIxZmShtCSF: 00:26:15.709 08:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:26:15.709 08:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:15.709 08:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:15.709 08:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:15.709 08:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:15.709 08:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:15.709 08:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 
00:26:15.709 08:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:15.709 08:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:15.968 08:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:15.968 08:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:15.968 08:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:15.968 08:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:15.968 08:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:15.968 08:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:15.968 08:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:15.968 08:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:15.968 08:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:15.968 08:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:15.968 08:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:15.968 08:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:15.968 08:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:15.968 08:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:15.968 08:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:15.968 nvme0n1 00:26:15.968 08:24:58 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:15.968 08:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:15.968 08:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:15.968 08:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:15.968 08:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:15.968 08:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:15.968 08:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:15.968 08:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:15.968 08:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:15.968 08:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:15.968 08:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:15.968 08:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:15.969 08:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:26:15.969 08:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:15.969 08:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:15.969 08:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:15.969 08:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:15.969 08:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MjBiMzk1N2YzNTA5Yzc4MTFlNGYxZWEyZDMyM2RiMzUzODU1ZWYyMGZmYzMzNzNklbKHTQ==: 00:26:15.969 08:24:58 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDhlYWZmZjc5MzY4ZmE1ZmI5OGQ1NGZkOTQyMjlmNTSNukVc: 00:26:15.969 08:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:15.969 08:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:15.969 08:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MjBiMzk1N2YzNTA5Yzc4MTFlNGYxZWEyZDMyM2RiMzUzODU1ZWYyMGZmYzMzNzNklbKHTQ==: 00:26:15.969 08:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDhlYWZmZjc5MzY4ZmE1ZmI5OGQ1NGZkOTQyMjlmNTSNukVc: ]] 00:26:15.969 08:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDhlYWZmZjc5MzY4ZmE1ZmI5OGQ1NGZkOTQyMjlmNTSNukVc: 00:26:15.969 08:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:26:15.969 08:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:15.969 08:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:15.969 08:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:15.969 08:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:15.969 08:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:15.969 08:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:26:15.969 08:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:15.969 08:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:15.969 08:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:15.969 08:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # 
get_main_ns_ip 00:26:15.969 08:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:15.969 08:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:15.969 08:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:15.969 08:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:15.969 08:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:15.969 08:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:15.969 08:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:15.969 08:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:15.969 08:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:15.969 08:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:15.969 08:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:15.969 08:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:15.969 08:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:16.228 nvme0n1 00:26:16.228 08:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:16.228 08:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:16.228 08:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:16.228 08:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 
00:26:16.228 08:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:16.228 08:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:16.228 08:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:16.228 08:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:16.228 08:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:16.228 08:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:16.228 08:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:16.228 08:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:16.228 08:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:26:16.228 08:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:16.228 08:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:16.228 08:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:16.228 08:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:16.228 08:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZjkyYmZmZDBjZTI4ZmJlZDNhNTczNDIyNzliMWVmZDgxYzliMDk0N2E1ZWZkOTZhODY4NDc4YzczMmQ3ZjEyMcwMTok=: 00:26:16.229 08:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:16.229 08:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:16.229 08:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:16.229 08:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:ZjkyYmZmZDBjZTI4ZmJlZDNhNTczNDIyNzliMWVmZDgxYzliMDk0N2E1ZWZkOTZhODY4NDc4YzczMmQ3ZjEyMcwMTok=: 00:26:16.229 08:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:16.229 08:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:26:16.229 08:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:16.229 08:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:16.229 08:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:16.229 08:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:16.229 08:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:16.229 08:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:26:16.229 08:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:16.229 08:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:16.229 08:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:16.229 08:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:16.229 08:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:16.229 08:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:16.229 08:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:16.229 08:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:16.229 08:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:16.229 08:24:58 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:26:16.229 08:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:26:16.229 08:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:26:16.229 08:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:26:16.229 08:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:26:16.229 08:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:26:16.229 08:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:16.229 08:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:16.488 nvme0n1
00:26:16.488 08:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:16.488 08:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:26:16.488 08:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:26:16.488 08:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:16.488 08:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:16.488 08:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:16.488 08:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:26:16.488 08:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:26:16.488 08:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:16.488 08:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:16.488 08:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:16.488 08:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:26:16.488 08:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:26:16.488 08:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0
00:26:16.488 08:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:26:16.488 08:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:26:16.488 08:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:26:16.488 08:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:26:16.488 08:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDNkY2I0MjBlZDBlNjczNzk1ZGY4MDE0NjNkZTk1ZTKuldDr:
00:26:16.488 08:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YjY0ZTZjZjQyYTIzYTA5ODEyODM2Y2UyMjJhNDhlYWE5MWU5YzljNmM0ZjJmYzY4YzMzOWIzNTEyNGZiMGZmNZtX/Zo=:
00:26:16.488 08:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:26:16.488 08:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:26:16.488 08:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDNkY2I0MjBlZDBlNjczNzk1ZGY4MDE0NjNkZTk1ZTKuldDr:
00:26:16.488 08:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YjY0ZTZjZjQyYTIzYTA5ODEyODM2Y2UyMjJhNDhlYWE5MWU5YzljNmM0ZjJmYzY4YzMzOWIzNTEyNGZiMGZmNZtX/Zo=: ]]
00:26:16.488 08:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YjY0ZTZjZjQyYTIzYTA5ODEyODM2Y2UyMjJhNDhlYWE5MWU5YzljNmM0ZjJmYzY4YzMzOWIzNTEyNGZiMGZmNZtX/Zo=:
00:26:16.488 08:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0
00:26:16.488 08:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:26:16.488 08:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:26:16.488 08:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:26:16.488 08:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:26:16.488 08:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:26:16.488 08:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
00:26:16.488 08:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:16.488 08:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:16.488 08:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:16.488 08:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:26:16.488 08:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:26:16.488 08:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:26:16.488 08:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:26:16.488 08:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:26:16.488 08:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:26:16.488 08:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:26:16.488 08:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:26:16.488 08:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:26:16.488 08:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:26:16.488 08:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:26:16.489 08:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:26:16.489 08:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:16.489 08:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:16.748 nvme0n1
00:26:16.748 08:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:16.748 08:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:26:16.748 08:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:26:16.748 08:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:16.748 08:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:16.748 08:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:16.748 08:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:26:16.748 08:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:26:16.748 08:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:16.748 08:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:16.748 08:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:16.748 08:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:26:16.748 08:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1
00:26:16.748 08:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:26:16.748 08:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:26:16.748 08:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:26:16.748 08:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:26:16.748 08:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzRiNDdkMjZhZjllM2JjY2U2MmQzN2FlN2Q2YTI2MjdiMGFiNzc4NTBiNzAzZjBm4eFRHA==:
00:26:16.748 08:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTBhY2E5ZGY4ZDhkYTUyNTZhMjU5NDFmOWI2YmJkODY2ZWFiZTlmYWYzMDczMGQ50Tsv9w==:
00:26:16.748 08:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:26:16.748 08:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:26:16.748 08:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzRiNDdkMjZhZjllM2JjY2U2MmQzN2FlN2Q2YTI2MjdiMGFiNzc4NTBiNzAzZjBm4eFRHA==:
00:26:16.748 08:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTBhY2E5ZGY4ZDhkYTUyNTZhMjU5NDFmOWI2YmJkODY2ZWFiZTlmYWYzMDczMGQ50Tsv9w==: ]]
00:26:16.748 08:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTBhY2E5ZGY4ZDhkYTUyNTZhMjU5NDFmOWI2YmJkODY2ZWFiZTlmYWYzMDczMGQ50Tsv9w==:
00:26:16.748 08:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1
00:26:16.748 08:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:26:16.748 08:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:26:16.748 08:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:26:16.748 08:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:26:16.748 08:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:26:16.748 08:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
00:26:16.748 08:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:16.748 08:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:16.749 08:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:16.749 08:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:26:16.749 08:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:26:16.749 08:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:26:16.749 08:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:26:16.749 08:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:26:16.749 08:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:26:16.749 08:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:26:16.749 08:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:26:16.749 08:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:26:16.749 08:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:26:16.749 08:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:26:16.749 08:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:26:16.749 08:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:16.749 08:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:17.008 nvme0n1
00:26:17.008 08:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:17.008 08:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:26:17.008 08:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:26:17.008 08:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:17.008 08:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:17.268 08:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:17.268 08:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:26:17.268 08:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:26:17.268 08:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:17.268 08:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:17.268 08:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:17.268 08:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:26:17.268 08:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2
00:26:17.268 08:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:26:17.268 08:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:26:17.268 08:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:26:17.268 08:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:26:17.268 08:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Yzc3YWVmYzM4ZTFiNDA4Y2I0MDAwMzA1NmE1MjY4ZGSMAh8w:
00:26:17.268 08:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjMyYWIxMTIxNzI4OTA1NDRiMWVhMDBkNGYxZDIxZmShtCSF:
00:26:17.268 08:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:26:17.268 08:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:26:17.268 08:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Yzc3YWVmYzM4ZTFiNDA4Y2I0MDAwMzA1NmE1MjY4ZGSMAh8w:
00:26:17.268 08:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjMyYWIxMTIxNzI4OTA1NDRiMWVhMDBkNGYxZDIxZmShtCSF: ]]
00:26:17.268 08:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MjMyYWIxMTIxNzI4OTA1NDRiMWVhMDBkNGYxZDIxZmShtCSF:
00:26:17.268 08:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2
00:26:17.268 08:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:26:17.268 08:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:26:17.268 08:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:26:17.268 08:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:26:17.268 08:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:26:17.268 08:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
00:26:17.268 08:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:17.268 08:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:17.269 08:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:17.269 08:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:26:17.269 08:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:26:17.269 08:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:26:17.269 08:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:26:17.269 08:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:26:17.269 08:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:26:17.269 08:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:26:17.269 08:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:26:17.269 08:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:26:17.269 08:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:26:17.269 08:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:26:17.269 08:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:26:17.269 08:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:17.269 08:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:17.528 nvme0n1
00:26:17.528 08:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:17.528 08:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:26:17.528 08:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:17.528 08:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:26:17.528 08:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:17.528 08:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:17.528 08:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:26:17.528 08:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:26:17.528 08:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:17.528 08:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:17.528 08:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:17.528 08:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:26:17.528 08:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3
00:26:17.528 08:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:26:17.528 08:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:26:17.528 08:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:26:17.528 08:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:26:17.528 08:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MjBiMzk1N2YzNTA5Yzc4MTFlNGYxZWEyZDMyM2RiMzUzODU1ZWYyMGZmYzMzNzNklbKHTQ==:
00:26:17.528 08:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDhlYWZmZjc5MzY4ZmE1ZmI5OGQ1NGZkOTQyMjlmNTSNukVc:
00:26:17.528 08:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:26:17.528 08:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:26:17.528 08:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MjBiMzk1N2YzNTA5Yzc4MTFlNGYxZWEyZDMyM2RiMzUzODU1ZWYyMGZmYzMzNzNklbKHTQ==:
00:26:17.528 08:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDhlYWZmZjc5MzY4ZmE1ZmI5OGQ1NGZkOTQyMjlmNTSNukVc: ]]
00:26:17.529 08:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDhlYWZmZjc5MzY4ZmE1ZmI5OGQ1NGZkOTQyMjlmNTSNukVc:
00:26:17.529 08:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3
00:26:17.529 08:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:26:17.529 08:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:26:17.529 08:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:26:17.529 08:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:26:17.529 08:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:26:17.529 08:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
00:26:17.529 08:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:17.529 08:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:17.529 08:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:17.529 08:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:26:17.529 08:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:26:17.529 08:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:26:17.529 08:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:26:17.529 08:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:26:17.529 08:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:26:17.529 08:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:26:17.529 08:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:26:17.529 08:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:26:17.529 08:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:26:17.529 08:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:26:17.529 08:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:26:17.529 08:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:17.529 08:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:17.788 nvme0n1
00:26:17.788 08:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:17.788 08:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:26:17.788 08:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:26:17.788 08:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:17.788 08:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:17.788 08:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:17.788 08:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:26:17.788 08:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:26:17.788 08:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:17.788 08:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:17.788 08:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:17.788 08:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:26:17.788 08:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4
00:26:17.788 08:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:26:17.788 08:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:26:17.788 08:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:26:17.788 08:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:26:17.788 08:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZjkyYmZmZDBjZTI4ZmJlZDNhNTczNDIyNzliMWVmZDgxYzliMDk0N2E1ZWZkOTZhODY4NDc4YzczMmQ3ZjEyMcwMTok=:
00:26:17.788 08:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:26:17.788 08:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:26:17.788 08:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:26:17.788 08:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZjkyYmZmZDBjZTI4ZmJlZDNhNTczNDIyNzliMWVmZDgxYzliMDk0N2E1ZWZkOTZhODY4NDc4YzczMmQ3ZjEyMcwMTok=:
00:26:17.788 08:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:26:17.788 08:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4
00:26:17.788 08:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:26:17.788 08:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:26:17.788 08:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:26:17.788 08:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:26:17.788 08:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:26:17.788 08:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
00:26:17.788 08:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:17.788 08:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:17.788 08:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:17.788 08:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:26:17.788 08:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:26:17.788 08:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:26:17.788 08:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:26:17.788 08:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:26:17.788 08:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:26:17.788 08:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:26:17.788 08:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:26:17.788 08:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:26:17.788 08:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:26:17.788 08:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:26:17.788 08:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:26:17.788 08:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:17.788 08:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:18.047 nvme0n1
00:26:18.047 08:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:18.047 08:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:26:18.047 08:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:18.047 08:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:26:18.047 08:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:18.047 08:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:18.047 08:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:26:18.047 08:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:26:18.047 08:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:18.047 08:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:18.047 08:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:18.047 08:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:26:18.047 08:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:26:18.047 08:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0
00:26:18.047 08:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:26:18.047 08:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:26:18.047 08:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:26:18.047 08:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:26:18.047 08:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDNkY2I0MjBlZDBlNjczNzk1ZGY4MDE0NjNkZTk1ZTKuldDr:
00:26:18.047 08:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YjY0ZTZjZjQyYTIzYTA5ODEyODM2Y2UyMjJhNDhlYWE5MWU5YzljNmM0ZjJmYzY4YzMzOWIzNTEyNGZiMGZmNZtX/Zo=:
00:26:18.047 08:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:26:18.047 08:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:26:18.047 08:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDNkY2I0MjBlZDBlNjczNzk1ZGY4MDE0NjNkZTk1ZTKuldDr:
00:26:18.047 08:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YjY0ZTZjZjQyYTIzYTA5ODEyODM2Y2UyMjJhNDhlYWE5MWU5YzljNmM0ZjJmYzY4YzMzOWIzNTEyNGZiMGZmNZtX/Zo=: ]]
00:26:18.047 08:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YjY0ZTZjZjQyYTIzYTA5ODEyODM2Y2UyMjJhNDhlYWE5MWU5YzljNmM0ZjJmYzY4YzMzOWIzNTEyNGZiMGZmNZtX/Zo=:
00:26:18.047 08:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0
00:26:18.047 08:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:26:18.047 08:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:26:18.047 08:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:26:18.047 08:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:26:18.047 08:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:26:18.047 08:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:26:18.047 08:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:18.047 08:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:18.047 08:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:18.047 08:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:26:18.047 08:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:26:18.047 08:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:26:18.047 08:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:26:18.047 08:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:26:18.047 08:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:26:18.047 08:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:26:18.047 08:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:26:18.047 08:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:26:18.047 08:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:26:18.047 08:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:26:18.047 08:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:26:18.047 08:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:18.047 08:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:18.615 nvme0n1
00:26:18.615 08:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:18.615 08:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:26:18.615 08:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:26:18.615 08:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:18.615 08:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:18.615 08:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:18.615 08:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:26:18.615 08:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:26:18.615 08:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:18.615 08:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:18.615 08:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:18.615 08:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:26:18.615 08:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1
00:26:18.615 08:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:26:18.615 08:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:26:18.615 08:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:26:18.615 08:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:26:18.615 08:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzRiNDdkMjZhZjllM2JjY2U2MmQzN2FlN2Q2YTI2MjdiMGFiNzc4NTBiNzAzZjBm4eFRHA==:
00:26:18.615 08:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTBhY2E5ZGY4ZDhkYTUyNTZhMjU5NDFmOWI2YmJkODY2ZWFiZTlmYWYzMDczMGQ50Tsv9w==:
00:26:18.615 08:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:26:18.615 08:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:26:18.615 08:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzRiNDdkMjZhZjllM2JjY2U2MmQzN2FlN2Q2YTI2MjdiMGFiNzc4NTBiNzAzZjBm4eFRHA==:
00:26:18.615 08:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTBhY2E5ZGY4ZDhkYTUyNTZhMjU5NDFmOWI2YmJkODY2ZWFiZTlmYWYzMDczMGQ50Tsv9w==: ]]
00:26:18.615 08:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTBhY2E5ZGY4ZDhkYTUyNTZhMjU5NDFmOWI2YmJkODY2ZWFiZTlmYWYzMDczMGQ50Tsv9w==:
00:26:18.615 08:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1
00:26:18.615 08:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:26:18.615 08:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:26:18.615 08:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:26:18.615 08:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:26:18.615 08:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:26:18.615 08:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:26:18.615 08:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:18.615 08:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:18.615 08:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:18.615 08:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:26:18.615 08:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:26:18.615 08:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:26:18.615 08:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:26:18.615 08:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:26:18.615 08:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:26:18.615 08:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:26:18.615 08:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:26:18.615 08:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:26:18.615 08:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:26:18.615 08:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:26:18.615 08:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:26:18.615 08:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:18.615 08:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:19.183 nvme0n1
00:26:19.183 08:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:19.183 08:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:26:19.183 08:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:26:19.183 08:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:19.183 08:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:19.183 08:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:19.183 08:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:26:19.183 08:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:26:19.183 08:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:19.183 08:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:19.183 08:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:19.183 08:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:26:19.183 08:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2
00:26:19.183 08:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:26:19.183 08:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:26:19.183 08:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:26:19.183 08:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 --
# keyid=2 00:26:19.183 08:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Yzc3YWVmYzM4ZTFiNDA4Y2I0MDAwMzA1NmE1MjY4ZGSMAh8w: 00:26:19.183 08:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjMyYWIxMTIxNzI4OTA1NDRiMWVhMDBkNGYxZDIxZmShtCSF: 00:26:19.183 08:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:19.183 08:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:19.183 08:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Yzc3YWVmYzM4ZTFiNDA4Y2I0MDAwMzA1NmE1MjY4ZGSMAh8w: 00:26:19.183 08:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjMyYWIxMTIxNzI4OTA1NDRiMWVhMDBkNGYxZDIxZmShtCSF: ]] 00:26:19.183 08:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MjMyYWIxMTIxNzI4OTA1NDRiMWVhMDBkNGYxZDIxZmShtCSF: 00:26:19.183 08:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:26:19.183 08:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:19.183 08:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:19.183 08:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:19.183 08:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:19.183 08:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:19.183 08:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:26:19.183 08:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:19.183 08:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:19.183 08:25:01 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:19.183 08:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:19.183 08:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:19.183 08:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:19.183 08:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:19.183 08:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:19.183 08:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:19.183 08:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:19.183 08:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:19.183 08:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:19.183 08:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:19.183 08:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:19.183 08:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:19.183 08:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:19.183 08:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:19.442 nvme0n1 00:26:19.442 08:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:19.442 08:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:19.442 08:25:01 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:19.442 08:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:19.442 08:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:19.442 08:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:19.442 08:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:19.442 08:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:19.442 08:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:19.442 08:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:19.442 08:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:19.442 08:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:19.442 08:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:26:19.442 08:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:19.442 08:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:19.442 08:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:19.442 08:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:19.442 08:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MjBiMzk1N2YzNTA5Yzc4MTFlNGYxZWEyZDMyM2RiMzUzODU1ZWYyMGZmYzMzNzNklbKHTQ==: 00:26:19.442 08:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDhlYWZmZjc5MzY4ZmE1ZmI5OGQ1NGZkOTQyMjlmNTSNukVc: 00:26:19.442 08:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:19.442 08:25:01 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:19.443 08:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MjBiMzk1N2YzNTA5Yzc4MTFlNGYxZWEyZDMyM2RiMzUzODU1ZWYyMGZmYzMzNzNklbKHTQ==: 00:26:19.443 08:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDhlYWZmZjc5MzY4ZmE1ZmI5OGQ1NGZkOTQyMjlmNTSNukVc: ]] 00:26:19.443 08:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDhlYWZmZjc5MzY4ZmE1ZmI5OGQ1NGZkOTQyMjlmNTSNukVc: 00:26:19.443 08:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:26:19.443 08:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:19.443 08:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:19.443 08:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:19.443 08:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:19.443 08:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:19.443 08:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:26:19.443 08:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:19.443 08:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:19.443 08:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:19.443 08:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:19.443 08:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:19.443 08:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:19.443 08:25:01 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:19.443 08:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:19.443 08:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:19.443 08:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:19.443 08:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:19.443 08:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:19.443 08:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:19.443 08:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:19.443 08:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:19.443 08:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:19.443 08:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:20.024 nvme0n1 00:26:20.024 08:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:20.024 08:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:20.024 08:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:20.024 08:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:20.024 08:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:20.024 08:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:20.024 08:25:02 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:20.024 08:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:20.024 08:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:20.024 08:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:20.024 08:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:20.024 08:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:20.024 08:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:26:20.024 08:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:20.024 08:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:20.024 08:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:20.024 08:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:20.024 08:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZjkyYmZmZDBjZTI4ZmJlZDNhNTczNDIyNzliMWVmZDgxYzliMDk0N2E1ZWZkOTZhODY4NDc4YzczMmQ3ZjEyMcwMTok=: 00:26:20.024 08:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:20.024 08:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:20.024 08:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:20.024 08:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZjkyYmZmZDBjZTI4ZmJlZDNhNTczNDIyNzliMWVmZDgxYzliMDk0N2E1ZWZkOTZhODY4NDc4YzczMmQ3ZjEyMcwMTok=: 00:26:20.024 08:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:20.024 08:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate 
sha256 ffdhe6144 4 00:26:20.024 08:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:20.024 08:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:20.024 08:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:20.024 08:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:20.024 08:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:20.024 08:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:26:20.024 08:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:20.024 08:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:20.024 08:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:20.024 08:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:20.024 08:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:20.024 08:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:20.024 08:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:20.024 08:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:20.024 08:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:20.024 08:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:20.024 08:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:20.024 08:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:20.024 08:25:02 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:20.024 08:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:20.024 08:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:20.024 08:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:20.024 08:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:20.283 nvme0n1 00:26:20.283 08:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:20.283 08:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:20.283 08:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:20.283 08:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:20.283 08:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:20.283 08:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:20.542 08:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:20.542 08:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:20.542 08:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:20.542 08:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:20.542 08:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:20.542 08:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:20.542 08:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:20.542 08:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:26:20.542 08:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:20.542 08:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:20.542 08:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:20.542 08:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:20.542 08:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDNkY2I0MjBlZDBlNjczNzk1ZGY4MDE0NjNkZTk1ZTKuldDr: 00:26:20.542 08:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YjY0ZTZjZjQyYTIzYTA5ODEyODM2Y2UyMjJhNDhlYWE5MWU5YzljNmM0ZjJmYzY4YzMzOWIzNTEyNGZiMGZmNZtX/Zo=: 00:26:20.542 08:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:20.542 08:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:20.542 08:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDNkY2I0MjBlZDBlNjczNzk1ZGY4MDE0NjNkZTk1ZTKuldDr: 00:26:20.542 08:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YjY0ZTZjZjQyYTIzYTA5ODEyODM2Y2UyMjJhNDhlYWE5MWU5YzljNmM0ZjJmYzY4YzMzOWIzNTEyNGZiMGZmNZtX/Zo=: ]] 00:26:20.542 08:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YjY0ZTZjZjQyYTIzYTA5ODEyODM2Y2UyMjJhNDhlYWE5MWU5YzljNmM0ZjJmYzY4YzMzOWIzNTEyNGZiMGZmNZtX/Zo=: 00:26:20.542 08:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:26:20.542 08:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:20.542 08:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:20.542 08:25:02 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:20.542 08:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:20.542 08:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:20.542 08:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:26:20.542 08:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:20.542 08:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:20.542 08:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:20.542 08:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:20.542 08:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:20.542 08:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:20.542 08:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:20.542 08:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:20.542 08:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:20.542 08:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:20.542 08:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:20.542 08:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:20.542 08:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:20.542 08:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:20.542 08:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:20.542 08:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:20.542 08:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:21.109 nvme0n1 00:26:21.109 08:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:21.109 08:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:21.109 08:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:21.109 08:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:21.109 08:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:21.109 08:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:21.109 08:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:21.109 08:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:21.109 08:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:21.109 08:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:21.109 08:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:21.109 08:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:21.109 08:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:26:21.109 08:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:21.109 08:25:03 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:21.109 08:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:21.109 08:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:21.109 08:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzRiNDdkMjZhZjllM2JjY2U2MmQzN2FlN2Q2YTI2MjdiMGFiNzc4NTBiNzAzZjBm4eFRHA==: 00:26:21.109 08:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTBhY2E5ZGY4ZDhkYTUyNTZhMjU5NDFmOWI2YmJkODY2ZWFiZTlmYWYzMDczMGQ50Tsv9w==: 00:26:21.109 08:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:21.109 08:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:21.109 08:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzRiNDdkMjZhZjllM2JjY2U2MmQzN2FlN2Q2YTI2MjdiMGFiNzc4NTBiNzAzZjBm4eFRHA==: 00:26:21.109 08:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTBhY2E5ZGY4ZDhkYTUyNTZhMjU5NDFmOWI2YmJkODY2ZWFiZTlmYWYzMDczMGQ50Tsv9w==: ]] 00:26:21.109 08:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTBhY2E5ZGY4ZDhkYTUyNTZhMjU5NDFmOWI2YmJkODY2ZWFiZTlmYWYzMDczMGQ50Tsv9w==: 00:26:21.109 08:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:26:21.109 08:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:21.109 08:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:21.109 08:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:21.109 08:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:21.110 08:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:21.110 08:25:03 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:26:21.110 08:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:21.110 08:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:21.110 08:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:21.110 08:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:21.110 08:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:21.110 08:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:21.110 08:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:21.110 08:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:21.110 08:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:21.110 08:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:21.110 08:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:21.110 08:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:21.110 08:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:21.110 08:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:21.110 08:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:21.110 08:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:21.110 08:25:03 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:21.677 nvme0n1 00:26:21.677 08:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:21.677 08:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:21.677 08:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:21.677 08:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:21.677 08:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:21.677 08:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:21.677 08:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:21.677 08:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:21.677 08:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:21.677 08:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:21.677 08:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:21.677 08:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:21.677 08:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:26:21.677 08:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:21.677 08:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:21.677 08:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:21.677 08:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:21.677 08:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:01:Yzc3YWVmYzM4ZTFiNDA4Y2I0MDAwMzA1NmE1MjY4ZGSMAh8w: 00:26:21.677 08:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjMyYWIxMTIxNzI4OTA1NDRiMWVhMDBkNGYxZDIxZmShtCSF: 00:26:21.677 08:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:21.677 08:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:21.677 08:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Yzc3YWVmYzM4ZTFiNDA4Y2I0MDAwMzA1NmE1MjY4ZGSMAh8w: 00:26:21.677 08:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjMyYWIxMTIxNzI4OTA1NDRiMWVhMDBkNGYxZDIxZmShtCSF: ]] 00:26:21.677 08:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MjMyYWIxMTIxNzI4OTA1NDRiMWVhMDBkNGYxZDIxZmShtCSF: 00:26:21.677 08:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:26:21.677 08:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:21.677 08:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:21.677 08:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:21.677 08:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:21.677 08:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:21.677 08:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:26:21.677 08:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:21.677 08:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:21.677 08:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:21.677 08:25:03 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:21.677 08:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:21.677 08:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:21.677 08:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:21.677 08:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:21.677 08:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:21.677 08:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:21.677 08:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:21.677 08:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:21.677 08:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:21.677 08:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:21.677 08:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:21.677 08:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:21.677 08:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:22.246 nvme0n1 00:26:22.246 08:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:22.246 08:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:22.246 08:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:22.246 08:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:26:22.246 08:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:22.246 08:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:22.246 08:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:22.246 08:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:22.246 08:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:22.246 08:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:22.505 08:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:22.505 08:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:22.505 08:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:26:22.505 08:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:22.505 08:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:22.505 08:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:22.505 08:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:22.505 08:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MjBiMzk1N2YzNTA5Yzc4MTFlNGYxZWEyZDMyM2RiMzUzODU1ZWYyMGZmYzMzNzNklbKHTQ==: 00:26:22.505 08:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDhlYWZmZjc5MzY4ZmE1ZmI5OGQ1NGZkOTQyMjlmNTSNukVc: 00:26:22.505 08:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:22.505 08:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:22.505 08:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@50 -- # echo DHHC-1:02:MjBiMzk1N2YzNTA5Yzc4MTFlNGYxZWEyZDMyM2RiMzUzODU1ZWYyMGZmYzMzNzNklbKHTQ==: 00:26:22.505 08:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDhlYWZmZjc5MzY4ZmE1ZmI5OGQ1NGZkOTQyMjlmNTSNukVc: ]] 00:26:22.505 08:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDhlYWZmZjc5MzY4ZmE1ZmI5OGQ1NGZkOTQyMjlmNTSNukVc: 00:26:22.505 08:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:26:22.505 08:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:22.505 08:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:22.505 08:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:22.505 08:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:22.505 08:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:22.505 08:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:26:22.505 08:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:22.505 08:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:22.505 08:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:22.505 08:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:22.505 08:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:22.505 08:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:22.505 08:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:22.505 08:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:22.505 08:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:22.505 08:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:22.505 08:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:22.505 08:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:22.505 08:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:22.505 08:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:22.505 08:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:22.505 08:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:22.505 08:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:23.073 nvme0n1 00:26:23.073 08:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:23.073 08:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:23.073 08:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:23.073 08:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:23.073 08:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:23.073 08:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:23.073 08:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:23.073 08:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:23.073 08:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:23.073 08:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:23.073 08:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:23.073 08:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:23.073 08:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:26:23.073 08:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:23.073 08:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:23.073 08:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:23.073 08:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:23.073 08:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZjkyYmZmZDBjZTI4ZmJlZDNhNTczNDIyNzliMWVmZDgxYzliMDk0N2E1ZWZkOTZhODY4NDc4YzczMmQ3ZjEyMcwMTok=: 00:26:23.073 08:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:23.073 08:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:23.073 08:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:23.073 08:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZjkyYmZmZDBjZTI4ZmJlZDNhNTczNDIyNzliMWVmZDgxYzliMDk0N2E1ZWZkOTZhODY4NDc4YzczMmQ3ZjEyMcwMTok=: 00:26:23.073 08:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:23.073 08:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:26:23.073 08:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:23.073 
08:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:23.073 08:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:23.073 08:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:23.073 08:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:23.073 08:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:26:23.073 08:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:23.073 08:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:23.073 08:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:23.073 08:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:23.073 08:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:23.073 08:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:23.073 08:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:23.073 08:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:23.073 08:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:23.073 08:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:23.073 08:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:23.073 08:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:23.073 08:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:23.073 08:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:23.073 08:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:23.073 08:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:23.073 08:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:23.639 nvme0n1 00:26:23.640 08:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:23.640 08:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:23.640 08:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:23.640 08:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:23.640 08:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:23.640 08:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:23.640 08:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:23.640 08:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:23.640 08:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:23.640 08:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:23.640 08:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:23.640 08:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:26:23.640 08:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:23.640 08:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid 
in "${!keys[@]}" 00:26:23.640 08:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:26:23.640 08:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:23.640 08:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:23.640 08:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:23.640 08:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:23.640 08:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDNkY2I0MjBlZDBlNjczNzk1ZGY4MDE0NjNkZTk1ZTKuldDr: 00:26:23.640 08:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YjY0ZTZjZjQyYTIzYTA5ODEyODM2Y2UyMjJhNDhlYWE5MWU5YzljNmM0ZjJmYzY4YzMzOWIzNTEyNGZiMGZmNZtX/Zo=: 00:26:23.640 08:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:23.640 08:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:23.640 08:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDNkY2I0MjBlZDBlNjczNzk1ZGY4MDE0NjNkZTk1ZTKuldDr: 00:26:23.640 08:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YjY0ZTZjZjQyYTIzYTA5ODEyODM2Y2UyMjJhNDhlYWE5MWU5YzljNmM0ZjJmYzY4YzMzOWIzNTEyNGZiMGZmNZtX/Zo=: ]] 00:26:23.640 08:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YjY0ZTZjZjQyYTIzYTA5ODEyODM2Y2UyMjJhNDhlYWE5MWU5YzljNmM0ZjJmYzY4YzMzOWIzNTEyNGZiMGZmNZtX/Zo=: 00:26:23.640 08:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:26:23.640 08:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:23.640 08:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:23.640 08:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # 
dhgroup=ffdhe2048 00:26:23.640 08:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:23.640 08:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:23.640 08:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:26:23.640 08:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:23.640 08:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:23.640 08:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:23.640 08:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:23.640 08:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:23.640 08:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:23.640 08:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:23.640 08:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:23.640 08:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:23.640 08:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:23.640 08:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:23.640 08:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:23.640 08:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:23.640 08:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:23.640 08:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:23.640 08:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:23.640 08:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:23.899 nvme0n1 00:26:23.899 08:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:23.899 08:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:23.899 08:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:23.899 08:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:23.899 08:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:23.899 08:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:23.899 08:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:23.899 08:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:23.899 08:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:23.899 08:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:23.899 08:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:23.899 08:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:23.899 08:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:26:23.899 08:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:23.899 08:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:23.899 
08:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:23.899 08:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:23.899 08:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzRiNDdkMjZhZjllM2JjY2U2MmQzN2FlN2Q2YTI2MjdiMGFiNzc4NTBiNzAzZjBm4eFRHA==: 00:26:23.899 08:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTBhY2E5ZGY4ZDhkYTUyNTZhMjU5NDFmOWI2YmJkODY2ZWFiZTlmYWYzMDczMGQ50Tsv9w==: 00:26:23.899 08:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:23.899 08:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:23.899 08:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzRiNDdkMjZhZjllM2JjY2U2MmQzN2FlN2Q2YTI2MjdiMGFiNzc4NTBiNzAzZjBm4eFRHA==: 00:26:23.899 08:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTBhY2E5ZGY4ZDhkYTUyNTZhMjU5NDFmOWI2YmJkODY2ZWFiZTlmYWYzMDczMGQ50Tsv9w==: ]] 00:26:23.899 08:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTBhY2E5ZGY4ZDhkYTUyNTZhMjU5NDFmOWI2YmJkODY2ZWFiZTlmYWYzMDczMGQ50Tsv9w==: 00:26:23.899 08:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:26:23.899 08:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:23.899 08:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:23.899 08:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:23.899 08:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:23.900 08:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:23.900 08:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests 
sha384 --dhchap-dhgroups ffdhe2048 00:26:23.900 08:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:23.900 08:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:23.900 08:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:23.900 08:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:23.900 08:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:23.900 08:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:23.900 08:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:23.900 08:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:23.900 08:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:23.900 08:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:23.900 08:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:23.900 08:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:23.900 08:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:23.900 08:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:23.900 08:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:23.900 08:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:23.900 08:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:24.159 nvme0n1 
00:26:24.159 08:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:24.159 08:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:24.159 08:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:24.159 08:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:24.159 08:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:24.159 08:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:24.159 08:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:24.159 08:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:24.159 08:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:24.159 08:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:24.159 08:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:24.159 08:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:24.159 08:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:26:24.159 08:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:24.159 08:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:24.159 08:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:24.159 08:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:24.159 08:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Yzc3YWVmYzM4ZTFiNDA4Y2I0MDAwMzA1NmE1MjY4ZGSMAh8w: 00:26:24.159 08:25:06 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjMyYWIxMTIxNzI4OTA1NDRiMWVhMDBkNGYxZDIxZmShtCSF: 00:26:24.159 08:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:24.159 08:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:24.159 08:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Yzc3YWVmYzM4ZTFiNDA4Y2I0MDAwMzA1NmE1MjY4ZGSMAh8w: 00:26:24.159 08:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjMyYWIxMTIxNzI4OTA1NDRiMWVhMDBkNGYxZDIxZmShtCSF: ]] 00:26:24.159 08:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MjMyYWIxMTIxNzI4OTA1NDRiMWVhMDBkNGYxZDIxZmShtCSF: 00:26:24.159 08:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:26:24.159 08:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:24.159 08:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:24.159 08:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:24.159 08:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:24.159 08:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:24.159 08:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:26:24.159 08:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:24.159 08:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:24.159 08:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:24.159 08:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:24.159 
08:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:24.159 08:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:24.159 08:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:24.159 08:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:24.159 08:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:24.159 08:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:24.159 08:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:24.159 08:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:24.159 08:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:24.159 08:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:24.159 08:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:24.159 08:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:24.159 08:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:24.159 nvme0n1 00:26:24.160 08:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:24.160 08:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:24.160 08:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:24.160 08:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:24.160 08:25:06 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:24.160 08:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:24.418 08:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:24.418 08:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:24.418 08:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:24.418 08:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:24.418 08:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:24.418 08:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:24.418 08:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:26:24.418 08:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:24.418 08:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:24.418 08:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:24.418 08:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:24.418 08:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MjBiMzk1N2YzNTA5Yzc4MTFlNGYxZWEyZDMyM2RiMzUzODU1ZWYyMGZmYzMzNzNklbKHTQ==: 00:26:24.418 08:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDhlYWZmZjc5MzY4ZmE1ZmI5OGQ1NGZkOTQyMjlmNTSNukVc: 00:26:24.418 08:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:24.418 08:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:24.418 08:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:02:MjBiMzk1N2YzNTA5Yzc4MTFlNGYxZWEyZDMyM2RiMzUzODU1ZWYyMGZmYzMzNzNklbKHTQ==: 00:26:24.418 08:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDhlYWZmZjc5MzY4ZmE1ZmI5OGQ1NGZkOTQyMjlmNTSNukVc: ]] 00:26:24.418 08:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDhlYWZmZjc5MzY4ZmE1ZmI5OGQ1NGZkOTQyMjlmNTSNukVc: 00:26:24.418 08:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:26:24.418 08:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:24.418 08:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:24.418 08:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:24.418 08:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:24.418 08:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:24.418 08:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:26:24.418 08:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:24.418 08:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:24.418 08:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:24.418 08:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:24.418 08:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:24.418 08:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:24.418 08:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:24.418 08:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:24.418 08:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:24.418 08:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:24.418 08:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:24.418 08:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:24.418 08:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:24.418 08:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:24.418 08:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:24.418 08:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:24.418 08:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:24.418 nvme0n1 00:26:24.418 08:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:24.418 08:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:24.418 08:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:24.418 08:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:24.418 08:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:24.418 08:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:24.418 08:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:24.418 08:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:26:24.418 08:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:24.418 08:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:24.677 08:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:24.677 08:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:24.677 08:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:26:24.677 08:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:24.677 08:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:24.677 08:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:24.677 08:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:24.677 08:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZjkyYmZmZDBjZTI4ZmJlZDNhNTczNDIyNzliMWVmZDgxYzliMDk0N2E1ZWZkOTZhODY4NDc4YzczMmQ3ZjEyMcwMTok=: 00:26:24.677 08:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:24.677 08:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:24.677 08:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:24.677 08:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZjkyYmZmZDBjZTI4ZmJlZDNhNTczNDIyNzliMWVmZDgxYzliMDk0N2E1ZWZkOTZhODY4NDc4YzczMmQ3ZjEyMcwMTok=: 00:26:24.677 08:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:24.677 08:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:26:24.677 08:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:24.677 08:25:06 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:24.677 08:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:24.678 08:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:24.678 08:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:24.678 08:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:26:24.678 08:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:24.678 08:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:24.678 08:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:24.678 08:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:24.678 08:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:24.678 08:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:24.678 08:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:24.678 08:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:24.678 08:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:24.678 08:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:24.678 08:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:24.678 08:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:24.678 08:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:24.678 08:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:24.678 08:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:24.678 08:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:24.678 08:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:24.678 nvme0n1 00:26:24.678 08:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:24.678 08:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:24.678 08:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:24.678 08:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:24.678 08:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:24.678 08:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:24.678 08:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:24.678 08:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:24.678 08:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:24.678 08:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:24.678 08:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:24.678 08:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:24.678 08:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:24.678 08:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe3072 0 00:26:24.678 08:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:24.678 08:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:24.678 08:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:24.678 08:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:24.678 08:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDNkY2I0MjBlZDBlNjczNzk1ZGY4MDE0NjNkZTk1ZTKuldDr: 00:26:24.678 08:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YjY0ZTZjZjQyYTIzYTA5ODEyODM2Y2UyMjJhNDhlYWE5MWU5YzljNmM0ZjJmYzY4YzMzOWIzNTEyNGZiMGZmNZtX/Zo=: 00:26:24.678 08:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:24.678 08:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:24.678 08:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDNkY2I0MjBlZDBlNjczNzk1ZGY4MDE0NjNkZTk1ZTKuldDr: 00:26:24.678 08:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YjY0ZTZjZjQyYTIzYTA5ODEyODM2Y2UyMjJhNDhlYWE5MWU5YzljNmM0ZjJmYzY4YzMzOWIzNTEyNGZiMGZmNZtX/Zo=: ]] 00:26:24.678 08:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YjY0ZTZjZjQyYTIzYTA5ODEyODM2Y2UyMjJhNDhlYWE5MWU5YzljNmM0ZjJmYzY4YzMzOWIzNTEyNGZiMGZmNZtX/Zo=: 00:26:24.678 08:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:26:24.678 08:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:24.678 08:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:24.678 08:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:24.678 08:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # 
keyid=0 00:26:24.678 08:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:24.678 08:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:26:24.678 08:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:24.678 08:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:24.678 08:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:24.678 08:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:24.678 08:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:24.678 08:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:24.678 08:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:24.678 08:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:24.678 08:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:24.678 08:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:24.678 08:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:24.678 08:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:24.678 08:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:24.678 08:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:24.678 08:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 
--dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:24.678 08:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:24.678 08:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:24.937 nvme0n1 00:26:24.937 08:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:24.937 08:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:24.937 08:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:24.937 08:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:24.937 08:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:24.937 08:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:24.937 08:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:24.937 08:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:24.937 08:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:24.937 08:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:24.937 08:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:24.937 08:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:24.937 08:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:26:24.937 08:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:24.937 08:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:24.937 08:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:24.937 
08:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:24.937 08:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzRiNDdkMjZhZjllM2JjY2U2MmQzN2FlN2Q2YTI2MjdiMGFiNzc4NTBiNzAzZjBm4eFRHA==: 00:26:24.937 08:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTBhY2E5ZGY4ZDhkYTUyNTZhMjU5NDFmOWI2YmJkODY2ZWFiZTlmYWYzMDczMGQ50Tsv9w==: 00:26:24.938 08:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:24.938 08:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:24.938 08:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzRiNDdkMjZhZjllM2JjY2U2MmQzN2FlN2Q2YTI2MjdiMGFiNzc4NTBiNzAzZjBm4eFRHA==: 00:26:24.938 08:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTBhY2E5ZGY4ZDhkYTUyNTZhMjU5NDFmOWI2YmJkODY2ZWFiZTlmYWYzMDczMGQ50Tsv9w==: ]] 00:26:24.938 08:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTBhY2E5ZGY4ZDhkYTUyNTZhMjU5NDFmOWI2YmJkODY2ZWFiZTlmYWYzMDczMGQ50Tsv9w==: 00:26:24.938 08:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:26:24.938 08:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:24.938 08:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:24.938 08:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:24.938 08:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:24.938 08:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:24.938 08:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:26:24.938 08:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:26:24.938 08:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:24.938 08:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:24.938 08:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:24.938 08:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:24.938 08:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:24.938 08:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:24.938 08:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:24.938 08:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:24.938 08:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:24.938 08:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:24.938 08:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:24.938 08:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:24.938 08:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:24.938 08:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:24.938 08:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:24.938 08:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:25.198 nvme0n1 00:26:25.198 08:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:26:25.198 08:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:25.198 08:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:25.198 08:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:25.198 08:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:25.198 08:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:25.198 08:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:25.198 08:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:25.198 08:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:25.198 08:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:25.198 08:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:25.198 08:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:25.198 08:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:26:25.198 08:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:25.198 08:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:25.198 08:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:25.198 08:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:25.198 08:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Yzc3YWVmYzM4ZTFiNDA4Y2I0MDAwMzA1NmE1MjY4ZGSMAh8w: 00:26:25.198 08:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjMyYWIxMTIxNzI4OTA1NDRiMWVhMDBkNGYxZDIxZmShtCSF: 
00:26:25.198 08:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:25.198 08:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:25.198 08:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Yzc3YWVmYzM4ZTFiNDA4Y2I0MDAwMzA1NmE1MjY4ZGSMAh8w: 00:26:25.198 08:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjMyYWIxMTIxNzI4OTA1NDRiMWVhMDBkNGYxZDIxZmShtCSF: ]] 00:26:25.198 08:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MjMyYWIxMTIxNzI4OTA1NDRiMWVhMDBkNGYxZDIxZmShtCSF: 00:26:25.198 08:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:26:25.198 08:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:25.198 08:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:25.198 08:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:25.198 08:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:25.198 08:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:25.198 08:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:26:25.198 08:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:25.198 08:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:25.198 08:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:25.198 08:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:25.198 08:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:25.198 08:25:07 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:25.198 08:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:25.198 08:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:25.198 08:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:25.198 08:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:25.198 08:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:25.198 08:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:25.198 08:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:25.198 08:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:25.198 08:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:25.198 08:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:25.198 08:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:25.456 nvme0n1 00:26:25.456 08:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:25.456 08:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:25.456 08:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:25.456 08:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:25.456 08:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:25.456 08:25:07 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:25.456 08:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:25.456 08:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:25.456 08:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:25.456 08:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:25.456 08:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:25.456 08:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:25.456 08:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:26:25.456 08:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:25.456 08:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:25.456 08:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:25.456 08:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:25.456 08:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MjBiMzk1N2YzNTA5Yzc4MTFlNGYxZWEyZDMyM2RiMzUzODU1ZWYyMGZmYzMzNzNklbKHTQ==: 00:26:25.456 08:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDhlYWZmZjc5MzY4ZmE1ZmI5OGQ1NGZkOTQyMjlmNTSNukVc: 00:26:25.456 08:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:25.456 08:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:25.456 08:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MjBiMzk1N2YzNTA5Yzc4MTFlNGYxZWEyZDMyM2RiMzUzODU1ZWYyMGZmYzMzNzNklbKHTQ==: 00:26:25.456 08:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # [[ -z DHHC-1:00:NDhlYWZmZjc5MzY4ZmE1ZmI5OGQ1NGZkOTQyMjlmNTSNukVc: ]] 00:26:25.456 08:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDhlYWZmZjc5MzY4ZmE1ZmI5OGQ1NGZkOTQyMjlmNTSNukVc: 00:26:25.456 08:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:26:25.456 08:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:25.456 08:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:25.456 08:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:25.456 08:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:25.456 08:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:25.456 08:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:26:25.456 08:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:25.456 08:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:25.456 08:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:25.456 08:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:25.456 08:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:25.456 08:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:25.456 08:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:25.456 08:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:25.456 08:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:25.456 08:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:25.456 08:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:25.456 08:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:25.456 08:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:25.456 08:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:25.456 08:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:25.456 08:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:25.456 08:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:25.714 nvme0n1 00:26:25.714 08:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:25.714 08:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:25.714 08:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:25.714 08:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:25.714 08:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:25.714 08:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:25.714 08:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:25.714 08:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:25.714 08:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:26:25.714 08:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:25.714 08:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:25.714 08:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:25.714 08:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:26:25.714 08:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:25.714 08:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:25.714 08:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:25.714 08:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:25.714 08:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZjkyYmZmZDBjZTI4ZmJlZDNhNTczNDIyNzliMWVmZDgxYzliMDk0N2E1ZWZkOTZhODY4NDc4YzczMmQ3ZjEyMcwMTok=: 00:26:25.714 08:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:25.714 08:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:25.714 08:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:25.714 08:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZjkyYmZmZDBjZTI4ZmJlZDNhNTczNDIyNzliMWVmZDgxYzliMDk0N2E1ZWZkOTZhODY4NDc4YzczMmQ3ZjEyMcwMTok=: 00:26:25.714 08:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:25.714 08:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:26:25.714 08:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:25.714 08:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:25.714 08:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- 
# dhgroup=ffdhe3072 00:26:25.714 08:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:25.715 08:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:25.715 08:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:26:25.715 08:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:25.715 08:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:25.715 08:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:25.715 08:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:25.715 08:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:25.715 08:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:25.715 08:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:25.715 08:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:25.715 08:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:25.715 08:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:25.715 08:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:25.715 08:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:25.715 08:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:25.715 08:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:25.715 08:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 
-t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:25.715 08:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:25.715 08:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:25.974 nvme0n1 00:26:25.974 08:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:25.974 08:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:25.974 08:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:25.974 08:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:25.974 08:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:25.974 08:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:25.974 08:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:25.974 08:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:25.974 08:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:25.974 08:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:25.974 08:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:25.974 08:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:25.974 08:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:25.974 08:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:26:25.974 08:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:25.974 08:25:08 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:25.974 08:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:25.974 08:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:25.974 08:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDNkY2I0MjBlZDBlNjczNzk1ZGY4MDE0NjNkZTk1ZTKuldDr: 00:26:25.974 08:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YjY0ZTZjZjQyYTIzYTA5ODEyODM2Y2UyMjJhNDhlYWE5MWU5YzljNmM0ZjJmYzY4YzMzOWIzNTEyNGZiMGZmNZtX/Zo=: 00:26:25.974 08:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:25.974 08:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:25.974 08:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDNkY2I0MjBlZDBlNjczNzk1ZGY4MDE0NjNkZTk1ZTKuldDr: 00:26:25.974 08:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YjY0ZTZjZjQyYTIzYTA5ODEyODM2Y2UyMjJhNDhlYWE5MWU5YzljNmM0ZjJmYzY4YzMzOWIzNTEyNGZiMGZmNZtX/Zo=: ]] 00:26:25.974 08:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YjY0ZTZjZjQyYTIzYTA5ODEyODM2Y2UyMjJhNDhlYWE5MWU5YzljNmM0ZjJmYzY4YzMzOWIzNTEyNGZiMGZmNZtX/Zo=: 00:26:25.974 08:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:26:25.974 08:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:25.974 08:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:25.974 08:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:25.974 08:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:25.974 08:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:25.974 08:25:08 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:26:25.974 08:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:25.974 08:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:25.974 08:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:25.974 08:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:25.974 08:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:25.974 08:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:25.974 08:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:25.974 08:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:25.974 08:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:25.974 08:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:25.974 08:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:25.974 08:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:25.974 08:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:25.974 08:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:25.974 08:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:25.974 08:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:25.974 08:25:08 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:26.233 nvme0n1 00:26:26.233 08:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:26.233 08:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:26.233 08:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:26.233 08:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:26.233 08:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:26.233 08:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:26.233 08:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:26.233 08:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:26.233 08:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:26.233 08:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:26.491 08:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:26.491 08:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:26.491 08:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:26:26.491 08:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:26.491 08:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:26.491 08:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:26.491 08:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:26.491 08:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:YzRiNDdkMjZhZjllM2JjY2U2MmQzN2FlN2Q2YTI2MjdiMGFiNzc4NTBiNzAzZjBm4eFRHA==: 00:26:26.491 08:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTBhY2E5ZGY4ZDhkYTUyNTZhMjU5NDFmOWI2YmJkODY2ZWFiZTlmYWYzMDczMGQ50Tsv9w==: 00:26:26.491 08:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:26.491 08:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:26.491 08:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzRiNDdkMjZhZjllM2JjY2U2MmQzN2FlN2Q2YTI2MjdiMGFiNzc4NTBiNzAzZjBm4eFRHA==: 00:26:26.491 08:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTBhY2E5ZGY4ZDhkYTUyNTZhMjU5NDFmOWI2YmJkODY2ZWFiZTlmYWYzMDczMGQ50Tsv9w==: ]] 00:26:26.491 08:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTBhY2E5ZGY4ZDhkYTUyNTZhMjU5NDFmOWI2YmJkODY2ZWFiZTlmYWYzMDczMGQ50Tsv9w==: 00:26:26.491 08:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:26:26.491 08:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:26.491 08:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:26.491 08:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:26.491 08:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:26.491 08:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:26.491 08:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:26:26.491 08:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:26.491 08:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:26.491 
08:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:26.491 08:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:26.491 08:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:26.491 08:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:26.491 08:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:26.491 08:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:26.491 08:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:26.491 08:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:26.491 08:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:26.491 08:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:26.491 08:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:26.491 08:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:26.491 08:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:26.491 08:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:26.491 08:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:26.491 nvme0n1 00:26:26.491 08:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:26.751 08:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:26.751 08:25:08 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:26.751 08:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:26.751 08:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:26.751 08:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:26.751 08:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:26.751 08:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:26.751 08:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:26.751 08:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:26.751 08:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:26.751 08:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:26.751 08:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:26:26.751 08:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:26.751 08:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:26.751 08:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:26.751 08:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:26.751 08:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Yzc3YWVmYzM4ZTFiNDA4Y2I0MDAwMzA1NmE1MjY4ZGSMAh8w: 00:26:26.751 08:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjMyYWIxMTIxNzI4OTA1NDRiMWVhMDBkNGYxZDIxZmShtCSF: 00:26:26.751 08:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:26.751 08:25:08 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:26.751 08:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Yzc3YWVmYzM4ZTFiNDA4Y2I0MDAwMzA1NmE1MjY4ZGSMAh8w: 00:26:26.751 08:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjMyYWIxMTIxNzI4OTA1NDRiMWVhMDBkNGYxZDIxZmShtCSF: ]] 00:26:26.751 08:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MjMyYWIxMTIxNzI4OTA1NDRiMWVhMDBkNGYxZDIxZmShtCSF: 00:26:26.751 08:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:26:26.751 08:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:26.751 08:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:26.751 08:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:26.751 08:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:26.751 08:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:26.751 08:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:26:26.751 08:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:26.751 08:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:26.751 08:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:26.751 08:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:26.751 08:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:26.751 08:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:26.751 08:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@770 -- # local -A ip_candidates 00:26:26.751 08:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:26.751 08:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:26.751 08:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:26.751 08:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:26.751 08:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:26.751 08:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:26.751 08:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:26.751 08:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:26.751 08:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:26.751 08:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:27.010 nvme0n1 00:26:27.010 08:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:27.010 08:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:27.010 08:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:27.010 08:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:27.010 08:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:27.010 08:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:27.010 08:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:27.010 08:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:27.010 08:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:27.010 08:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:27.010 08:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:27.010 08:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:27.010 08:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:26:27.010 08:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:27.010 08:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:27.010 08:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:27.010 08:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:27.010 08:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MjBiMzk1N2YzNTA5Yzc4MTFlNGYxZWEyZDMyM2RiMzUzODU1ZWYyMGZmYzMzNzNklbKHTQ==: 00:26:27.010 08:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDhlYWZmZjc5MzY4ZmE1ZmI5OGQ1NGZkOTQyMjlmNTSNukVc: 00:26:27.010 08:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:27.010 08:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:27.010 08:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MjBiMzk1N2YzNTA5Yzc4MTFlNGYxZWEyZDMyM2RiMzUzODU1ZWYyMGZmYzMzNzNklbKHTQ==: 00:26:27.010 08:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDhlYWZmZjc5MzY4ZmE1ZmI5OGQ1NGZkOTQyMjlmNTSNukVc: ]] 00:26:27.010 08:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:00:NDhlYWZmZjc5MzY4ZmE1ZmI5OGQ1NGZkOTQyMjlmNTSNukVc: 00:26:27.010 08:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:26:27.010 08:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:27.010 08:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:27.010 08:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:27.010 08:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:27.010 08:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:27.010 08:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:26:27.010 08:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:27.010 08:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:27.010 08:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:27.010 08:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:27.010 08:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:27.010 08:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:27.010 08:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:27.010 08:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:27.010 08:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:27.010 08:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:27.010 08:25:09 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:27.011 08:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:27.011 08:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:27.011 08:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:27.011 08:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:27.011 08:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:27.011 08:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:27.270 nvme0n1 00:26:27.270 08:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:27.270 08:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:27.270 08:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:27.270 08:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:27.270 08:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:27.270 08:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:27.270 08:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:27.270 08:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:27.270 08:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:27.270 08:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:27.270 08:25:09 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:27.270 08:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:27.270 08:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:26:27.270 08:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:27.270 08:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:27.270 08:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:27.270 08:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:27.270 08:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZjkyYmZmZDBjZTI4ZmJlZDNhNTczNDIyNzliMWVmZDgxYzliMDk0N2E1ZWZkOTZhODY4NDc4YzczMmQ3ZjEyMcwMTok=: 00:26:27.270 08:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:27.270 08:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:27.270 08:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:27.270 08:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZjkyYmZmZDBjZTI4ZmJlZDNhNTczNDIyNzliMWVmZDgxYzliMDk0N2E1ZWZkOTZhODY4NDc4YzczMmQ3ZjEyMcwMTok=: 00:26:27.270 08:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:27.270 08:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:26:27.270 08:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:27.270 08:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:27.270 08:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:27.270 08:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:27.270 08:25:09 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:27.270 08:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:26:27.270 08:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:27.270 08:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:27.270 08:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:27.270 08:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:27.270 08:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:27.270 08:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:27.270 08:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:27.270 08:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:27.270 08:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:27.270 08:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:27.270 08:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:27.270 08:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:27.270 08:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:27.270 08:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:27.270 08:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:27.270 
08:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:27.270 08:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:27.529 nvme0n1 00:26:27.529 08:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:27.529 08:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:27.529 08:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:27.529 08:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:27.529 08:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:27.529 08:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:27.529 08:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:27.529 08:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:27.529 08:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:27.529 08:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:27.529 08:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:27.529 08:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:27.529 08:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:27.529 08:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:26:27.530 08:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:27.530 08:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:27.530 08:25:09 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:27.530 08:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:27.530 08:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDNkY2I0MjBlZDBlNjczNzk1ZGY4MDE0NjNkZTk1ZTKuldDr: 00:26:27.530 08:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YjY0ZTZjZjQyYTIzYTA5ODEyODM2Y2UyMjJhNDhlYWE5MWU5YzljNmM0ZjJmYzY4YzMzOWIzNTEyNGZiMGZmNZtX/Zo=: 00:26:27.530 08:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:27.530 08:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:27.530 08:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDNkY2I0MjBlZDBlNjczNzk1ZGY4MDE0NjNkZTk1ZTKuldDr: 00:26:27.530 08:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YjY0ZTZjZjQyYTIzYTA5ODEyODM2Y2UyMjJhNDhlYWE5MWU5YzljNmM0ZjJmYzY4YzMzOWIzNTEyNGZiMGZmNZtX/Zo=: ]] 00:26:27.788 08:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YjY0ZTZjZjQyYTIzYTA5ODEyODM2Y2UyMjJhNDhlYWE5MWU5YzljNmM0ZjJmYzY4YzMzOWIzNTEyNGZiMGZmNZtX/Zo=: 00:26:27.788 08:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:26:27.788 08:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:27.788 08:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:27.788 08:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:27.788 08:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:27.788 08:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:27.788 08:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests 
sha384 --dhchap-dhgroups ffdhe6144 00:26:27.788 08:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:27.788 08:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:27.788 08:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:27.788 08:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:27.788 08:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:27.788 08:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:27.788 08:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:27.788 08:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:27.788 08:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:27.788 08:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:27.788 08:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:27.788 08:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:27.788 08:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:27.788 08:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:27.788 08:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:27.788 08:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:27.788 08:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:28.047 nvme0n1 
00:26:28.047 08:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:28.047 08:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:28.047 08:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:28.047 08:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:28.047 08:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:28.047 08:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:28.047 08:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:28.047 08:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:28.047 08:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:28.047 08:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:28.047 08:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:28.047 08:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:28.047 08:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:26:28.047 08:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:28.047 08:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:28.047 08:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:28.047 08:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:28.047 08:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzRiNDdkMjZhZjllM2JjY2U2MmQzN2FlN2Q2YTI2MjdiMGFiNzc4NTBiNzAzZjBm4eFRHA==: 00:26:28.047 08:25:10 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTBhY2E5ZGY4ZDhkYTUyNTZhMjU5NDFmOWI2YmJkODY2ZWFiZTlmYWYzMDczMGQ50Tsv9w==: 00:26:28.047 08:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:28.047 08:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:28.047 08:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzRiNDdkMjZhZjllM2JjY2U2MmQzN2FlN2Q2YTI2MjdiMGFiNzc4NTBiNzAzZjBm4eFRHA==: 00:26:28.047 08:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTBhY2E5ZGY4ZDhkYTUyNTZhMjU5NDFmOWI2YmJkODY2ZWFiZTlmYWYzMDczMGQ50Tsv9w==: ]] 00:26:28.047 08:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTBhY2E5ZGY4ZDhkYTUyNTZhMjU5NDFmOWI2YmJkODY2ZWFiZTlmYWYzMDczMGQ50Tsv9w==: 00:26:28.047 08:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:26:28.047 08:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:28.047 08:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:28.047 08:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:28.047 08:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:28.047 08:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:28.047 08:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:26:28.047 08:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:28.047 08:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:28.047 08:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:28.047 
08:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:28.047 08:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:28.047 08:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:28.047 08:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:28.047 08:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:28.047 08:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:28.047 08:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:28.047 08:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:28.047 08:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:28.047 08:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:28.047 08:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:28.047 08:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:28.047 08:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:28.047 08:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:28.615 nvme0n1 00:26:28.615 08:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:28.615 08:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:28.615 08:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:28.615 08:25:10 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:28.615 08:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:28.615 08:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:28.615 08:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:28.615 08:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:28.615 08:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:28.615 08:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:28.615 08:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:28.615 08:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:28.615 08:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:26:28.615 08:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:28.615 08:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:28.615 08:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:28.615 08:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:28.615 08:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Yzc3YWVmYzM4ZTFiNDA4Y2I0MDAwMzA1NmE1MjY4ZGSMAh8w: 00:26:28.615 08:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjMyYWIxMTIxNzI4OTA1NDRiMWVhMDBkNGYxZDIxZmShtCSF: 00:26:28.615 08:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:28.615 08:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:28.615 08:25:10 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Yzc3YWVmYzM4ZTFiNDA4Y2I0MDAwMzA1NmE1MjY4ZGSMAh8w: 00:26:28.615 08:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjMyYWIxMTIxNzI4OTA1NDRiMWVhMDBkNGYxZDIxZmShtCSF: ]] 00:26:28.615 08:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MjMyYWIxMTIxNzI4OTA1NDRiMWVhMDBkNGYxZDIxZmShtCSF: 00:26:28.615 08:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:26:28.615 08:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:28.615 08:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:28.615 08:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:28.615 08:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:28.615 08:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:28.615 08:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:26:28.615 08:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:28.615 08:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:28.615 08:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:28.615 08:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:28.615 08:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:28.615 08:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:28.615 08:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:28.615 08:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:28.615 08:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:28.615 08:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:28.615 08:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:28.615 08:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:28.615 08:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:28.615 08:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:28.615 08:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:28.615 08:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:28.615 08:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:28.874 nvme0n1 00:26:28.874 08:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:28.874 08:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:28.874 08:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:28.874 08:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:28.874 08:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:28.874 08:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:29.135 08:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:29.135 08:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:29.135 08:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:29.135 08:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:29.135 08:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:29.135 08:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:29.135 08:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:26:29.135 08:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:29.135 08:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:29.135 08:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:29.135 08:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:29.135 08:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MjBiMzk1N2YzNTA5Yzc4MTFlNGYxZWEyZDMyM2RiMzUzODU1ZWYyMGZmYzMzNzNklbKHTQ==: 00:26:29.135 08:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDhlYWZmZjc5MzY4ZmE1ZmI5OGQ1NGZkOTQyMjlmNTSNukVc: 00:26:29.135 08:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:29.135 08:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:29.135 08:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MjBiMzk1N2YzNTA5Yzc4MTFlNGYxZWEyZDMyM2RiMzUzODU1ZWYyMGZmYzMzNzNklbKHTQ==: 00:26:29.135 08:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDhlYWZmZjc5MzY4ZmE1ZmI5OGQ1NGZkOTQyMjlmNTSNukVc: ]] 00:26:29.135 08:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDhlYWZmZjc5MzY4ZmE1ZmI5OGQ1NGZkOTQyMjlmNTSNukVc: 00:26:29.135 08:25:11 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:26:29.135 08:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:29.135 08:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:29.135 08:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:29.135 08:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:29.135 08:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:29.135 08:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:26:29.135 08:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:29.135 08:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:29.135 08:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:29.135 08:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:29.135 08:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:29.135 08:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:29.135 08:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:29.135 08:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:29.135 08:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:29.135 08:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:29.135 08:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:29.135 08:25:11 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:29.135 08:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:29.135 08:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:29.135 08:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:29.135 08:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:29.135 08:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:29.394 nvme0n1 00:26:29.394 08:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:29.394 08:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:29.394 08:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:29.394 08:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:29.394 08:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:29.394 08:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:29.394 08:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:29.394 08:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:29.394 08:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:29.394 08:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:29.394 08:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:29.394 08:25:11 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:29.394 08:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:26:29.394 08:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:29.394 08:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:29.394 08:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:29.394 08:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:29.394 08:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZjkyYmZmZDBjZTI4ZmJlZDNhNTczNDIyNzliMWVmZDgxYzliMDk0N2E1ZWZkOTZhODY4NDc4YzczMmQ3ZjEyMcwMTok=: 00:26:29.394 08:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:29.394 08:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:29.394 08:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:29.394 08:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZjkyYmZmZDBjZTI4ZmJlZDNhNTczNDIyNzliMWVmZDgxYzliMDk0N2E1ZWZkOTZhODY4NDc4YzczMmQ3ZjEyMcwMTok=: 00:26:29.394 08:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:29.394 08:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:26:29.394 08:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:29.394 08:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:29.394 08:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:29.394 08:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:29.394 08:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:26:29.394 08:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:26:29.394 08:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:29.394 08:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:29.394 08:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:29.394 08:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:29.394 08:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:29.394 08:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:29.394 08:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:29.653 08:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:29.653 08:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:29.653 08:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:29.653 08:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:29.653 08:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:29.653 08:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:29.653 08:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:29.653 08:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:29.653 08:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 
00:26:29.653 08:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:29.912 nvme0n1 00:26:29.912 08:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:29.912 08:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:29.912 08:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:29.912 08:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:29.912 08:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:29.912 08:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:29.912 08:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:29.912 08:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:29.912 08:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:29.912 08:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:29.912 08:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:29.912 08:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:29.912 08:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:29.912 08:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:26:29.912 08:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:29.912 08:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:29.912 08:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:29.912 08:25:12 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:29.912 08:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDNkY2I0MjBlZDBlNjczNzk1ZGY4MDE0NjNkZTk1ZTKuldDr: 00:26:29.912 08:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YjY0ZTZjZjQyYTIzYTA5ODEyODM2Y2UyMjJhNDhlYWE5MWU5YzljNmM0ZjJmYzY4YzMzOWIzNTEyNGZiMGZmNZtX/Zo=: 00:26:29.912 08:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:29.912 08:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:29.912 08:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDNkY2I0MjBlZDBlNjczNzk1ZGY4MDE0NjNkZTk1ZTKuldDr: 00:26:29.912 08:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YjY0ZTZjZjQyYTIzYTA5ODEyODM2Y2UyMjJhNDhlYWE5MWU5YzljNmM0ZjJmYzY4YzMzOWIzNTEyNGZiMGZmNZtX/Zo=: ]] 00:26:29.912 08:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YjY0ZTZjZjQyYTIzYTA5ODEyODM2Y2UyMjJhNDhlYWE5MWU5YzljNmM0ZjJmYzY4YzMzOWIzNTEyNGZiMGZmNZtX/Zo=: 00:26:29.912 08:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:26:29.912 08:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:29.912 08:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:29.912 08:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:29.912 08:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:29.912 08:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:29.912 08:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:26:29.912 08:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:26:29.912 08:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:29.912 08:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:29.912 08:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:29.912 08:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:29.912 08:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:29.912 08:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:29.912 08:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:29.912 08:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:29.912 08:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:29.912 08:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:29.913 08:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:29.913 08:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:29.913 08:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:29.913 08:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:29.913 08:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:29.913 08:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:30.480 nvme0n1 00:26:30.481 08:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:26:30.481 08:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:30.481 08:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:30.481 08:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:30.481 08:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:30.481 08:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:30.481 08:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:30.481 08:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:30.481 08:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:30.481 08:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:30.481 08:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:30.481 08:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:30.481 08:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:26:30.481 08:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:30.481 08:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:30.481 08:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:30.481 08:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:30.739 08:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzRiNDdkMjZhZjllM2JjY2U2MmQzN2FlN2Q2YTI2MjdiMGFiNzc4NTBiNzAzZjBm4eFRHA==: 00:26:30.739 08:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:02:YTBhY2E5ZGY4ZDhkYTUyNTZhMjU5NDFmOWI2YmJkODY2ZWFiZTlmYWYzMDczMGQ50Tsv9w==:
00:26:30.739 08:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:26:30.739 08:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:26:30.739 08:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzRiNDdkMjZhZjllM2JjY2U2MmQzN2FlN2Q2YTI2MjdiMGFiNzc4NTBiNzAzZjBm4eFRHA==:
00:26:30.739 08:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTBhY2E5ZGY4ZDhkYTUyNTZhMjU5NDFmOWI2YmJkODY2ZWFiZTlmYWYzMDczMGQ50Tsv9w==: ]]
00:26:30.739 08:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTBhY2E5ZGY4ZDhkYTUyNTZhMjU5NDFmOWI2YmJkODY2ZWFiZTlmYWYzMDczMGQ50Tsv9w==:
00:26:30.739 08:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1
00:26:30.739 08:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:26:30.739 08:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:26:30.739 08:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:26:30.739 08:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:26:30.739 08:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:26:30.739 08:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:26:30.739 08:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:30.739 08:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:30.740 08:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:30.740 08:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:26:30.740 08:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:26:30.740 08:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:26:30.740 08:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:26:30.740 08:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:26:30.740 08:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:26:30.740 08:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:26:30.740 08:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:26:30.740 08:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:26:30.740 08:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:26:30.740 08:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:26:30.740 08:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:26:30.740 08:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:30.740 08:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:31.308 nvme0n1
00:26:31.308 08:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:31.308 08:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:26:31.308 08:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:26:31.308 08:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:31.308 08:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:31.308 08:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:31.308 08:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:26:31.308 08:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:26:31.308 08:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:31.308 08:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:31.308 08:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:31.308 08:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:26:31.308 08:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2
00:26:31.308 08:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:26:31.308 08:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:26:31.308 08:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:26:31.308 08:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:26:31.308 08:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Yzc3YWVmYzM4ZTFiNDA4Y2I0MDAwMzA1NmE1MjY4ZGSMAh8w:
00:26:31.308 08:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjMyYWIxMTIxNzI4OTA1NDRiMWVhMDBkNGYxZDIxZmShtCSF:
00:26:31.308 08:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:26:31.308 08:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:26:31.308 08:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Yzc3YWVmYzM4ZTFiNDA4Y2I0MDAwMzA1NmE1MjY4ZGSMAh8w:
00:26:31.308 08:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjMyYWIxMTIxNzI4OTA1NDRiMWVhMDBkNGYxZDIxZmShtCSF: ]]
00:26:31.308 08:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MjMyYWIxMTIxNzI4OTA1NDRiMWVhMDBkNGYxZDIxZmShtCSF:
00:26:31.308 08:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2
00:26:31.308 08:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:26:31.308 08:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:26:31.308 08:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:26:31.308 08:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:26:31.308 08:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:26:31.308 08:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:26:31.308 08:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:31.308 08:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:31.308 08:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:31.308 08:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:26:31.308 08:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:26:31.308 08:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:26:31.308 08:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:26:31.308 08:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:26:31.308 08:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:26:31.308 08:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:26:31.308 08:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:26:31.308 08:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:26:31.308 08:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:26:31.308 08:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:26:31.308 08:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:26:31.308 08:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:31.308 08:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:31.875 nvme0n1
00:26:31.875 08:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:31.875 08:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:26:31.875 08:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:26:31.875 08:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:31.875 08:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:31.875 08:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:31.875 08:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:26:31.875 08:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:26:31.875 08:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:31.875 08:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:31.875 08:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:31.875 08:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:26:31.875 08:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3
00:26:31.875 08:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:26:31.875 08:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:26:31.875 08:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:26:31.875 08:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:26:31.875 08:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MjBiMzk1N2YzNTA5Yzc4MTFlNGYxZWEyZDMyM2RiMzUzODU1ZWYyMGZmYzMzNzNklbKHTQ==:
00:26:31.875 08:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDhlYWZmZjc5MzY4ZmE1ZmI5OGQ1NGZkOTQyMjlmNTSNukVc:
00:26:31.876 08:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:26:31.876 08:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:26:31.876 08:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MjBiMzk1N2YzNTA5Yzc4MTFlNGYxZWEyZDMyM2RiMzUzODU1ZWYyMGZmYzMzNzNklbKHTQ==:
00:26:31.876 08:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDhlYWZmZjc5MzY4ZmE1ZmI5OGQ1NGZkOTQyMjlmNTSNukVc: ]]
00:26:31.876 08:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDhlYWZmZjc5MzY4ZmE1ZmI5OGQ1NGZkOTQyMjlmNTSNukVc:
00:26:31.876 08:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3
00:26:31.876 08:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:26:31.876 08:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:26:31.876 08:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:26:31.876 08:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:26:31.876 08:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:26:31.876 08:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:26:31.876 08:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:31.876 08:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:31.876 08:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:31.876 08:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:26:31.876 08:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:26:31.876 08:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:26:31.876 08:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:26:31.876 08:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:26:31.876 08:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:26:31.876 08:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:26:31.876 08:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:26:31.876 08:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:26:31.876 08:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:26:31.876 08:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:26:31.876 08:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:26:31.876 08:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:31.876 08:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:32.444 nvme0n1
00:26:32.444 08:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:32.444 08:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:26:32.444 08:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:26:32.444 08:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:32.444 08:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:32.444 08:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:32.445 08:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:26:32.445 08:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:26:32.445 08:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:32.445 08:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:32.445 08:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:32.445 08:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:26:32.445 08:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4
00:26:32.445 08:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:26:32.445 08:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:26:32.445 08:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:26:32.445 08:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:26:32.445 08:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZjkyYmZmZDBjZTI4ZmJlZDNhNTczNDIyNzliMWVmZDgxYzliMDk0N2E1ZWZkOTZhODY4NDc4YzczMmQ3ZjEyMcwMTok=:
00:26:32.445 08:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:26:32.445 08:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:26:32.445 08:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:26:32.445 08:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZjkyYmZmZDBjZTI4ZmJlZDNhNTczNDIyNzliMWVmZDgxYzliMDk0N2E1ZWZkOTZhODY4NDc4YzczMmQ3ZjEyMcwMTok=:
00:26:32.445 08:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:26:32.445 08:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4
00:26:32.445 08:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:26:32.445 08:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:26:32.445 08:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:26:32.445 08:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:26:32.445 08:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:26:32.445 08:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:26:32.445 08:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:32.445 08:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:32.445 08:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:32.445 08:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:26:32.445 08:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:26:32.445 08:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:26:32.704 08:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:26:32.704 08:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:26:32.704 08:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:26:32.704 08:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:26:32.704 08:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:26:32.704 08:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:26:32.704 08:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:26:32.704 08:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:26:32.704 08:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:26:32.704 08:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:32.704 08:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:33.272 nvme0n1
00:26:33.272 08:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:33.272 08:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:26:33.272 08:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:26:33.272 08:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:33.272 08:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:33.272 08:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:33.272 08:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:26:33.272 08:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:26:33.272 08:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:33.272 08:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:33.272 08:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:33.272 08:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}"
00:26:33.272 08:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:26:33.272 08:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:26:33.272 08:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0
00:26:33.272 08:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:26:33.272 08:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:26:33.272 08:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:26:33.272 08:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:26:33.272 08:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDNkY2I0MjBlZDBlNjczNzk1ZGY4MDE0NjNkZTk1ZTKuldDr:
00:26:33.272 08:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YjY0ZTZjZjQyYTIzYTA5ODEyODM2Y2UyMjJhNDhlYWE5MWU5YzljNmM0ZjJmYzY4YzMzOWIzNTEyNGZiMGZmNZtX/Zo=:
00:26:33.272 08:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:26:33.272 08:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:26:33.272 08:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDNkY2I0MjBlZDBlNjczNzk1ZGY4MDE0NjNkZTk1ZTKuldDr:
00:26:33.272 08:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YjY0ZTZjZjQyYTIzYTA5ODEyODM2Y2UyMjJhNDhlYWE5MWU5YzljNmM0ZjJmYzY4YzMzOWIzNTEyNGZiMGZmNZtX/Zo=: ]]
00:26:33.272 08:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YjY0ZTZjZjQyYTIzYTA5ODEyODM2Y2UyMjJhNDhlYWE5MWU5YzljNmM0ZjJmYzY4YzMzOWIzNTEyNGZiMGZmNZtX/Zo=:
00:26:33.272 08:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0
00:26:33.272 08:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:26:33.272 08:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:26:33.272 08:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:26:33.272 08:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:26:33.272 08:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:26:33.272 08:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:26:33.272 08:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:33.272 08:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:33.272 08:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:33.272 08:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:26:33.272 08:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:26:33.272 08:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:26:33.272 08:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:26:33.272 08:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:26:33.272 08:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:26:33.272 08:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:26:33.272 08:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:26:33.272 08:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:26:33.272 08:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:26:33.272 08:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:26:33.272 08:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:26:33.272 08:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:33.272 08:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:33.272 nvme0n1
00:26:33.272 08:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:33.273 08:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:26:33.273 08:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:26:33.273 08:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:33.273 08:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:33.273 08:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:33.273 08:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:26:33.273 08:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:26:33.273 08:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:33.273 08:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:33.532 08:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:33.532 08:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:26:33.532 08:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1
00:26:33.532 08:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:26:33.532 08:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:26:33.532 08:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:26:33.532 08:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:26:33.532 08:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzRiNDdkMjZhZjllM2JjY2U2MmQzN2FlN2Q2YTI2MjdiMGFiNzc4NTBiNzAzZjBm4eFRHA==:
00:26:33.532 08:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTBhY2E5ZGY4ZDhkYTUyNTZhMjU5NDFmOWI2YmJkODY2ZWFiZTlmYWYzMDczMGQ50Tsv9w==:
00:26:33.532 08:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:26:33.532 08:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:26:33.532 08:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzRiNDdkMjZhZjllM2JjY2U2MmQzN2FlN2Q2YTI2MjdiMGFiNzc4NTBiNzAzZjBm4eFRHA==:
00:26:33.532 08:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTBhY2E5ZGY4ZDhkYTUyNTZhMjU5NDFmOWI2YmJkODY2ZWFiZTlmYWYzMDczMGQ50Tsv9w==: ]]
00:26:33.532 08:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTBhY2E5ZGY4ZDhkYTUyNTZhMjU5NDFmOWI2YmJkODY2ZWFiZTlmYWYzMDczMGQ50Tsv9w==:
00:26:33.532 08:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1
00:26:33.532 08:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:26:33.532 08:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:26:33.532 08:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:26:33.532 08:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:26:33.532 08:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:26:33.532 08:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:26:33.532 08:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:33.532 08:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:33.532 08:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:33.532 08:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:26:33.532 08:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:26:33.532 08:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:26:33.532 08:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:26:33.532 08:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:26:33.532 08:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:26:33.532 08:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:26:33.532 08:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:26:33.532 08:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:26:33.532 08:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:26:33.532 08:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:26:33.532 08:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:26:33.532 08:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:33.532 08:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:33.532 nvme0n1
00:26:33.532 08:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:33.532 08:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:26:33.532 08:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:26:33.532 08:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:33.532 08:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:33.532 08:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:33.532 08:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:26:33.532 08:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:26:33.532 08:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:33.532 08:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:33.532 08:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:33.532 08:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:26:33.532 08:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2
00:26:33.532 08:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:26:33.532 08:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:26:33.532 08:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:26:33.532 08:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:26:33.532 08:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Yzc3YWVmYzM4ZTFiNDA4Y2I0MDAwMzA1NmE1MjY4ZGSMAh8w:
00:26:33.532 08:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjMyYWIxMTIxNzI4OTA1NDRiMWVhMDBkNGYxZDIxZmShtCSF:
00:26:33.532 08:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:26:33.532 08:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:26:33.532 08:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Yzc3YWVmYzM4ZTFiNDA4Y2I0MDAwMzA1NmE1MjY4ZGSMAh8w:
00:26:33.532 08:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjMyYWIxMTIxNzI4OTA1NDRiMWVhMDBkNGYxZDIxZmShtCSF: ]]
00:26:33.532 08:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MjMyYWIxMTIxNzI4OTA1NDRiMWVhMDBkNGYxZDIxZmShtCSF:
00:26:33.532 08:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2
00:26:33.532 08:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:26:33.532 08:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:26:33.532 08:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:26:33.532 08:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:26:33.532 08:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:26:33.532 08:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:26:33.532 08:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:33.532 08:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:33.532 08:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:33.532 08:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:26:33.532 08:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:26:33.533 08:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:26:33.533 08:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:26:33.533 08:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:26:33.533 08:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:26:33.533 08:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:26:33.533 08:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:26:33.533 08:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:26:33.533 08:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:26:33.533 08:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:26:33.533 08:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:26:33.533 08:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:33.533 08:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:33.792 nvme0n1
00:26:33.792 08:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:33.792 08:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:26:33.792 08:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:26:33.792 08:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:33.792 08:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:33.792 08:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:33.792 08:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:26:33.792 08:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:26:33.792 08:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:33.792 08:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:33.792 08:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:33.792 08:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:26:33.792 08:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3
00:26:33.792 08:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:26:33.792 08:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:26:33.792 08:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:26:33.792 08:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:26:33.792 08:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MjBiMzk1N2YzNTA5Yzc4MTFlNGYxZWEyZDMyM2RiMzUzODU1ZWYyMGZmYzMzNzNklbKHTQ==:
00:26:33.792 08:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDhlYWZmZjc5MzY4ZmE1ZmI5OGQ1NGZkOTQyMjlmNTSNukVc:
00:26:33.792 08:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:26:33.792 08:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:26:33.792 08:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MjBiMzk1N2YzNTA5Yzc4MTFlNGYxZWEyZDMyM2RiMzUzODU1ZWYyMGZmYzMzNzNklbKHTQ==:
00:26:33.792 08:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDhlYWZmZjc5MzY4ZmE1ZmI5OGQ1NGZkOTQyMjlmNTSNukVc: ]]
00:26:33.792 08:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDhlYWZmZjc5MzY4ZmE1ZmI5OGQ1NGZkOTQyMjlmNTSNukVc:
00:26:33.793 08:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host
-- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:26:33.793 08:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:33.793 08:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:33.793 08:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:33.793 08:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:33.793 08:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:33.793 08:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:26:33.793 08:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:33.793 08:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:33.793 08:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:33.793 08:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:33.793 08:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:33.793 08:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:33.793 08:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:33.793 08:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:33.793 08:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:33.793 08:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:33.793 08:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:33.793 08:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 
-- # ip=NVMF_INITIATOR_IP 00:26:33.793 08:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:33.793 08:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:33.793 08:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:33.793 08:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:33.793 08:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:34.052 nvme0n1 00:26:34.052 08:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:34.052 08:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:34.052 08:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:34.052 08:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:34.052 08:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:34.052 08:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:34.052 08:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:34.052 08:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:34.052 08:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:34.052 08:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:34.052 08:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:34.052 08:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:26:34.052 08:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:26:34.052 08:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:34.052 08:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:34.052 08:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:34.052 08:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:34.052 08:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZjkyYmZmZDBjZTI4ZmJlZDNhNTczNDIyNzliMWVmZDgxYzliMDk0N2E1ZWZkOTZhODY4NDc4YzczMmQ3ZjEyMcwMTok=: 00:26:34.052 08:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:34.052 08:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:34.052 08:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:34.052 08:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZjkyYmZmZDBjZTI4ZmJlZDNhNTczNDIyNzliMWVmZDgxYzliMDk0N2E1ZWZkOTZhODY4NDc4YzczMmQ3ZjEyMcwMTok=: 00:26:34.052 08:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:34.052 08:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:26:34.052 08:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:34.052 08:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:34.052 08:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:34.052 08:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:34.052 08:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:34.052 08:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:26:34.052 08:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:34.052 08:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:34.052 08:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:34.052 08:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:34.052 08:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:34.052 08:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:34.052 08:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:34.052 08:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:34.052 08:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:34.052 08:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:34.052 08:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:34.052 08:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:34.052 08:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:34.052 08:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:34.052 08:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:34.052 08:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:34.052 08:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:26:34.311 nvme0n1 00:26:34.311 08:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:34.311 08:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:34.311 08:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:34.311 08:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:34.311 08:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:34.311 08:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:34.311 08:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:34.311 08:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:34.311 08:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:34.311 08:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:34.311 08:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:34.311 08:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:34.311 08:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:34.311 08:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:26:34.311 08:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:34.311 08:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:34.311 08:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:34.311 08:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:34.311 08:25:16 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDNkY2I0MjBlZDBlNjczNzk1ZGY4MDE0NjNkZTk1ZTKuldDr: 00:26:34.311 08:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YjY0ZTZjZjQyYTIzYTA5ODEyODM2Y2UyMjJhNDhlYWE5MWU5YzljNmM0ZjJmYzY4YzMzOWIzNTEyNGZiMGZmNZtX/Zo=: 00:26:34.311 08:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:34.311 08:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:34.311 08:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDNkY2I0MjBlZDBlNjczNzk1ZGY4MDE0NjNkZTk1ZTKuldDr: 00:26:34.311 08:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YjY0ZTZjZjQyYTIzYTA5ODEyODM2Y2UyMjJhNDhlYWE5MWU5YzljNmM0ZjJmYzY4YzMzOWIzNTEyNGZiMGZmNZtX/Zo=: ]] 00:26:34.311 08:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YjY0ZTZjZjQyYTIzYTA5ODEyODM2Y2UyMjJhNDhlYWE5MWU5YzljNmM0ZjJmYzY4YzMzOWIzNTEyNGZiMGZmNZtX/Zo=: 00:26:34.311 08:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:26:34.311 08:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:34.311 08:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:34.311 08:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:34.311 08:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:34.311 08:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:34.311 08:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:26:34.311 08:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:34.311 08:25:16 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:34.311 08:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:34.311 08:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:34.311 08:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:34.311 08:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:34.311 08:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:34.311 08:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:34.311 08:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:34.311 08:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:34.311 08:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:34.311 08:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:34.311 08:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:34.311 08:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:34.311 08:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:34.311 08:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:34.311 08:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:34.311 nvme0n1 00:26:34.311 08:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:34.311 08:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:34.311 08:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:34.311 08:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:34.311 08:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:34.570 08:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:34.570 08:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:34.570 08:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:34.570 08:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:34.570 08:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:34.570 08:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:34.570 08:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:34.570 08:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:26:34.570 08:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:34.570 08:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:34.570 08:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:34.570 08:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:34.570 08:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzRiNDdkMjZhZjllM2JjY2U2MmQzN2FlN2Q2YTI2MjdiMGFiNzc4NTBiNzAzZjBm4eFRHA==: 00:26:34.570 08:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTBhY2E5ZGY4ZDhkYTUyNTZhMjU5NDFmOWI2YmJkODY2ZWFiZTlmYWYzMDczMGQ50Tsv9w==: 00:26:34.570 08:25:16 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:34.570 08:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:34.570 08:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzRiNDdkMjZhZjllM2JjY2U2MmQzN2FlN2Q2YTI2MjdiMGFiNzc4NTBiNzAzZjBm4eFRHA==: 00:26:34.570 08:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTBhY2E5ZGY4ZDhkYTUyNTZhMjU5NDFmOWI2YmJkODY2ZWFiZTlmYWYzMDczMGQ50Tsv9w==: ]] 00:26:34.570 08:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTBhY2E5ZGY4ZDhkYTUyNTZhMjU5NDFmOWI2YmJkODY2ZWFiZTlmYWYzMDczMGQ50Tsv9w==: 00:26:34.570 08:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:26:34.570 08:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:34.570 08:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:34.570 08:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:34.570 08:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:34.570 08:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:34.570 08:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:26:34.570 08:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:34.570 08:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:34.570 08:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:34.570 08:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:34.570 08:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 
00:26:34.570 08:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:34.570 08:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:34.570 08:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:34.570 08:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:34.570 08:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:34.570 08:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:34.570 08:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:34.570 08:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:34.570 08:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:34.570 08:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:34.570 08:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:34.570 08:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:34.570 nvme0n1 00:26:34.570 08:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:34.570 08:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:34.570 08:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:34.570 08:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:34.570 08:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:34.570 
08:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:34.829 08:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:34.829 08:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:34.829 08:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:34.829 08:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:34.829 08:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:34.829 08:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:34.829 08:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:26:34.829 08:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:34.829 08:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:34.829 08:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:34.829 08:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:34.829 08:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Yzc3YWVmYzM4ZTFiNDA4Y2I0MDAwMzA1NmE1MjY4ZGSMAh8w: 00:26:34.829 08:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjMyYWIxMTIxNzI4OTA1NDRiMWVhMDBkNGYxZDIxZmShtCSF: 00:26:34.829 08:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:34.829 08:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:34.830 08:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Yzc3YWVmYzM4ZTFiNDA4Y2I0MDAwMzA1NmE1MjY4ZGSMAh8w: 00:26:34.830 08:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:01:MjMyYWIxMTIxNzI4OTA1NDRiMWVhMDBkNGYxZDIxZmShtCSF: ]] 00:26:34.830 08:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MjMyYWIxMTIxNzI4OTA1NDRiMWVhMDBkNGYxZDIxZmShtCSF: 00:26:34.830 08:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:26:34.830 08:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:34.830 08:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:34.830 08:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:34.830 08:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:34.830 08:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:34.830 08:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:26:34.830 08:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:34.830 08:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:34.830 08:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:34.830 08:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:34.830 08:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:34.830 08:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:34.830 08:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:34.830 08:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:34.830 08:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:34.830 08:25:16 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:34.830 08:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:34.830 08:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:34.830 08:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:34.830 08:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:34.830 08:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:34.830 08:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:34.830 08:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:34.830 nvme0n1 00:26:34.830 08:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:34.830 08:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:34.830 08:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:34.830 08:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:34.830 08:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:34.830 08:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:34.830 08:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:34.830 08:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:34.830 08:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:34.830 08:25:17 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:35.089 08:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:35.089 08:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:35.089 08:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:26:35.089 08:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:35.089 08:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:35.089 08:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:35.089 08:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:35.089 08:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MjBiMzk1N2YzNTA5Yzc4MTFlNGYxZWEyZDMyM2RiMzUzODU1ZWYyMGZmYzMzNzNklbKHTQ==: 00:26:35.089 08:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDhlYWZmZjc5MzY4ZmE1ZmI5OGQ1NGZkOTQyMjlmNTSNukVc: 00:26:35.089 08:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:35.089 08:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:35.089 08:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MjBiMzk1N2YzNTA5Yzc4MTFlNGYxZWEyZDMyM2RiMzUzODU1ZWYyMGZmYzMzNzNklbKHTQ==: 00:26:35.089 08:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDhlYWZmZjc5MzY4ZmE1ZmI5OGQ1NGZkOTQyMjlmNTSNukVc: ]] 00:26:35.089 08:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDhlYWZmZjc5MzY4ZmE1ZmI5OGQ1NGZkOTQyMjlmNTSNukVc: 00:26:35.089 08:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:26:35.089 08:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest 
dhgroup keyid ckey 00:26:35.089 08:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:35.089 08:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:35.089 08:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:35.089 08:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:35.089 08:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:26:35.089 08:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:35.089 08:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:35.089 08:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:35.089 08:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:35.089 08:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:35.089 08:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:35.089 08:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:35.089 08:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:35.089 08:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:35.089 08:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:35.089 08:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:35.089 08:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:35.089 08:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:35.089 08:25:17 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:35.089 08:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:35.089 08:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:35.089 08:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:35.089 nvme0n1 00:26:35.089 08:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:35.089 08:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:35.089 08:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:35.089 08:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:35.089 08:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:35.089 08:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:35.089 08:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:35.089 08:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:35.089 08:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:35.089 08:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:35.089 08:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:35.089 08:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:35.089 08:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:26:35.089 08:25:17 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:35.089 08:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:35.089 08:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:35.089 08:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:35.089 08:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZjkyYmZmZDBjZTI4ZmJlZDNhNTczNDIyNzliMWVmZDgxYzliMDk0N2E1ZWZkOTZhODY4NDc4YzczMmQ3ZjEyMcwMTok=: 00:26:35.089 08:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:35.089 08:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:35.089 08:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:35.089 08:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZjkyYmZmZDBjZTI4ZmJlZDNhNTczNDIyNzliMWVmZDgxYzliMDk0N2E1ZWZkOTZhODY4NDc4YzczMmQ3ZjEyMcwMTok=: 00:26:35.089 08:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:35.089 08:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:26:35.089 08:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:35.089 08:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:35.089 08:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:35.089 08:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:35.089 08:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:35.089 08:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:26:35.089 08:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:26:35.089 08:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:35.089 08:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:35.089 08:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:35.089 08:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:35.089 08:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:35.089 08:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:35.090 08:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:35.090 08:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:35.090 08:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:35.090 08:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:35.090 08:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:35.090 08:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:35.090 08:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:35.090 08:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:35.090 08:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:35.090 08:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:35.349 nvme0n1 00:26:35.349 08:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:35.349 
08:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:35.349 08:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:35.349 08:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:35.349 08:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:35.349 08:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:35.349 08:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:35.349 08:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:35.349 08:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:35.349 08:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:35.349 08:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:35.349 08:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:35.349 08:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:35.349 08:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:26:35.349 08:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:35.349 08:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:35.349 08:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:35.349 08:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:35.349 08:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDNkY2I0MjBlZDBlNjczNzk1ZGY4MDE0NjNkZTk1ZTKuldDr: 00:26:35.349 08:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@46 -- # ckey=DHHC-1:03:YjY0ZTZjZjQyYTIzYTA5ODEyODM2Y2UyMjJhNDhlYWE5MWU5YzljNmM0ZjJmYzY4YzMzOWIzNTEyNGZiMGZmNZtX/Zo=: 00:26:35.349 08:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:35.349 08:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:35.349 08:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDNkY2I0MjBlZDBlNjczNzk1ZGY4MDE0NjNkZTk1ZTKuldDr: 00:26:35.349 08:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YjY0ZTZjZjQyYTIzYTA5ODEyODM2Y2UyMjJhNDhlYWE5MWU5YzljNmM0ZjJmYzY4YzMzOWIzNTEyNGZiMGZmNZtX/Zo=: ]] 00:26:35.349 08:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YjY0ZTZjZjQyYTIzYTA5ODEyODM2Y2UyMjJhNDhlYWE5MWU5YzljNmM0ZjJmYzY4YzMzOWIzNTEyNGZiMGZmNZtX/Zo=: 00:26:35.349 08:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:26:35.349 08:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:35.349 08:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:35.349 08:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:35.349 08:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:35.349 08:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:35.349 08:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:26:35.349 08:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:35.349 08:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:35.349 08:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:35.349 
08:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:35.349 08:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:35.349 08:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:35.349 08:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:35.349 08:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:35.349 08:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:35.349 08:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:35.349 08:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:35.349 08:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:35.349 08:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:35.349 08:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:35.349 08:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:35.349 08:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:35.349 08:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:35.607 nvme0n1 00:26:35.607 08:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:35.607 08:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:35.607 08:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:35.607 08:25:17 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:35.607 08:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:35.607 08:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:35.607 08:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:35.607 08:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:35.607 08:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:35.607 08:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:35.865 08:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:35.865 08:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:35.865 08:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:26:35.865 08:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:35.865 08:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:35.865 08:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:35.865 08:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:35.865 08:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzRiNDdkMjZhZjllM2JjY2U2MmQzN2FlN2Q2YTI2MjdiMGFiNzc4NTBiNzAzZjBm4eFRHA==: 00:26:35.865 08:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTBhY2E5ZGY4ZDhkYTUyNTZhMjU5NDFmOWI2YmJkODY2ZWFiZTlmYWYzMDczMGQ50Tsv9w==: 00:26:35.865 08:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:35.865 08:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 
00:26:35.865 08:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzRiNDdkMjZhZjllM2JjY2U2MmQzN2FlN2Q2YTI2MjdiMGFiNzc4NTBiNzAzZjBm4eFRHA==: 00:26:35.865 08:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTBhY2E5ZGY4ZDhkYTUyNTZhMjU5NDFmOWI2YmJkODY2ZWFiZTlmYWYzMDczMGQ50Tsv9w==: ]] 00:26:35.865 08:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTBhY2E5ZGY4ZDhkYTUyNTZhMjU5NDFmOWI2YmJkODY2ZWFiZTlmYWYzMDczMGQ50Tsv9w==: 00:26:35.865 08:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:26:35.865 08:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:35.865 08:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:35.865 08:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:35.865 08:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:35.865 08:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:35.865 08:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:26:35.865 08:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:35.866 08:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:35.866 08:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:35.866 08:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:35.866 08:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:35.866 08:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:35.866 08:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@770 -- # local -A ip_candidates 00:26:35.866 08:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:35.866 08:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:35.866 08:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:35.866 08:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:35.866 08:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:35.866 08:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:35.866 08:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:35.866 08:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:35.866 08:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:35.866 08:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:36.124 nvme0n1 00:26:36.124 08:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:36.124 08:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:36.124 08:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:36.124 08:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:36.124 08:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:36.124 08:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:36.124 08:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:36.124 08:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:36.124 08:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:36.124 08:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:36.124 08:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:36.124 08:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:36.124 08:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:26:36.124 08:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:36.124 08:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:36.124 08:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:36.124 08:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:36.124 08:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Yzc3YWVmYzM4ZTFiNDA4Y2I0MDAwMzA1NmE1MjY4ZGSMAh8w: 00:26:36.124 08:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjMyYWIxMTIxNzI4OTA1NDRiMWVhMDBkNGYxZDIxZmShtCSF: 00:26:36.124 08:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:36.124 08:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:36.124 08:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Yzc3YWVmYzM4ZTFiNDA4Y2I0MDAwMzA1NmE1MjY4ZGSMAh8w: 00:26:36.124 08:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjMyYWIxMTIxNzI4OTA1NDRiMWVhMDBkNGYxZDIxZmShtCSF: ]] 00:26:36.124 08:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:01:MjMyYWIxMTIxNzI4OTA1NDRiMWVhMDBkNGYxZDIxZmShtCSF: 00:26:36.124 08:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:26:36.124 08:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:36.124 08:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:36.124 08:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:36.124 08:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:36.124 08:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:36.124 08:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:26:36.124 08:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:36.124 08:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:36.124 08:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:36.124 08:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:36.124 08:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:36.124 08:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:36.124 08:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:36.124 08:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:36.124 08:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:36.124 08:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:36.124 08:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:36.124 08:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:36.124 08:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:36.124 08:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:36.124 08:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:36.124 08:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:36.124 08:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:36.383 nvme0n1 00:26:36.383 08:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:36.383 08:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:36.383 08:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:36.383 08:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:36.383 08:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:36.383 08:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:36.383 08:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:36.383 08:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:36.383 08:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:36.383 08:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:36.383 08:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:36.383 08:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:36.383 08:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:26:36.383 08:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:36.383 08:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:36.383 08:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:36.383 08:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:36.383 08:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MjBiMzk1N2YzNTA5Yzc4MTFlNGYxZWEyZDMyM2RiMzUzODU1ZWYyMGZmYzMzNzNklbKHTQ==: 00:26:36.383 08:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDhlYWZmZjc5MzY4ZmE1ZmI5OGQ1NGZkOTQyMjlmNTSNukVc: 00:26:36.383 08:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:36.383 08:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:36.383 08:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MjBiMzk1N2YzNTA5Yzc4MTFlNGYxZWEyZDMyM2RiMzUzODU1ZWYyMGZmYzMzNzNklbKHTQ==: 00:26:36.383 08:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDhlYWZmZjc5MzY4ZmE1ZmI5OGQ1NGZkOTQyMjlmNTSNukVc: ]] 00:26:36.383 08:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDhlYWZmZjc5MzY4ZmE1ZmI5OGQ1NGZkOTQyMjlmNTSNukVc: 00:26:36.383 08:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:26:36.383 08:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:36.383 08:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:36.383 08:25:18 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:36.383 08:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:36.383 08:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:36.383 08:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:26:36.383 08:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:36.383 08:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:36.383 08:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:36.383 08:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:36.383 08:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:36.383 08:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:36.383 08:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:36.383 08:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:36.383 08:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:36.383 08:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:36.383 08:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:36.383 08:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:36.383 08:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:36.383 08:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:36.383 08:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:36.383 08:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:36.383 08:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:36.641 nvme0n1 00:26:36.641 08:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:36.641 08:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:36.641 08:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:36.641 08:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:36.641 08:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:36.641 08:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:36.641 08:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:36.641 08:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:36.641 08:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:36.641 08:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:36.641 08:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:36.641 08:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:36.641 08:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:26:36.641 08:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:36.641 08:25:18 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:36.641 08:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:36.641 08:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:36.641 08:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZjkyYmZmZDBjZTI4ZmJlZDNhNTczNDIyNzliMWVmZDgxYzliMDk0N2E1ZWZkOTZhODY4NDc4YzczMmQ3ZjEyMcwMTok=: 00:26:36.641 08:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:36.641 08:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:36.641 08:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:36.641 08:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZjkyYmZmZDBjZTI4ZmJlZDNhNTczNDIyNzliMWVmZDgxYzliMDk0N2E1ZWZkOTZhODY4NDc4YzczMmQ3ZjEyMcwMTok=: 00:26:36.641 08:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:36.641 08:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:26:36.641 08:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:36.641 08:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:36.641 08:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:36.641 08:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:36.641 08:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:36.641 08:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:26:36.641 08:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:36.641 08:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:26:36.641 08:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:36.641 08:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:36.641 08:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:36.641 08:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:36.641 08:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:36.641 08:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:36.641 08:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:36.641 08:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:36.641 08:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:36.641 08:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:36.641 08:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:36.641 08:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:36.641 08:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:36.641 08:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:36.641 08:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:36.898 nvme0n1 00:26:36.899 08:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:36.899 08:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:36.899 
08:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:36.899 08:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:36.899 08:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:36.899 08:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:36.899 08:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:36.899 08:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:36.899 08:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:36.899 08:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:36.899 08:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:36.899 08:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:36.899 08:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:36.899 08:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:26:36.899 08:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:36.899 08:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:36.899 08:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:36.899 08:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:36.899 08:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDNkY2I0MjBlZDBlNjczNzk1ZGY4MDE0NjNkZTk1ZTKuldDr: 00:26:36.899 08:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:YjY0ZTZjZjQyYTIzYTA5ODEyODM2Y2UyMjJhNDhlYWE5MWU5YzljNmM0ZjJmYzY4YzMzOWIzNTEyNGZiMGZmNZtX/Zo=: 00:26:36.899 08:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:36.899 08:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:36.899 08:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDNkY2I0MjBlZDBlNjczNzk1ZGY4MDE0NjNkZTk1ZTKuldDr: 00:26:36.899 08:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YjY0ZTZjZjQyYTIzYTA5ODEyODM2Y2UyMjJhNDhlYWE5MWU5YzljNmM0ZjJmYzY4YzMzOWIzNTEyNGZiMGZmNZtX/Zo=: ]] 00:26:36.899 08:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YjY0ZTZjZjQyYTIzYTA5ODEyODM2Y2UyMjJhNDhlYWE5MWU5YzljNmM0ZjJmYzY4YzMzOWIzNTEyNGZiMGZmNZtX/Zo=: 00:26:36.899 08:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:26:36.899 08:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:36.899 08:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:36.899 08:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:36.899 08:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:36.899 08:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:36.899 08:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:26:36.899 08:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:36.899 08:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:37.157 08:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:37.157 08:25:19 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:37.157 08:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:37.157 08:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:37.157 08:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:37.157 08:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:37.157 08:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:37.157 08:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:37.157 08:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:37.157 08:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:37.157 08:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:37.157 08:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:37.157 08:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:37.157 08:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:37.157 08:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:37.417 nvme0n1 00:26:37.417 08:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:37.417 08:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:37.417 08:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:37.417 08:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:26:37.417 08:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:37.417 08:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:37.417 08:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:37.417 08:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:37.417 08:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:37.417 08:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:37.417 08:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:37.417 08:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:37.417 08:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:26:37.417 08:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:37.417 08:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:37.417 08:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:37.417 08:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:37.417 08:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzRiNDdkMjZhZjllM2JjY2U2MmQzN2FlN2Q2YTI2MjdiMGFiNzc4NTBiNzAzZjBm4eFRHA==: 00:26:37.417 08:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTBhY2E5ZGY4ZDhkYTUyNTZhMjU5NDFmOWI2YmJkODY2ZWFiZTlmYWYzMDczMGQ50Tsv9w==: 00:26:37.417 08:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:37.417 08:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:37.417 08:25:19 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzRiNDdkMjZhZjllM2JjY2U2MmQzN2FlN2Q2YTI2MjdiMGFiNzc4NTBiNzAzZjBm4eFRHA==: 00:26:37.417 08:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTBhY2E5ZGY4ZDhkYTUyNTZhMjU5NDFmOWI2YmJkODY2ZWFiZTlmYWYzMDczMGQ50Tsv9w==: ]] 00:26:37.417 08:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTBhY2E5ZGY4ZDhkYTUyNTZhMjU5NDFmOWI2YmJkODY2ZWFiZTlmYWYzMDczMGQ50Tsv9w==: 00:26:37.417 08:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:26:37.417 08:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:37.417 08:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:37.417 08:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:37.417 08:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:37.417 08:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:37.417 08:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:26:37.417 08:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:37.417 08:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:37.417 08:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:37.417 08:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:37.417 08:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:37.417 08:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:37.417 08:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A 
ip_candidates 00:26:37.417 08:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:37.417 08:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:37.417 08:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:37.417 08:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:37.417 08:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:37.417 08:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:37.417 08:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:37.417 08:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:37.417 08:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:37.417 08:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:37.985 nvme0n1 00:26:37.985 08:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:37.985 08:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:37.985 08:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:37.985 08:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:37.985 08:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:37.985 08:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:37.985 08:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 
]] 00:26:37.985 08:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:37.985 08:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:37.985 08:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:37.985 08:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:37.985 08:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:37.985 08:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:26:37.985 08:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:37.985 08:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:37.985 08:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:37.985 08:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:37.985 08:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Yzc3YWVmYzM4ZTFiNDA4Y2I0MDAwMzA1NmE1MjY4ZGSMAh8w: 00:26:37.985 08:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjMyYWIxMTIxNzI4OTA1NDRiMWVhMDBkNGYxZDIxZmShtCSF: 00:26:37.985 08:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:37.985 08:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:37.985 08:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Yzc3YWVmYzM4ZTFiNDA4Y2I0MDAwMzA1NmE1MjY4ZGSMAh8w: 00:26:37.985 08:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjMyYWIxMTIxNzI4OTA1NDRiMWVhMDBkNGYxZDIxZmShtCSF: ]] 00:26:37.985 08:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MjMyYWIxMTIxNzI4OTA1NDRiMWVhMDBkNGYxZDIxZmShtCSF: 00:26:37.985 
08:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:26:37.985 08:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:37.985 08:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:37.985 08:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:37.985 08:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:37.985 08:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:37.985 08:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:26:37.985 08:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:37.985 08:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:37.985 08:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:37.985 08:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:37.985 08:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:37.985 08:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:37.985 08:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:37.985 08:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:37.985 08:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:37.985 08:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:37.985 08:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:37.985 08:25:20 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:37.985 08:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:37.985 08:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:37.985 08:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:37.985 08:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:37.985 08:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:38.244 nvme0n1 00:26:38.244 08:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:38.244 08:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:38.244 08:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:38.244 08:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:38.244 08:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:38.244 08:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:38.244 08:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:38.244 08:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:38.244 08:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:38.244 08:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:38.244 08:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:38.244 08:25:20 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:38.244 08:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:26:38.244 08:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:38.244 08:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:38.244 08:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:38.244 08:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:38.244 08:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MjBiMzk1N2YzNTA5Yzc4MTFlNGYxZWEyZDMyM2RiMzUzODU1ZWYyMGZmYzMzNzNklbKHTQ==: 00:26:38.244 08:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDhlYWZmZjc5MzY4ZmE1ZmI5OGQ1NGZkOTQyMjlmNTSNukVc: 00:26:38.244 08:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:38.244 08:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:38.244 08:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MjBiMzk1N2YzNTA5Yzc4MTFlNGYxZWEyZDMyM2RiMzUzODU1ZWYyMGZmYzMzNzNklbKHTQ==: 00:26:38.244 08:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDhlYWZmZjc5MzY4ZmE1ZmI5OGQ1NGZkOTQyMjlmNTSNukVc: ]] 00:26:38.244 08:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDhlYWZmZjc5MzY4ZmE1ZmI5OGQ1NGZkOTQyMjlmNTSNukVc: 00:26:38.244 08:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:26:38.245 08:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:38.245 08:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:38.245 08:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 
00:26:38.245 08:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:38.245 08:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:38.245 08:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:26:38.245 08:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:38.245 08:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:38.245 08:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:38.245 08:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:38.245 08:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:38.245 08:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:38.245 08:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:38.245 08:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:38.245 08:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:38.245 08:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:38.245 08:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:38.245 08:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:38.245 08:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:38.245 08:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:38.245 08:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:38.245 08:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:38.245 08:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:38.812 nvme0n1 00:26:38.813 08:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:38.813 08:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:38.813 08:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:38.813 08:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:38.813 08:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:38.813 08:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:38.813 08:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:38.813 08:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:38.813 08:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:38.813 08:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:38.813 08:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:38.813 08:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:38.813 08:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:26:38.813 08:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:38.813 08:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:38.813 08:25:20 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:38.813 08:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:38.813 08:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZjkyYmZmZDBjZTI4ZmJlZDNhNTczNDIyNzliMWVmZDgxYzliMDk0N2E1ZWZkOTZhODY4NDc4YzczMmQ3ZjEyMcwMTok=: 00:26:38.813 08:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:38.813 08:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:38.813 08:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:38.813 08:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZjkyYmZmZDBjZTI4ZmJlZDNhNTczNDIyNzliMWVmZDgxYzliMDk0N2E1ZWZkOTZhODY4NDc4YzczMmQ3ZjEyMcwMTok=: 00:26:38.813 08:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:38.813 08:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:26:38.813 08:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:38.813 08:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:38.813 08:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:38.813 08:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:38.813 08:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:38.813 08:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:26:38.813 08:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:38.813 08:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:38.813 08:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:38.813 08:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:38.813 08:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:38.813 08:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:38.813 08:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:38.813 08:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:38.813 08:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:38.813 08:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:38.813 08:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:38.813 08:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:38.813 08:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:38.813 08:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:38.813 08:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:38.813 08:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:38.813 08:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:39.072 nvme0n1 00:26:39.072 08:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:39.072 08:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:39.072 08:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:39.072 
08:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:39.072 08:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:39.072 08:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:39.330 08:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:39.330 08:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:39.330 08:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:39.330 08:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:39.330 08:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:39.330 08:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:39.331 08:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:39.331 08:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:26:39.331 08:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:39.331 08:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:39.331 08:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:39.331 08:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:39.331 08:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDNkY2I0MjBlZDBlNjczNzk1ZGY4MDE0NjNkZTk1ZTKuldDr: 00:26:39.331 08:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YjY0ZTZjZjQyYTIzYTA5ODEyODM2Y2UyMjJhNDhlYWE5MWU5YzljNmM0ZjJmYzY4YzMzOWIzNTEyNGZiMGZmNZtX/Zo=: 00:26:39.331 08:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # 
echo 'hmac(sha512)' 00:26:39.331 08:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:39.331 08:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDNkY2I0MjBlZDBlNjczNzk1ZGY4MDE0NjNkZTk1ZTKuldDr: 00:26:39.331 08:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YjY0ZTZjZjQyYTIzYTA5ODEyODM2Y2UyMjJhNDhlYWE5MWU5YzljNmM0ZjJmYzY4YzMzOWIzNTEyNGZiMGZmNZtX/Zo=: ]] 00:26:39.331 08:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YjY0ZTZjZjQyYTIzYTA5ODEyODM2Y2UyMjJhNDhlYWE5MWU5YzljNmM0ZjJmYzY4YzMzOWIzNTEyNGZiMGZmNZtX/Zo=: 00:26:39.331 08:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:26:39.331 08:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:39.331 08:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:39.331 08:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:39.331 08:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:39.331 08:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:39.331 08:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:26:39.331 08:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:39.331 08:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:39.331 08:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:39.331 08:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:39.331 08:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:39.331 08:25:21 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:39.331 08:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:39.331 08:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:39.331 08:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:39.331 08:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:39.331 08:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:39.331 08:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:39.331 08:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:39.331 08:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:39.331 08:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:39.331 08:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:39.331 08:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:39.898 nvme0n1 00:26:39.898 08:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:39.898 08:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:39.898 08:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:39.898 08:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:39.898 08:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:39.898 08:25:21 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:39.898 08:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:39.898 08:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:39.898 08:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:39.898 08:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:39.898 08:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:39.898 08:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:39.898 08:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:26:39.898 08:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:39.898 08:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:39.898 08:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:39.898 08:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:39.898 08:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzRiNDdkMjZhZjllM2JjY2U2MmQzN2FlN2Q2YTI2MjdiMGFiNzc4NTBiNzAzZjBm4eFRHA==: 00:26:39.898 08:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTBhY2E5ZGY4ZDhkYTUyNTZhMjU5NDFmOWI2YmJkODY2ZWFiZTlmYWYzMDczMGQ50Tsv9w==: 00:26:39.898 08:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:39.898 08:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:39.898 08:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzRiNDdkMjZhZjllM2JjY2U2MmQzN2FlN2Q2YTI2MjdiMGFiNzc4NTBiNzAzZjBm4eFRHA==: 00:26:39.899 08:25:22 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTBhY2E5ZGY4ZDhkYTUyNTZhMjU5NDFmOWI2YmJkODY2ZWFiZTlmYWYzMDczMGQ50Tsv9w==: ]] 00:26:39.899 08:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTBhY2E5ZGY4ZDhkYTUyNTZhMjU5NDFmOWI2YmJkODY2ZWFiZTlmYWYzMDczMGQ50Tsv9w==: 00:26:39.899 08:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:26:39.899 08:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:39.899 08:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:39.899 08:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:39.899 08:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:39.899 08:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:39.899 08:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:26:39.899 08:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:39.899 08:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:39.899 08:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:39.899 08:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:39.899 08:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:39.899 08:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:39.899 08:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:39.899 08:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:39.899 08:25:22 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:39.899 08:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:39.899 08:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:39.899 08:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:39.899 08:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:39.899 08:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:39.899 08:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:39.899 08:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:39.899 08:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:40.467 nvme0n1 00:26:40.467 08:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:40.467 08:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:40.467 08:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:40.467 08:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:40.467 08:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:40.467 08:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:40.467 08:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:40.467 08:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:40.467 08:25:22 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:40.467 08:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:40.467 08:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:40.467 08:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:40.467 08:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:26:40.467 08:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:40.467 08:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:40.467 08:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:40.467 08:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:40.467 08:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Yzc3YWVmYzM4ZTFiNDA4Y2I0MDAwMzA1NmE1MjY4ZGSMAh8w: 00:26:40.467 08:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjMyYWIxMTIxNzI4OTA1NDRiMWVhMDBkNGYxZDIxZmShtCSF: 00:26:40.467 08:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:40.467 08:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:40.467 08:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Yzc3YWVmYzM4ZTFiNDA4Y2I0MDAwMzA1NmE1MjY4ZGSMAh8w: 00:26:40.467 08:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjMyYWIxMTIxNzI4OTA1NDRiMWVhMDBkNGYxZDIxZmShtCSF: ]] 00:26:40.467 08:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MjMyYWIxMTIxNzI4OTA1NDRiMWVhMDBkNGYxZDIxZmShtCSF: 00:26:40.467 08:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:26:40.467 08:25:22 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:40.467 08:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:40.467 08:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:40.467 08:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:40.467 08:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:40.467 08:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:26:40.467 08:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:40.467 08:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:40.467 08:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:40.467 08:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:40.467 08:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:40.467 08:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:40.467 08:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:40.467 08:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:40.467 08:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:40.467 08:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:40.467 08:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:40.467 08:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:40.467 08:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:40.467 08:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:40.467 08:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:40.468 08:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:40.468 08:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:41.035 nvme0n1 00:26:41.035 08:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:41.035 08:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:41.035 08:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:41.035 08:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:41.035 08:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:41.294 08:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:41.294 08:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:41.294 08:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:41.294 08:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:41.294 08:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:41.294 08:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:41.294 08:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:41.294 08:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha512 ffdhe8192 3 00:26:41.294 08:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:41.294 08:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:41.294 08:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:41.294 08:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:41.294 08:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MjBiMzk1N2YzNTA5Yzc4MTFlNGYxZWEyZDMyM2RiMzUzODU1ZWYyMGZmYzMzNzNklbKHTQ==: 00:26:41.294 08:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDhlYWZmZjc5MzY4ZmE1ZmI5OGQ1NGZkOTQyMjlmNTSNukVc: 00:26:41.294 08:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:41.294 08:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:41.294 08:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MjBiMzk1N2YzNTA5Yzc4MTFlNGYxZWEyZDMyM2RiMzUzODU1ZWYyMGZmYzMzNzNklbKHTQ==: 00:26:41.294 08:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDhlYWZmZjc5MzY4ZmE1ZmI5OGQ1NGZkOTQyMjlmNTSNukVc: ]] 00:26:41.294 08:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDhlYWZmZjc5MzY4ZmE1ZmI5OGQ1NGZkOTQyMjlmNTSNukVc: 00:26:41.294 08:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:26:41.294 08:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:41.294 08:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:41.294 08:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:41.294 08:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:41.294 08:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:41.294 08:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:26:41.294 08:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:41.294 08:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:41.294 08:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:41.294 08:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:41.294 08:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:41.294 08:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:41.294 08:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:41.294 08:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:41.294 08:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:41.294 08:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:41.294 08:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:41.294 08:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:41.294 08:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:41.294 08:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:41.294 08:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:41.294 08:25:23 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:41.294 08:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:41.862 nvme0n1 00:26:41.862 08:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:41.862 08:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:41.862 08:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:41.862 08:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:41.862 08:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:41.862 08:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:41.862 08:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:41.862 08:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:41.862 08:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:41.862 08:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:41.862 08:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:41.862 08:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:41.862 08:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:26:41.862 08:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:41.862 08:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:41.862 08:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:41.862 08:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- 
# keyid=4 00:26:41.862 08:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZjkyYmZmZDBjZTI4ZmJlZDNhNTczNDIyNzliMWVmZDgxYzliMDk0N2E1ZWZkOTZhODY4NDc4YzczMmQ3ZjEyMcwMTok=: 00:26:41.862 08:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:41.862 08:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:41.862 08:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:41.862 08:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZjkyYmZmZDBjZTI4ZmJlZDNhNTczNDIyNzliMWVmZDgxYzliMDk0N2E1ZWZkOTZhODY4NDc4YzczMmQ3ZjEyMcwMTok=: 00:26:41.862 08:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:41.862 08:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:26:41.862 08:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:41.862 08:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:41.862 08:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:41.862 08:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:41.862 08:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:41.862 08:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:26:41.862 08:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:41.862 08:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:41.862 08:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:41.862 08:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:41.862 
08:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:41.862 08:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:41.862 08:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:41.862 08:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:41.862 08:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:41.862 08:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:41.862 08:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:41.862 08:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:41.862 08:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:41.862 08:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:41.862 08:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:41.862 08:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:41.862 08:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:42.430 nvme0n1 00:26:42.430 08:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:42.430 08:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:42.430 08:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:42.430 08:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:42.430 08:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:26:42.430 08:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:42.430 08:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:42.430 08:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:42.430 08:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:42.430 08:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:42.430 08:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:42.430 08:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:26:42.430 08:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:42.430 08:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:42.430 08:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:42.430 08:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:42.430 08:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzRiNDdkMjZhZjllM2JjY2U2MmQzN2FlN2Q2YTI2MjdiMGFiNzc4NTBiNzAzZjBm4eFRHA==: 00:26:42.430 08:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTBhY2E5ZGY4ZDhkYTUyNTZhMjU5NDFmOWI2YmJkODY2ZWFiZTlmYWYzMDczMGQ50Tsv9w==: 00:26:42.430 08:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:42.430 08:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:42.430 08:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzRiNDdkMjZhZjllM2JjY2U2MmQzN2FlN2Q2YTI2MjdiMGFiNzc4NTBiNzAzZjBm4eFRHA==: 00:26:42.430 08:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:02:YTBhY2E5ZGY4ZDhkYTUyNTZhMjU5NDFmOWI2YmJkODY2ZWFiZTlmYWYzMDczMGQ50Tsv9w==: ]] 00:26:42.430 08:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTBhY2E5ZGY4ZDhkYTUyNTZhMjU5NDFmOWI2YmJkODY2ZWFiZTlmYWYzMDczMGQ50Tsv9w==: 00:26:42.430 08:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:26:42.430 08:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:42.430 08:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:42.430 08:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:42.430 08:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:26:42.430 08:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:42.430 08:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:42.430 08:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:42.430 08:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:42.430 08:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:42.430 08:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:42.430 08:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:42.430 08:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:42.430 08:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:42.430 08:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:42.430 08:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:26:42.430 08:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:26:42.430 08:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:26:42.430 08:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:26:42.430 08:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:42.431 08:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:26:42.431 08:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:42.431 08:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:26:42.431 08:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:42.431 08:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:42.431 request: 00:26:42.431 { 00:26:42.431 "name": "nvme0", 00:26:42.431 "trtype": "tcp", 00:26:42.431 "traddr": "10.0.0.1", 00:26:42.431 "adrfam": "ipv4", 00:26:42.431 "trsvcid": "4420", 00:26:42.431 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:26:42.431 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:26:42.431 "prchk_reftag": false, 00:26:42.431 "prchk_guard": false, 00:26:42.431 "hdgst": false, 00:26:42.431 "ddgst": false, 00:26:42.431 "allow_unrecognized_csi": false, 00:26:42.431 "method": "bdev_nvme_attach_controller", 00:26:42.431 "req_id": 1 00:26:42.431 } 00:26:42.431 Got JSON-RPC error 
response 00:26:42.431 response: 00:26:42.431 { 00:26:42.431 "code": -5, 00:26:42.431 "message": "Input/output error" 00:26:42.431 } 00:26:42.431 08:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:26:42.431 08:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:26:42.431 08:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:26:42.690 08:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:26:42.690 08:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:26:42.690 08:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:26:42.690 08:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:26:42.690 08:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:42.690 08:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:42.690 08:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:42.690 08:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:26:42.690 08:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:26:42.690 08:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:42.690 08:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:42.690 08:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:42.690 08:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:42.690 08:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:42.690 08:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 
-- # [[ -z tcp ]] 00:26:42.690 08:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:42.690 08:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:42.690 08:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:42.690 08:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:42.690 08:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:26:42.690 08:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:26:42.690 08:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:26:42.690 08:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:26:42.691 08:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:42.691 08:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:26:42.691 08:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:42.691 08:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:26:42.691 08:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:42.691 08:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:42.691 request: 
00:26:42.691 { 00:26:42.691 "name": "nvme0", 00:26:42.691 "trtype": "tcp", 00:26:42.691 "traddr": "10.0.0.1", 00:26:42.691 "adrfam": "ipv4", 00:26:42.691 "trsvcid": "4420", 00:26:42.691 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:26:42.691 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:26:42.691 "prchk_reftag": false, 00:26:42.691 "prchk_guard": false, 00:26:42.691 "hdgst": false, 00:26:42.691 "ddgst": false, 00:26:42.691 "dhchap_key": "key2", 00:26:42.691 "allow_unrecognized_csi": false, 00:26:42.691 "method": "bdev_nvme_attach_controller", 00:26:42.691 "req_id": 1 00:26:42.691 } 00:26:42.691 Got JSON-RPC error response 00:26:42.691 response: 00:26:42.691 { 00:26:42.691 "code": -5, 00:26:42.691 "message": "Input/output error" 00:26:42.691 } 00:26:42.691 08:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:26:42.691 08:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:26:42.691 08:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:26:42.691 08:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:26:42.691 08:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:26:42.691 08:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:26:42.691 08:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:42.691 08:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:26:42.691 08:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:42.691 08:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:42.691 08:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:26:42.691 08:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 
00:26:42.691 08:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:42.691 08:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:42.691 08:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:42.691 08:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:42.691 08:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:42.691 08:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:42.691 08:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:42.691 08:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:42.691 08:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:42.691 08:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:42.691 08:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:26:42.691 08:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:26:42.691 08:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:26:42.691 08:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:26:42.691 08:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:42.691 08:25:24 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:26:42.691 08:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:42.691 08:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:26:42.691 08:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:42.691 08:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:42.691 request: 00:26:42.691 { 00:26:42.691 "name": "nvme0", 00:26:42.691 "trtype": "tcp", 00:26:42.691 "traddr": "10.0.0.1", 00:26:42.691 "adrfam": "ipv4", 00:26:42.691 "trsvcid": "4420", 00:26:42.691 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:26:42.691 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:26:42.691 "prchk_reftag": false, 00:26:42.691 "prchk_guard": false, 00:26:42.691 "hdgst": false, 00:26:42.691 "ddgst": false, 00:26:42.691 "dhchap_key": "key1", 00:26:42.691 "dhchap_ctrlr_key": "ckey2", 00:26:42.691 "allow_unrecognized_csi": false, 00:26:42.691 "method": "bdev_nvme_attach_controller", 00:26:42.691 "req_id": 1 00:26:42.691 } 00:26:42.691 Got JSON-RPC error response 00:26:42.691 response: 00:26:42.691 { 00:26:42.691 "code": -5, 00:26:42.691 "message": "Input/output error" 00:26:42.691 } 00:26:42.691 08:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:26:42.691 08:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:26:42.691 08:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:26:42.691 08:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:26:42.691 08:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@679 -- # (( !es == 0 )) 00:26:42.691 08:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # get_main_ns_ip 00:26:42.691 08:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:42.691 08:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:42.691 08:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:42.691 08:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:42.691 08:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:42.691 08:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:42.691 08:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:42.691 08:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:42.691 08:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:42.691 08:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:42.691 08:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:26:42.691 08:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:42.691 08:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:42.951 nvme0n1 00:26:42.951 08:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:42.951 08:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@132 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:26:42.951 08:25:25 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:42.951 08:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:42.951 08:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:42.951 08:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:42.951 08:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Yzc3YWVmYzM4ZTFiNDA4Y2I0MDAwMzA1NmE1MjY4ZGSMAh8w: 00:26:42.951 08:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjMyYWIxMTIxNzI4OTA1NDRiMWVhMDBkNGYxZDIxZmShtCSF: 00:26:42.951 08:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:42.951 08:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:42.951 08:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Yzc3YWVmYzM4ZTFiNDA4Y2I0MDAwMzA1NmE1MjY4ZGSMAh8w: 00:26:42.951 08:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjMyYWIxMTIxNzI4OTA1NDRiMWVhMDBkNGYxZDIxZmShtCSF: ]] 00:26:42.951 08:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MjMyYWIxMTIxNzI4OTA1NDRiMWVhMDBkNGYxZDIxZmShtCSF: 00:26:42.951 08:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@133 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:42.951 08:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:42.951 08:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:42.951 08:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:42.951 08:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # rpc_cmd bdev_nvme_get_controllers 00:26:42.951 08:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # jq -r '.[].name' 00:26:42.951 
08:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:42.951 08:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:42.951 08:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:42.951 08:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:42.951 08:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@136 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:26:42.951 08:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:26:42.951 08:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:26:42.951 08:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:26:42.951 08:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:42.951 08:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:26:42.951 08:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:42.951 08:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:26:42.951 08:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:42.951 08:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:43.210 request: 00:26:43.210 { 00:26:43.210 "name": "nvme0", 00:26:43.210 "dhchap_key": "key1", 00:26:43.210 "dhchap_ctrlr_key": "ckey2", 00:26:43.210 "method": "bdev_nvme_set_keys", 00:26:43.210 "req_id": 1 00:26:43.210 } 00:26:43.210 Got JSON-RPC error response 00:26:43.210 response: 
00:26:43.210 { 00:26:43.210 "code": -13, 00:26:43.210 "message": "Permission denied" 00:26:43.210 } 00:26:43.210 08:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:26:43.210 08:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:26:43.210 08:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:26:43.210 08:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:26:43.210 08:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:26:43.210 08:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:26:43.210 08:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:43.210 08:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:43.210 08:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:26:43.210 08:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:43.210 08:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:26:43.210 08:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:26:44.147 08:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:26:44.147 08:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:26:44.147 08:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:44.147 08:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:44.147 08:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:44.147 08:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:26:44.147 08:25:26 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:26:45.525 08:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:26:45.525 08:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:26:45.525 08:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:45.525 08:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:45.525 08:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:45.525 08:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 0 != 0 )) 00:26:45.525 08:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@141 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:26:45.525 08:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:45.525 08:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:45.525 08:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:45.525 08:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:45.525 08:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzRiNDdkMjZhZjllM2JjY2U2MmQzN2FlN2Q2YTI2MjdiMGFiNzc4NTBiNzAzZjBm4eFRHA==: 00:26:45.525 08:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTBhY2E5ZGY4ZDhkYTUyNTZhMjU5NDFmOWI2YmJkODY2ZWFiZTlmYWYzMDczMGQ50Tsv9w==: 00:26:45.525 08:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:45.525 08:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:45.525 08:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzRiNDdkMjZhZjllM2JjY2U2MmQzN2FlN2Q2YTI2MjdiMGFiNzc4NTBiNzAzZjBm4eFRHA==: 00:26:45.525 08:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:02:YTBhY2E5ZGY4ZDhkYTUyNTZhMjU5NDFmOWI2YmJkODY2ZWFiZTlmYWYzMDczMGQ50Tsv9w==: ]] 00:26:45.525 08:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTBhY2E5ZGY4ZDhkYTUyNTZhMjU5NDFmOWI2YmJkODY2ZWFiZTlmYWYzMDczMGQ50Tsv9w==: 00:26:45.525 08:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # get_main_ns_ip 00:26:45.525 08:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:45.525 08:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:45.525 08:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:45.525 08:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:45.525 08:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:45.525 08:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:45.525 08:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:45.525 08:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:45.525 08:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:45.525 08:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:45.525 08:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:26:45.525 08:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:45.525 08:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:45.525 nvme0n1 00:26:45.525 08:25:27 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:45.525 08:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@146 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:26:45.525 08:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:45.525 08:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:45.525 08:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:45.525 08:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:45.525 08:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Yzc3YWVmYzM4ZTFiNDA4Y2I0MDAwMzA1NmE1MjY4ZGSMAh8w: 00:26:45.525 08:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjMyYWIxMTIxNzI4OTA1NDRiMWVhMDBkNGYxZDIxZmShtCSF: 00:26:45.525 08:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:45.525 08:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:45.525 08:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Yzc3YWVmYzM4ZTFiNDA4Y2I0MDAwMzA1NmE1MjY4ZGSMAh8w: 00:26:45.525 08:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjMyYWIxMTIxNzI4OTA1NDRiMWVhMDBkNGYxZDIxZmShtCSF: ]] 00:26:45.525 08:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MjMyYWIxMTIxNzI4OTA1NDRiMWVhMDBkNGYxZDIxZmShtCSF: 00:26:45.525 08:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@147 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:26:45.525 08:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:26:45.525 08:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:26:45.525 08:25:27 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:26:45.525 08:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:45.525 08:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:26:45.525 08:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:45.525 08:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:26:45.525 08:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:45.525 08:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:45.525 request: 00:26:45.525 { 00:26:45.525 "name": "nvme0", 00:26:45.525 "dhchap_key": "key2", 00:26:45.525 "dhchap_ctrlr_key": "ckey1", 00:26:45.525 "method": "bdev_nvme_set_keys", 00:26:45.525 "req_id": 1 00:26:45.525 } 00:26:45.525 Got JSON-RPC error response 00:26:45.525 response: 00:26:45.525 { 00:26:45.525 "code": -13, 00:26:45.525 "message": "Permission denied" 00:26:45.525 } 00:26:45.525 08:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:26:45.525 08:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:26:45.525 08:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:26:45.525 08:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:26:45.525 08:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:26:45.525 08:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:26:45.525 08:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:26:45.525 08:25:27 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:45.525 08:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:45.525 08:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:45.525 08:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 )) 00:26:45.525 08:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s 00:26:46.461 08:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:26:46.461 08:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:26:46.461 08:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:46.461 08:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:46.461 08:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:46.461 08:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 0 != 0 )) 00:26:46.461 08:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@152 -- # trap - SIGINT SIGTERM EXIT 00:26:46.461 08:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@153 -- # cleanup 00:26:46.461 08:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:26:46.461 08:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:46.461 08:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # sync 00:26:46.461 08:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:46.461 08:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set +e 00:26:46.461 08:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:46.461 08:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:46.720 rmmod nvme_tcp 
00:26:46.720 rmmod nvme_fabrics 00:26:46.720 08:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:46.720 08:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@128 -- # set -e 00:26:46.720 08:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@129 -- # return 0 00:26:46.720 08:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@517 -- # '[' -n 1491418 ']' 00:26:46.720 08:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@518 -- # killprocess 1491418 00:26:46.720 08:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # '[' -z 1491418 ']' 00:26:46.720 08:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@958 -- # kill -0 1491418 00:26:46.720 08:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # uname 00:26:46.720 08:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:46.720 08:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1491418 00:26:46.720 08:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:26:46.720 08:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:26:46.720 08:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1491418' 00:26:46.720 killing process with pid 1491418 00:26:46.720 08:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@973 -- # kill 1491418 00:26:46.720 08:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@978 -- # wait 1491418 00:26:46.720 08:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:46.720 08:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:26:46.720 08:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:46.720 08:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # iptr 00:26:46.720 08:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-restore 00:26:46.720 08:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-save 00:26:46.720 08:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:26:46.979 08:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:46.979 08:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:46.979 08:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:46.979 08:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:46.979 08:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:48.884 08:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:48.884 08:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:26:48.884 08:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:26:48.884 08:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:26:48.884 08:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:26:48.884 08:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@714 -- # echo 0 00:26:48.884 08:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:26:48.884 08:25:31 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:26:48.884 08:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:26:48.884 08:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:26:48.884 08:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:26:48.884 08:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:26:48.884 08:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:26:51.417 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:26:51.417 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:26:51.417 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:26:51.417 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:26:51.417 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:26:51.417 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:26:51.417 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:26:51.417 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:26:51.417 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:26:51.417 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:26:51.417 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:26:51.417 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:26:51.417 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:26:51.677 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:26:51.677 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:26:51.677 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:26:52.244 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:26:52.504 08:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.QEq /tmp/spdk.key-null.oMd /tmp/spdk.key-sha256.vOa /tmp/spdk.key-sha384.rrd 
/tmp/spdk.key-sha512.H1a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:26:52.504 08:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:26:55.037 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:26:55.037 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:26:55.037 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:26:55.037 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:26:55.037 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:26:55.037 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:26:55.037 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:26:55.037 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:26:55.037 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:26:55.037 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:26:55.037 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:26:55.037 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:26:55.037 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:26:55.037 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:26:55.037 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:26:55.037 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:26:55.037 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:26:55.037 00:26:55.037 real 0m52.239s 00:26:55.037 user 0m47.249s 00:26:55.037 sys 0m11.628s 00:26:55.037 08:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:55.037 08:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:55.037 ************************************ 00:26:55.037 END TEST nvmf_auth_host 00:26:55.037 ************************************ 00:26:55.037 08:25:37 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # 
[[ tcp == \t\c\p ]] 00:26:55.037 08:25:37 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:26:55.037 08:25:37 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:26:55.037 08:25:37 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:55.037 08:25:37 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:55.037 ************************************ 00:26:55.037 START TEST nvmf_digest 00:26:55.037 ************************************ 00:26:55.037 08:25:37 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:26:55.037 * Looking for test storage... 00:26:55.037 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:55.037 08:25:37 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:26:55.037 08:25:37 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1693 -- # lcov --version 00:26:55.037 08:25:37 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:26:55.037 08:25:37 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:26:55.037 08:25:37 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:55.037 08:25:37 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:55.037 08:25:37 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:55.037 08:25:37 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # IFS=.-: 00:26:55.037 08:25:37 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # read -ra ver1 00:26:55.037 08:25:37 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # IFS=.-: 00:26:55.037 08:25:37 
nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # read -ra ver2 00:26:55.037 08:25:37 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@338 -- # local 'op=<' 00:26:55.037 08:25:37 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@340 -- # ver1_l=2 00:26:55.037 08:25:37 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@341 -- # ver2_l=1 00:26:55.037 08:25:37 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:55.037 08:25:37 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@344 -- # case "$op" in 00:26:55.037 08:25:37 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@345 -- # : 1 00:26:55.037 08:25:37 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:55.037 08:25:37 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:26:55.037 08:25:37 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # decimal 1 00:26:55.037 08:25:37 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=1 00:26:55.037 08:25:37 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:55.037 08:25:37 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 1 00:26:55.037 08:25:37 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # ver1[v]=1 00:26:55.037 08:25:37 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # decimal 2 00:26:55.037 08:25:37 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=2 00:26:55.037 08:25:37 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:55.037 08:25:37 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 2 00:26:55.037 08:25:37 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # ver2[v]=2 00:26:55.037 08:25:37 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:55.037 08:25:37 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 
-- # (( ver1[v] < ver2[v] )) 00:26:55.037 08:25:37 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # return 0 00:26:55.037 08:25:37 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:55.038 08:25:37 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:26:55.038 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:55.038 --rc genhtml_branch_coverage=1 00:26:55.038 --rc genhtml_function_coverage=1 00:26:55.038 --rc genhtml_legend=1 00:26:55.038 --rc geninfo_all_blocks=1 00:26:55.038 --rc geninfo_unexecuted_blocks=1 00:26:55.038 00:26:55.038 ' 00:26:55.038 08:25:37 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:26:55.038 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:55.038 --rc genhtml_branch_coverage=1 00:26:55.038 --rc genhtml_function_coverage=1 00:26:55.038 --rc genhtml_legend=1 00:26:55.038 --rc geninfo_all_blocks=1 00:26:55.038 --rc geninfo_unexecuted_blocks=1 00:26:55.038 00:26:55.038 ' 00:26:55.038 08:25:37 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:26:55.038 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:55.038 --rc genhtml_branch_coverage=1 00:26:55.038 --rc genhtml_function_coverage=1 00:26:55.038 --rc genhtml_legend=1 00:26:55.038 --rc geninfo_all_blocks=1 00:26:55.038 --rc geninfo_unexecuted_blocks=1 00:26:55.038 00:26:55.038 ' 00:26:55.038 08:25:37 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:26:55.038 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:55.038 --rc genhtml_branch_coverage=1 00:26:55.038 --rc genhtml_function_coverage=1 00:26:55.038 --rc genhtml_legend=1 00:26:55.038 --rc geninfo_all_blocks=1 00:26:55.038 --rc geninfo_unexecuted_blocks=1 00:26:55.038 00:26:55.038 ' 00:26:55.038 08:25:37 
nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:55.038 08:25:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:26:55.038 08:25:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:55.038 08:25:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:55.038 08:25:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:55.038 08:25:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:55.038 08:25:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:55.038 08:25:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:55.038 08:25:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:55.038 08:25:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:55.038 08:25:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:55.038 08:25:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:55.038 08:25:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:26:55.038 08:25:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:26:55.038 08:25:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:55.038 08:25:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:55.038 08:25:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:55.038 08:25:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:55.038 
08:25:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:55.038 08:25:37 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@15 -- # shopt -s extglob 00:26:55.038 08:25:37 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:55.038 08:25:37 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:55.038 08:25:37 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:55.038 08:25:37 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:55.038 08:25:37 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:55.038 08:25:37 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:55.038 08:25:37 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:26:55.038 08:25:37 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:55.038 08:25:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # : 0 00:26:55.038 08:25:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:55.038 08:25:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:55.038 08:25:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:55.038 08:25:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:55.038 08:25:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
00:26:55.038 08:25:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:55.038 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:55.038 08:25:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:55.038 08:25:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:55.038 08:25:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:55.297 08:25:37 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:26:55.297 08:25:37 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:26:55.297 08:25:37 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:26:55.297 08:25:37 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:26:55.297 08:25:37 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:26:55.297 08:25:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:55.297 08:25:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:55.297 08:25:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:55.297 08:25:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:55.297 08:25:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:55.297 08:25:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:55.297 08:25:37 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:55.297 08:25:37 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:55.297 08:25:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:26:55.297 08:25:37 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:26:55.297 08:25:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@309 -- # xtrace_disable 00:26:55.297 08:25:37 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:27:00.569 08:25:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:00.569 08:25:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # pci_devs=() 00:27:00.569 08:25:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:00.569 08:25:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:00.569 08:25:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:00.569 08:25:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:00.569 08:25:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:00.569 08:25:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # net_devs=() 00:27:00.569 08:25:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:00.569 08:25:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # e810=() 00:27:00.569 08:25:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # local -ga e810 00:27:00.569 08:25:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # x722=() 00:27:00.569 08:25:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # local -ga x722 00:27:00.569 08:25:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # mlx=() 00:27:00.569 08:25:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # local -ga mlx 00:27:00.569 08:25:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:00.569 08:25:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:00.569 08:25:42 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:00.569 08:25:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:00.569 08:25:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:00.569 08:25:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:00.569 08:25:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:00.569 08:25:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:00.569 08:25:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:00.569 08:25:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:00.569 08:25:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:00.569 08:25:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:00.569 08:25:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:00.569 08:25:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:00.569 08:25:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:00.569 08:25:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:00.569 08:25:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:00.569 08:25:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:00.569 08:25:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:00.569 08:25:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- 
# echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:27:00.569 Found 0000:86:00.0 (0x8086 - 0x159b) 00:27:00.569 08:25:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:00.569 08:25:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:00.569 08:25:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:00.569 08:25:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:00.569 08:25:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:00.569 08:25:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:00.569 08:25:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:27:00.569 Found 0000:86:00.1 (0x8086 - 0x159b) 00:27:00.569 08:25:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:00.569 08:25:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:00.569 08:25:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:00.569 08:25:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:00.570 08:25:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:00.570 08:25:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:00.570 08:25:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:00.570 08:25:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:00.570 08:25:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:00.570 08:25:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:00.570 08:25:42 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:00.570 08:25:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:00.570 08:25:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:00.570 08:25:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:00.570 08:25:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:00.570 08:25:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:27:00.570 Found net devices under 0000:86:00.0: cvl_0_0 00:27:00.570 08:25:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:00.570 08:25:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:00.570 08:25:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:00.570 08:25:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:00.570 08:25:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:00.570 08:25:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:00.570 08:25:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:00.570 08:25:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:00.570 08:25:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:27:00.570 Found net devices under 0000:86:00.1: cvl_0_1 00:27:00.570 08:25:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:00.570 08:25:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:00.570 08:25:42 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@442 -- # is_hw=yes 00:27:00.570 08:25:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:27:00.570 08:25:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:27:00.570 08:25:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:27:00.570 08:25:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:00.570 08:25:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:00.570 08:25:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:00.570 08:25:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:00.570 08:25:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:00.570 08:25:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:00.570 08:25:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:00.570 08:25:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:00.570 08:25:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:00.570 08:25:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:00.570 08:25:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:00.570 08:25:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:00.570 08:25:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:00.570 08:25:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:00.570 08:25:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 
00:27:00.570 08:25:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:00.570 08:25:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:00.570 08:25:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:00.570 08:25:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:00.570 08:25:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:00.570 08:25:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:00.570 08:25:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:00.570 08:25:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:00.570 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:00.570 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.338 ms 00:27:00.570 00:27:00.570 --- 10.0.0.2 ping statistics --- 00:27:00.570 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:00.570 rtt min/avg/max/mdev = 0.338/0.338/0.338/0.000 ms 00:27:00.570 08:25:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:00.570 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:00.570 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.241 ms 00:27:00.570 00:27:00.570 --- 10.0.0.1 ping statistics --- 00:27:00.570 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:00.570 rtt min/avg/max/mdev = 0.241/0.241/0.241/0.000 ms 00:27:00.570 08:25:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:00.570 08:25:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@450 -- # return 0 00:27:00.570 08:25:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:00.570 08:25:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:00.570 08:25:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:27:00.570 08:25:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:27:00.570 08:25:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:00.570 08:25:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:27:00.570 08:25:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:27:00.570 08:25:42 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:27:00.570 08:25:42 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:27:00.570 08:25:42 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:27:00.570 08:25:42 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:27:00.570 08:25:42 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:00.570 08:25:42 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:27:00.829 ************************************ 00:27:00.829 START TEST nvmf_digest_clean 00:27:00.829 ************************************ 00:27:00.829 
08:25:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1129 -- # run_digest 00:27:00.829 08:25:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:27:00.829 08:25:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:27:00.830 08:25:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:27:00.830 08:25:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:27:00.830 08:25:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:27:00.830 08:25:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:00.830 08:25:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:00.830 08:25:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:27:00.830 08:25:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@509 -- # nvmfpid=1504972 00:27:00.830 08:25:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@510 -- # waitforlisten 1504972 00:27:00.830 08:25:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:27:00.830 08:25:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 1504972 ']' 00:27:00.830 08:25:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:00.830 08:25:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:00.830 08:25:42 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:00.830 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:00.830 08:25:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:00.830 08:25:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:27:00.830 [2024-11-28 08:25:42.901879] Starting SPDK v25.01-pre git sha1 27aaaa748 / DPDK 24.03.0 initialization... 00:27:00.830 [2024-11-28 08:25:42.901923] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:00.830 [2024-11-28 08:25:42.968935] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:00.830 [2024-11-28 08:25:43.008126] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:00.830 [2024-11-28 08:25:43.008164] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:00.830 [2024-11-28 08:25:43.008170] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:00.830 [2024-11-28 08:25:43.008176] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:00.830 [2024-11-28 08:25:43.008182] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:27:00.830 [2024-11-28 08:25:43.008746] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:00.830 08:25:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:00.830 08:25:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:27:00.830 08:25:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:00.830 08:25:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:00.830 08:25:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:27:00.830 08:25:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:00.830 08:25:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:27:00.830 08:25:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:27:00.830 08:25:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:27:00.830 08:25:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:00.830 08:25:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:27:01.089 null0 00:27:01.089 [2024-11-28 08:25:43.171844] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:01.089 [2024-11-28 08:25:43.196062] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:01.089 08:25:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:01.089 08:25:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 
00:27:01.089 08:25:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:27:01.089 08:25:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:27:01.089 08:25:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:27:01.089 08:25:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:27:01.089 08:25:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:27:01.089 08:25:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:27:01.089 08:25:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1505024 00:27:01.089 08:25:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1505024 /var/tmp/bperf.sock 00:27:01.089 08:25:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:27:01.089 08:25:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 1505024 ']' 00:27:01.089 08:25:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:27:01.089 08:25:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:01.089 08:25:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:27:01.089 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
00:27:01.089 08:25:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:01.089 08:25:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:27:01.089 [2024-11-28 08:25:43.250723] Starting SPDK v25.01-pre git sha1 27aaaa748 / DPDK 24.03.0 initialization... 00:27:01.089 [2024-11-28 08:25:43.250765] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1505024 ] 00:27:01.089 [2024-11-28 08:25:43.313159] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:01.348 [2024-11-28 08:25:43.356445] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:01.348 08:25:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:01.348 08:25:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:27:01.348 08:25:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:27:01.348 08:25:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:27:01.348 08:25:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:27:01.607 08:25:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:01.607 08:25:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 
-s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:01.866 nvme0n1 00:27:01.866 08:25:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:27:01.866 08:25:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:27:01.866 Running I/O for 2 seconds... 00:27:04.180 25421.00 IOPS, 99.30 MiB/s [2024-11-28T07:25:46.449Z] 25053.50 IOPS, 97.87 MiB/s 00:27:04.180 Latency(us) 00:27:04.180 [2024-11-28T07:25:46.449Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:04.180 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:27:04.180 nvme0n1 : 2.00 25066.80 97.92 0.00 0.00 5101.16 2350.75 11169.61 00:27:04.180 [2024-11-28T07:25:46.449Z] =================================================================================================================== 00:27:04.180 [2024-11-28T07:25:46.449Z] Total : 25066.80 97.92 0.00 0.00 5101.16 2350.75 11169.61 00:27:04.180 { 00:27:04.180 "results": [ 00:27:04.180 { 00:27:04.180 "job": "nvme0n1", 00:27:04.180 "core_mask": "0x2", 00:27:04.180 "workload": "randread", 00:27:04.180 "status": "finished", 00:27:04.180 "queue_depth": 128, 00:27:04.180 "io_size": 4096, 00:27:04.180 "runtime": 2.003367, 00:27:04.180 "iops": 25066.800042129074, 00:27:04.180 "mibps": 97.9171876645667, 00:27:04.180 "io_failed": 0, 00:27:04.180 "io_timeout": 0, 00:27:04.180 "avg_latency_us": 5101.157848199243, 00:27:04.180 "min_latency_us": 2350.7478260869566, 00:27:04.180 "max_latency_us": 11169.613913043479 00:27:04.180 } 00:27:04.180 ], 00:27:04.180 "core_count": 1 00:27:04.180 } 00:27:04.180 08:25:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:27:04.180 08:25:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # 
get_accel_stats 00:27:04.180 08:25:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:27:04.180 08:25:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:27:04.180 | select(.opcode=="crc32c") 00:27:04.180 | "\(.module_name) \(.executed)"' 00:27:04.180 08:25:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:27:04.180 08:25:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:27:04.180 08:25:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:27:04.180 08:25:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:27:04.180 08:25:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:27:04.180 08:25:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1505024 00:27:04.180 08:25:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 1505024 ']' 00:27:04.180 08:25:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 1505024 00:27:04.180 08:25:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:27:04.180 08:25:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:04.180 08:25:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1505024 00:27:04.180 08:25:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:27:04.180 08:25:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:27:04.180 08:25:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1505024' 00:27:04.180 killing process with pid 1505024 00:27:04.180 08:25:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 1505024 00:27:04.180 Received shutdown signal, test time was about 2.000000 seconds 00:27:04.180 00:27:04.180 Latency(us) 00:27:04.180 [2024-11-28T07:25:46.449Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:04.180 [2024-11-28T07:25:46.449Z] =================================================================================================================== 00:27:04.180 [2024-11-28T07:25:46.449Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:04.180 08:25:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 1505024 00:27:04.440 08:25:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:27:04.440 08:25:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:27:04.440 08:25:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:27:04.440 08:25:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:27:04.440 08:25:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:27:04.440 08:25:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:27:04.440 08:25:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:27:04.440 08:25:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 
131072 -t 2 -q 16 -z --wait-for-rpc 00:27:04.440 08:25:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1505683 00:27:04.440 08:25:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1505683 /var/tmp/bperf.sock 00:27:04.440 08:25:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 1505683 ']' 00:27:04.440 08:25:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:27:04.440 08:25:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:04.440 08:25:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:27:04.440 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:27:04.440 08:25:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:04.440 08:25:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:27:04.440 [2024-11-28 08:25:46.509483] Starting SPDK v25.01-pre git sha1 27aaaa748 / DPDK 24.03.0 initialization... 00:27:04.440 [2024-11-28 08:25:46.509532] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1505683 ] 00:27:04.440 I/O size of 131072 is greater than zero copy threshold (65536). 00:27:04.440 Zero copy mechanism will not be used. 
00:27:04.440 [2024-11-28 08:25:46.570456] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:04.440 [2024-11-28 08:25:46.611396] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:04.440 08:25:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:04.440 08:25:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:27:04.440 08:25:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:27:04.440 08:25:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:27:04.440 08:25:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:27:04.699 08:25:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:04.699 08:25:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:04.958 nvme0n1 00:27:05.217 08:25:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:27:05.217 08:25:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:27:05.217 I/O size of 131072 is greater than zero copy threshold (65536). 00:27:05.217 Zero copy mechanism will not be used. 00:27:05.217 Running I/O for 2 seconds... 
00:27:07.183 5823.00 IOPS, 727.88 MiB/s [2024-11-28T07:25:49.452Z] 5511.00 IOPS, 688.88 MiB/s 00:27:07.183 Latency(us) 00:27:07.183 [2024-11-28T07:25:49.452Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:07.183 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:27:07.183 nvme0n1 : 2.00 5512.47 689.06 0.00 0.00 2899.72 658.92 11112.63 00:27:07.183 [2024-11-28T07:25:49.452Z] =================================================================================================================== 00:27:07.183 [2024-11-28T07:25:49.452Z] Total : 5512.47 689.06 0.00 0.00 2899.72 658.92 11112.63 00:27:07.183 { 00:27:07.183 "results": [ 00:27:07.183 { 00:27:07.183 "job": "nvme0n1", 00:27:07.183 "core_mask": "0x2", 00:27:07.183 "workload": "randread", 00:27:07.183 "status": "finished", 00:27:07.183 "queue_depth": 16, 00:27:07.183 "io_size": 131072, 00:27:07.183 "runtime": 2.00237, 00:27:07.183 "iops": 5512.467725744992, 00:27:07.183 "mibps": 689.058465718124, 00:27:07.183 "io_failed": 0, 00:27:07.183 "io_timeout": 0, 00:27:07.183 "avg_latency_us": 2899.7228924584633, 00:27:07.183 "min_latency_us": 658.9217391304347, 00:27:07.183 "max_latency_us": 11112.626086956521 00:27:07.183 } 00:27:07.183 ], 00:27:07.183 "core_count": 1 00:27:07.183 } 00:27:07.183 08:25:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:27:07.183 08:25:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:27:07.183 08:25:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:27:07.183 08:25:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:27:07.183 | select(.opcode=="crc32c") 00:27:07.183 | "\(.module_name) \(.executed)"' 00:27:07.183 08:25:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:27:07.522 08:25:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:27:07.522 08:25:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:27:07.522 08:25:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:27:07.522 08:25:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:27:07.522 08:25:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1505683 00:27:07.522 08:25:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 1505683 ']' 00:27:07.522 08:25:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 1505683 00:27:07.522 08:25:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:27:07.522 08:25:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:07.522 08:25:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1505683 00:27:07.522 08:25:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:27:07.522 08:25:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:27:07.522 08:25:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1505683' 00:27:07.522 killing process with pid 1505683 00:27:07.523 08:25:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 1505683 00:27:07.523 Received shutdown signal, test time was about 2.000000 seconds 
00:27:07.523 00:27:07.523 Latency(us) 00:27:07.523 [2024-11-28T07:25:49.792Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:07.523 [2024-11-28T07:25:49.792Z] =================================================================================================================== 00:27:07.523 [2024-11-28T07:25:49.792Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:07.523 08:25:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 1505683 00:27:07.523 08:25:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:27:07.523 08:25:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:27:07.523 08:25:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:27:07.523 08:25:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:27:07.523 08:25:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:27:07.523 08:25:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:27:07.523 08:25:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:27:07.523 08:25:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1506158 00:27:07.523 08:25:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1506158 /var/tmp/bperf.sock 00:27:07.523 08:25:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:27:07.523 08:25:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 1506158 ']' 00:27:07.523 08:25:49 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:27:07.523 08:25:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:07.523 08:25:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:27:07.523 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:27:07.523 08:25:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:07.523 08:25:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:27:07.796 [2024-11-28 08:25:49.801204] Starting SPDK v25.01-pre git sha1 27aaaa748 / DPDK 24.03.0 initialization... 00:27:07.796 [2024-11-28 08:25:49.801255] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1506158 ] 00:27:07.796 [2024-11-28 08:25:49.864777] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:07.796 [2024-11-28 08:25:49.902899] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:07.796 08:25:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:07.796 08:25:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:27:07.796 08:25:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:27:07.796 08:25:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:27:07.796 08:25:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:27:08.078 08:25:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:08.078 08:25:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:08.397 nvme0n1 00:27:08.397 08:25:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:27:08.397 08:25:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:27:08.692 Running I/O for 2 seconds... 
00:27:10.696 26510.00 IOPS, 103.55 MiB/s [2024-11-28T07:25:52.965Z] 26475.00 IOPS, 103.42 MiB/s 00:27:10.696 Latency(us) 00:27:10.696 [2024-11-28T07:25:52.965Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:10.696 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:27:10.696 nvme0n1 : 2.01 26477.04 103.43 0.00 0.00 4827.03 1980.33 6724.56 00:27:10.696 [2024-11-28T07:25:52.965Z] =================================================================================================================== 00:27:10.696 [2024-11-28T07:25:52.965Z] Total : 26477.04 103.43 0.00 0.00 4827.03 1980.33 6724.56 00:27:10.696 { 00:27:10.696 "results": [ 00:27:10.696 { 00:27:10.696 "job": "nvme0n1", 00:27:10.696 "core_mask": "0x2", 00:27:10.696 "workload": "randwrite", 00:27:10.696 "status": "finished", 00:27:10.696 "queue_depth": 128, 00:27:10.696 "io_size": 4096, 00:27:10.696 "runtime": 2.005889, 00:27:10.696 "iops": 26477.03836054737, 00:27:10.696 "mibps": 103.42593109588816, 00:27:10.696 "io_failed": 0, 00:27:10.696 "io_timeout": 0, 00:27:10.696 "avg_latency_us": 4827.029821813627, 00:27:10.696 "min_latency_us": 1980.326956521739, 00:27:10.696 "max_latency_us": 6724.5634782608695 00:27:10.696 } 00:27:10.696 ], 00:27:10.696 "core_count": 1 00:27:10.696 } 00:27:10.696 08:25:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:27:10.696 08:25:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:27:10.696 08:25:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:27:10.696 08:25:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:27:10.696 | select(.opcode=="crc32c") 00:27:10.696 | "\(.module_name) \(.executed)"' 00:27:10.696 08:25:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:27:10.696 08:25:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:27:10.696 08:25:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:27:10.696 08:25:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:27:10.696 08:25:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:27:10.696 08:25:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1506158 00:27:10.696 08:25:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 1506158 ']' 00:27:10.696 08:25:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 1506158 00:27:10.696 08:25:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:27:10.696 08:25:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:10.696 08:25:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1506158 00:27:10.696 08:25:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:27:10.696 08:25:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:27:10.696 08:25:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1506158' 00:27:10.696 killing process with pid 1506158 00:27:10.696 08:25:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 1506158 00:27:10.696 Received shutdown signal, test time was about 2.000000 seconds 
00:27:10.696 00:27:10.696 Latency(us) 00:27:10.696 [2024-11-28T07:25:52.965Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:10.696 [2024-11-28T07:25:52.965Z] =================================================================================================================== 00:27:10.696 [2024-11-28T07:25:52.965Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:10.696 08:25:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 1506158 00:27:10.956 08:25:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:27:10.956 08:25:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:27:10.956 08:25:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:27:10.956 08:25:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:27:10.956 08:25:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:27:10.956 08:25:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:27:10.956 08:25:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:27:10.956 08:25:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1506657 00:27:10.956 08:25:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1506657 /var/tmp/bperf.sock 00:27:10.956 08:25:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:27:10.956 08:25:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 1506657 ']' 00:27:10.956 08:25:53 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:27:10.956 08:25:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:10.956 08:25:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:27:10.956 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:27:10.956 08:25:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:10.956 08:25:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:27:10.956 [2024-11-28 08:25:53.168896] Starting SPDK v25.01-pre git sha1 27aaaa748 / DPDK 24.03.0 initialization... 00:27:10.956 [2024-11-28 08:25:53.168945] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1506657 ] 00:27:10.956 I/O size of 131072 is greater than zero copy threshold (65536). 00:27:10.956 Zero copy mechanism will not be used. 
00:27:11.215 [2024-11-28 08:25:53.232170] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:11.215 [2024-11-28 08:25:53.272055] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:11.215 08:25:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:11.215 08:25:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:27:11.215 08:25:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:27:11.215 08:25:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:27:11.215 08:25:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:27:11.474 08:25:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:11.474 08:25:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:11.733 nvme0n1 00:27:11.733 08:25:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:27:11.733 08:25:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:27:11.733 I/O size of 131072 is greater than zero copy threshold (65536). 00:27:11.733 Zero copy mechanism will not be used. 00:27:11.733 Running I/O for 2 seconds... 
00:27:14.048 6214.00 IOPS, 776.75 MiB/s [2024-11-28T07:25:56.317Z] 6585.50 IOPS, 823.19 MiB/s 00:27:14.048 Latency(us) 00:27:14.048 [2024-11-28T07:25:56.317Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:14.048 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:27:14.048 nvme0n1 : 2.00 6585.26 823.16 0.00 0.00 2425.70 1510.18 6553.60 00:27:14.048 [2024-11-28T07:25:56.317Z] =================================================================================================================== 00:27:14.048 [2024-11-28T07:25:56.317Z] Total : 6585.26 823.16 0.00 0.00 2425.70 1510.18 6553.60 00:27:14.048 { 00:27:14.048 "results": [ 00:27:14.048 { 00:27:14.048 "job": "nvme0n1", 00:27:14.048 "core_mask": "0x2", 00:27:14.048 "workload": "randwrite", 00:27:14.049 "status": "finished", 00:27:14.049 "queue_depth": 16, 00:27:14.049 "io_size": 131072, 00:27:14.049 "runtime": 2.003261, 00:27:14.049 "iops": 6585.26272912017, 00:27:14.049 "mibps": 823.1578411400212, 00:27:14.049 "io_failed": 0, 00:27:14.049 "io_timeout": 0, 00:27:14.049 "avg_latency_us": 2425.7049932765576, 00:27:14.049 "min_latency_us": 1510.1773913043478, 00:27:14.049 "max_latency_us": 6553.6 00:27:14.049 } 00:27:14.049 ], 00:27:14.049 "core_count": 1 00:27:14.049 } 00:27:14.049 08:25:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:27:14.049 08:25:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:27:14.049 08:25:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:27:14.049 08:25:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:27:14.049 | select(.opcode=="crc32c") 00:27:14.049 | "\(.module_name) \(.executed)"' 00:27:14.049 08:25:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:27:14.049 08:25:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:27:14.049 08:25:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:27:14.049 08:25:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:27:14.049 08:25:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:27:14.049 08:25:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1506657 00:27:14.049 08:25:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 1506657 ']' 00:27:14.049 08:25:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 1506657 00:27:14.049 08:25:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:27:14.049 08:25:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:14.049 08:25:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1506657 00:27:14.049 08:25:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:27:14.049 08:25:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:27:14.049 08:25:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1506657' 00:27:14.049 killing process with pid 1506657 00:27:14.049 08:25:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 1506657 00:27:14.049 Received shutdown signal, test time was about 2.000000 seconds 
00:27:14.049 00:27:14.049 Latency(us) 00:27:14.049 [2024-11-28T07:25:56.318Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:14.049 [2024-11-28T07:25:56.318Z] =================================================================================================================== 00:27:14.049 [2024-11-28T07:25:56.318Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:14.049 08:25:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 1506657 00:27:14.309 08:25:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 1504972 00:27:14.309 08:25:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 1504972 ']' 00:27:14.309 08:25:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 1504972 00:27:14.309 08:25:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:27:14.309 08:25:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:14.309 08:25:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1504972 00:27:14.309 08:25:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:27:14.309 08:25:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:27:14.309 08:25:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1504972' 00:27:14.309 killing process with pid 1504972 00:27:14.309 08:25:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 1504972 00:27:14.309 08:25:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 1504972 00:27:14.309 00:27:14.309 
real 0m13.731s 00:27:14.309 user 0m26.183s 00:27:14.309 sys 0m4.546s 00:27:14.309 08:25:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:14.309 08:25:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:27:14.309 ************************************ 00:27:14.309 END TEST nvmf_digest_clean 00:27:14.309 ************************************ 00:27:14.569 08:25:56 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:27:14.569 08:25:56 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:27:14.569 08:25:56 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:14.569 08:25:56 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:27:14.569 ************************************ 00:27:14.569 START TEST nvmf_digest_error 00:27:14.569 ************************************ 00:27:14.569 08:25:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1129 -- # run_digest_error 00:27:14.569 08:25:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:27:14.569 08:25:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:14.569 08:25:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:14.569 08:25:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:14.569 08:25:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@509 -- # nvmfpid=1507355 00:27:14.569 08:25:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@510 -- # waitforlisten 1507355 00:27:14.569 08:25:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@508 -- # ip netns exec 
cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:27:14.569 08:25:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 1507355 ']' 00:27:14.569 08:25:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:14.569 08:25:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:14.569 08:25:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:14.569 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:14.569 08:25:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:14.569 08:25:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:14.569 [2024-11-28 08:25:56.699238] Starting SPDK v25.01-pre git sha1 27aaaa748 / DPDK 24.03.0 initialization... 00:27:14.569 [2024-11-28 08:25:56.699280] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:14.569 [2024-11-28 08:25:56.767262] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:14.569 [2024-11-28 08:25:56.808801] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:14.569 [2024-11-28 08:25:56.808837] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:27:14.569 [2024-11-28 08:25:56.808845] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:14.569 [2024-11-28 08:25:56.808851] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:14.569 [2024-11-28 08:25:56.808856] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:14.569 [2024-11-28 08:25:56.809419] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:14.828 08:25:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:14.828 08:25:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:27:14.828 08:25:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:14.828 08:25:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:14.828 08:25:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:14.828 08:25:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:14.828 08:25:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:27:14.828 08:25:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:14.828 08:25:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:14.828 [2024-11-28 08:25:56.889893] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:27:14.828 08:25:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:14.828 08:25:56 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:27:14.828 08:25:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:27:14.828 08:25:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:14.828 08:25:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:14.828 null0 00:27:14.828 [2024-11-28 08:25:56.981384] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:14.828 [2024-11-28 08:25:57.005567] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:14.828 08:25:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:14.828 08:25:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:27:14.828 08:25:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:27:14.828 08:25:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:27:14.828 08:25:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:27:14.829 08:25:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:27:14.829 08:25:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1507383 00:27:14.829 08:25:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1507383 /var/tmp/bperf.sock 00:27:14.829 08:25:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:27:14.829 08:25:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 1507383 ']' 
00:27:14.829 08:25:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:27:14.829 08:25:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:14.829 08:25:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:27:14.829 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:27:14.829 08:25:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:14.829 08:25:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:14.829 [2024-11-28 08:25:57.059097] Starting SPDK v25.01-pre git sha1 27aaaa748 / DPDK 24.03.0 initialization... 00:27:14.829 [2024-11-28 08:25:57.059141] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1507383 ] 00:27:15.088 [2024-11-28 08:25:57.121089] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:15.088 [2024-11-28 08:25:57.163928] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:15.088 08:25:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:15.088 08:25:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:27:15.088 08:25:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:27:15.088 08:25:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:27:15.348 08:25:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:27:15.348 08:25:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:15.348 08:25:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:15.348 08:25:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:15.348 08:25:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:15.348 08:25:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:15.609 nvme0n1 00:27:15.609 08:25:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:27:15.609 08:25:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:15.609 08:25:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:15.609 08:25:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:15.609 08:25:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:27:15.609 08:25:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:27:15.609 Running I/O for 2 seconds... 00:27:15.609 [2024-11-28 08:25:57.861299] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x127a6b0) 00:27:15.609 [2024-11-28 08:25:57.861332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:3250 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.609 [2024-11-28 08:25:57.861347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:15.609 [2024-11-28 08:25:57.870603] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x127a6b0) 00:27:15.609 [2024-11-28 08:25:57.870628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:14568 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.609 [2024-11-28 08:25:57.870656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:15.868 [2024-11-28 08:25:57.882764] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x127a6b0) 00:27:15.868 [2024-11-28 08:25:57.882788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:9460 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.868 [2024-11-28 08:25:57.882800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:15.868 [2024-11-28 08:25:57.894532] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x127a6b0) 00:27:15.868 [2024-11-28 08:25:57.894556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:8515 len:1 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.868 [2024-11-28 08:25:57.894568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:15.868 [2024-11-28 08:25:57.902657] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x127a6b0) 00:27:15.868 [2024-11-28 08:25:57.902679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11041 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.868 [2024-11-28 08:25:57.902690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:15.868 [2024-11-28 08:25:57.914635] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x127a6b0) 00:27:15.868 [2024-11-28 08:25:57.914658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20119 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.868 [2024-11-28 08:25:57.914670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:15.868 [2024-11-28 08:25:57.926083] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x127a6b0) 00:27:15.868 [2024-11-28 08:25:57.926105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:4642 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.868 [2024-11-28 08:25:57.926117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:15.868 [2024-11-28 08:25:57.934952] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x127a6b0) 00:27:15.868 [2024-11-28 08:25:57.934974] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:13874 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.868 [2024-11-28 08:25:57.934986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:15.868 [2024-11-28 08:25:57.947655] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x127a6b0) 00:27:15.868 [2024-11-28 08:25:57.947682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7477 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.868 [2024-11-28 08:25:57.947693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:15.868 [2024-11-28 08:25:57.961153] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x127a6b0) 00:27:15.869 [2024-11-28 08:25:57.961176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:9263 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.869 [2024-11-28 08:25:57.961187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:15.869 [2024-11-28 08:25:57.974077] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x127a6b0) 00:27:15.869 [2024-11-28 08:25:57.974099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18078 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.869 [2024-11-28 08:25:57.974110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:15.869 [2024-11-28 08:25:57.987110] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x127a6b0) 00:27:15.869 [2024-11-28 08:25:57.987132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:12201 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.869 [2024-11-28 08:25:57.987143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:15.869 [2024-11-28 08:25:57.995357] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x127a6b0) 00:27:15.869 [2024-11-28 08:25:57.995378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:5466 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.869 [2024-11-28 08:25:57.995390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:15.869 [2024-11-28 08:25:58.007592] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x127a6b0) 00:27:15.869 [2024-11-28 08:25:58.007613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:2537 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.869 [2024-11-28 08:25:58.007625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:15.869 [2024-11-28 08:25:58.017742] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x127a6b0) 00:27:15.869 [2024-11-28 08:25:58.017764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:20939 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.869 [2024-11-28 08:25:58.017775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:15.869 [2024-11-28 08:25:58.026975] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x127a6b0) 00:27:15.869 [2024-11-28 08:25:58.026997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:16536 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.869 [2024-11-28 08:25:58.027008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:15.869 [2024-11-28 08:25:58.036525] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x127a6b0) 00:27:15.869 [2024-11-28 08:25:58.036546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:13677 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.869 [2024-11-28 08:25:58.036562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:15.869 [2024-11-28 08:25:58.046666] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x127a6b0) 00:27:15.869 [2024-11-28 08:25:58.046689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:5870 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.869 [2024-11-28 08:25:58.046700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:15.869 [2024-11-28 08:25:58.057205] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x127a6b0) 00:27:15.869 [2024-11-28 08:25:58.057227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:24961 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.869 [2024-11-28 08:25:58.057239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:27:15.869 [2024-11-28 08:25:58.067322] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x127a6b0) 00:27:15.869 [2024-11-28 08:25:58.067345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:3553 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.869 [2024-11-28 08:25:58.067359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:15.869 [2024-11-28 08:25:58.075721] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x127a6b0) 00:27:15.869 [2024-11-28 08:25:58.075745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:21693 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.869 [2024-11-28 08:25:58.075757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:15.869 [2024-11-28 08:25:58.087125] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x127a6b0) 00:27:15.869 [2024-11-28 08:25:58.087148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:15192 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.869 [2024-11-28 08:25:58.087159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:15.869 [2024-11-28 08:25:58.099952] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x127a6b0) 00:27:15.869 [2024-11-28 08:25:58.099975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:24579 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.869 [2024-11-28 08:25:58.099986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:15.869 [2024-11-28 08:25:58.108361] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x127a6b0) 00:27:15.869 [2024-11-28 08:25:58.108382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:10326 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.869 [2024-11-28 08:25:58.108393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:15.869 [2024-11-28 08:25:58.120186] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x127a6b0) 00:27:15.869 [2024-11-28 08:25:58.120208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:6662 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.869 [2024-11-28 08:25:58.120219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:15.869 [2024-11-28 08:25:58.129144] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x127a6b0) 00:27:15.869 [2024-11-28 08:25:58.129169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:11824 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.869 [2024-11-28 08:25:58.129181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:16.129 [2024-11-28 08:25:58.141695] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x127a6b0) 00:27:16.129 [2024-11-28 08:25:58.141717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:11783 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.129 [2024-11-28 
08:25:58.141728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:16.129 [2024-11-28 08:25:58.150554] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x127a6b0) 00:27:16.129 [2024-11-28 08:25:58.150575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:25273 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.129 [2024-11-28 08:25:58.150587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:16.129 [2024-11-28 08:25:58.162250] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x127a6b0) 00:27:16.129 [2024-11-28 08:25:58.162271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:5496 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.129 [2024-11-28 08:25:58.162282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:16.129 [2024-11-28 08:25:58.170792] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x127a6b0) 00:27:16.129 [2024-11-28 08:25:58.170813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23409 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.129 [2024-11-28 08:25:58.170826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:16.129 [2024-11-28 08:25:58.182564] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x127a6b0) 00:27:16.129 [2024-11-28 08:25:58.182585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:24755 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.129 [2024-11-28 08:25:58.182596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:16.129 [2024-11-28 08:25:58.194241] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x127a6b0) 00:27:16.129 [2024-11-28 08:25:58.194262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5051 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.129 [2024-11-28 08:25:58.194273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:16.129 [2024-11-28 08:25:58.205767] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x127a6b0) 00:27:16.129 [2024-11-28 08:25:58.205788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:24237 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.129 [2024-11-28 08:25:58.205800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:16.129 [2024-11-28 08:25:58.214154] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x127a6b0) 00:27:16.129 [2024-11-28 08:25:58.214176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:6661 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.129 [2024-11-28 08:25:58.214188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:16.129 [2024-11-28 08:25:58.226506] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x127a6b0) 00:27:16.129 [2024-11-28 08:25:58.226527] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:23 nsid:1 lba:21405 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.129 [2024-11-28 08:25:58.226539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:16.129 [2024-11-28 08:25:58.238138] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x127a6b0) 00:27:16.129 [2024-11-28 08:25:58.238159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:6141 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.129 [2024-11-28 08:25:58.238171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:16.129 [2024-11-28 08:25:58.246796] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x127a6b0) 00:27:16.129 [2024-11-28 08:25:58.246817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:18101 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.129 [2024-11-28 08:25:58.246829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:16.129 [2024-11-28 08:25:58.257789] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x127a6b0) 00:27:16.129 [2024-11-28 08:25:58.257809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:4546 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.129 [2024-11-28 08:25:58.257820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:16.129 [2024-11-28 08:25:58.268034] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x127a6b0) 00:27:16.130 [2024-11-28 
08:25:58.268056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:6858 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.130 [2024-11-28 08:25:58.268067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:16.130 [2024-11-28 08:25:58.276917] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x127a6b0) 00:27:16.130 [2024-11-28 08:25:58.276938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:1041 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.130 [2024-11-28 08:25:58.276954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:16.130 [2024-11-28 08:25:58.287321] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x127a6b0) 00:27:16.130 [2024-11-28 08:25:58.287344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:5644 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.130 [2024-11-28 08:25:58.287355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:16.130 [2024-11-28 08:25:58.296809] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x127a6b0) 00:27:16.130 [2024-11-28 08:25:58.296830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:24948 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.130 [2024-11-28 08:25:58.296841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:16.130 [2024-11-28 08:25:58.305080] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x127a6b0) 00:27:16.130 [2024-11-28 08:25:58.305101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:7344 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.130 [2024-11-28 08:25:58.305120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:16.130 [2024-11-28 08:25:58.316218] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x127a6b0) 00:27:16.130 [2024-11-28 08:25:58.316239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:11896 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.130 [2024-11-28 08:25:58.316250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:16.130 [2024-11-28 08:25:58.325074] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x127a6b0) 00:27:16.130 [2024-11-28 08:25:58.325095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:20129 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.130 [2024-11-28 08:25:58.325105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:16.130 [2024-11-28 08:25:58.335120] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x127a6b0) 00:27:16.130 [2024-11-28 08:25:58.335142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:21716 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.130 [2024-11-28 08:25:58.335154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:16.130 [2024-11-28 08:25:58.344226] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x127a6b0) 00:27:16.130 [2024-11-28 08:25:58.344247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:11315 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.130 [2024-11-28 08:25:58.344258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:16.130 [2024-11-28 08:25:58.354704] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x127a6b0) 00:27:16.130 [2024-11-28 08:25:58.354726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:12708 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.130 [2024-11-28 08:25:58.354737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:16.130 [2024-11-28 08:25:58.363344] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x127a6b0) 00:27:16.130 [2024-11-28 08:25:58.363366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:22992 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.130 [2024-11-28 08:25:58.363378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:16.130 [2024-11-28 08:25:58.375033] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x127a6b0) 00:27:16.130 [2024-11-28 08:25:58.375054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16013 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.130 [2024-11-28 08:25:58.375065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:27:16.130 [2024-11-28 08:25:58.385303] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x127a6b0) 00:27:16.130 [2024-11-28 08:25:58.385325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:4777 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.130 [2024-11-28 08:25:58.385337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:16.130 [2024-11-28 08:25:58.394700] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x127a6b0) 00:27:16.130 [2024-11-28 08:25:58.394721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:10743 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.130 [2024-11-28 08:25:58.394733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:16.390 [2024-11-28 08:25:58.406377] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x127a6b0) 00:27:16.390 [2024-11-28 08:25:58.406399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:3688 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.390 [2024-11-28 08:25:58.406410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:16.390 [2024-11-28 08:25:58.417853] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x127a6b0) 00:27:16.390 [2024-11-28 08:25:58.417874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:4662 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.390 [2024-11-28 08:25:58.417886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:16.390 [2024-11-28 08:25:58.426742] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x127a6b0) 00:27:16.390 [2024-11-28 08:25:58.426764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:15833 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.390 [2024-11-28 08:25:58.426775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:16.390 [2024-11-28 08:25:58.440095] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x127a6b0) 00:27:16.390 [2024-11-28 08:25:58.440117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:11201 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.390 [2024-11-28 08:25:58.440128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:16.390 [2024-11-28 08:25:58.449738] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x127a6b0) 00:27:16.390 [2024-11-28 08:25:58.449760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:11941 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.390 [2024-11-28 08:25:58.449771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:16.390 [2024-11-28 08:25:58.458765] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x127a6b0) 00:27:16.390 [2024-11-28 08:25:58.458785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:16986 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.390 [2024-11-28 
08:25:58.458796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:16.390 [2024-11-28 08:25:58.468969] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x127a6b0) 00:27:16.390 [2024-11-28 08:25:58.468990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:1527 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.391 [2024-11-28 08:25:58.469001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:16.391 [2024-11-28 08:25:58.481160] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x127a6b0) 00:27:16.391 [2024-11-28 08:25:58.481181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:13637 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.391 [2024-11-28 08:25:58.481196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:16.391 [2024-11-28 08:25:58.490160] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x127a6b0) 00:27:16.391 [2024-11-28 08:25:58.490181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:24118 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.391 [2024-11-28 08:25:58.490193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:16.391 [2024-11-28 08:25:58.501707] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x127a6b0) 00:27:16.391 [2024-11-28 08:25:58.501728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24178 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.391 [2024-11-28 08:25:58.501738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:16.391 [2024-11-28 08:25:58.510508] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x127a6b0) 00:27:16.391 [2024-11-28 08:25:58.510529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:4010 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.391 [2024-11-28 08:25:58.510540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:16.391 [2024-11-28 08:25:58.519501] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x127a6b0) 00:27:16.391 [2024-11-28 08:25:58.519522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:7347 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.391 [2024-11-28 08:25:58.519533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:16.391 [2024-11-28 08:25:58.529873] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x127a6b0) 00:27:16.391 [2024-11-28 08:25:58.529894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:2103 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.391 [2024-11-28 08:25:58.529905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:16.391 [2024-11-28 08:25:58.539527] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x127a6b0) 00:27:16.391 [2024-11-28 08:25:58.539549] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20712 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.391 [2024-11-28 08:25:58.539560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:16.391 [2024-11-28 08:25:58.549086] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x127a6b0) 00:27:16.391 [2024-11-28 08:25:58.549107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:518 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.391 [2024-11-28 08:25:58.549119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:16.391 [2024-11-28 08:25:58.558018] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x127a6b0) 00:27:16.391 [2024-11-28 08:25:58.558046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:10798 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.391 [2024-11-28 08:25:58.558058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:16.391 [2024-11-28 08:25:58.570006] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x127a6b0) 00:27:16.391 [2024-11-28 08:25:58.570031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20791 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.391 [2024-11-28 08:25:58.570043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:16.391 [2024-11-28 08:25:58.579078] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x127a6b0) 
00:27:16.391 [2024-11-28 08:25:58.579099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:12564 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.391 [2024-11-28 08:25:58.579110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:16.391 [2024-11-28 08:25:58.587852] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x127a6b0) 00:27:16.391 [2024-11-28 08:25:58.587873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21649 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.391 [2024-11-28 08:25:58.587884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:16.391 [2024-11-28 08:25:58.599410] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x127a6b0) 00:27:16.391 [2024-11-28 08:25:58.599431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:16839 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.391 [2024-11-28 08:25:58.599443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:16.391 [2024-11-28 08:25:58.610785] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x127a6b0) 00:27:16.391 [2024-11-28 08:25:58.610807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:13618 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.391 [2024-11-28 08:25:58.610818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:16.391 [2024-11-28 08:25:58.619762] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x127a6b0) 00:27:16.391 [2024-11-28 08:25:58.619784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20801 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.391 [2024-11-28 08:25:58.619795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:16.391 [2024-11-28 08:25:58.632141] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x127a6b0) 00:27:16.391 [2024-11-28 08:25:58.632161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:11584 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.391 [2024-11-28 08:25:58.632172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:16.391 [2024-11-28 08:25:58.641132] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x127a6b0) 00:27:16.391 [2024-11-28 08:25:58.641153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:10623 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.391 [2024-11-28 08:25:58.641164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:16.391 [2024-11-28 08:25:58.654071] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x127a6b0) 00:27:16.391 [2024-11-28 08:25:58.654093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:3478 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.391 [2024-11-28 08:25:58.654105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:27:16.651 [2024-11-28 08:25:58.665932] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x127a6b0) 00:27:16.651 [2024-11-28 08:25:58.665963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:2217 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.651 [2024-11-28 08:25:58.665975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:16.651 [2024-11-28 08:25:58.674730] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x127a6b0) 00:27:16.652 [2024-11-28 08:25:58.674752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:8825 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.652 [2024-11-28 08:25:58.674763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:16.652 [2024-11-28 08:25:58.686657] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x127a6b0) 00:27:16.652 [2024-11-28 08:25:58.686679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:13128 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.652 [2024-11-28 08:25:58.686690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:16.652 [2024-11-28 08:25:58.694700] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x127a6b0) 00:27:16.652 [2024-11-28 08:25:58.694721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:10050 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.652 [2024-11-28 08:25:58.694733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:16.652 [2024-11-28 08:25:58.704850] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x127a6b0) 00:27:16.652 [2024-11-28 08:25:58.704871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:371 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.652 [2024-11-28 08:25:58.704882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:16.652 [2024-11-28 08:25:58.715851] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x127a6b0) 00:27:16.652 [2024-11-28 08:25:58.715872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:1298 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.652 [2024-11-28 08:25:58.715884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:16.652 [2024-11-28 08:25:58.725678] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x127a6b0) 00:27:16.652 [2024-11-28 08:25:58.725702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:7939 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.652 [2024-11-28 08:25:58.725714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:16.652 [2024-11-28 08:25:58.735171] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x127a6b0) 00:27:16.652 [2024-11-28 08:25:58.735194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:2178 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.652 [2024-11-28 
08:25:58.735205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:16.652 [2024-11-28 08:25:58.745290] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x127a6b0) 00:27:16.652 [2024-11-28 08:25:58.745313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:25275 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.652 [2024-11-28 08:25:58.745328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:16.652 [2024-11-28 08:25:58.754076] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x127a6b0) 00:27:16.652 [2024-11-28 08:25:58.754098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:15952 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.652 [2024-11-28 08:25:58.754109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:16.652 [2024-11-28 08:25:58.764183] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x127a6b0) 00:27:16.652 [2024-11-28 08:25:58.764205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:12418 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.652 [2024-11-28 08:25:58.764216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:16.652 [2024-11-28 08:25:58.773655] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x127a6b0) 00:27:16.652 [2024-11-28 08:25:58.773678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:2834 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.652 [2024-11-28 08:25:58.773688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:16.652 [2024-11-28 08:25:58.783084] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x127a6b0) 00:27:16.652 [2024-11-28 08:25:58.783106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:17035 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.652 [2024-11-28 08:25:58.783118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:16.652 [2024-11-28 08:25:58.792496] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x127a6b0) 00:27:16.652 [2024-11-28 08:25:58.792519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:4583 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.652 [2024-11-28 08:25:58.792530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:16.652 [2024-11-28 08:25:58.802407] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x127a6b0) 00:27:16.652 [2024-11-28 08:25:58.802430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:3463 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.652 [2024-11-28 08:25:58.802441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:16.652 [2024-11-28 08:25:58.812286] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x127a6b0) 00:27:16.652 [2024-11-28 08:25:58.812307] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:8739 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.652 [2024-11-28 08:25:58.812318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:16.652 [2024-11-28 08:25:58.821285] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x127a6b0) 00:27:16.652 [2024-11-28 08:25:58.821306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:15321 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.652 [2024-11-28 08:25:58.821316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:16.652 [2024-11-28 08:25:58.832849] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x127a6b0) 00:27:16.652 [2024-11-28 08:25:58.832874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3072 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.652 [2024-11-28 08:25:58.832886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:16.652 24451.00 IOPS, 95.51 MiB/s [2024-11-28T07:25:58.921Z] [2024-11-28 08:25:58.846585] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x127a6b0) 00:27:16.652 [2024-11-28 08:25:58.846608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:15298 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.652 [2024-11-28 08:25:58.846619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:16.652 [2024-11-28 08:25:58.856040] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x127a6b0) 00:27:16.652 [2024-11-28 08:25:58.856062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:9296 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.652 [2024-11-28 08:25:58.856073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:16.652 [2024-11-28 08:25:58.864397] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x127a6b0) 00:27:16.652 [2024-11-28 08:25:58.864418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:1170 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.652 [2024-11-28 08:25:58.864429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:16.652 [2024-11-28 08:25:58.876596] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x127a6b0) 00:27:16.652 [2024-11-28 08:25:58.876618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:22317 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.653 [2024-11-28 08:25:58.876628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:16.653 [2024-11-28 08:25:58.887692] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x127a6b0) 00:27:16.653 [2024-11-28 08:25:58.887714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:9942 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.653 [2024-11-28 08:25:58.887726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:27:16.653 [2024-11-28 08:25:58.895858] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x127a6b0) 00:27:16.653 [2024-11-28 08:25:58.895880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:2472 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.653 [2024-11-28 08:25:58.895891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:16.653 [2024-11-28 08:25:58.907069] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x127a6b0) 00:27:16.653 [2024-11-28 08:25:58.907090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:18317 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.653 [2024-11-28 08:25:58.907101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:16.914 [2024-11-28 08:25:58.920304] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x127a6b0) 00:27:16.914 [2024-11-28 08:25:58.920327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:22135 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.914 [2024-11-28 08:25:58.920343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:16.914 [2024-11-28 08:25:58.928884] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x127a6b0) 00:27:16.914 [2024-11-28 08:25:58.928906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:596 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.914 [2024-11-28 08:25:58.928917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:16.914 [2024-11-28 08:25:58.941334] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x127a6b0)
00:27:16.914 [2024-11-28 08:25:58.941356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:5543 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:16.914 [2024-11-28 08:25:58.941368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:16.914 [2024-11-28 08:25:58.951183] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x127a6b0)
00:27:16.914 [2024-11-28 08:25:58.951207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:19093 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:16.914 [2024-11-28 08:25:58.951219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:16.914 [2024-11-28 08:25:58.961225] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x127a6b0)
00:27:16.914 [2024-11-28 08:25:58.961246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:2895 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:16.914 [2024-11-28 08:25:58.961257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:16.914 [2024-11-28 08:25:58.970823] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x127a6b0)
00:27:16.914 [2024-11-28 08:25:58.970846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:19389 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:16.914 [2024-11-28 08:25:58.970856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:16.914 [2024-11-28 08:25:58.979590] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x127a6b0)
00:27:16.914 [2024-11-28 08:25:58.979612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:1340 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:16.914 [2024-11-28 08:25:58.979622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:16.914 [2024-11-28 08:25:58.989159] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x127a6b0)
00:27:16.914 [2024-11-28 08:25:58.989182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:10339 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:16.914 [2024-11-28 08:25:58.989194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:16.914 [2024-11-28 08:25:58.999449] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x127a6b0)
00:27:16.914 [2024-11-28 08:25:58.999472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:12190 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:16.914 [2024-11-28 08:25:58.999483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:16.914 [2024-11-28 08:25:59.008321] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x127a6b0)
00:27:16.914 [2024-11-28 08:25:59.008346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:14467 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:16.914 [2024-11-28 08:25:59.008357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:16.914 [2024-11-28 08:25:59.017005] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x127a6b0)
00:27:16.914 [2024-11-28 08:25:59.017027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:22723 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:16.914 [2024-11-28 08:25:59.017038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:16.914 [2024-11-28 08:25:59.028800] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x127a6b0)
00:27:16.914 [2024-11-28 08:25:59.028823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:4769 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:16.914 [2024-11-28 08:25:59.028834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:16.914 [2024-11-28 08:25:59.039466] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x127a6b0)
00:27:16.914 [2024-11-28 08:25:59.039487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:4025 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:16.914 [2024-11-28 08:25:59.039502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:16.914 [2024-11-28 08:25:59.049007] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x127a6b0)
00:27:16.914 [2024-11-28 08:25:59.049029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:8198 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:16.914 [2024-11-28 08:25:59.049040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:16.914 [2024-11-28 08:25:59.057792] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x127a6b0)
00:27:16.914 [2024-11-28 08:25:59.057815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:5642 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:16.914 [2024-11-28 08:25:59.057826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:16.914 [2024-11-28 08:25:59.070127] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x127a6b0)
00:27:16.914 [2024-11-28 08:25:59.070148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:19432 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:16.914 [2024-11-28 08:25:59.070159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:16.914 [2024-11-28 08:25:59.079138] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x127a6b0)
00:27:16.914 [2024-11-28 08:25:59.079162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:3172 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:16.914 [2024-11-28 08:25:59.079174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:16.914 [2024-11-28 08:25:59.088456] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x127a6b0)
00:27:16.914 [2024-11-28 08:25:59.088479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:8602 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:16.914 [2024-11-28 08:25:59.088491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:16.915 [2024-11-28 08:25:59.098780] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x127a6b0)
00:27:16.915 [2024-11-28 08:25:59.098801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:15951 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:16.915 [2024-11-28 08:25:59.098812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:16.915 [2024-11-28 08:25:59.109038] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x127a6b0)
00:27:16.915 [2024-11-28 08:25:59.109060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:25025 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:16.915 [2024-11-28 08:25:59.109072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:16.915 [2024-11-28 08:25:59.118629] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x127a6b0)
00:27:16.915 [2024-11-28 08:25:59.118651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:215 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:16.915 [2024-11-28 08:25:59.118663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:16.915 [2024-11-28 08:25:59.127638] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x127a6b0)
00:27:16.915 [2024-11-28 08:25:59.127659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:3712 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:16.915 [2024-11-28 08:25:59.127670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:16.915 [2024-11-28 08:25:59.136068] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x127a6b0)
00:27:16.915 [2024-11-28 08:25:59.136089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:4160 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:16.915 [2024-11-28 08:25:59.136100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:16.915 [2024-11-28 08:25:59.147047] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x127a6b0)
00:27:16.915 [2024-11-28 08:25:59.147070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:10919 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:16.915 [2024-11-28 08:25:59.147082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:16.915 [2024-11-28 08:25:59.158029] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x127a6b0)
00:27:16.915 [2024-11-28 08:25:59.158051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:18613 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:16.915 [2024-11-28 08:25:59.158062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:16.915 [2024-11-28 08:25:59.167396] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x127a6b0)
00:27:16.915 [2024-11-28 08:25:59.167418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:18408 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:16.915 [2024-11-28 08:25:59.167429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:16.915 [2024-11-28 08:25:59.177264] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x127a6b0)
00:27:16.915 [2024-11-28 08:25:59.177287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:17912 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:16.915 [2024-11-28 08:25:59.177303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:17.175 [2024-11-28 08:25:59.188166] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x127a6b0)
00:27:17.175 [2024-11-28 08:25:59.188189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:10242 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:17.175 [2024-11-28 08:25:59.188200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:17.176 [2024-11-28 08:25:59.198914] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x127a6b0)
00:27:17.176 [2024-11-28 08:25:59.198936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:17940 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:17.176 [2024-11-28 08:25:59.198952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:17.176 [2024-11-28 08:25:59.207825] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x127a6b0)
00:27:17.176 [2024-11-28 08:25:59.207846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:7413 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:17.176 [2024-11-28 08:25:59.207857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:17.176 [2024-11-28 08:25:59.217788] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x127a6b0)
00:27:17.176 [2024-11-28 08:25:59.217809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:4872 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:17.176 [2024-11-28 08:25:59.217820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:17.176 [2024-11-28 08:25:59.229337] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x127a6b0)
00:27:17.176 [2024-11-28 08:25:59.229359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:9261 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:17.176 [2024-11-28 08:25:59.229370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:17.176 [2024-11-28 08:25:59.238020] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x127a6b0)
00:27:17.176 [2024-11-28 08:25:59.238042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:19323 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:17.176 [2024-11-28 08:25:59.238053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:17.176 [2024-11-28 08:25:59.249654] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x127a6b0)
00:27:17.176 [2024-11-28 08:25:59.249676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:10423 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:17.176 [2024-11-28 08:25:59.249687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:17.176 [2024-11-28 08:25:59.259349] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x127a6b0)
00:27:17.176 [2024-11-28 08:25:59.259370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:4715 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:17.176 [2024-11-28 08:25:59.259382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:17.176 [2024-11-28 08:25:59.267286] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x127a6b0)
00:27:17.176 [2024-11-28 08:25:59.267310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17520 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:17.176 [2024-11-28 08:25:59.267321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:17.176 [2024-11-28 08:25:59.277306] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x127a6b0)
00:27:17.176 [2024-11-28 08:25:59.277327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:12766 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:17.176 [2024-11-28 08:25:59.277338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:17.176 [2024-11-28 08:25:59.290885] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x127a6b0)
00:27:17.176 [2024-11-28 08:25:59.290906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:11313 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:17.176 [2024-11-28 08:25:59.290918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:17.176 [2024-11-28 08:25:59.302108] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x127a6b0)
00:27:17.176 [2024-11-28 08:25:59.302130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:143 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:17.176 [2024-11-28 08:25:59.302141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:17.176 [2024-11-28 08:25:59.311824] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x127a6b0)
00:27:17.176 [2024-11-28 08:25:59.311844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:25349 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:17.176 [2024-11-28 08:25:59.311854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:17.176 [2024-11-28 08:25:59.320128] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x127a6b0)
00:27:17.176 [2024-11-28 08:25:59.320148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:24227 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:17.176 [2024-11-28 08:25:59.320159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:17.176 [2024-11-28 08:25:59.330117] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x127a6b0)
00:27:17.176 [2024-11-28 08:25:59.330138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:19540 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:17.176 [2024-11-28 08:25:59.330149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:17.176 [2024-11-28 08:25:59.340782] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x127a6b0)
00:27:17.176 [2024-11-28 08:25:59.340803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4435 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:17.176 [2024-11-28 08:25:59.340814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:17.176 [2024-11-28 08:25:59.349703] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x127a6b0)
00:27:17.176 [2024-11-28 08:25:59.349725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:23830 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:17.176 [2024-11-28 08:25:59.349740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:17.176 [2024-11-28 08:25:59.359161] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x127a6b0)
00:27:17.176 [2024-11-28 08:25:59.359183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:16799 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:17.176 [2024-11-28 08:25:59.359194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:17.176 [2024-11-28 08:25:59.372264] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x127a6b0)
00:27:17.176 [2024-11-28 08:25:59.372287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:3073 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:17.176 [2024-11-28 08:25:59.372298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:17.176 [2024-11-28 08:25:59.384215] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x127a6b0)
00:27:17.176 [2024-11-28 08:25:59.384236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11777 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:17.176 [2024-11-28 08:25:59.384247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:17.176 [2024-11-28 08:25:59.392929] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x127a6b0)
00:27:17.176 [2024-11-28 08:25:59.392957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:10428 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:17.176 [2024-11-28 08:25:59.392969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:17.176 [2024-11-28 08:25:59.405256] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x127a6b0)
00:27:17.176 [2024-11-28 08:25:59.405277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:25369 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:17.176 [2024-11-28 08:25:59.405288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:17.176 [2024-11-28 08:25:59.418015] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x127a6b0)
00:27:17.176 [2024-11-28 08:25:59.418037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:2570 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:17.176 [2024-11-28 08:25:59.418048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:17.176 [2024-11-28 08:25:59.430337] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x127a6b0)
00:27:17.176 [2024-11-28 08:25:59.430358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:23387 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:17.176 [2024-11-28 08:25:59.430369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:17.436 [2024-11-28 08:25:59.444332] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x127a6b0)
00:27:17.436 [2024-11-28 08:25:59.444354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:8960 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:17.436 [2024-11-28 08:25:59.444365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:17.436 [2024-11-28 08:25:59.457160] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x127a6b0)
00:27:17.436 [2024-11-28 08:25:59.457185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:25028 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:17.436 [2024-11-28 08:25:59.457197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:17.436 [2024-11-28 08:25:59.468474] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x127a6b0)
00:27:17.436 [2024-11-28 08:25:59.468495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:10033 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:17.436 [2024-11-28 08:25:59.468506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:17.436 [2024-11-28 08:25:59.477468] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x127a6b0)
00:27:17.436 [2024-11-28 08:25:59.477489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:22411 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:17.436 [2024-11-28 08:25:59.477500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:17.436 [2024-11-28 08:25:59.488580] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x127a6b0)
00:27:17.436 [2024-11-28 08:25:59.488601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:6621 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:17.436 [2024-11-28 08:25:59.488612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:17.436 [2024-11-28 08:25:59.500865] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x127a6b0)
00:27:17.436 [2024-11-28 08:25:59.500885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:24776 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:17.436 [2024-11-28 08:25:59.500897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:17.436 [2024-11-28 08:25:59.510014] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x127a6b0)
00:27:17.436 [2024-11-28 08:25:59.510035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:18470 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:17.436 [2024-11-28 08:25:59.510046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:17.436 [2024-11-28 08:25:59.521837] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x127a6b0)
00:27:17.436 [2024-11-28 08:25:59.521858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:11312 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:17.436 [2024-11-28 08:25:59.521869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:17.436 [2024-11-28 08:25:59.533870] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x127a6b0)
00:27:17.436 [2024-11-28 08:25:59.533891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:20027 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:17.436 [2024-11-28 08:25:59.533902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:17.436 [2024-11-28 08:25:59.542706] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x127a6b0)
00:27:17.436 [2024-11-28 08:25:59.542727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22970 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:17.436 [2024-11-28 08:25:59.542738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:17.436 [2024-11-28 08:25:59.555410] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x127a6b0)
00:27:17.436 [2024-11-28 08:25:59.555431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:11147 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:17.437 [2024-11-28 08:25:59.555441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:17.437 [2024-11-28 08:25:59.567746] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x127a6b0)
00:27:17.437 [2024-11-28 08:25:59.567768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:10327 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:17.437 [2024-11-28 08:25:59.567780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:17.437 [2024-11-28 08:25:59.579930] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x127a6b0)
00:27:17.437 [2024-11-28 08:25:59.579956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:4596 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:17.437 [2024-11-28 08:25:59.579968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:17.437 [2024-11-28 08:25:59.591531] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x127a6b0)
00:27:17.437 [2024-11-28 08:25:59.591551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:671 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:17.437 [2024-11-28 08:25:59.591562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:17.437 [2024-11-28 08:25:59.604229] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x127a6b0)
00:27:17.437 [2024-11-28 08:25:59.604251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:16371 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:17.437 [2024-11-28 08:25:59.604262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:17.437 [2024-11-28 08:25:59.612827] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x127a6b0)
00:27:17.437 [2024-11-28 08:25:59.612848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14974 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:17.437 [2024-11-28 08:25:59.612859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:17.437 [2024-11-28 08:25:59.625148] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x127a6b0)
00:27:17.437 [2024-11-28 08:25:59.625170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:25042 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:17.437 [2024-11-28 08:25:59.625181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:17.437 [2024-11-28 08:25:59.638143] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x127a6b0)
00:27:17.437 [2024-11-28 08:25:59.638164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:16363 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:17.437 [2024-11-28 08:25:59.638176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:17.437 [2024-11-28 08:25:59.651398] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x127a6b0)
00:27:17.437 [2024-11-28 08:25:59.651420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:14813 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:17.437 [2024-11-28 08:25:59.651438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:17.437 [2024-11-28 08:25:59.659887] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x127a6b0)
00:27:17.437 [2024-11-28 08:25:59.659908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:23186 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:17.437 [2024-11-28 08:25:59.659919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:17.437 [2024-11-28 08:25:59.672350] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x127a6b0)
00:27:17.437 [2024-11-28 08:25:59.672371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:3981 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:17.437 [2024-11-28 08:25:59.672382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:17.437 [2024-11-28 08:25:59.684455] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x127a6b0)
00:27:17.437 [2024-11-28 08:25:59.684478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:10886 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:17.437 [2024-11-28 08:25:59.684488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:17.437 [2024-11-28 08:25:59.696014] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x127a6b0)
00:27:17.437 [2024-11-28 08:25:59.696034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:14780 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:17.437 [2024-11-28 08:25:59.696045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:17.697 [2024-11-28 08:25:59.704916] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x127a6b0)
00:27:17.697 [2024-11-28 08:25:59.704937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:8177 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:17.697 [2024-11-28 08:25:59.704953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:17.697 [2024-11-28 08:25:59.718399] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x127a6b0)
00:27:17.697 [2024-11-28 08:25:59.718421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:5073 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:17.697 [2024-11-28 08:25:59.718432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:17.697 [2024-11-28 08:25:59.730636] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x127a6b0)
00:27:17.697 [2024-11-28 08:25:59.730658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:1477 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:17.697 [2024-11-28 08:25:59.730668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:17.697 [2024-11-28 08:25:59.743699] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x127a6b0)
00:27:17.697 [2024-11-28 08:25:59.743721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20979 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:17.697 [2024-11-28 08:25:59.743732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:17.697 [2024-11-28 08:25:59.752038] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x127a6b0)
00:27:17.697 [2024-11-28 08:25:59.752063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:23401 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:17.697 [2024-11-28 08:25:59.752073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:17.697 [2024-11-28 08:25:59.763437] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x127a6b0)
00:27:17.697 [2024-11-28 08:25:59.763460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:12343 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:17.697 [2024-11-28 08:25:59.763472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:17.697 [2024-11-28 08:25:59.773975] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x127a6b0)
00:27:17.697 [2024-11-28 08:25:59.773997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3435 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:17.697 [2024-11-28 08:25:59.774008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:17.697 [2024-11-28 08:25:59.783227] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x127a6b0)
00:27:17.697 [2024-11-28 08:25:59.783249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:14545 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:17.697 [2024-11-28 08:25:59.783261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:17.697 [2024-11-28 08:25:59.792400] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x127a6b0)
00:27:17.697 [2024-11-28 08:25:59.792421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:4405 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:17.697 [2024-11-28 08:25:59.792432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*:
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:17.697 [2024-11-28 08:25:59.803087] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x127a6b0) 00:27:17.697 [2024-11-28 08:25:59.803109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:20805 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.697 [2024-11-28 08:25:59.803120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:17.697 [2024-11-28 08:25:59.811455] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x127a6b0) 00:27:17.697 [2024-11-28 08:25:59.811476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:15281 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.697 [2024-11-28 08:25:59.811486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:17.697 [2024-11-28 08:25:59.823472] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x127a6b0) 00:27:17.697 [2024-11-28 08:25:59.823494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:664 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.697 [2024-11-28 08:25:59.823505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:17.697 [2024-11-28 08:25:59.834361] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x127a6b0) 00:27:17.697 [2024-11-28 08:25:59.834384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:3846 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.697 [2024-11-28 
08:25:59.834395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:17.697 [2024-11-28 08:25:59.842617] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x127a6b0) 00:27:17.697 [2024-11-28 08:25:59.842638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:15481 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.697 [2024-11-28 08:25:59.842650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:17.697 24241.50 IOPS, 94.69 MiB/s 00:27:17.697 Latency(us) 00:27:17.697 [2024-11-28T07:25:59.966Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:17.697 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:27:17.697 nvme0n1 : 2.04 23782.17 92.90 0.00 0.00 5270.40 2621.44 44450.50 00:27:17.697 [2024-11-28T07:25:59.966Z] =================================================================================================================== 00:27:17.697 [2024-11-28T07:25:59.966Z] Total : 23782.17 92.90 0.00 0.00 5270.40 2621.44 44450.50 00:27:17.697 { 00:27:17.697 "results": [ 00:27:17.697 { 00:27:17.697 "job": "nvme0n1", 00:27:17.697 "core_mask": "0x2", 00:27:17.697 "workload": "randread", 00:27:17.697 "status": "finished", 00:27:17.697 "queue_depth": 128, 00:27:17.697 "io_size": 4096, 00:27:17.697 "runtime": 2.04401, 00:27:17.697 "iops": 23782.17327703876, 00:27:17.697 "mibps": 92.89911436343266, 00:27:17.697 "io_failed": 0, 00:27:17.697 "io_timeout": 0, 00:27:17.697 "avg_latency_us": 5270.399565530435, 00:27:17.697 "min_latency_us": 2621.44, 00:27:17.697 "max_latency_us": 44450.504347826085 00:27:17.697 } 00:27:17.697 ], 00:27:17.697 "core_count": 1 00:27:17.697 } 00:27:17.697 08:25:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:27:17.697 08:25:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:27:17.697 08:25:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:27:17.697 | .driver_specific 00:27:17.697 | .nvme_error 00:27:17.698 | .status_code 00:27:17.698 | .command_transient_transport_error' 00:27:17.698 08:25:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:27:17.956 08:26:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 190 > 0 )) 00:27:17.957 08:26:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1507383 00:27:17.957 08:26:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 1507383 ']' 00:27:17.957 08:26:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 1507383 00:27:17.957 08:26:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:27:17.957 08:26:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:17.957 08:26:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1507383 00:27:17.957 08:26:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:27:17.957 08:26:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:27:17.957 08:26:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1507383' 00:27:17.957 killing process with pid 1507383 00:27:17.957 
08:26:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 1507383 00:27:17.957 Received shutdown signal, test time was about 2.000000 seconds 00:27:17.957 00:27:17.957 Latency(us) 00:27:17.957 [2024-11-28T07:26:00.226Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:17.957 [2024-11-28T07:26:00.226Z] =================================================================================================================== 00:27:17.957 [2024-11-28T07:26:00.226Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:17.957 08:26:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 1507383 00:27:18.216 08:26:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16 00:27:18.216 08:26:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:27:18.216 08:26:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:27:18.216 08:26:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:27:18.216 08:26:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:27:18.216 08:26:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1507929 00:27:18.216 08:26:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1507929 /var/tmp/bperf.sock 00:27:18.216 08:26:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:27:18.216 08:26:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 1507929 ']' 00:27:18.216 08:26:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/bperf.sock 00:27:18.216 08:26:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:18.216 08:26:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:27:18.216 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:27:18.216 08:26:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:18.216 08:26:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:18.216 [2024-11-28 08:26:00.400113] Starting SPDK v25.01-pre git sha1 27aaaa748 / DPDK 24.03.0 initialization... 00:27:18.216 [2024-11-28 08:26:00.400165] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1507929 ] 00:27:18.216 I/O size of 131072 is greater than zero copy threshold (65536). 00:27:18.216 Zero copy mechanism will not be used. 
00:27:18.216 [2024-11-28 08:26:00.464615] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:18.476 [2024-11-28 08:26:00.505829] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:18.476 08:26:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:18.476 08:26:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:27:18.476 08:26:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:27:18.476 08:26:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:27:18.735 08:26:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:27:18.735 08:26:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:18.735 08:26:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:18.735 08:26:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:18.735 08:26:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:18.735 08:26:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:18.994 nvme0n1 00:27:18.994 08:26:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:27:18.994 08:26:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:18.994 08:26:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:18.994 08:26:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:18.994 08:26:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:27:18.994 08:26:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:27:19.255 I/O size of 131072 is greater than zero copy threshold (65536). 00:27:19.255 Zero copy mechanism will not be used. 00:27:19.255 Running I/O for 2 seconds... 00:27:19.255 [2024-11-28 08:26:01.312064] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c841a0) 00:27:19.255 [2024-11-28 08:26:01.312105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.255 [2024-11-28 08:26:01.312118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:19.255 [2024-11-28 08:26:01.318342] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c841a0) 00:27:19.255 [2024-11-28 08:26:01.318371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.255 [2024-11-28 08:26:01.318384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:19.255 
[2024-11-28 08:26:01.324509] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c841a0) 00:27:19.255 [2024-11-28 08:26:01.324533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.255 [2024-11-28 08:26:01.324546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:19.255 [2024-11-28 08:26:01.330856] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c841a0) 00:27:19.255 [2024-11-28 08:26:01.330879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.255 [2024-11-28 08:26:01.330891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:19.255 [2024-11-28 08:26:01.337146] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c841a0) 00:27:19.255 [2024-11-28 08:26:01.337169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.255 [2024-11-28 08:26:01.337181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:19.255 [2024-11-28 08:26:01.343091] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c841a0) 00:27:19.255 [2024-11-28 08:26:01.343114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.255 [2024-11-28 08:26:01.343125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:19.255 [2024-11-28 08:26:01.348838] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c841a0) 00:27:19.255 [2024-11-28 08:26:01.348865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.255 [2024-11-28 08:26:01.348876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:19.255 [2024-11-28 08:26:01.354946] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c841a0) 00:27:19.255 [2024-11-28 08:26:01.354974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.255 [2024-11-28 08:26:01.354985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:19.255 [2024-11-28 08:26:01.361101] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c841a0) 00:27:19.255 [2024-11-28 08:26:01.361124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.255 [2024-11-28 08:26:01.361136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:19.255 [2024-11-28 08:26:01.367315] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c841a0) 00:27:19.255 [2024-11-28 08:26:01.367338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.255 [2024-11-28 08:26:01.367349] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:19.255 [2024-11-28 08:26:01.373356] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c841a0) 00:27:19.255 [2024-11-28 08:26:01.373379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.255 [2024-11-28 08:26:01.373391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:19.255 [2024-11-28 08:26:01.379129] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c841a0) 00:27:19.255 [2024-11-28 08:26:01.379152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.255 [2024-11-28 08:26:01.379164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:19.255 [2024-11-28 08:26:01.385282] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c841a0) 00:27:19.255 [2024-11-28 08:26:01.385305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.255 [2024-11-28 08:26:01.385316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:19.256 [2024-11-28 08:26:01.391584] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c841a0) 00:27:19.256 [2024-11-28 08:26:01.391607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:27:19.256 [2024-11-28 08:26:01.391618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:19.256 [2024-11-28 08:26:01.394813] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c841a0) 00:27:19.256 [2024-11-28 08:26:01.394835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.256 [2024-11-28 08:26:01.394846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:19.256 [2024-11-28 08:26:01.400824] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c841a0) 00:27:19.256 [2024-11-28 08:26:01.400846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.256 [2024-11-28 08:26:01.400857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:19.256 [2024-11-28 08:26:01.406598] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c841a0) 00:27:19.256 [2024-11-28 08:26:01.406622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.256 [2024-11-28 08:26:01.406634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:19.256 [2024-11-28 08:26:01.412832] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c841a0) 00:27:19.256 [2024-11-28 08:26:01.412856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:14 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.256 [2024-11-28 08:26:01.412868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:19.256 [2024-11-28 08:26:01.419623] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c841a0) 00:27:19.256 [2024-11-28 08:26:01.419647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.256 [2024-11-28 08:26:01.419659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:19.256 [2024-11-28 08:26:01.427558] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c841a0) 00:27:19.256 [2024-11-28 08:26:01.427581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.256 [2024-11-28 08:26:01.427592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:19.256 [2024-11-28 08:26:01.435982] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c841a0) 00:27:19.256 [2024-11-28 08:26:01.436006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.256 [2024-11-28 08:26:01.436018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:19.256 [2024-11-28 08:26:01.443902] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c841a0) 00:27:19.256 [2024-11-28 08:26:01.443926] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.256 [2024-11-28 08:26:01.443937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:19.256 [2024-11-28 08:26:01.452545] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c841a0) 00:27:19.256 [2024-11-28 08:26:01.452570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.256 [2024-11-28 08:26:01.452581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:19.256 [2024-11-28 08:26:01.460808] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c841a0) 00:27:19.256 [2024-11-28 08:26:01.460832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.256 [2024-11-28 08:26:01.460848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:19.256 [2024-11-28 08:26:01.468157] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c841a0) 00:27:19.256 [2024-11-28 08:26:01.468182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.256 [2024-11-28 08:26:01.468193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:19.256 [2024-11-28 08:26:01.476118] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1c841a0)
00:27:19.256 [2024-11-28 08:26:01.476142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:19.256 [2024-11-28 08:26:01.476154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:19.256 [2024-11-28 08:26:01.484570] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c841a0)
00:27:19.256 [2024-11-28 08:26:01.484594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:19.256 [2024-11-28 08:26:01.484605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:19.256 [2024-11-28 08:26:01.492343] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c841a0)
00:27:19.256 [2024-11-28 08:26:01.492367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:19.256 [2024-11-28 08:26:01.492379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:19.256 [2024-11-28 08:26:01.500678] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c841a0)
00:27:19.256 [2024-11-28 08:26:01.500702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:19.256 [2024-11-28 08:26:01.500713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:19.256 [2024-11-28 08:26:01.508720] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c841a0)
00:27:19.256 [2024-11-28 08:26:01.508744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:19.256 [2024-11-28 08:26:01.508756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:19.256 [2024-11-28 08:26:01.516322] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c841a0)
00:27:19.256 [2024-11-28 08:26:01.516346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:19.256 [2024-11-28 08:26:01.516358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:19.517 [2024-11-28 08:26:01.524282] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c841a0)
00:27:19.517 [2024-11-28 08:26:01.524305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:19.517 [2024-11-28 08:26:01.524317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:19.517 [2024-11-28 08:26:01.530779] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c841a0)
00:27:19.517 [2024-11-28 08:26:01.530806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:19.517 [2024-11-28 08:26:01.530818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:19.517 [2024-11-28 08:26:01.537573] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c841a0)
00:27:19.517 [2024-11-28 08:26:01.537596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:19.517 [2024-11-28 08:26:01.537607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:19.517 [2024-11-28 08:26:01.544225] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c841a0)
00:27:19.517 [2024-11-28 08:26:01.544248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:19.517 [2024-11-28 08:26:01.544259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:19.518 [2024-11-28 08:26:01.550328] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c841a0)
00:27:19.518 [2024-11-28 08:26:01.550350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:19.518 [2024-11-28 08:26:01.550361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:19.518 [2024-11-28 08:26:01.556223] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c841a0)
00:27:19.518 [2024-11-28 08:26:01.556245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:19.518 [2024-11-28 08:26:01.556257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:19.518 [2024-11-28 08:26:01.561877] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c841a0)
00:27:19.518 [2024-11-28 08:26:01.561899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:19.518 [2024-11-28 08:26:01.561910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:19.518 [2024-11-28 08:26:01.567742] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c841a0)
00:27:19.518 [2024-11-28 08:26:01.567764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:19.518 [2024-11-28 08:26:01.567776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:19.518 [2024-11-28 08:26:01.573811] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c841a0)
00:27:19.518 [2024-11-28 08:26:01.573834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:19.518 [2024-11-28 08:26:01.573846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:19.518 [2024-11-28 08:26:01.579671] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c841a0)
00:27:19.518 [2024-11-28 08:26:01.579694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:19.518 [2024-11-28 08:26:01.579705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:19.518 [2024-11-28 08:26:01.585155] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c841a0)
00:27:19.518 [2024-11-28 08:26:01.585178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:19.518 [2024-11-28 08:26:01.585189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:19.518 [2024-11-28 08:26:01.591313] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c841a0)
00:27:19.518 [2024-11-28 08:26:01.591335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:19.518 [2024-11-28 08:26:01.591347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:19.518 [2024-11-28 08:26:01.597535] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c841a0)
00:27:19.518 [2024-11-28 08:26:01.597558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:19.518 [2024-11-28 08:26:01.597568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:19.518 [2024-11-28 08:26:01.603547] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c841a0)
00:27:19.518 [2024-11-28 08:26:01.603570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:19.518 [2024-11-28 08:26:01.603581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:19.518 [2024-11-28 08:26:01.609575] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c841a0)
00:27:19.518 [2024-11-28 08:26:01.609598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:19.518 [2024-11-28 08:26:01.609610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:19.518 [2024-11-28 08:26:01.615541] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c841a0)
00:27:19.518 [2024-11-28 08:26:01.615563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:19.518 [2024-11-28 08:26:01.615574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:19.518 [2024-11-28 08:26:01.621534] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c841a0)
00:27:19.518 [2024-11-28 08:26:01.621557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:19.518 [2024-11-28 08:26:01.621567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:19.518 [2024-11-28 08:26:01.627460] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c841a0)
00:27:19.518 [2024-11-28 08:26:01.627483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:19.518 [2024-11-28 08:26:01.627494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:19.518 [2024-11-28 08:26:01.633252] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c841a0)
00:27:19.518 [2024-11-28 08:26:01.633275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:19.518 [2024-11-28 08:26:01.633290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:19.518 [2024-11-28 08:26:01.639000] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c841a0)
00:27:19.518 [2024-11-28 08:26:01.639023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:19.518 [2024-11-28 08:26:01.639033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:19.518 [2024-11-28 08:26:01.644805] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c841a0)
00:27:19.518 [2024-11-28 08:26:01.644829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:19.518 [2024-11-28 08:26:01.644840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:19.518 [2024-11-28 08:26:01.650524] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c841a0)
00:27:19.518 [2024-11-28 08:26:01.650547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:19.518 [2024-11-28 08:26:01.650558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:19.518 [2024-11-28 08:26:01.656163] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c841a0)
00:27:19.518 [2024-11-28 08:26:01.656187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:19.518 [2024-11-28 08:26:01.656199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:19.518 [2024-11-28 08:26:01.662077] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c841a0)
00:27:19.518 [2024-11-28 08:26:01.662100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:19.518 [2024-11-28 08:26:01.662111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:19.518 [2024-11-28 08:26:01.667733] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c841a0)
00:27:19.518 [2024-11-28 08:26:01.667757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:19.518 [2024-11-28 08:26:01.667769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:19.518 [2024-11-28 08:26:01.673293] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c841a0)
00:27:19.518 [2024-11-28 08:26:01.673316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:19.518 [2024-11-28 08:26:01.673327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:19.518 [2024-11-28 08:26:01.678842] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c841a0)
00:27:19.518 [2024-11-28 08:26:01.678865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:19.518 [2024-11-28 08:26:01.678876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:19.518 [2024-11-28 08:26:01.684380] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c841a0)
00:27:19.518 [2024-11-28 08:26:01.684403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:19.518 [2024-11-28 08:26:01.684414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:19.518 [2024-11-28 08:26:01.689958] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c841a0)
00:27:19.518 [2024-11-28 08:26:01.689981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:19.518 [2024-11-28 08:26:01.689992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:19.518 [2024-11-28 08:26:01.695509] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c841a0)
00:27:19.518 [2024-11-28 08:26:01.695532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:19.518 [2024-11-28 08:26:01.695543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:19.518 [2024-11-28 08:26:01.701014] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c841a0)
00:27:19.518 [2024-11-28 08:26:01.701037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:19.519 [2024-11-28 08:26:01.701048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:19.519 [2024-11-28 08:26:01.706445] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c841a0)
00:27:19.519 [2024-11-28 08:26:01.706467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:19.519 [2024-11-28 08:26:01.706479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:19.519 [2024-11-28 08:26:01.711912] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c841a0)
00:27:19.519 [2024-11-28 08:26:01.711935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:19.519 [2024-11-28 08:26:01.711953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:19.519 [2024-11-28 08:26:01.717499] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c841a0)
00:27:19.519 [2024-11-28 08:26:01.717521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:19.519 [2024-11-28 08:26:01.717532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:19.519 [2024-11-28 08:26:01.723186] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c841a0)
00:27:19.519 [2024-11-28 08:26:01.723209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:19.519 [2024-11-28 08:26:01.723221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:19.519 [2024-11-28 08:26:01.728979] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c841a0)
00:27:19.519 [2024-11-28 08:26:01.729001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:19.519 [2024-11-28 08:26:01.729016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:19.519 [2024-11-28 08:26:01.734514] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c841a0)
00:27:19.519 [2024-11-28 08:26:01.734537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:19.519 [2024-11-28 08:26:01.734548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:19.519 [2024-11-28 08:26:01.740064] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c841a0)
00:27:19.519 [2024-11-28 08:26:01.740087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:19.519 [2024-11-28 08:26:01.740099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:19.519 [2024-11-28 08:26:01.745596] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c841a0)
00:27:19.519 [2024-11-28 08:26:01.745619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:19.519 [2024-11-28 08:26:01.745630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:19.519 [2024-11-28 08:26:01.751111] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c841a0)
00:27:19.519 [2024-11-28 08:26:01.751134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:19.519 [2024-11-28 08:26:01.751145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:19.519 [2024-11-28 08:26:01.757038] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c841a0)
00:27:19.519 [2024-11-28 08:26:01.757062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:19.519 [2024-11-28 08:26:01.757075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:19.519 [2024-11-28 08:26:01.762639] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c841a0)
00:27:19.519 [2024-11-28 08:26:01.762665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:19.519 [2024-11-28 08:26:01.762676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:19.519 [2024-11-28 08:26:01.768166] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c841a0)
00:27:19.519 [2024-11-28 08:26:01.768189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:19.519 [2024-11-28 08:26:01.768200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:19.519 [2024-11-28 08:26:01.773875] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c841a0)
00:27:19.519 [2024-11-28 08:26:01.773898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:19.519 [2024-11-28 08:26:01.773910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:19.519 [2024-11-28 08:26:01.779749] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c841a0)
00:27:19.519 [2024-11-28 08:26:01.779775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:19.518 [2024-11-28 08:26:01.779786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:19.779 [2024-11-28 08:26:01.785426] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c841a0)
00:27:19.779 [2024-11-28 08:26:01.785448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:19.779 [2024-11-28 08:26:01.785461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:19.779 [2024-11-28 08:26:01.791172] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c841a0)
00:27:19.779 [2024-11-28 08:26:01.791196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:19.779 [2024-11-28 08:26:01.791208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:19.779 [2024-11-28 08:26:01.797015] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c841a0)
00:27:19.779 [2024-11-28 08:26:01.797038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:19.779 [2024-11-28 08:26:01.797049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:19.779 [2024-11-28 08:26:01.802775] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c841a0)
00:27:19.779 [2024-11-28 08:26:01.802798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:19.779 [2024-11-28 08:26:01.802809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:19.779 [2024-11-28 08:26:01.808397] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c841a0)
00:27:19.779 [2024-11-28 08:26:01.808420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:19.779 [2024-11-28 08:26:01.808431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:19.779 [2024-11-28 08:26:01.814262] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c841a0)
00:27:19.779 [2024-11-28 08:26:01.814285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:19.779 [2024-11-28 08:26:01.814295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:19.779 [2024-11-28 08:26:01.820079] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c841a0)
00:27:19.779 [2024-11-28 08:26:01.820102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:19.779 [2024-11-28 08:26:01.820113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:19.779 [2024-11-28 08:26:01.825711] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c841a0)
00:27:19.779 [2024-11-28 08:26:01.825734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:19.779 [2024-11-28 08:26:01.825746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:19.779 [2024-11-28 08:26:01.831293] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c841a0)
00:27:19.779 [2024-11-28 08:26:01.831316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:19.779 [2024-11-28 08:26:01.831328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:19.779 [2024-11-28 08:26:01.837107] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c841a0)
00:27:19.779 [2024-11-28 08:26:01.837130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:19.779 [2024-11-28 08:26:01.837142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:19.779 [2024-11-28 08:26:01.842892] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c841a0)
00:27:19.779 [2024-11-28 08:26:01.842914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:19.779 [2024-11-28 08:26:01.842925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:19.779 [2024-11-28 08:26:01.848628] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c841a0)
00:27:19.779 [2024-11-28 08:26:01.848651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:19.779 [2024-11-28 08:26:01.848662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:19.779 [2024-11-28 08:26:01.854418] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c841a0)
00:27:19.779 [2024-11-28 08:26:01.854440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:19.779 [2024-11-28 08:26:01.854451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:19.779 [2024-11-28 08:26:01.860218] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c841a0)
00:27:19.779 [2024-11-28 08:26:01.860240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:19.779 [2024-11-28 08:26:01.860251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:19.779 [2024-11-28 08:26:01.866150] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c841a0)
00:27:19.779 [2024-11-28 08:26:01.866173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:19.779 [2024-11-28 08:26:01.866184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:19.779 [2024-11-28 08:26:01.873164] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c841a0)
00:27:19.779 [2024-11-28 08:26:01.873187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:19.779 [2024-11-28 08:26:01.873198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:19.779 [2024-11-28 08:26:01.880793] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c841a0)
00:27:19.779 [2024-11-28 08:26:01.880818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:19.779 [2024-11-28 08:26:01.880834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:19.779 [2024-11-28 08:26:01.888085] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c841a0)
00:27:19.779 [2024-11-28 08:26:01.888110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:19.779 [2024-11-28 08:26:01.888122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:19.779 [2024-11-28 08:26:01.896425] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c841a0)
00:27:19.779 [2024-11-28 08:26:01.896450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:19.779 [2024-11-28 08:26:01.896462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:19.779 [2024-11-28 08:26:01.904377] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c841a0)
00:27:19.779 [2024-11-28 08:26:01.904401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:19.779 [2024-11-28 08:26:01.904413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:19.779 [2024-11-28 08:26:01.912243] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c841a0)
00:27:19.779 [2024-11-28 08:26:01.912267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:19.779 [2024-11-28 08:26:01.912280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:19.779 [2024-11-28 08:26:01.920273] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c841a0)
00:27:19.779 [2024-11-28 08:26:01.920297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:19.779 [2024-11-28 08:26:01.920308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:19.779 [2024-11-28 08:26:01.928511] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c841a0)
00:27:19.779 [2024-11-28 08:26:01.928536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:19.779 [2024-11-28 08:26:01.928548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:19.779 [2024-11-28 08:26:01.936908] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c841a0)
00:27:19.779 [2024-11-28 08:26:01.936932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:19.779 [2024-11-28 08:26:01.936943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:19.779 [2024-11-28 08:26:01.945028] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c841a0)
00:27:19.779 [2024-11-28 08:26:01.945052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:19.779 [2024-11-28 08:26:01.945064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:19.779 [2024-11-28 08:26:01.952806] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c841a0)
00:27:19.779 [2024-11-28 08:26:01.952834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:19.779 [2024-11-28 08:26:01.952846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:19.779 [2024-11-28 08:26:01.960936] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c841a0)
00:27:19.779 [2024-11-28 08:26:01.960967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:19.779 [2024-11-28 08:26:01.960979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:19.779 [2024-11-28 08:26:01.968577] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c841a0)
00:27:19.779 [2024-11-28 08:26:01.968601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:19.779 [2024-11-28 08:26:01.968612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:19.779 [2024-11-28 08:26:01.976587] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c841a0)
00:27:19.779 [2024-11-28 08:26:01.976611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:19.779 [2024-11-28 08:26:01.976622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:19.779 [2024-11-28 08:26:01.983255] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c841a0)
00:27:19.779 [2024-11-28 08:26:01.983280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:19.779 [2024-11-28 08:26:01.983291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:19.779 [2024-11-28 08:26:01.992009] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c841a0)
00:27:19.779 [2024-11-28 08:26:01.992033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK
TRANSPORT 0x0 00:27:19.779 [2024-11-28 08:26:01.992044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:19.779 [2024-11-28 08:26:01.996385] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c841a0) 00:27:19.779 [2024-11-28 08:26:01.996409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.779 [2024-11-28 08:26:01.996421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:19.779 [2024-11-28 08:26:02.002745] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c841a0) 00:27:19.779 [2024-11-28 08:26:02.002769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.779 [2024-11-28 08:26:02.002780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:19.779 [2024-11-28 08:26:02.009506] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c841a0) 00:27:19.779 [2024-11-28 08:26:02.009528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.779 [2024-11-28 08:26:02.009539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:19.780 [2024-11-28 08:26:02.015765] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c841a0) 00:27:19.780 [2024-11-28 08:26:02.015788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:9 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.780 [2024-11-28 08:26:02.015799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:19.780 [2024-11-28 08:26:02.021892] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c841a0) 00:27:19.780 [2024-11-28 08:26:02.021914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.780 [2024-11-28 08:26:02.021925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:19.780 [2024-11-28 08:26:02.027604] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c841a0) 00:27:19.780 [2024-11-28 08:26:02.027627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.780 [2024-11-28 08:26:02.027639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:19.780 [2024-11-28 08:26:02.033454] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c841a0) 00:27:19.780 [2024-11-28 08:26:02.033479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.780 [2024-11-28 08:26:02.033490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:19.780 [2024-11-28 08:26:02.039237] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c841a0) 00:27:19.780 [2024-11-28 08:26:02.039260] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.780 [2024-11-28 08:26:02.039271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:19.780 [2024-11-28 08:26:02.044411] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c841a0) 00:27:19.780 [2024-11-28 08:26:02.044434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.780 [2024-11-28 08:26:02.044445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:20.041 [2024-11-28 08:26:02.050147] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c841a0) 00:27:20.041 [2024-11-28 08:26:02.050169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.041 [2024-11-28 08:26:02.050180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:20.041 [2024-11-28 08:26:02.055778] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c841a0) 00:27:20.041 [2024-11-28 08:26:02.055802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.041 [2024-11-28 08:26:02.055814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:20.041 [2024-11-28 08:26:02.061559] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1c841a0) 00:27:20.041 [2024-11-28 08:26:02.061586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.041 [2024-11-28 08:26:02.061598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:20.041 [2024-11-28 08:26:02.067565] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c841a0) 00:27:20.041 [2024-11-28 08:26:02.067587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.041 [2024-11-28 08:26:02.067598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:20.041 [2024-11-28 08:26:02.073650] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c841a0) 00:27:20.041 [2024-11-28 08:26:02.073673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.041 [2024-11-28 08:26:02.073684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:20.041 [2024-11-28 08:26:02.079553] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c841a0) 00:27:20.041 [2024-11-28 08:26:02.079576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.041 [2024-11-28 08:26:02.079588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:20.041 [2024-11-28 08:26:02.086133] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c841a0) 00:27:20.041 [2024-11-28 08:26:02.086156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.041 [2024-11-28 08:26:02.086168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:20.041 [2024-11-28 08:26:02.092498] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c841a0) 00:27:20.041 [2024-11-28 08:26:02.092521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.041 [2024-11-28 08:26:02.092531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:20.041 [2024-11-28 08:26:02.097881] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c841a0) 00:27:20.041 [2024-11-28 08:26:02.097903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.041 [2024-11-28 08:26:02.097914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:20.041 [2024-11-28 08:26:02.101451] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c841a0) 00:27:20.041 [2024-11-28 08:26:02.101473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.041 [2024-11-28 08:26:02.101484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 
p:0 m:0 dnr:0 00:27:20.041 [2024-11-28 08:26:02.107232] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c841a0) 00:27:20.041 [2024-11-28 08:26:02.107254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.041 [2024-11-28 08:26:02.107265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:20.041 [2024-11-28 08:26:02.112808] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c841a0) 00:27:20.041 [2024-11-28 08:26:02.112830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.041 [2024-11-28 08:26:02.112841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:20.041 [2024-11-28 08:26:02.118585] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c841a0) 00:27:20.041 [2024-11-28 08:26:02.118608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.041 [2024-11-28 08:26:02.118620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:20.041 [2024-11-28 08:26:02.124301] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c841a0) 00:27:20.041 [2024-11-28 08:26:02.124323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.041 [2024-11-28 08:26:02.124334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:20.041 [2024-11-28 08:26:02.129647] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c841a0) 00:27:20.041 [2024-11-28 08:26:02.129670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.041 [2024-11-28 08:26:02.129682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:20.041 [2024-11-28 08:26:02.135830] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c841a0) 00:27:20.041 [2024-11-28 08:26:02.135853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.041 [2024-11-28 08:26:02.135864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:20.041 [2024-11-28 08:26:02.142151] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c841a0) 00:27:20.041 [2024-11-28 08:26:02.142174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.041 [2024-11-28 08:26:02.142186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:20.041 [2024-11-28 08:26:02.148409] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c841a0) 00:27:20.041 [2024-11-28 08:26:02.148432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.041 [2024-11-28 08:26:02.148443] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:20.041 [2024-11-28 08:26:02.154598] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c841a0) 00:27:20.041 [2024-11-28 08:26:02.154622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.041 [2024-11-28 08:26:02.154633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:20.042 [2024-11-28 08:26:02.160513] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c841a0) 00:27:20.042 [2024-11-28 08:26:02.160536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.042 [2024-11-28 08:26:02.160551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:20.042 [2024-11-28 08:26:02.166256] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c841a0) 00:27:20.042 [2024-11-28 08:26:02.166278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.042 [2024-11-28 08:26:02.166289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:20.042 [2024-11-28 08:26:02.171984] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c841a0) 00:27:20.042 [2024-11-28 08:26:02.172006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:27:20.042 [2024-11-28 08:26:02.172018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:20.042 [2024-11-28 08:26:02.177294] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c841a0) 00:27:20.042 [2024-11-28 08:26:02.177317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.042 [2024-11-28 08:26:02.177329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:20.042 [2024-11-28 08:26:02.182816] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c841a0) 00:27:20.042 [2024-11-28 08:26:02.182839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.042 [2024-11-28 08:26:02.182850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:20.042 [2024-11-28 08:26:02.188278] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c841a0) 00:27:20.042 [2024-11-28 08:26:02.188300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.042 [2024-11-28 08:26:02.188312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:20.042 [2024-11-28 08:26:02.193523] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c841a0) 00:27:20.042 [2024-11-28 08:26:02.193546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:5 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.042 [2024-11-28 08:26:02.193557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:20.042 [2024-11-28 08:26:02.199064] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c841a0) 00:27:20.042 [2024-11-28 08:26:02.199087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.042 [2024-11-28 08:26:02.199098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:20.042 [2024-11-28 08:26:02.204595] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c841a0) 00:27:20.042 [2024-11-28 08:26:02.204618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.042 [2024-11-28 08:26:02.204634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:20.042 [2024-11-28 08:26:02.210866] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c841a0) 00:27:20.042 [2024-11-28 08:26:02.210893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.042 [2024-11-28 08:26:02.210904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:20.042 [2024-11-28 08:26:02.217272] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c841a0) 00:27:20.042 [2024-11-28 08:26:02.217294] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.042 [2024-11-28 08:26:02.217305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:20.042 [2024-11-28 08:26:02.222926] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c841a0) 00:27:20.042 [2024-11-28 08:26:02.222956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.042 [2024-11-28 08:26:02.222969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:20.042 [2024-11-28 08:26:02.228749] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c841a0) 00:27:20.042 [2024-11-28 08:26:02.228772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.042 [2024-11-28 08:26:02.228783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:20.042 [2024-11-28 08:26:02.234279] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c841a0) 00:27:20.042 [2024-11-28 08:26:02.234302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.042 [2024-11-28 08:26:02.234313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:20.042 [2024-11-28 08:26:02.239858] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1c841a0) 00:27:20.042 [2024-11-28 08:26:02.239881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.042 [2024-11-28 08:26:02.239892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:20.042 [2024-11-28 08:26:02.245533] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c841a0) 00:27:20.042 [2024-11-28 08:26:02.245555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.042 [2024-11-28 08:26:02.245566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:20.042 [2024-11-28 08:26:02.251279] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c841a0) 00:27:20.042 [2024-11-28 08:26:02.251302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.042 [2024-11-28 08:26:02.251314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:20.042 [2024-11-28 08:26:02.257072] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c841a0) 00:27:20.042 [2024-11-28 08:26:02.257096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.042 [2024-11-28 08:26:02.257107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:20.042 [2024-11-28 08:26:02.262707] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c841a0) 00:27:20.042 [2024-11-28 08:26:02.262730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.042 [2024-11-28 08:26:02.262741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:20.042 [2024-11-28 08:26:02.268290] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c841a0) 00:27:20.042 [2024-11-28 08:26:02.268312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.042 [2024-11-28 08:26:02.268323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:20.042 [2024-11-28 08:26:02.274023] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c841a0) 00:27:20.042 [2024-11-28 08:26:02.274046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.042 [2024-11-28 08:26:02.274056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:20.042 [2024-11-28 08:26:02.279672] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c841a0) 00:27:20.042 [2024-11-28 08:26:02.279694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.042 [2024-11-28 08:26:02.279707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 
p:0 m:0 dnr:0
00:27:20.042 [2024-11-28 08:26:02.284557] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c841a0)
00:27:20.042 [2024-11-28 08:26:02.284580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:20.042 [2024-11-28 08:26:02.284592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:20.303 5023.00 IOPS, 627.88 MiB/s [2024-11-28T07:26:02.572Z]
[... the same three-line sequence (nvme_tcp.c:1365 data digest error on tqpair=(0x1c841a0), nvme_qpair.c:243 READ command print, nvme_qpair.c:474 COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion) repeats for further qid:1 READ completions with varying cid (0-14) and lba, from 08:26:02.287 through 08:26:02.709 ...]
00:27:20.567 [2024-11-28 08:26:02.715644] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c841a0)
00:27:20.567 [2024-11-28
08:26:02.715668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.567 [2024-11-28 08:26:02.715679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:20.567 [2024-11-28 08:26:02.723153] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c841a0) 00:27:20.567 [2024-11-28 08:26:02.723175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.567 [2024-11-28 08:26:02.723186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:20.567 [2024-11-28 08:26:02.731546] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c841a0) 00:27:20.567 [2024-11-28 08:26:02.731568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.567 [2024-11-28 08:26:02.731580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:20.567 [2024-11-28 08:26:02.739150] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c841a0) 00:27:20.567 [2024-11-28 08:26:02.739173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.567 [2024-11-28 08:26:02.739186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:20.567 [2024-11-28 08:26:02.746932] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: 
data digest error on tqpair=(0x1c841a0) 00:27:20.567 [2024-11-28 08:26:02.746960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.567 [2024-11-28 08:26:02.746972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:20.567 [2024-11-28 08:26:02.754911] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c841a0) 00:27:20.567 [2024-11-28 08:26:02.754933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.567 [2024-11-28 08:26:02.754944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:20.567 [2024-11-28 08:26:02.763309] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c841a0) 00:27:20.567 [2024-11-28 08:26:02.763332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.567 [2024-11-28 08:26:02.763343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:20.567 [2024-11-28 08:26:02.771167] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c841a0) 00:27:20.567 [2024-11-28 08:26:02.771190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.567 [2024-11-28 08:26:02.771201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:20.567 [2024-11-28 08:26:02.779652] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c841a0) 00:27:20.567 [2024-11-28 08:26:02.779675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.567 [2024-11-28 08:26:02.779687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:20.567 [2024-11-28 08:26:02.787686] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c841a0) 00:27:20.567 [2024-11-28 08:26:02.787708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.567 [2024-11-28 08:26:02.787719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:20.567 [2024-11-28 08:26:02.796481] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c841a0) 00:27:20.567 [2024-11-28 08:26:02.796504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.567 [2024-11-28 08:26:02.796516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:20.567 [2024-11-28 08:26:02.805166] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c841a0) 00:27:20.567 [2024-11-28 08:26:02.805189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.567 [2024-11-28 08:26:02.805201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 
sqhd:0022 p:0 m:0 dnr:0 00:27:20.567 [2024-11-28 08:26:02.813691] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c841a0) 00:27:20.567 [2024-11-28 08:26:02.813713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.567 [2024-11-28 08:26:02.813725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:20.567 [2024-11-28 08:26:02.822124] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c841a0) 00:27:20.567 [2024-11-28 08:26:02.822147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.567 [2024-11-28 08:26:02.822163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:20.567 [2024-11-28 08:26:02.828748] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c841a0) 00:27:20.567 [2024-11-28 08:26:02.828773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.567 [2024-11-28 08:26:02.828785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:20.827 [2024-11-28 08:26:02.834969] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c841a0) 00:27:20.827 [2024-11-28 08:26:02.834992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.827 [2024-11-28 08:26:02.835004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:20.828 [2024-11-28 08:26:02.840287] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c841a0) 00:27:20.828 [2024-11-28 08:26:02.840311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.828 [2024-11-28 08:26:02.840323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:20.828 [2024-11-28 08:26:02.846311] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c841a0) 00:27:20.828 [2024-11-28 08:26:02.846335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.828 [2024-11-28 08:26:02.846346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:20.828 [2024-11-28 08:26:02.852486] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c841a0) 00:27:20.828 [2024-11-28 08:26:02.852509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.828 [2024-11-28 08:26:02.852521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:20.828 [2024-11-28 08:26:02.857901] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c841a0) 00:27:20.828 [2024-11-28 08:26:02.857924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.828 [2024-11-28 08:26:02.857936] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:20.828 [2024-11-28 08:26:02.863575] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c841a0) 00:27:20.828 [2024-11-28 08:26:02.863598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.828 [2024-11-28 08:26:02.863608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:20.828 [2024-11-28 08:26:02.869286] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c841a0) 00:27:20.828 [2024-11-28 08:26:02.869308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.828 [2024-11-28 08:26:02.869320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:20.828 [2024-11-28 08:26:02.874972] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c841a0) 00:27:20.828 [2024-11-28 08:26:02.875000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.828 [2024-11-28 08:26:02.875011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:20.828 [2024-11-28 08:26:02.880571] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c841a0) 00:27:20.828 [2024-11-28 08:26:02.880592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:27:20.828 [2024-11-28 08:26:02.880604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:20.828 [2024-11-28 08:26:02.886148] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c841a0) 00:27:20.828 [2024-11-28 08:26:02.886170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.828 [2024-11-28 08:26:02.886181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:20.828 [2024-11-28 08:26:02.891985] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c841a0) 00:27:20.828 [2024-11-28 08:26:02.892008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.828 [2024-11-28 08:26:02.892019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:20.828 [2024-11-28 08:26:02.897775] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c841a0) 00:27:20.828 [2024-11-28 08:26:02.897797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.828 [2024-11-28 08:26:02.897808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:20.828 [2024-11-28 08:26:02.903639] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c841a0) 00:27:20.828 [2024-11-28 08:26:02.903662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:13 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.828 [2024-11-28 08:26:02.903673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:20.828 [2024-11-28 08:26:02.909502] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c841a0) 00:27:20.828 [2024-11-28 08:26:02.909525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.828 [2024-11-28 08:26:02.909535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:20.828 [2024-11-28 08:26:02.915383] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c841a0) 00:27:20.828 [2024-11-28 08:26:02.915406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.828 [2024-11-28 08:26:02.915417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:20.828 [2024-11-28 08:26:02.921075] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c841a0) 00:27:20.828 [2024-11-28 08:26:02.921098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.828 [2024-11-28 08:26:02.921110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:20.828 [2024-11-28 08:26:02.927078] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c841a0) 00:27:20.828 [2024-11-28 
08:26:02.927101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.828 [2024-11-28 08:26:02.927112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:20.828 [2024-11-28 08:26:02.932978] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c841a0) 00:27:20.828 [2024-11-28 08:26:02.933000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.828 [2024-11-28 08:26:02.933011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:20.828 [2024-11-28 08:26:02.938813] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c841a0) 00:27:20.828 [2024-11-28 08:26:02.938836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.828 [2024-11-28 08:26:02.938847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:20.828 [2024-11-28 08:26:02.944709] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c841a0) 00:27:20.828 [2024-11-28 08:26:02.944732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.828 [2024-11-28 08:26:02.944742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:20.828 [2024-11-28 08:26:02.950746] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x1c841a0) 00:27:20.828 [2024-11-28 08:26:02.950768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.828 [2024-11-28 08:26:02.950780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:20.828 [2024-11-28 08:26:02.956629] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c841a0) 00:27:20.828 [2024-11-28 08:26:02.956652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.828 [2024-11-28 08:26:02.956662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:20.828 [2024-11-28 08:26:02.962090] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c841a0) 00:27:20.828 [2024-11-28 08:26:02.962113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.828 [2024-11-28 08:26:02.962124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:20.828 [2024-11-28 08:26:02.967926] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c841a0) 00:27:20.828 [2024-11-28 08:26:02.967955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.829 [2024-11-28 08:26:02.967967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:20.829 [2024-11-28 08:26:02.973690] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c841a0) 00:27:20.829 [2024-11-28 08:26:02.973713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.829 [2024-11-28 08:26:02.973728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:20.829 [2024-11-28 08:26:02.979365] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c841a0) 00:27:20.829 [2024-11-28 08:26:02.979387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.829 [2024-11-28 08:26:02.979398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:20.829 [2024-11-28 08:26:02.985051] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c841a0) 00:27:20.829 [2024-11-28 08:26:02.985074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.829 [2024-11-28 08:26:02.985085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:20.829 [2024-11-28 08:26:02.991184] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c841a0) 00:27:20.829 [2024-11-28 08:26:02.991206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.829 [2024-11-28 08:26:02.991218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 
p:0 m:0 dnr:0 00:27:20.829 [2024-11-28 08:26:02.997090] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c841a0) 00:27:20.829 [2024-11-28 08:26:02.997111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.829 [2024-11-28 08:26:02.997122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:20.829 [2024-11-28 08:26:03.002827] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c841a0) 00:27:20.829 [2024-11-28 08:26:03.002851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.829 [2024-11-28 08:26:03.002862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:20.829 [2024-11-28 08:26:03.008560] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c841a0) 00:27:20.829 [2024-11-28 08:26:03.008583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.829 [2024-11-28 08:26:03.008593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:20.829 [2024-11-28 08:26:03.014491] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c841a0) 00:27:20.829 [2024-11-28 08:26:03.014514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.829 [2024-11-28 08:26:03.014525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:20.829 [2024-11-28 08:26:03.020356] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c841a0) 00:27:20.829 [2024-11-28 08:26:03.020379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.829 [2024-11-28 08:26:03.020390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:20.829 [2024-11-28 08:26:03.026285] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c841a0) 00:27:20.829 [2024-11-28 08:26:03.026308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.829 [2024-11-28 08:26:03.026319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:20.829 [2024-11-28 08:26:03.032050] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c841a0) 00:27:20.829 [2024-11-28 08:26:03.032073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.829 [2024-11-28 08:26:03.032084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:20.829 [2024-11-28 08:26:03.038112] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c841a0) 00:27:20.829 [2024-11-28 08:26:03.038136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.829 [2024-11-28 08:26:03.038146] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:20.829 [2024-11-28 08:26:03.044157] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c841a0) 00:27:20.829 [2024-11-28 08:26:03.044181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.829 [2024-11-28 08:26:03.044192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:20.829 [2024-11-28 08:26:03.049966] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c841a0) 00:27:20.829 [2024-11-28 08:26:03.049989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.829 [2024-11-28 08:26:03.050000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:20.829 [2024-11-28 08:26:03.055672] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c841a0) 00:27:20.829 [2024-11-28 08:26:03.055696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.829 [2024-11-28 08:26:03.055707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:20.829 [2024-11-28 08:26:03.061528] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c841a0) 00:27:20.829 [2024-11-28 08:26:03.061551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:27:20.829 [2024-11-28 08:26:03.061563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:20.829 [2024-11-28 08:26:03.067440] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c841a0) 00:27:20.829 [2024-11-28 08:26:03.067464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.829 [2024-11-28 08:26:03.067476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:20.829 [2024-11-28 08:26:03.073096] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c841a0) 00:27:20.829 [2024-11-28 08:26:03.073120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.829 [2024-11-28 08:26:03.073136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:20.829 [2024-11-28 08:26:03.078943] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c841a0) 00:27:20.829 [2024-11-28 08:26:03.078973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.829 [2024-11-28 08:26:03.078984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:20.829 [2024-11-28 08:26:03.084787] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c841a0) 00:27:20.829 [2024-11-28 08:26:03.084811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:2 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.829 [2024-11-28 08:26:03.084822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:20.829 [2024-11-28 08:26:03.090560] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c841a0) 00:27:20.829 [2024-11-28 08:26:03.090584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.829 [2024-11-28 08:26:03.090596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:21.091 [2024-11-28 08:26:03.097226] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c841a0) 00:27:21.091 [2024-11-28 08:26:03.097252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.091 [2024-11-28 08:26:03.097264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:21.091 [2024-11-28 08:26:03.104643] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c841a0) 00:27:21.091 [2024-11-28 08:26:03.104668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.091 [2024-11-28 08:26:03.104681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:21.091 [2024-11-28 08:26:03.112543] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c841a0) 00:27:21.091 [2024-11-28 
08:26:03.112568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.091 [2024-11-28 08:26:03.112580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:21.091 [2024-11-28 08:26:03.119299] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c841a0) 00:27:21.091 [2024-11-28 08:26:03.119322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.091 [2024-11-28 08:26:03.119333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:21.091 [2024-11-28 08:26:03.125393] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c841a0) 00:27:21.091 [2024-11-28 08:26:03.125416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.091 [2024-11-28 08:26:03.125428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:21.091 [2024-11-28 08:26:03.131381] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c841a0) 00:27:21.091 [2024-11-28 08:26:03.131411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.091 [2024-11-28 08:26:03.131422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:21.091 [2024-11-28 08:26:03.137202] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x1c841a0) 00:27:21.091 [2024-11-28 08:26:03.137225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.091 [2024-11-28 08:26:03.137237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:21.091 [2024-11-28 08:26:03.143065] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c841a0) 00:27:21.091 [2024-11-28 08:26:03.143089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.091 [2024-11-28 08:26:03.143100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:21.091 [2024-11-28 08:26:03.149254] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c841a0) 00:27:21.091 [2024-11-28 08:26:03.149276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.091 [2024-11-28 08:26:03.149287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:21.091 [2024-11-28 08:26:03.156031] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c841a0) 00:27:21.091 [2024-11-28 08:26:03.156054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.091 [2024-11-28 08:26:03.156066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:21.091 [2024-11-28 08:26:03.162808] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c841a0) 00:27:21.091 [2024-11-28 08:26:03.162831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.091 [2024-11-28 08:26:03.162841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:21.091 [2024-11-28 08:26:03.169459] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c841a0) 00:27:21.091 [2024-11-28 08:26:03.169482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.091 [2024-11-28 08:26:03.169493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:21.091 [2024-11-28 08:26:03.176271] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c841a0) 00:27:21.091 [2024-11-28 08:26:03.176295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.091 [2024-11-28 08:26:03.176306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:21.091 [2024-11-28 08:26:03.183045] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c841a0) 00:27:21.091 [2024-11-28 08:26:03.183079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.091 [2024-11-28 08:26:03.183091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 
sqhd:0062 p:0 m:0 dnr:0 00:27:21.091 [2024-11-28 08:26:03.189822] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c841a0) 00:27:21.091 [2024-11-28 08:26:03.189846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.091 [2024-11-28 08:26:03.189857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:21.091 [2024-11-28 08:26:03.196452] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c841a0) 00:27:21.092 [2024-11-28 08:26:03.196476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.092 [2024-11-28 08:26:03.196487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:21.092 [2024-11-28 08:26:03.203155] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c841a0) 00:27:21.092 [2024-11-28 08:26:03.203178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.092 [2024-11-28 08:26:03.203190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:21.092 [2024-11-28 08:26:03.209900] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c841a0) 00:27:21.092 [2024-11-28 08:26:03.209924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.092 [2024-11-28 08:26:03.209935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:21.092 [2024-11-28 08:26:03.215757] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c841a0) 00:27:21.092 [2024-11-28 08:26:03.215780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.092 [2024-11-28 08:26:03.215791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:21.092 [2024-11-28 08:26:03.221482] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c841a0) 00:27:21.092 [2024-11-28 08:26:03.221505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.092 [2024-11-28 08:26:03.221516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:21.092 [2024-11-28 08:26:03.227127] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c841a0) 00:27:21.092 [2024-11-28 08:26:03.227150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.092 [2024-11-28 08:26:03.227161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:21.092 [2024-11-28 08:26:03.232869] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c841a0) 00:27:21.092 [2024-11-28 08:26:03.232891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.092 [2024-11-28 
08:26:03.232902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:21.092 [2024-11-28 08:26:03.238653] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c841a0) 00:27:21.092 [2024-11-28 08:26:03.238677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.092 [2024-11-28 08:26:03.238693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:21.092 [2024-11-28 08:26:03.244237] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c841a0) 00:27:21.092 [2024-11-28 08:26:03.244260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.092 [2024-11-28 08:26:03.244271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:21.092 [2024-11-28 08:26:03.249924] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c841a0) 00:27:21.092 [2024-11-28 08:26:03.249954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.092 [2024-11-28 08:26:03.249967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:21.092 [2024-11-28 08:26:03.255708] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c841a0) 00:27:21.092 [2024-11-28 08:26:03.255731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:10400 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.092 [2024-11-28 08:26:03.255743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:21.092 [2024-11-28 08:26:03.261405] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c841a0) 00:27:21.092 [2024-11-28 08:26:03.261429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.092 [2024-11-28 08:26:03.261441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:21.092 [2024-11-28 08:26:03.266910] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c841a0) 00:27:21.092 [2024-11-28 08:26:03.266934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.092 [2024-11-28 08:26:03.266945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:21.092 [2024-11-28 08:26:03.272447] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c841a0) 00:27:21.092 [2024-11-28 08:26:03.272470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.092 [2024-11-28 08:26:03.272482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:21.092 [2024-11-28 08:26:03.278002] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c841a0) 00:27:21.092 [2024-11-28 08:26:03.278025] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.092 [2024-11-28 08:26:03.278037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:21.092 [2024-11-28 08:26:03.283810] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c841a0) 00:27:21.092 [2024-11-28 08:26:03.283833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.092 [2024-11-28 08:26:03.283844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:21.092 [2024-11-28 08:26:03.289731] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c841a0) 00:27:21.092 [2024-11-28 08:26:03.289758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.092 [2024-11-28 08:26:03.289770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:21.092 [2024-11-28 08:26:03.295325] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c841a0) 00:27:21.092 [2024-11-28 08:26:03.295349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.092 [2024-11-28 08:26:03.295360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:21.092 [2024-11-28 08:26:03.301110] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1c841a0) 00:27:21.092 [2024-11-28 08:26:03.301134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.092 [2024-11-28 08:26:03.301146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:21.092 [2024-11-28 08:26:03.307057] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c841a0) 00:27:21.092 [2024-11-28 08:26:03.307081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.092 [2024-11-28 08:26:03.307093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:21.092 5099.50 IOPS, 637.44 MiB/s [2024-11-28T07:26:03.361Z] [2024-11-28 08:26:03.314220] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c841a0) 00:27:21.092 [2024-11-28 08:26:03.314244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.092 [2024-11-28 08:26:03.314255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:21.092 00:27:21.092 Latency(us) 00:27:21.092 [2024-11-28T07:26:03.361Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:21.092 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:27:21.092 nvme0n1 : 2.00 5101.51 637.69 0.00 0.00 3132.97 623.30 9289.02 00:27:21.092 [2024-11-28T07:26:03.361Z] =================================================================================================================== 00:27:21.092 [2024-11-28T07:26:03.361Z] Total : 5101.51 637.69 
0.00 0.00 3132.97 623.30 9289.02 00:27:21.092 { 00:27:21.092 "results": [ 00:27:21.092 { 00:27:21.092 "job": "nvme0n1", 00:27:21.092 "core_mask": "0x2", 00:27:21.092 "workload": "randread", 00:27:21.092 "status": "finished", 00:27:21.092 "queue_depth": 16, 00:27:21.092 "io_size": 131072, 00:27:21.092 "runtime": 2.002347, 00:27:21.092 "iops": 5101.513374055546, 00:27:21.092 "mibps": 637.6891717569432, 00:27:21.092 "io_failed": 0, 00:27:21.092 "io_timeout": 0, 00:27:21.092 "avg_latency_us": 3132.9697214241633, 00:27:21.092 "min_latency_us": 623.304347826087, 00:27:21.092 "max_latency_us": 9289.015652173914 00:27:21.092 } 00:27:21.092 ], 00:27:21.092 "core_count": 1 00:27:21.092 } 00:27:21.092 08:26:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:27:21.092 08:26:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:27:21.092 08:26:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:27:21.092 | .driver_specific 00:27:21.092 | .nvme_error 00:27:21.092 | .status_code 00:27:21.092 | .command_transient_transport_error' 00:27:21.092 08:26:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:27:21.352 08:26:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 330 > 0 )) 00:27:21.352 08:26:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1507929 00:27:21.352 08:26:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 1507929 ']' 00:27:21.352 08:26:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 1507929 00:27:21.352 08:26:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 
-- # uname 00:27:21.352 08:26:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:21.352 08:26:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1507929 00:27:21.352 08:26:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:27:21.352 08:26:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:27:21.352 08:26:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1507929' 00:27:21.352 killing process with pid 1507929 00:27:21.352 08:26:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 1507929 00:27:21.352 Received shutdown signal, test time was about 2.000000 seconds 00:27:21.352 00:27:21.352 Latency(us) 00:27:21.352 [2024-11-28T07:26:03.621Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:21.352 [2024-11-28T07:26:03.621Z] =================================================================================================================== 00:27:21.352 [2024-11-28T07:26:03.621Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:21.352 08:26:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 1507929 00:27:21.612 08:26:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128 00:27:21.612 08:26:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:27:21.612 08:26:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:27:21.612 08:26:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:27:21.612 08:26:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
host/digest.sh@56 -- # qd=128 00:27:21.612 08:26:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1508541 00:27:21.612 08:26:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1508541 /var/tmp/bperf.sock 00:27:21.612 08:26:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:27:21.612 08:26:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 1508541 ']' 00:27:21.612 08:26:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:27:21.612 08:26:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:21.612 08:26:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:27:21.612 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:27:21.612 08:26:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:21.612 08:26:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:21.613 [2024-11-28 08:26:03.795282] Starting SPDK v25.01-pre git sha1 27aaaa748 / DPDK 24.03.0 initialization... 
00:27:21.613 [2024-11-28 08:26:03.795332] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1508541 ] 00:27:21.613 [2024-11-28 08:26:03.857748] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:21.873 [2024-11-28 08:26:03.896265] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:21.873 08:26:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:21.873 08:26:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:27:21.873 08:26:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:27:21.873 08:26:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:27:22.132 08:26:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:27:22.132 08:26:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:22.132 08:26:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:22.132 08:26:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:22.132 08:26:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:22.132 08:26:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:22.392 nvme0n1 00:27:22.392 08:26:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:27:22.392 08:26:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:22.392 08:26:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:22.392 08:26:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:22.392 08:26:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:27:22.392 08:26:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:27:22.392 Running I/O for 2 seconds... 
00:27:22.392 [2024-11-28 08:26:04.554840] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43180) with pdu=0x200016efda78 00:27:22.392 [2024-11-28 08:26:04.555023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:24645 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:22.392 [2024-11-28 08:26:04.555055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:22.392 [2024-11-28 08:26:04.564606] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43180) with pdu=0x200016efda78 00:27:22.392 [2024-11-28 08:26:04.564766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:22164 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:22.392 [2024-11-28 08:26:04.564787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:22.392 [2024-11-28 08:26:04.574309] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43180) with pdu=0x200016efda78 00:27:22.392 [2024-11-28 08:26:04.574467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:22485 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:22.392 [2024-11-28 08:26:04.574487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:22.392 [2024-11-28 08:26:04.583971] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43180) with pdu=0x200016efda78 00:27:22.392 [2024-11-28 08:26:04.584128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:2444 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:22.392 [2024-11-28 08:26:04.584148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:113 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:22.392 [2024-11-28 08:26:04.593645] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43180) with pdu=0x200016efda78 00:27:22.392 [2024-11-28 08:26:04.593799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:2709 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:22.392 [2024-11-28 08:26:04.593819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:22.392 [2024-11-28 08:26:04.603317] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43180) with pdu=0x200016efda78 00:27:22.392 [2024-11-28 08:26:04.603473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:4213 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:22.392 [2024-11-28 08:26:04.603493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:22.392 [2024-11-28 08:26:04.612960] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43180) with pdu=0x200016efda78 00:27:22.392 [2024-11-28 08:26:04.613118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:24362 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:22.392 [2024-11-28 08:26:04.613137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:22.392 [2024-11-28 08:26:04.622616] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43180) with pdu=0x200016efda78 00:27:22.392 [2024-11-28 08:26:04.622770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:9818 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:22.392 [2024-11-28 08:26:04.622789] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:22.392 [2024-11-28 08:26:04.632259] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43180) with pdu=0x200016efda78 00:27:22.393 [2024-11-28 08:26:04.632416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:3511 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:22.393 [2024-11-28 08:26:04.632436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:22.393 [2024-11-28 08:26:04.641937] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43180) with pdu=0x200016efda78 00:27:22.393 [2024-11-28 08:26:04.642099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:13697 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:22.393 [2024-11-28 08:26:04.642118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:22.393 [2024-11-28 08:26:04.651575] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43180) with pdu=0x200016efda78 00:27:22.393 [2024-11-28 08:26:04.651730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:12044 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:22.393 [2024-11-28 08:26:04.651749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:22.653 [2024-11-28 08:26:04.661546] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43180) with pdu=0x200016efda78 00:27:22.653 [2024-11-28 08:26:04.661703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:7188 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:27:22.653 [2024-11-28 08:26:04.661722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:22.653 [2024-11-28 08:26:04.671235] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43180) with pdu=0x200016efda78 00:27:22.653 [2024-11-28 08:26:04.671387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:20187 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:22.653 [2024-11-28 08:26:04.671406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:22.653 [2024-11-28 08:26:04.680892] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43180) with pdu=0x200016efda78 00:27:22.653 [2024-11-28 08:26:04.681054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:18390 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:22.653 [2024-11-28 08:26:04.681074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:22.653 [2024-11-28 08:26:04.690527] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43180) with pdu=0x200016efda78 00:27:22.653 [2024-11-28 08:26:04.690683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:21731 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:22.653 [2024-11-28 08:26:04.690701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:22.653 [2024-11-28 08:26:04.700344] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43180) with pdu=0x200016efda78 00:27:22.653 [2024-11-28 08:26:04.700500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 
lba:9424 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:22.653 [2024-11-28 08:26:04.700520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:22.653 [2024-11-28 08:26:04.709994] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43180) with pdu=0x200016efda78 00:27:22.653 [2024-11-28 08:26:04.710150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:24867 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:22.653 [2024-11-28 08:26:04.710169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:22.653 [2024-11-28 08:26:04.719670] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43180) with pdu=0x200016efda78 00:27:22.653 [2024-11-28 08:26:04.719827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:24322 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:22.653 [2024-11-28 08:26:04.719846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:22.653 [2024-11-28 08:26:04.729305] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43180) with pdu=0x200016efda78 00:27:22.653 [2024-11-28 08:26:04.729461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:8969 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:22.653 [2024-11-28 08:26:04.729480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:22.653 [2024-11-28 08:26:04.738959] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43180) with pdu=0x200016efda78 00:27:22.653 [2024-11-28 08:26:04.739114] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:786 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:22.653 [2024-11-28 08:26:04.739133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:22.653 [2024-11-28 08:26:04.748588] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43180) with pdu=0x200016efda78 00:27:22.653 [2024-11-28 08:26:04.748745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:23458 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:22.653 [2024-11-28 08:26:04.748768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:22.653 [2024-11-28 08:26:04.758230] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43180) with pdu=0x200016efda78 00:27:22.653 [2024-11-28 08:26:04.758386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:2561 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:22.653 [2024-11-28 08:26:04.758405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:22.653 [2024-11-28 08:26:04.767864] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43180) with pdu=0x200016efda78 00:27:22.653 [2024-11-28 08:26:04.768027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:1737 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:22.653 [2024-11-28 08:26:04.768046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:22.653 [2024-11-28 08:26:04.777499] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43180) with pdu=0x200016efda78 
00:27:22.653 [2024-11-28 08:26:04.777654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:10193 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:22.653 [2024-11-28 08:26:04.777673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:22.653 [2024-11-28 08:26:04.787021] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43180) with pdu=0x200016efda78 00:27:22.653 [2024-11-28 08:26:04.787173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:20311 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:22.653 [2024-11-28 08:26:04.787192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:22.653 [2024-11-28 08:26:04.796650] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43180) with pdu=0x200016efda78 00:27:22.653 [2024-11-28 08:26:04.796803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:19777 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:22.653 [2024-11-28 08:26:04.796822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:22.653 [2024-11-28 08:26:04.806326] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43180) with pdu=0x200016efda78 00:27:22.654 [2024-11-28 08:26:04.806480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:2649 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:22.654 [2024-11-28 08:26:04.806500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:22.654 [2024-11-28 08:26:04.816217] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0xd43180) with pdu=0x200016efda78 00:27:22.654 [2024-11-28 08:26:04.816370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:14450 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:22.654 [2024-11-28 08:26:04.816390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:22.654 [2024-11-28 08:26:04.825853] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43180) with pdu=0x200016efda78 00:27:22.654 [2024-11-28 08:26:04.826017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:15670 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:22.654 [2024-11-28 08:26:04.826036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:22.654 [2024-11-28 08:26:04.835501] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43180) with pdu=0x200016efda78 00:27:22.654 [2024-11-28 08:26:04.835658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:12448 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:22.654 [2024-11-28 08:26:04.835678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:22.654 [2024-11-28 08:26:04.845152] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43180) with pdu=0x200016efda78 00:27:22.654 [2024-11-28 08:26:04.845308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:3814 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:22.654 [2024-11-28 08:26:04.845327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:22.654 [2024-11-28 08:26:04.854816] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43180) with pdu=0x200016efda78 00:27:22.654 [2024-11-28 08:26:04.854978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:2250 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:22.654 [2024-11-28 08:26:04.854997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:22.654 [2024-11-28 08:26:04.864457] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43180) with pdu=0x200016efda78 00:27:22.654 [2024-11-28 08:26:04.864611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:3103 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:22.654 [2024-11-28 08:26:04.864630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:22.654 [2024-11-28 08:26:04.874135] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43180) with pdu=0x200016efda78 00:27:22.654 [2024-11-28 08:26:04.874293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:12651 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:22.654 [2024-11-28 08:26:04.874311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:22.654 [2024-11-28 08:26:04.883773] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43180) with pdu=0x200016efda78 00:27:22.654 [2024-11-28 08:26:04.883930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:22152 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:22.654 [2024-11-28 08:26:04.883956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:006f p:0 m:0 
dnr:0 00:27:22.654 [2024-11-28 08:26:04.893417] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43180) with pdu=0x200016efda78 00:27:22.654 [2024-11-28 08:26:04.893573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:18720 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:22.654 [2024-11-28 08:26:04.893592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:22.654 [2024-11-28 08:26:04.903070] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43180) with pdu=0x200016efda78 00:27:22.654 [2024-11-28 08:26:04.903225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:14053 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:22.654 [2024-11-28 08:26:04.903243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:22.654 [2024-11-28 08:26:04.912738] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43180) with pdu=0x200016efda78 00:27:22.654 [2024-11-28 08:26:04.912892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:13408 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:22.654 [2024-11-28 08:26:04.912911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:22.914 [2024-11-28 08:26:04.922695] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43180) with pdu=0x200016efda78 00:27:22.914 [2024-11-28 08:26:04.922853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:24137 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:22.914 [2024-11-28 08:26:04.922872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:22.914 [2024-11-28 08:26:04.932396] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43180) with pdu=0x200016efda78 00:27:22.914 [2024-11-28 08:26:04.932553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:40 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:22.914 [2024-11-28 08:26:04.932572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:22.914 [2024-11-28 08:26:04.942019] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43180) with pdu=0x200016efda78 00:27:22.914 [2024-11-28 08:26:04.942173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:14919 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:22.914 [2024-11-28 08:26:04.942192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:22.914 [2024-11-28 08:26:04.951639] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43180) with pdu=0x200016efda78 00:27:22.914 [2024-11-28 08:26:04.951791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:17324 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:22.914 [2024-11-28 08:26:04.951810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:22.914 [2024-11-28 08:26:04.961266] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43180) with pdu=0x200016efda78 00:27:22.914 [2024-11-28 08:26:04.961421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:13345 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:22.914 [2024-11-28 08:26:04.961439] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:22.914 [2024-11-28 08:26:04.970898] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43180) with pdu=0x200016efda78 00:27:22.914 [2024-11-28 08:26:04.971057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:16648 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:22.914 [2024-11-28 08:26:04.971077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:22.914 [2024-11-28 08:26:04.980531] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43180) with pdu=0x200016efda78 00:27:22.914 [2024-11-28 08:26:04.980684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:19192 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:22.914 [2024-11-28 08:26:04.980703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:22.914 [2024-11-28 08:26:04.990168] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43180) with pdu=0x200016efda78 00:27:22.914 [2024-11-28 08:26:04.990321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:4333 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:22.914 [2024-11-28 08:26:04.990339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:22.914 [2024-11-28 08:26:04.999787] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43180) with pdu=0x200016efda78 00:27:22.914 [2024-11-28 08:26:04.999940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:23803 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:27:22.914 [2024-11-28 08:26:04.999969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:22.914 [2024-11-28 08:26:05.009424] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43180) with pdu=0x200016efda78 00:27:22.914 [2024-11-28 08:26:05.009578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:24918 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:22.914 [2024-11-28 08:26:05.009596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:22.914 [2024-11-28 08:26:05.019049] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43180) with pdu=0x200016efda78 00:27:22.914 [2024-11-28 08:26:05.019203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:12185 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:22.914 [2024-11-28 08:26:05.019221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:22.915 [2024-11-28 08:26:05.028645] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43180) with pdu=0x200016efda78 00:27:22.915 [2024-11-28 08:26:05.028800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:18224 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:22.915 [2024-11-28 08:26:05.028819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:22.915 [2024-11-28 08:26:05.038309] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43180) with pdu=0x200016efda78 00:27:22.915 [2024-11-28 08:26:05.038465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 
lba:4784 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:22.915 [2024-11-28 08:26:05.038485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:22.915 [2024-11-28 08:26:05.048144] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43180) with pdu=0x200016efda78 00:27:22.915 [2024-11-28 08:26:05.048298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:7957 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:22.915 [2024-11-28 08:26:05.048318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:22.915 [2024-11-28 08:26:05.057784] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43180) with pdu=0x200016efda78 00:27:22.915 [2024-11-28 08:26:05.057938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:23229 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:22.915 [2024-11-28 08:26:05.057962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:22.915 [2024-11-28 08:26:05.067645] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43180) with pdu=0x200016efda78 00:27:22.915 [2024-11-28 08:26:05.067798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:22771 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:22.915 [2024-11-28 08:26:05.067817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:22.915 [2024-11-28 08:26:05.077258] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43180) with pdu=0x200016efda78 00:27:22.915 [2024-11-28 08:26:05.077412] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:1546 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:22.915 [2024-11-28 08:26:05.077431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:22.915 [2024-11-28 08:26:05.086865] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43180) with pdu=0x200016efda78 00:27:22.915 [2024-11-28 08:26:05.087035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:19135 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:22.915 [2024-11-28 08:26:05.087054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:22.915 [2024-11-28 08:26:05.096540] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43180) with pdu=0x200016efda78 00:27:22.915 [2024-11-28 08:26:05.096693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:7396 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:22.915 [2024-11-28 08:26:05.096711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:22.915 [2024-11-28 08:26:05.106149] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43180) with pdu=0x200016efda78 00:27:22.915 [2024-11-28 08:26:05.106305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:1572 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:22.915 [2024-11-28 08:26:05.106324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:22.915 [2024-11-28 08:26:05.115786] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43180) with pdu=0x200016efda78 
00:27:22.915 [2024-11-28 08:26:05.115942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:10434 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:22.915 [2024-11-28 08:26:05.115966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:22.915 [2024-11-28 08:26:05.125418] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43180) with pdu=0x200016efda78 00:27:22.915 [2024-11-28 08:26:05.125571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:5430 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:22.915 [2024-11-28 08:26:05.125590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:22.915 [2024-11-28 08:26:05.135045] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43180) with pdu=0x200016efda78 00:27:22.915 [2024-11-28 08:26:05.135199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:2687 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:22.915 [2024-11-28 08:26:05.135218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:22.915 [2024-11-28 08:26:05.144655] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43180) with pdu=0x200016efda78 00:27:22.915 [2024-11-28 08:26:05.144808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:25118 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:22.915 [2024-11-28 08:26:05.144828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:22.915 [2024-11-28 08:26:05.154281] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0xd43180) with pdu=0x200016efda78 00:27:22.915 [2024-11-28 08:26:05.154427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:6018 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:22.915 [2024-11-28 08:26:05.154445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:22.915 [2024-11-28 08:26:05.163875] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43180) with pdu=0x200016efda78 00:27:22.915 [2024-11-28 08:26:05.164038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:464 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:22.915 [2024-11-28 08:26:05.164058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:22.915 [2024-11-28 08:26:05.173495] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43180) with pdu=0x200016efda78 00:27:22.915 [2024-11-28 08:26:05.173650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:4593 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:22.915 [2024-11-28 08:26:05.173668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:23.174 [2024-11-28 08:26:05.183426] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43180) with pdu=0x200016efda78 00:27:23.174 [2024-11-28 08:26:05.183583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:3506 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:23.174 [2024-11-28 08:26:05.183602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:23.174 [2024-11-28 08:26:05.193139] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43180) with pdu=0x200016efda78 00:27:23.174 [2024-11-28 08:26:05.193294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:3042 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:23.174 [2024-11-28 08:26:05.193313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:23.174 [2024-11-28 08:26:05.202755] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43180) with pdu=0x200016efda78 00:27:23.174 [2024-11-28 08:26:05.202906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:5203 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:23.174 [2024-11-28 08:26:05.202925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:23.174 [2024-11-28 08:26:05.212378] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43180) with pdu=0x200016efda78 00:27:23.174 [2024-11-28 08:26:05.212531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:23947 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:23.174 [2024-11-28 08:26:05.212549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:23.174 [2024-11-28 08:26:05.222007] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43180) with pdu=0x200016efda78 00:27:23.174 [2024-11-28 08:26:05.222162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:13594 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:23.174 [2024-11-28 08:26:05.222181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:006f p:0 m:0 
dnr:0 00:27:23.174 [2024-11-28 08:26:05.231613] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43180) with pdu=0x200016efda78 00:27:23.174 [2024-11-28 08:26:05.231762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:1646 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:23.174 [2024-11-28 08:26:05.231779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:23.174 [2024-11-28 08:26:05.241179] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43180) with pdu=0x200016efda78 00:27:23.174 [2024-11-28 08:26:05.241336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:19189 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:23.174 [2024-11-28 08:26:05.241354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:23.174 [2024-11-28 08:26:05.250794] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43180) with pdu=0x200016efda78 00:27:23.174 [2024-11-28 08:26:05.250954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:8590 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:23.174 [2024-11-28 08:26:05.250976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:23.174 [2024-11-28 08:26:05.260358] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43180) with pdu=0x200016efda78 00:27:23.174 [2024-11-28 08:26:05.260512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:5639 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:23.174 [2024-11-28 08:26:05.260530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:114 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:23.174 [2024-11-28 08:26:05.269972] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43180) with pdu=0x200016efda78 00:27:23.175 [2024-11-28 08:26:05.270128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:2206 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:23.175 [2024-11-28 08:26:05.270146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:23.175 [2024-11-28 08:26:05.279595] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43180) with pdu=0x200016efda78 00:27:23.175 [2024-11-28 08:26:05.279749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:18523 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:23.175 [2024-11-28 08:26:05.279767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:23.175 [2024-11-28 08:26:05.289238] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43180) with pdu=0x200016efda78 00:27:23.175 [2024-11-28 08:26:05.289393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:15916 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:23.175 [2024-11-28 08:26:05.289411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:23.175 [2024-11-28 08:26:05.298855] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43180) with pdu=0x200016efda78 00:27:23.175 [2024-11-28 08:26:05.299014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:1428 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:23.175 [2024-11-28 08:26:05.299033] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:23.175 [2024-11-28 08:26:05.308475] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43180) with pdu=0x200016efda78 00:27:23.175 [2024-11-28 08:26:05.308624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:18138 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:23.175 [2024-11-28 08:26:05.308642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:23.175 [2024-11-28 08:26:05.318326] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43180) with pdu=0x200016efda78 00:27:23.175 [2024-11-28 08:26:05.318481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:23962 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:23.175 [2024-11-28 08:26:05.318499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:23.175 [2024-11-28 08:26:05.327967] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43180) with pdu=0x200016efda78 00:27:23.175 [2024-11-28 08:26:05.328120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:14726 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:23.175 [2024-11-28 08:26:05.328138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:23.175 [2024-11-28 08:26:05.337578] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43180) with pdu=0x200016efda78 00:27:23.175 [2024-11-28 08:26:05.337739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:11855 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:27:23.175 [2024-11-28 08:26:05.337758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:23.175 [2024-11-28 08:26:05.347200] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43180) with pdu=0x200016efda78 00:27:23.175 [2024-11-28 08:26:05.347352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:25215 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:23.175 [2024-11-28 08:26:05.347370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:23.175 [2024-11-28 08:26:05.356828] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43180) with pdu=0x200016efda78 00:27:23.175 [2024-11-28 08:26:05.356987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:7224 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:23.175 [2024-11-28 08:26:05.357006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:23.175 [2024-11-28 08:26:05.366456] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43180) with pdu=0x200016efda78 00:27:23.175 [2024-11-28 08:26:05.366612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:25007 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:23.175 [2024-11-28 08:26:05.366630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:23.175 [2024-11-28 08:26:05.376076] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43180) with pdu=0x200016efda78 00:27:23.175 [2024-11-28 08:26:05.376231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 
lba:23093 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:23.175 [2024-11-28 08:26:05.376249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:23.175 [2024-11-28 08:26:05.385702] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43180) with pdu=0x200016efda78 00:27:23.175 [2024-11-28 08:26:05.385856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:3027 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:23.175 [2024-11-28 08:26:05.385875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:23.175 [2024-11-28 08:26:05.395335] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43180) with pdu=0x200016efda78 00:27:23.175 [2024-11-28 08:26:05.395490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:43 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:23.175 [2024-11-28 08:26:05.395509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:23.175 [2024-11-28 08:26:05.404976] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43180) with pdu=0x200016efda78 00:27:23.175 [2024-11-28 08:26:05.405129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:21159 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:23.175 [2024-11-28 08:26:05.405149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:23.175 [2024-11-28 08:26:05.414593] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43180) with pdu=0x200016efda78 00:27:23.175 [2024-11-28 08:26:05.414748] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:17590 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:23.175 [2024-11-28 08:26:05.414767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:23.175 [2024-11-28 08:26:05.424261] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43180) with pdu=0x200016efda78 00:27:23.175 [2024-11-28 08:26:05.424413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:19474 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:23.175 [2024-11-28 08:26:05.424432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:23.175 [2024-11-28 08:26:05.433877] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43180) with pdu=0x200016efda78 00:27:23.175 [2024-11-28 08:26:05.434039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:19928 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:23.175 [2024-11-28 08:26:05.434058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:23.435 [2024-11-28 08:26:05.443759] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43180) with pdu=0x200016efda78 00:27:23.435 [2024-11-28 08:26:05.443915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:17503 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:23.435 [2024-11-28 08:26:05.443934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:23.435 [2024-11-28 08:26:05.453534] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43180) with pdu=0x200016efda78 
00:27:23.435 [2024-11-28 08:26:05.453689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:18604 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:23.435 [2024-11-28 08:26:05.453708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:23.435 [2024-11-28 08:26:05.463147] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43180) with pdu=0x200016efda78 00:27:23.435 [2024-11-28 08:26:05.463301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:868 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:23.435 [2024-11-28 08:26:05.463319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:23.435 [2024-11-28 08:26:05.472783] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43180) with pdu=0x200016efda78 00:27:23.435 [2024-11-28 08:26:05.472938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:7189 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:23.435 [2024-11-28 08:26:05.472962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:23.435 [2024-11-28 08:26:05.482393] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43180) with pdu=0x200016efda78 00:27:23.435 [2024-11-28 08:26:05.482548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:20665 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:23.435 [2024-11-28 08:26:05.482566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:23.435 [2024-11-28 08:26:05.492052] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0xd43180) with pdu=0x200016efda78 00:27:23.435 [2024-11-28 08:26:05.492206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:8807 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:23.435 [2024-11-28 08:26:05.492225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:23.435 [2024-11-28 08:26:05.501649] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43180) with pdu=0x200016efda78 00:27:23.435 [2024-11-28 08:26:05.501802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:13028 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:23.435 [2024-11-28 08:26:05.501825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:23.435 [2024-11-28 08:26:05.511290] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43180) with pdu=0x200016efda78 00:27:23.435 [2024-11-28 08:26:05.511442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:25126 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:23.435 [2024-11-28 08:26:05.511461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:23.435 [2024-11-28 08:26:05.520955] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43180) with pdu=0x200016efda78 00:27:23.435 [2024-11-28 08:26:05.521111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:13704 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:23.435 [2024-11-28 08:26:05.521130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:23.435 [2024-11-28 08:26:05.530601] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43180) with pdu=0x200016efda78 00:27:23.435 [2024-11-28 08:26:05.530754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:9636 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:23.435 [2024-11-28 08:26:05.530772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:23.435 [2024-11-28 08:26:05.540218] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43180) with pdu=0x200016efda78 00:27:23.435 [2024-11-28 08:26:05.540372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:21146 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:23.435 [2024-11-28 08:26:05.540390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:23.435 26374.00 IOPS, 103.02 MiB/s [2024-11-28T07:26:05.704Z] [2024-11-28 08:26:05.549832] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43180) with pdu=0x200016efda78 00:27:23.435 [2024-11-28 08:26:05.550003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:18881 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:23.435 [2024-11-28 08:26:05.550022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:23.435 [2024-11-28 08:26:05.559516] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43180) with pdu=0x200016efda78 00:27:23.435 [2024-11-28 08:26:05.559673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2700 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:23.436 [2024-11-28 08:26:05.559695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:1 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:23.436 [2024-11-28 08:26:05.569352] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43180) with pdu=0x200016efda78 00:27:23.436 [2024-11-28 08:26:05.569509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1936 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:23.436 [2024-11-28 08:26:05.569528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:23.436 [2024-11-28 08:26:05.578979] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43180) with pdu=0x200016efda78 00:27:23.436 [2024-11-28 08:26:05.579135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:7040 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:23.436 [2024-11-28 08:26:05.579154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:23.436 [2024-11-28 08:26:05.588576] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43180) with pdu=0x200016efda78 00:27:23.436 [2024-11-28 08:26:05.588732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15628 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:23.436 [2024-11-28 08:26:05.588751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:23.436 [2024-11-28 08:26:05.598190] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43180) with pdu=0x200016efda78 00:27:23.436 [2024-11-28 08:26:05.598344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4983 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:23.436 [2024-11-28 08:26:05.598364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:23.436 [2024-11-28 08:26:05.607793] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43180) with pdu=0x200016efda78 00:27:23.436 [2024-11-28 08:26:05.607953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:4297 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:23.436 [2024-11-28 08:26:05.607972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:23.436 [2024-11-28 08:26:05.617441] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43180) with pdu=0x200016efda78 00:27:23.436 [2024-11-28 08:26:05.617596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22912 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:23.436 [2024-11-28 08:26:05.617615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:23.436 [2024-11-28 08:26:05.627062] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43180) with pdu=0x200016efda78 00:27:23.436 [2024-11-28 08:26:05.627218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12445 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:23.436 [2024-11-28 08:26:05.627237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:23.436 [2024-11-28 08:26:05.636659] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43180) with pdu=0x200016efda78 00:27:23.436 [2024-11-28 08:26:05.636815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:18922 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:23.436 [2024-11-28 08:26:05.636835] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:23.436 [2024-11-28 08:26:05.646269] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43180) with pdu=0x200016efda78 00:27:23.436 [2024-11-28 08:26:05.646425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4956 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:23.436 [2024-11-28 08:26:05.646444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:23.436 [2024-11-28 08:26:05.655830] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43180) with pdu=0x200016efda78 00:27:23.436 [2024-11-28 08:26:05.655990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15731 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:23.436 [2024-11-28 08:26:05.656009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:23.436 [2024-11-28 08:26:05.665445] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43180) with pdu=0x200016efda78 00:27:23.436 [2024-11-28 08:26:05.665600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:5498 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:23.436 [2024-11-28 08:26:05.665618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:23.436 [2024-11-28 08:26:05.675048] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43180) with pdu=0x200016efda78 00:27:23.436 [2024-11-28 08:26:05.675204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20344 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:23.436 [2024-11-28 
08:26:05.675223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:23.436 [2024-11-28 08:26:05.684672] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43180) with pdu=0x200016efda78 00:27:23.436 [2024-11-28 08:26:05.684827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20679 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:23.436 [2024-11-28 08:26:05.684845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:23.436 [2024-11-28 08:26:05.694295] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43180) with pdu=0x200016efda78 00:27:23.436 [2024-11-28 08:26:05.694449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:11043 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:23.436 [2024-11-28 08:26:05.694468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:23.696 [2024-11-28 08:26:05.704213] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43180) with pdu=0x200016efda78 00:27:23.696 [2024-11-28 08:26:05.704370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5910 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:23.696 [2024-11-28 08:26:05.704389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:23.696 [2024-11-28 08:26:05.714008] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43180) with pdu=0x200016efda78 00:27:23.696 [2024-11-28 08:26:05.714167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16288 len:1 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:27:23.696 [2024-11-28 08:26:05.714186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:23.696 [2024-11-28 08:26:05.723630] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43180) with pdu=0x200016efda78 00:27:23.696 [2024-11-28 08:26:05.723783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:16374 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:23.696 [2024-11-28 08:26:05.723802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:23.696 [2024-11-28 08:26:05.733238] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43180) with pdu=0x200016efda78 00:27:23.696 [2024-11-28 08:26:05.733394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19272 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:23.696 [2024-11-28 08:26:05.733413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:23.696 [2024-11-28 08:26:05.743019] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43180) with pdu=0x200016efda78 00:27:23.696 [2024-11-28 08:26:05.743176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6489 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:23.696 [2024-11-28 08:26:05.743195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:23.696 [2024-11-28 08:26:05.752628] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43180) with pdu=0x200016efda78 00:27:23.696 [2024-11-28 08:26:05.752783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:18901 
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:23.696 [2024-11-28 08:26:05.752805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:23.696 [2024-11-28 08:26:05.762265] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43180) with pdu=0x200016efda78 00:27:23.696 [2024-11-28 08:26:05.762421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23369 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:23.696 [2024-11-28 08:26:05.762440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:23.696 [2024-11-28 08:26:05.771853] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43180) with pdu=0x200016efda78 00:27:23.696 [2024-11-28 08:26:05.772015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20878 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:23.696 [2024-11-28 08:26:05.772034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:23.696 [2024-11-28 08:26:05.781481] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43180) with pdu=0x200016efda78 00:27:23.697 [2024-11-28 08:26:05.781637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:12952 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:23.697 [2024-11-28 08:26:05.781656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:23.697 [2024-11-28 08:26:05.791078] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43180) with pdu=0x200016efda78 00:27:23.697 [2024-11-28 08:26:05.791233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:1 nsid:1 lba:23417 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:23.697 [2024-11-28 08:26:05.791252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:23.697 [2024-11-28 08:26:05.800693] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43180) with pdu=0x200016efda78 00:27:23.697 [2024-11-28 08:26:05.800847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16012 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:23.697 [2024-11-28 08:26:05.800865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:23.697 [2024-11-28 08:26:05.810293] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43180) with pdu=0x200016efda78 00:27:23.697 [2024-11-28 08:26:05.810448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:2700 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:23.697 [2024-11-28 08:26:05.810468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:23.697 [2024-11-28 08:26:05.820152] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43180) with pdu=0x200016efda78 00:27:23.697 [2024-11-28 08:26:05.820307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10646 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:23.697 [2024-11-28 08:26:05.820327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:23.697 [2024-11-28 08:26:05.829730] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43180) with pdu=0x200016efda78 00:27:23.697 [2024-11-28 08:26:05.829885] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:477 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:23.697 [2024-11-28 08:26:05.829904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:23.697 [2024-11-28 08:26:05.839350] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43180) with pdu=0x200016efda78 00:27:23.697 [2024-11-28 08:26:05.839509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:10493 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:23.697 [2024-11-28 08:26:05.839527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:23.697 [2024-11-28 08:26:05.848987] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43180) with pdu=0x200016efda78 00:27:23.697 [2024-11-28 08:26:05.849142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10702 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:23.697 [2024-11-28 08:26:05.849161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:23.697 [2024-11-28 08:26:05.858589] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43180) with pdu=0x200016efda78 00:27:23.697 [2024-11-28 08:26:05.858745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3979 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:23.697 [2024-11-28 08:26:05.858763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:23.697 [2024-11-28 08:26:05.868190] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43180) with pdu=0x200016efda78 00:27:23.697 [2024-11-28 
08:26:05.868343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:15426 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:23.697 [2024-11-28 08:26:05.868362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:23.697 [2024-11-28 08:26:05.877797] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43180) with pdu=0x200016efda78 00:27:23.697 [2024-11-28 08:26:05.877956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11081 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:23.697 [2024-11-28 08:26:05.877975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:23.697 [2024-11-28 08:26:05.887427] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43180) with pdu=0x200016efda78 00:27:23.697 [2024-11-28 08:26:05.887580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11210 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:23.697 [2024-11-28 08:26:05.887599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:23.697 [2024-11-28 08:26:05.897042] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43180) with pdu=0x200016efda78 00:27:23.697 [2024-11-28 08:26:05.897197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:2589 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:23.697 [2024-11-28 08:26:05.897217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:23.697 [2024-11-28 08:26:05.906675] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43180) with 
pdu=0x200016efda78 00:27:23.697 [2024-11-28 08:26:05.906829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17882 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:23.697 [2024-11-28 08:26:05.906850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:23.697 [2024-11-28 08:26:05.916317] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43180) with pdu=0x200016efda78 00:27:23.697 [2024-11-28 08:26:05.916474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20877 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:23.697 [2024-11-28 08:26:05.916492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:23.697 [2024-11-28 08:26:05.925940] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43180) with pdu=0x200016efda78 00:27:23.697 [2024-11-28 08:26:05.926105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:2030 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:23.697 [2024-11-28 08:26:05.926124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:23.697 [2024-11-28 08:26:05.935619] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43180) with pdu=0x200016efda78 00:27:23.697 [2024-11-28 08:26:05.935776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14688 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:23.697 [2024-11-28 08:26:05.935795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:23.697 [2024-11-28 08:26:05.945542] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0xd43180) with pdu=0x200016efda78 00:27:23.697 [2024-11-28 08:26:05.945698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12230 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:23.697 [2024-11-28 08:26:05.945717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:23.697 [2024-11-28 08:26:05.955161] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43180) with pdu=0x200016efda78 00:27:23.697 [2024-11-28 08:26:05.955316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:4439 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:23.697 [2024-11-28 08:26:05.955335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:23.957 [2024-11-28 08:26:05.965075] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43180) with pdu=0x200016efda78 00:27:23.957 [2024-11-28 08:26:05.965233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23183 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:23.957 [2024-11-28 08:26:05.965252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:23.957 [2024-11-28 08:26:05.974823] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43180) with pdu=0x200016efda78 00:27:23.957 [2024-11-28 08:26:05.974979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11739 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:23.957 [2024-11-28 08:26:05.974997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:23.957 [2024-11-28 08:26:05.984444] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43180) with pdu=0x200016efda78 00:27:23.957 [2024-11-28 08:26:05.984598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:4887 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:23.957 [2024-11-28 08:26:05.984617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:23.957 [2024-11-28 08:26:05.994079] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43180) with pdu=0x200016efda78 00:27:23.957 [2024-11-28 08:26:05.994235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7078 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:23.957 [2024-11-28 08:26:05.994254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:23.957 [2024-11-28 08:26:06.003734] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43180) with pdu=0x200016efda78 00:27:23.957 [2024-11-28 08:26:06.003890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11002 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:23.957 [2024-11-28 08:26:06.003909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:23.957 [2024-11-28 08:26:06.013386] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43180) with pdu=0x200016efda78 00:27:23.957 [2024-11-28 08:26:06.013540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:3713 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:23.957 [2024-11-28 08:26:06.013559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:23.957 
[2024-11-28 08:26:06.023027] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43180) with pdu=0x200016efda78 00:27:23.957 [2024-11-28 08:26:06.023181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16081 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:23.957 [2024-11-28 08:26:06.023201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:23.957 [2024-11-28 08:26:06.032564] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43180) with pdu=0x200016efda78 00:27:23.957 [2024-11-28 08:26:06.032718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13030 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:23.957 [2024-11-28 08:26:06.032737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:23.957 [2024-11-28 08:26:06.042210] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43180) with pdu=0x200016efda78 00:27:23.957 [2024-11-28 08:26:06.042364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:7526 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:23.957 [2024-11-28 08:26:06.042383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:23.957 [2024-11-28 08:26:06.052084] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43180) with pdu=0x200016efda78 00:27:23.957 [2024-11-28 08:26:06.052242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15034 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:23.957 [2024-11-28 08:26:06.052261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 
sqhd:006f p:0 m:0 dnr:0 00:27:23.957 [2024-11-28 08:26:06.061697] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43180) with pdu=0x200016efda78 00:27:23.957 [2024-11-28 08:26:06.061852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11747 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:23.957 [2024-11-28 08:26:06.061872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:23.957 [2024-11-28 08:26:06.071568] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43180) with pdu=0x200016efda78 00:27:23.957 [2024-11-28 08:26:06.071722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:2293 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:23.957 [2024-11-28 08:26:06.071742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:23.958 [2024-11-28 08:26:06.081193] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43180) with pdu=0x200016efda78 00:27:23.958 [2024-11-28 08:26:06.081351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:254 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:23.958 [2024-11-28 08:26:06.081370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:23.958 [2024-11-28 08:26:06.090803] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43180) with pdu=0x200016efda78 00:27:23.958 [2024-11-28 08:26:06.090963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10330 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:23.958 [2024-11-28 08:26:06.090987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:23.958 [2024-11-28 08:26:06.100436] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43180) with pdu=0x200016efda78 00:27:23.958 [2024-11-28 08:26:06.100590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:11567 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:23.958 [2024-11-28 08:26:06.100609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:23.958 [2024-11-28 08:26:06.110082] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43180) with pdu=0x200016efda78 00:27:23.958 [2024-11-28 08:26:06.110240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10527 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:23.958 [2024-11-28 08:26:06.110259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:23.958 [2024-11-28 08:26:06.119692] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43180) with pdu=0x200016efda78 00:27:23.958 [2024-11-28 08:26:06.119848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4572 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:23.958 [2024-11-28 08:26:06.119867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:23.958 [2024-11-28 08:26:06.129324] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43180) with pdu=0x200016efda78 00:27:23.958 [2024-11-28 08:26:06.129478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:18854 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:23.958 [2024-11-28 08:26:06.129498] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:23.958 [2024-11-28 08:26:06.138956] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43180) with pdu=0x200016efda78 00:27:23.958 [2024-11-28 08:26:06.139113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9608 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:23.958 [2024-11-28 08:26:06.139131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:23.958 [2024-11-28 08:26:06.148544] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43180) with pdu=0x200016efda78 00:27:23.958 [2024-11-28 08:26:06.148700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7716 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:23.958 [2024-11-28 08:26:06.148719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:23.958 [2024-11-28 08:26:06.158201] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43180) with pdu=0x200016efda78 00:27:23.958 [2024-11-28 08:26:06.158357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:9991 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:23.958 [2024-11-28 08:26:06.158376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:23.958 [2024-11-28 08:26:06.167805] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43180) with pdu=0x200016efda78 00:27:23.958 [2024-11-28 08:26:06.167965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22358 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:23.958 [2024-11-28 08:26:06.167984] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:23.958 [2024-11-28 08:26:06.177464] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43180) with pdu=0x200016efda78 00:27:23.958 [2024-11-28 08:26:06.177622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23982 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:23.958 [2024-11-28 08:26:06.177640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:23.958 [2024-11-28 08:26:06.187088] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43180) with pdu=0x200016efda78 00:27:23.958 [2024-11-28 08:26:06.187243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:9264 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:23.958 [2024-11-28 08:26:06.187262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:23.958 [2024-11-28 08:26:06.196726] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43180) with pdu=0x200016efda78 00:27:23.958 [2024-11-28 08:26:06.196882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2172 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:23.958 [2024-11-28 08:26:06.196900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:23.958 [2024-11-28 08:26:06.206339] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43180) with pdu=0x200016efda78 00:27:23.958 [2024-11-28 08:26:06.206494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19086 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:23.958 [2024-11-28 
08:26:06.206513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:23.958 [2024-11-28 08:26:06.215968] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43180) with pdu=0x200016efda78 00:27:23.958 [2024-11-28 08:26:06.216123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:16467 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:23.958 [2024-11-28 08:26:06.216143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:24.218 [2024-11-28 08:26:06.225839] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43180) with pdu=0x200016efda78 00:27:24.218 [2024-11-28 08:26:06.226021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10976 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:24.218 [2024-11-28 08:26:06.226040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:24.218 [2024-11-28 08:26:06.235590] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43180) with pdu=0x200016efda78 00:27:24.218 [2024-11-28 08:26:06.235743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14625 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:24.218 [2024-11-28 08:26:06.235763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:24.218 [2024-11-28 08:26:06.245208] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43180) with pdu=0x200016efda78 00:27:24.218 [2024-11-28 08:26:06.245361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:2786 len:1 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:27:24.218 [2024-11-28 08:26:06.245380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:24.218 [2024-11-28 08:26:06.254881] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43180) with pdu=0x200016efda78 00:27:24.218 [2024-11-28 08:26:06.255048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24088 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:24.218 [2024-11-28 08:26:06.255079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:24.218 [2024-11-28 08:26:06.264517] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43180) with pdu=0x200016efda78 00:27:24.218 [2024-11-28 08:26:06.264672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24387 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:24.218 [2024-11-28 08:26:06.264691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:24.218 [2024-11-28 08:26:06.274147] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43180) with pdu=0x200016efda78 00:27:24.218 [2024-11-28 08:26:06.274303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:19666 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:24.218 [2024-11-28 08:26:06.274321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:24.218 [2024-11-28 08:26:06.283749] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43180) with pdu=0x200016efda78 00:27:24.218 [2024-11-28 08:26:06.283902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16474 
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:24.218 [2024-11-28 08:26:06.283922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:24.218 [2024-11-28 08:26:06.293367] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43180) with pdu=0x200016efda78 00:27:24.218 [2024-11-28 08:26:06.293522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21391 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:24.218 [2024-11-28 08:26:06.293541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:24.218 [2024-11-28 08:26:06.303013] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43180) with pdu=0x200016efda78 00:27:24.218 [2024-11-28 08:26:06.303168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:14806 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:24.218 [2024-11-28 08:26:06.303187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:24.218 [2024-11-28 08:26:06.312661] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43180) with pdu=0x200016efda78 00:27:24.218 [2024-11-28 08:26:06.312815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1624 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:24.218 [2024-11-28 08:26:06.312834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:24.218 [2024-11-28 08:26:06.322514] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43180) with pdu=0x200016efda78 00:27:24.218 [2024-11-28 08:26:06.322666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:0 nsid:1 lba:16746 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:24.218 [2024-11-28 08:26:06.322686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:24.218 [2024-11-28 08:26:06.332130] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43180) with pdu=0x200016efda78 00:27:24.218 [2024-11-28 08:26:06.332285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:12088 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:24.218 [2024-11-28 08:26:06.332303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:24.218 [2024-11-28 08:26:06.341747] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43180) with pdu=0x200016efda78 00:27:24.218 [2024-11-28 08:26:06.341901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1157 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:24.219 [2024-11-28 08:26:06.341919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:24.219 [2024-11-28 08:26:06.351363] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43180) with pdu=0x200016efda78 00:27:24.219 [2024-11-28 08:26:06.351518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4459 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:24.219 [2024-11-28 08:26:06.351537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:24.219 [2024-11-28 08:26:06.361047] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43180) with pdu=0x200016efda78 00:27:24.219 [2024-11-28 08:26:06.361203] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:3321 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:24.219 [2024-11-28 08:26:06.361222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:24.219 [2024-11-28 08:26:06.370678] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43180) with pdu=0x200016efda78 00:27:24.219 [2024-11-28 08:26:06.370832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24456 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:24.219 [2024-11-28 08:26:06.370852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:24.219 [2024-11-28 08:26:06.380296] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43180) with pdu=0x200016efda78 00:27:24.219 [2024-11-28 08:26:06.380451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8368 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:24.219 [2024-11-28 08:26:06.380469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:24.219 [2024-11-28 08:26:06.389905] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43180) with pdu=0x200016efda78 00:27:24.219 [2024-11-28 08:26:06.390068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24759 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:24.219 [2024-11-28 08:26:06.390087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:24.219 [2024-11-28 08:26:06.399541] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43180) with pdu=0x200016efda78 00:27:24.219 [2024-11-28 
08:26:06.399696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7258 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:24.219 [2024-11-28 08:26:06.399714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:24.219 [2024-11-28 08:26:06.409156] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43180) with pdu=0x200016efda78 00:27:24.219 [2024-11-28 08:26:06.409313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7390 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:24.219 [2024-11-28 08:26:06.409331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:24.219 [2024-11-28 08:26:06.418756] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43180) with pdu=0x200016efda78 00:27:24.219 [2024-11-28 08:26:06.418911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:3362 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:24.219 [2024-11-28 08:26:06.418930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:24.219 [2024-11-28 08:26:06.428366] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43180) with pdu=0x200016efda78 00:27:24.219 [2024-11-28 08:26:06.428520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13266 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:24.219 [2024-11-28 08:26:06.428542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:24.219 [2024-11-28 08:26:06.437972] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43180) with 
pdu=0x200016efda78 00:27:24.219 [2024-11-28 08:26:06.438127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17749 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:24.219 [2024-11-28 08:26:06.438145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:24.219 [2024-11-28 08:26:06.447565] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43180) with pdu=0x200016efda78 00:27:24.219 [2024-11-28 08:26:06.447719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:2954 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:24.219 [2024-11-28 08:26:06.447738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:24.219 [2024-11-28 08:26:06.457168] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43180) with pdu=0x200016efda78 00:27:24.219 [2024-11-28 08:26:06.457322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11290 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:24.219 [2024-11-28 08:26:06.457341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:24.219 [2024-11-28 08:26:06.466790] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43180) with pdu=0x200016efda78 00:27:24.219 [2024-11-28 08:26:06.466944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:329 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:24.219 [2024-11-28 08:26:06.466967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:24.219 [2024-11-28 08:26:06.476404] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0xd43180) with pdu=0x200016efda78 00:27:24.219 [2024-11-28 08:26:06.476560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:23583 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:24.219 [2024-11-28 08:26:06.476579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:24.479 [2024-11-28 08:26:06.486278] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43180) with pdu=0x200016efda78 00:27:24.479 [2024-11-28 08:26:06.486430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22993 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:24.479 [2024-11-28 08:26:06.486449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:24.479 [2024-11-28 08:26:06.496001] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43180) with pdu=0x200016efda78 00:27:24.479 [2024-11-28 08:26:06.496154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21831 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:24.479 [2024-11-28 08:26:06.496173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:24.479 [2024-11-28 08:26:06.505614] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43180) with pdu=0x200016efda78 00:27:24.479 [2024-11-28 08:26:06.505768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:1172 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:24.479 [2024-11-28 08:26:06.505787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:24.479 [2024-11-28 08:26:06.515231] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43180) with pdu=0x200016efda78 00:27:24.479 [2024-11-28 08:26:06.515390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5468 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:24.479 [2024-11-28 08:26:06.515409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:24.479 [2024-11-28 08:26:06.524990] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43180) with pdu=0x200016efda78 00:27:24.479 [2024-11-28 08:26:06.525147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13795 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:24.479 [2024-11-28 08:26:06.525166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:24.479 [2024-11-28 08:26:06.534833] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43180) with pdu=0x200016efda78 00:27:24.479 [2024-11-28 08:26:06.534993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:21443 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:24.479 [2024-11-28 08:26:06.535013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:24.479 [2024-11-28 08:26:06.544461] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43180) with pdu=0x200016efda78 00:27:24.479 [2024-11-28 08:26:06.544616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10268 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:24.479 [2024-11-28 08:26:06.544635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:006f p:0 m:0 dnr:0 
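The `data_crc32_calc_done` errors above mean the CRC-32C digest computed over each received PDU's data did not match the digest carried in the PDU (expected here, since this test injects digest corruption). As background only, a minimal pure-Python sketch of the CRC-32C (Castagnoli) checksum that NVMe/TCP uses for its header and data digests — not SPDK's implementation, which uses accelerated routines:

```python
def _crc32c_table():
    """Build the byte-wise lookup table for reflected CRC-32C."""
    poly = 0x82F63B78  # reflected Castagnoli polynomial used by NVMe/TCP digests
    table = []
    for i in range(256):
        crc = i
        for _ in range(8):
            crc = (crc >> 1) ^ poly if crc & 1 else crc >> 1
        table.append(crc)
    return table

_TABLE = _crc32c_table()

def crc32c(data: bytes) -> int:
    """CRC-32C with the conventional init/final XOR of 0xFFFFFFFF."""
    crc = 0xFFFFFFFF
    for byte in data:
        crc = _TABLE[(crc ^ byte) & 0xFF] ^ (crc >> 8)
    return crc ^ 0xFFFFFFFF

# Standard CRC-32C check value for the ASCII string "123456789"
assert crc32c(b"123456789") == 0xE3069283
```

When the receiver's `crc32c()` over the data segment disagrees with the transmitted digest, the target logs the digest error and the command completes with the TRANSIENT TRANSPORT ERROR status (00/22) seen in every entry above.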
00:27:24.479 26423.00 IOPS, 103.21 MiB/s 00:27:24.479 Latency(us) 00:27:24.479 [2024-11-28T07:26:06.748Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:24.479 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:27:24.479 nvme0n1 : 2.00 26424.19 103.22 0.00 0.00 4836.17 3675.71 10827.69 00:27:24.479 [2024-11-28T07:26:06.748Z] =================================================================================================================== 00:27:24.479 [2024-11-28T07:26:06.748Z] Total : 26424.19 103.22 0.00 0.00 4836.17 3675.71 10827.69 00:27:24.479 { 00:27:24.479 "results": [ 00:27:24.479 { 00:27:24.479 "job": "nvme0n1", 00:27:24.479 "core_mask": "0x2", 00:27:24.479 "workload": "randwrite", 00:27:24.479 "status": "finished", 00:27:24.479 "queue_depth": 128, 00:27:24.479 "io_size": 4096, 00:27:24.479 "runtime": 2.004754, 00:27:24.479 "iops": 26424.189701080533, 00:27:24.479 "mibps": 103.21949101984583, 00:27:24.479 "io_failed": 0, 00:27:24.479 "io_timeout": 0, 00:27:24.479 "avg_latency_us": 4836.174492950602, 00:27:24.479 "min_latency_us": 3675.7147826086957, 00:27:24.479 "max_latency_us": 10827.686956521738 00:27:24.479 } 00:27:24.479 ], 00:27:24.479 "core_count": 1 00:27:24.479 } 00:27:24.479 08:26:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:27:24.479 08:26:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:27:24.479 08:26:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:27:24.479 | .driver_specific 00:27:24.479 | .nvme_error 00:27:24.479 | .status_code 00:27:24.479 | .command_transient_transport_error' 00:27:24.479 08:26:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:27:24.739 
08:26:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 207 > 0 )) 00:27:24.739 08:26:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1508541 00:27:24.739 08:26:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 1508541 ']' 00:27:24.739 08:26:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 1508541 00:27:24.739 08:26:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:27:24.739 08:26:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:24.739 08:26:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1508541 00:27:24.739 08:26:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:27:24.739 08:26:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:27:24.739 08:26:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1508541' 00:27:24.739 killing process with pid 1508541 00:27:24.739 08:26:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 1508541 00:27:24.739 Received shutdown signal, test time was about 2.000000 seconds 00:27:24.739 00:27:24.739 Latency(us) 00:27:24.739 [2024-11-28T07:26:07.008Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:24.739 [2024-11-28T07:26:07.008Z] =================================================================================================================== 00:27:24.739 [2024-11-28T07:26:07.008Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:24.739 08:26:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@978 -- # wait 1508541 00:27:24.739 08:26:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:27:24.739 08:26:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:27:24.739 08:26:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:27:24.739 08:26:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:27:24.739 08:26:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:27:24.739 08:26:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1509021 00:27:24.739 08:26:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1509021 /var/tmp/bperf.sock 00:27:24.739 08:26:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:27:24.739 08:26:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 1509021 ']' 00:27:24.739 08:26:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:27:24.739 08:26:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:24.740 08:26:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:27:24.740 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
00:27:24.740 08:26:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:24.740 08:26:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:24.999 [2024-11-28 08:26:07.028241] Starting SPDK v25.01-pre git sha1 27aaaa748 / DPDK 24.03.0 initialization... 00:27:24.999 [2024-11-28 08:26:07.028288] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1509021 ] 00:27:24.999 I/O size of 131072 is greater than zero copy threshold (65536). 00:27:24.999 Zero copy mechanism will not be used. 00:27:24.999 [2024-11-28 08:26:07.091378] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:24.999 [2024-11-28 08:26:07.133884] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:24.999 08:26:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:24.999 08:26:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:27:24.999 08:26:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:27:24.999 08:26:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:27:25.257 08:26:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:27:25.257 08:26:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:25.257 08:26:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@10 -- # set +x 00:27:25.257 08:26:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:25.257 08:26:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:25.257 08:26:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:25.516 nvme0n1 00:27:25.516 08:26:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:27:25.516 08:26:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:25.516 08:26:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:25.516 08:26:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:25.516 08:26:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:27:25.516 08:26:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:27:25.516 I/O size of 131072 is greater than zero copy threshold (65536). 00:27:25.516 Zero copy mechanism will not be used. 00:27:25.516 Running I/O for 2 seconds... 
00:27:25.776 [2024-11-28 08:26:07.792248] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43660) with pdu=0x200016eff3c8 00:27:25.776 [2024-11-28 08:26:07.792355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.776 [2024-11-28 08:26:07.792385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:25.776 [2024-11-28 08:26:07.798052] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43660) with pdu=0x200016eff3c8 00:27:25.776 [2024-11-28 08:26:07.798147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.776 [2024-11-28 08:26:07.798174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:25.776 [2024-11-28 08:26:07.804709] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43660) with pdu=0x200016eff3c8 00:27:25.776 [2024-11-28 08:26:07.804849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.776 [2024-11-28 08:26:07.804870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:25.776 [2024-11-28 08:26:07.811488] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43660) with pdu=0x200016eff3c8 00:27:25.776 [2024-11-28 08:26:07.811656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.776 [2024-11-28 08:26:07.811677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:25.776 [2024-11-28 08:26:07.818546] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43660) with pdu=0x200016eff3c8 00:27:25.776 [2024-11-28 08:26:07.818645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.776 [2024-11-28 08:26:07.818666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:25.776 [2024-11-28 08:26:07.825692] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43660) with pdu=0x200016eff3c8 00:27:25.776 [2024-11-28 08:26:07.825848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.776 [2024-11-28 08:26:07.825868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:25.776 [2024-11-28 08:26:07.831739] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43660) with pdu=0x200016eff3c8 00:27:25.776 [2024-11-28 08:26:07.831820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.776 [2024-11-28 08:26:07.831840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:25.776 [2024-11-28 08:26:07.837187] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43660) with pdu=0x200016eff3c8 00:27:25.776 [2024-11-28 08:26:07.837278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.776 [2024-11-28 08:26:07.837298] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:25.776 [2024-11-28 08:26:07.842570] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43660) with pdu=0x200016eff3c8 00:27:25.776 [2024-11-28 08:26:07.842680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.776 [2024-11-28 08:26:07.842700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:25.776 [2024-11-28 08:26:07.848848] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43660) with pdu=0x200016eff3c8 00:27:25.776 [2024-11-28 08:26:07.849018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.776 [2024-11-28 08:26:07.849038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:25.776 [2024-11-28 08:26:07.855959] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43660) with pdu=0x200016eff3c8 00:27:25.776 [2024-11-28 08:26:07.856089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.776 [2024-11-28 08:26:07.856109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:25.776 [2024-11-28 08:26:07.863721] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43660) with pdu=0x200016eff3c8 00:27:25.776 [2024-11-28 08:26:07.863872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.776 [2024-11-28 08:26:07.863892] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:25.776 [2024-11-28 08:26:07.870301] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43660) with pdu=0x200016eff3c8 00:27:25.776 [2024-11-28 08:26:07.870368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.776 [2024-11-28 08:26:07.870394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:25.776 [2024-11-28 08:26:07.876915] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43660) with pdu=0x200016eff3c8 00:27:25.776 [2024-11-28 08:26:07.876983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.776 [2024-11-28 08:26:07.877007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:25.776 [2024-11-28 08:26:07.882401] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43660) with pdu=0x200016eff3c8 00:27:25.776 [2024-11-28 08:26:07.882530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.776 [2024-11-28 08:26:07.882550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:25.776 [2024-11-28 08:26:07.888525] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43660) with pdu=0x200016eff3c8 00:27:25.776 [2024-11-28 08:26:07.888615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:27:25.776 [2024-11-28 08:26:07.888635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:25.776 [2024-11-28 08:26:07.894909] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43660) with pdu=0x200016eff3c8 00:27:25.776 [2024-11-28 08:26:07.895008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.776 [2024-11-28 08:26:07.895028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:25.776 [2024-11-28 08:26:07.900205] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43660) with pdu=0x200016eff3c8 00:27:25.776 [2024-11-28 08:26:07.900302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.776 [2024-11-28 08:26:07.900322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:25.776 [2024-11-28 08:26:07.905575] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43660) with pdu=0x200016eff3c8 00:27:25.776 [2024-11-28 08:26:07.905675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.776 [2024-11-28 08:26:07.905695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:25.776 [2024-11-28 08:26:07.911211] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43660) with pdu=0x200016eff3c8 00:27:25.776 [2024-11-28 08:26:07.911296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:352 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.777 [2024-11-28 08:26:07.911317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:25.777 [2024-11-28 08:26:07.916942] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43660) with pdu=0x200016eff3c8 00:27:25.777 [2024-11-28 08:26:07.917086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.777 [2024-11-28 08:26:07.917105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:25.777 [2024-11-28 08:26:07.923002] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43660) with pdu=0x200016eff3c8 00:27:25.777 [2024-11-28 08:26:07.923098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.777 [2024-11-28 08:26:07.923121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:25.777 [2024-11-28 08:26:07.928763] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43660) with pdu=0x200016eff3c8 00:27:25.777 [2024-11-28 08:26:07.928843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.777 [2024-11-28 08:26:07.928863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:25.777 [2024-11-28 08:26:07.934104] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43660) with pdu=0x200016eff3c8 00:27:25.777 [2024-11-28 08:26:07.934173] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.777 [2024-11-28 08:26:07.934194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:25.777 [2024-11-28 08:26:07.939483] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43660) with pdu=0x200016eff3c8 00:27:25.777 [2024-11-28 08:26:07.939563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.777 [2024-11-28 08:26:07.939582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:25.777 [2024-11-28 08:26:07.945144] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43660) with pdu=0x200016eff3c8 00:27:25.777 [2024-11-28 08:26:07.945231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.777 [2024-11-28 08:26:07.945250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:25.777 [2024-11-28 08:26:07.950975] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43660) with pdu=0x200016eff3c8 00:27:25.777 [2024-11-28 08:26:07.951081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.777 [2024-11-28 08:26:07.951101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:25.777 [2024-11-28 08:26:07.957008] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43660) with pdu=0x200016eff3c8 00:27:25.777 [2024-11-28 08:26:07.957069] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.777 [2024-11-28 08:26:07.957093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:25.777 [2024-11-28 08:26:07.963116] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43660) with pdu=0x200016eff3c8 00:27:25.777 [2024-11-28 08:26:07.963216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.777 [2024-11-28 08:26:07.963236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:25.777 [2024-11-28 08:26:07.969221] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43660) with pdu=0x200016eff3c8 00:27:25.777 [2024-11-28 08:26:07.969309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.777 [2024-11-28 08:26:07.969328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:25.777 [2024-11-28 08:26:07.974994] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43660) with pdu=0x200016eff3c8 00:27:25.777 [2024-11-28 08:26:07.975073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.777 [2024-11-28 08:26:07.975093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:25.777 [2024-11-28 08:26:07.980861] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43660) with pdu=0x200016eff3c8 
00:27:25.777 [2024-11-28 08:26:07.980965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.777 [2024-11-28 08:26:07.980984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:25.777 [2024-11-28 08:26:07.986654] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43660) with pdu=0x200016eff3c8 00:27:25.777 [2024-11-28 08:26:07.986718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.777 [2024-11-28 08:26:07.986741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:25.777 [2024-11-28 08:26:07.993214] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43660) with pdu=0x200016eff3c8 00:27:25.777 [2024-11-28 08:26:07.993276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.777 [2024-11-28 08:26:07.993298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:25.777 [2024-11-28 08:26:07.998771] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43660) with pdu=0x200016eff3c8 00:27:25.777 [2024-11-28 08:26:07.998836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.777 [2024-11-28 08:26:07.998857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:25.777 [2024-11-28 08:26:08.004748] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0xd43660) with pdu=0x200016eff3c8 00:27:25.777 [2024-11-28 08:26:08.004811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.777 [2024-11-28 08:26:08.004833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:25.777 [2024-11-28 08:26:08.010550] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43660) with pdu=0x200016eff3c8 00:27:25.777 [2024-11-28 08:26:08.010643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.777 [2024-11-28 08:26:08.010663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:25.777 [2024-11-28 08:26:08.016491] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43660) with pdu=0x200016eff3c8 00:27:25.777 [2024-11-28 08:26:08.016558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.777 [2024-11-28 08:26:08.016580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:25.777 [2024-11-28 08:26:08.022286] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43660) with pdu=0x200016eff3c8 00:27:25.777 [2024-11-28 08:26:08.022454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.777 [2024-11-28 08:26:08.022478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:25.777 [2024-11-28 08:26:08.027966] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43660) with pdu=0x200016eff3c8 00:27:25.777 [2024-11-28 08:26:08.028043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.777 [2024-11-28 08:26:08.028063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:25.777 [2024-11-28 08:26:08.033219] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43660) with pdu=0x200016eff3c8 00:27:25.777 [2024-11-28 08:26:08.033281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.777 [2024-11-28 08:26:08.033303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:25.777 [2024-11-28 08:26:08.038384] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43660) with pdu=0x200016eff3c8 00:27:25.778 [2024-11-28 08:26:08.038485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.778 [2024-11-28 08:26:08.038506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:26.037 [2024-11-28 08:26:08.044788] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43660) with pdu=0x200016eff3c8 00:27:26.037 [2024-11-28 08:26:08.044858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.037 [2024-11-28 08:26:08.044880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 
00:27:26.037 [2024-11-28 08:26:08.050823] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43660) with pdu=0x200016eff3c8
00:27:26.038 [2024-11-28 08:26:08.050919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:26.038 [2024-11-28 08:26:08.050940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:26.038 [2024-11-28 08:26:08.055778] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43660) with pdu=0x200016eff3c8
00:27:26.038 [2024-11-28 08:26:08.056072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:26.038 [2024-11-28 08:26:08.056094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:26.038 [2024-11-28 08:26:08.061843] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43660) with pdu=0x200016eff3c8
00:27:26.038 [2024-11-28 08:26:08.062157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:26.038 [2024-11-28 08:26:08.062179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:26.038 [2024-11-28 08:26:08.067833] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43660) with pdu=0x200016eff3c8
00:27:26.038 [2024-11-28 08:26:08.068157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:26.038 [2024-11-28 08:26:08.068179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:26.038 [2024-11-28 08:26:08.072716] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43660) with pdu=0x200016eff3c8
00:27:26.038 [2024-11-28 08:26:08.073029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:26.038 [2024-11-28 08:26:08.073050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:26.038 [2024-11-28 08:26:08.077202] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43660) with pdu=0x200016eff3c8
00:27:26.038 [2024-11-28 08:26:08.077511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:26.038 [2024-11-28 08:26:08.077531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:26.038 [2024-11-28 08:26:08.081740] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43660) with pdu=0x200016eff3c8
00:27:26.038 [2024-11-28 08:26:08.082063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:26.038 [2024-11-28 08:26:08.082084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:26.038 [2024-11-28 08:26:08.086237] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43660) with pdu=0x200016eff3c8
00:27:26.038 [2024-11-28 08:26:08.086540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:26.038 [2024-11-28 08:26:08.086561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:26.038 [2024-11-28 08:26:08.090673] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43660) with pdu=0x200016eff3c8
00:27:26.038 [2024-11-28 08:26:08.090995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:26.038 [2024-11-28 08:26:08.091015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:26.038 [2024-11-28 08:26:08.095157] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43660) with pdu=0x200016eff3c8
00:27:26.038 [2024-11-28 08:26:08.095479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:26.038 [2024-11-28 08:26:08.095500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:26.038 [2024-11-28 08:26:08.099658] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43660) with pdu=0x200016eff3c8
00:27:26.038 [2024-11-28 08:26:08.099982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:26.038 [2024-11-28 08:26:08.100003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:26.038 [2024-11-28 08:26:08.104128] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43660) with pdu=0x200016eff3c8
00:27:26.038 [2024-11-28 08:26:08.104439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:26.038 [2024-11-28 08:26:08.104459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:26.038 [2024-11-28 08:26:08.108622] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43660) with pdu=0x200016eff3c8
00:27:26.038 [2024-11-28 08:26:08.108934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:26.038 [2024-11-28 08:26:08.108963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:26.038 [2024-11-28 08:26:08.113092] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43660) with pdu=0x200016eff3c8
00:27:26.038 [2024-11-28 08:26:08.113407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:26.038 [2024-11-28 08:26:08.113427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:26.038 [2024-11-28 08:26:08.117557] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43660) with pdu=0x200016eff3c8
00:27:26.038 [2024-11-28 08:26:08.117867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:26.038 [2024-11-28 08:26:08.117887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:26.038 [2024-11-28 08:26:08.122025] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43660) with pdu=0x200016eff3c8
00:27:26.038 [2024-11-28 08:26:08.122341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:26.038 [2024-11-28 08:26:08.122362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:26.038 [2024-11-28 08:26:08.126599] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43660) with pdu=0x200016eff3c8
00:27:26.038 [2024-11-28 08:26:08.126921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:26.038 [2024-11-28 08:26:08.126943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:26.038 [2024-11-28 08:26:08.131137] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43660) with pdu=0x200016eff3c8
00:27:26.038 [2024-11-28 08:26:08.131438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:26.038 [2024-11-28 08:26:08.131459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:26.038 [2024-11-28 08:26:08.135680] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43660) with pdu=0x200016eff3c8
00:27:26.038 [2024-11-28 08:26:08.136011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:26.038 [2024-11-28 08:26:08.136031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:26.038 [2024-11-28 08:26:08.140165] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43660) with pdu=0x200016eff3c8
00:27:26.038 [2024-11-28 08:26:08.140475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:26.038 [2024-11-28 08:26:08.140496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:26.038 [2024-11-28 08:26:08.144637] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43660) with pdu=0x200016eff3c8
00:27:26.038 [2024-11-28 08:26:08.144941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:26.038 [2024-11-28 08:26:08.144968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:26.038 [2024-11-28 08:26:08.149076] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43660) with pdu=0x200016eff3c8
00:27:26.038 [2024-11-28 08:26:08.149390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:26.038 [2024-11-28 08:26:08.149414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:26.038 [2024-11-28 08:26:08.153590] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43660) with pdu=0x200016eff3c8
00:27:26.038 [2024-11-28 08:26:08.153894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:26.038 [2024-11-28 08:26:08.153915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:26.038 [2024-11-28 08:26:08.158076] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43660) with pdu=0x200016eff3c8
00:27:26.038 [2024-11-28 08:26:08.158395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:26.038 [2024-11-28 08:26:08.158415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:26.038 [2024-11-28 08:26:08.162521] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43660) with pdu=0x200016eff3c8
00:27:26.038 [2024-11-28 08:26:08.162836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:26.038 [2024-11-28 08:26:08.162856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:26.038 [2024-11-28 08:26:08.166953] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43660) with pdu=0x200016eff3c8
00:27:26.038 [2024-11-28 08:26:08.167258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:26.038 [2024-11-28 08:26:08.167278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:26.038 [2024-11-28 08:26:08.171414] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43660) with pdu=0x200016eff3c8
00:27:26.038 [2024-11-28 08:26:08.171736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:26.038 [2024-11-28 08:26:08.171756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:26.039 [2024-11-28 08:26:08.175861] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43660) with pdu=0x200016eff3c8
00:27:26.039 [2024-11-28 08:26:08.176183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:26.039 [2024-11-28 08:26:08.176204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:26.039 [2024-11-28 08:26:08.180292] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43660) with pdu=0x200016eff3c8
00:27:26.039 [2024-11-28 08:26:08.180603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:26.039 [2024-11-28 08:26:08.180624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:26.039 [2024-11-28 08:26:08.184717] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43660) with pdu=0x200016eff3c8
00:27:26.039 [2024-11-28 08:26:08.185038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:26.039 [2024-11-28 08:26:08.185059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:26.039 [2024-11-28 08:26:08.189095] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43660) with pdu=0x200016eff3c8
00:27:26.039 [2024-11-28 08:26:08.189414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:26.039 [2024-11-28 08:26:08.189435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:26.039 [2024-11-28 08:26:08.193550] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43660) with pdu=0x200016eff3c8
00:27:26.039 [2024-11-28 08:26:08.193857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:26.039 [2024-11-28 08:26:08.193877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:26.039 [2024-11-28 08:26:08.198313] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43660) with pdu=0x200016eff3c8
00:27:26.039 [2024-11-28 08:26:08.198630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:26.039 [2024-11-28 08:26:08.198651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:26.039 [2024-11-28 08:26:08.203824] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43660) with pdu=0x200016eff3c8
00:27:26.039 [2024-11-28 08:26:08.204132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:26.039 [2024-11-28 08:26:08.204152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:26.039 [2024-11-28 08:26:08.209433] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43660) with pdu=0x200016eff3c8
00:27:26.039 [2024-11-28 08:26:08.209725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:26.039 [2024-11-28 08:26:08.209746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:26.039 [2024-11-28 08:26:08.215097] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43660) with pdu=0x200016eff3c8
00:27:26.039 [2024-11-28 08:26:08.215376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:26.039 [2024-11-28 08:26:08.215397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:26.039 [2024-11-28 08:26:08.220508] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43660) with pdu=0x200016eff3c8
00:27:26.039 [2024-11-28 08:26:08.220794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:26.039 [2024-11-28 08:26:08.220814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:26.039 [2024-11-28 08:26:08.226321] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43660) with pdu=0x200016eff3c8
00:27:26.039 [2024-11-28 08:26:08.226605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:26.039 [2024-11-28 08:26:08.226625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:26.039 [2024-11-28 08:26:08.231862] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43660) with pdu=0x200016eff3c8
00:27:26.039 [2024-11-28 08:26:08.232155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:26.039 [2024-11-28 08:26:08.232177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:26.039 [2024-11-28 08:26:08.237504] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43660) with pdu=0x200016eff3c8
00:27:26.039 [2024-11-28 08:26:08.237794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:26.039 [2024-11-28 08:26:08.237815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:26.039 [2024-11-28 08:26:08.242623] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43660) with pdu=0x200016eff3c8
00:27:26.039 [2024-11-28 08:26:08.242914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:26.039 [2024-11-28 08:26:08.242935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:26.039 [2024-11-28 08:26:08.247519] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43660) with pdu=0x200016eff3c8
00:27:26.039 [2024-11-28 08:26:08.247809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:26.039 [2024-11-28 08:26:08.247830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:26.039 [2024-11-28 08:26:08.252022] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43660) with pdu=0x200016eff3c8
00:27:26.039 [2024-11-28 08:26:08.252314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:26.039 [2024-11-28 08:26:08.252335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:26.039 [2024-11-28 08:26:08.256527] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43660) with pdu=0x200016eff3c8
00:27:26.039 [2024-11-28 08:26:08.256830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:26.039 [2024-11-28 08:26:08.256851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:26.039 [2024-11-28 08:26:08.261296] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43660) with pdu=0x200016eff3c8
00:27:26.039 [2024-11-28 08:26:08.261589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:26.039 [2024-11-28 08:26:08.261609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:26.039 [2024-11-28 08:26:08.266046] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43660) with pdu=0x200016eff3c8
00:27:26.039 [2024-11-28 08:26:08.266351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:26.039 [2024-11-28 08:26:08.266372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:26.039 [2024-11-28 08:26:08.270742] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43660) with pdu=0x200016eff3c8
00:27:26.039 [2024-11-28 08:26:08.271043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:26.039 [2024-11-28 08:26:08.271063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:26.039 [2024-11-28 08:26:08.275105] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43660) with pdu=0x200016eff3c8
00:27:26.039 [2024-11-28 08:26:08.275400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:26.039 [2024-11-28 08:26:08.275425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:26.039 [2024-11-28 08:26:08.279719] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43660) with pdu=0x200016eff3c8
00:27:26.039 [2024-11-28 08:26:08.280028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:26.039 [2024-11-28 08:26:08.280049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:26.039 [2024-11-28 08:26:08.284343] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43660) with pdu=0x200016eff3c8
00:27:26.039 [2024-11-28 08:26:08.284644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:26.039 [2024-11-28 08:26:08.284664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:26.039 [2024-11-28 08:26:08.289861] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43660) with pdu=0x200016eff3c8
00:27:26.039 [2024-11-28 08:26:08.290156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:26.039 [2024-11-28 08:26:08.290177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:26.039 [2024-11-28 08:26:08.295058] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43660) with pdu=0x200016eff3c8
00:27:26.039 [2024-11-28 08:26:08.295360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:26.039 [2024-11-28 08:26:08.295382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:26.039 [2024-11-28 08:26:08.299891] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43660) with pdu=0x200016eff3c8
00:27:26.039 [2024-11-28 08:26:08.300198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:26.039 [2024-11-28 08:26:08.300219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:26.300 [2024-11-28 08:26:08.304702] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43660) with pdu=0x200016eff3c8
00:27:26.300 [2024-11-28 08:26:08.305016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:26.300 [2024-11-28 08:26:08.305038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:26.300 [2024-11-28 08:26:08.309443] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43660) with pdu=0x200016eff3c8
00:27:26.300 [2024-11-28 08:26:08.309743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:26.300 [2024-11-28 08:26:08.309764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:26.300 [2024-11-28 08:26:08.314229] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43660) with pdu=0x200016eff3c8
00:27:26.300 [2024-11-28 08:26:08.314522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:26.300 [2024-11-28 08:26:08.314542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:26.300 [2024-11-28 08:26:08.318870] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43660) with pdu=0x200016eff3c8
00:27:26.300 [2024-11-28 08:26:08.319175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:26.300 [2024-11-28 08:26:08.319196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:26.301 [2024-11-28 08:26:08.323329] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43660) with pdu=0x200016eff3c8
00:27:26.301 [2024-11-28 08:26:08.323623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:26.301 [2024-11-28 08:26:08.323644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:26.301 [2024-11-28 08:26:08.327714] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43660) with pdu=0x200016eff3c8
00:27:26.301 [2024-11-28 08:26:08.328034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:26.301 [2024-11-28 08:26:08.328054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:26.301 [2024-11-28 08:26:08.331939] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43660) with pdu=0x200016eff3c8
00:27:26.301 [2024-11-28 08:26:08.332219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:26.301 [2024-11-28 08:26:08.332240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:26.301 [2024-11-28 08:26:08.336176] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43660) with pdu=0x200016eff3c8
00:27:26.301 [2024-11-28 08:26:08.336429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:26.301 [2024-11-28 08:26:08.336450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:26.301 [2024-11-28 08:26:08.340350] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43660) with pdu=0x200016eff3c8
00:27:26.301 [2024-11-28 08:26:08.340608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:26.301 [2024-11-28 08:26:08.340629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:26.301 [2024-11-28 08:26:08.344775] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43660) with pdu=0x200016eff3c8
00:27:26.301 [2024-11-28 08:26:08.345037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:26.301 [2024-11-28 08:26:08.345058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:26.301 [2024-11-28 08:26:08.349778] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43660) with pdu=0x200016eff3c8
00:27:26.301 [2024-11-28 08:26:08.350044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:26.301 [2024-11-28 08:26:08.350065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:26.301 [2024-11-28 08:26:08.354832] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43660) with pdu=0x200016eff3c8
00:27:26.301 [2024-11-28 08:26:08.355098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:26.301 [2024-11-28 08:26:08.355119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:26.301 [2024-11-28 08:26:08.359285] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43660) with pdu=0x200016eff3c8
00:27:26.301 [2024-11-28 08:26:08.359554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:26.301 [2024-11-28 08:26:08.359574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:26.301 [2024-11-28 08:26:08.364398] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43660) with pdu=0x200016eff3c8
00:27:26.301 [2024-11-28 08:26:08.364668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:26.301 [2024-11-28 08:26:08.364688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:26.301 [2024-11-28 08:26:08.369569] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43660) with pdu=0x200016eff3c8
00:27:26.301 [2024-11-28 08:26:08.369806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:26.301 [2024-11-28 08:26:08.369826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:26.301 [2024-11-28 08:26:08.373968] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43660) with pdu=0x200016eff3c8
00:27:26.301 [2024-11-28 08:26:08.374193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:26.301 [2024-11-28 08:26:08.374213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:26.301 [2024-11-28 08:26:08.378334] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43660) with pdu=0x200016eff3c8
00:27:26.301 [2024-11-28 08:26:08.378578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:26.301 [2024-11-28 08:26:08.378598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:26.301 [2024-11-28 08:26:08.382534] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43660) with pdu=0x200016eff3c8
00:27:26.301 [2024-11-28 08:26:08.382779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:26.301 [2024-11-28 08:26:08.382799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:26.301 [2024-11-28 08:26:08.386736] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43660) with pdu=0x200016eff3c8
00:27:26.301 [2024-11-28 08:26:08.386967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:26.301 [2024-11-28 08:26:08.386987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:26.301 [2024-11-28 08:26:08.390892] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43660) with pdu=0x200016eff3c8
00:27:26.301 [2024-11-28 08:26:08.391128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:26.301 [2024-11-28 08:26:08.391147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:26.301 [2024-11-28 08:26:08.395214] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43660) with pdu=0x200016eff3c8
00:27:26.301 [2024-11-28 08:26:08.395469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:26.301 [2024-11-28 08:26:08.395493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:26.301 [2024-11-28 08:26:08.399843] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43660) with pdu=0x200016eff3c8
00:27:26.301 [2024-11-28 08:26:08.400080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:26.301 [2024-11-28 08:26:08.400100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:26.301 [2024-11-28 08:26:08.404072] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43660) with pdu=0x200016eff3c8
00:27:26.301 [2024-11-28 08:26:08.404309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:26.301 [2024-11-28 08:26:08.404329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:26.301 [2024-11-28 08:26:08.408358] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43660) with pdu=0x200016eff3c8
00:27:26.301 [2024-11-28 08:26:08.408606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:26.301 [2024-11-28 08:26:08.408626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:26.301 [2024-11-28 08:26:08.412578] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43660) with pdu=0x200016eff3c8
00:27:26.301 [2024-11-28 08:26:08.412820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:26.301 [2024-11-28 08:26:08.412840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:26.301 [2024-11-28 08:26:08.416945] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43660) with pdu=0x200016eff3c8
00:27:26.301 [2024-11-28 08:26:08.417184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:26.301 [2024-11-28 08:26:08.417203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:26.301 [2024-11-28 08:26:08.421495] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43660) with pdu=0x200016eff3c8
00:27:26.301 [2024-11-28 08:26:08.421731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:26.301 [2024-11-28 08:26:08.421752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:26.301 [2024-11-28 08:26:08.425791] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43660) with pdu=0x200016eff3c8
00:27:26.301 [2024-11-28 08:26:08.426028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:26.301 [2024-11-28 08:26:08.426048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:26.301 [2024-11-28 08:26:08.429983] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43660) with pdu=0x200016eff3c8
00:27:26.301 [2024-11-28 08:26:08.430216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:26.301 [2024-11-28 08:26:08.430236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:26.301 [2024-11-28
08:26:08.434159] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43660) with pdu=0x200016eff3c8 00:27:26.301 [2024-11-28 08:26:08.434399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.301 [2024-11-28 08:26:08.434422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:26.301 [2024-11-28 08:26:08.438291] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43660) with pdu=0x200016eff3c8 00:27:26.301 [2024-11-28 08:26:08.438547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.302 [2024-11-28 08:26:08.438568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:26.302 [2024-11-28 08:26:08.442313] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43660) with pdu=0x200016eff3c8 00:27:26.302 [2024-11-28 08:26:08.442548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.302 [2024-11-28 08:26:08.442568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:26.302 [2024-11-28 08:26:08.447177] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43660) with pdu=0x200016eff3c8 00:27:26.302 [2024-11-28 08:26:08.447256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.302 [2024-11-28 08:26:08.447276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 
sqhd:0002 p:0 m:0 dnr:0 00:27:26.302 [2024-11-28 08:26:08.452307] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43660) with pdu=0x200016eff3c8 00:27:26.302 [2024-11-28 08:26:08.452544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.302 [2024-11-28 08:26:08.452564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:26.302 [2024-11-28 08:26:08.456964] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43660) with pdu=0x200016eff3c8 00:27:26.302 [2024-11-28 08:26:08.457204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.302 [2024-11-28 08:26:08.457225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:26.302 [2024-11-28 08:26:08.461127] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43660) with pdu=0x200016eff3c8 00:27:26.302 [2024-11-28 08:26:08.461363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.302 [2024-11-28 08:26:08.461383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:26.302 [2024-11-28 08:26:08.465345] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43660) with pdu=0x200016eff3c8 00:27:26.302 [2024-11-28 08:26:08.465598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.302 [2024-11-28 08:26:08.465618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:26.302 [2024-11-28 08:26:08.469664] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43660) with pdu=0x200016eff3c8 00:27:26.302 [2024-11-28 08:26:08.469920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.302 [2024-11-28 08:26:08.469940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:26.302 [2024-11-28 08:26:08.473781] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43660) with pdu=0x200016eff3c8 00:27:26.302 [2024-11-28 08:26:08.474031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.302 [2024-11-28 08:26:08.474051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:26.302 [2024-11-28 08:26:08.478271] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43660) with pdu=0x200016eff3c8 00:27:26.302 [2024-11-28 08:26:08.478533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.302 [2024-11-28 08:26:08.478553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:26.302 [2024-11-28 08:26:08.482973] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43660) with pdu=0x200016eff3c8 00:27:26.302 [2024-11-28 08:26:08.483223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.302 [2024-11-28 08:26:08.483243] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:26.302 [2024-11-28 08:26:08.486897] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43660) with pdu=0x200016eff3c8 00:27:26.302 [2024-11-28 08:26:08.487111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.302 [2024-11-28 08:26:08.487131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:26.302 [2024-11-28 08:26:08.490781] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43660) with pdu=0x200016eff3c8 00:27:26.302 [2024-11-28 08:26:08.490994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.302 [2024-11-28 08:26:08.491014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:26.302 [2024-11-28 08:26:08.494690] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43660) with pdu=0x200016eff3c8 00:27:26.302 [2024-11-28 08:26:08.494894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.302 [2024-11-28 08:26:08.494914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:26.302 [2024-11-28 08:26:08.498568] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43660) with pdu=0x200016eff3c8 00:27:26.302 [2024-11-28 08:26:08.498763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:27:26.302 [2024-11-28 08:26:08.498783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:26.302 [2024-11-28 08:26:08.502444] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43660) with pdu=0x200016eff3c8 00:27:26.302 [2024-11-28 08:26:08.502689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.302 [2024-11-28 08:26:08.502709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:26.302 [2024-11-28 08:26:08.506919] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43660) with pdu=0x200016eff3c8 00:27:26.302 [2024-11-28 08:26:08.507198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.302 [2024-11-28 08:26:08.507219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:26.302 [2024-11-28 08:26:08.512338] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43660) with pdu=0x200016eff3c8 00:27:26.302 [2024-11-28 08:26:08.512634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.302 [2024-11-28 08:26:08.512654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:26.302 [2024-11-28 08:26:08.517257] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43660) with pdu=0x200016eff3c8 00:27:26.302 [2024-11-28 08:26:08.517485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19936 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.302 [2024-11-28 08:26:08.517505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:26.302 [2024-11-28 08:26:08.521670] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43660) with pdu=0x200016eff3c8 00:27:26.302 [2024-11-28 08:26:08.521910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.302 [2024-11-28 08:26:08.521930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:26.302 [2024-11-28 08:26:08.526051] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43660) with pdu=0x200016eff3c8 00:27:26.302 [2024-11-28 08:26:08.526249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.302 [2024-11-28 08:26:08.526269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:26.302 [2024-11-28 08:26:08.530735] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43660) with pdu=0x200016eff3c8 00:27:26.302 [2024-11-28 08:26:08.530951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.302 [2024-11-28 08:26:08.530972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:26.302 [2024-11-28 08:26:08.534805] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43660) with pdu=0x200016eff3c8 00:27:26.302 [2024-11-28 08:26:08.535052] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.302 [2024-11-28 08:26:08.535073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:26.302 [2024-11-28 08:26:08.539549] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43660) with pdu=0x200016eff3c8 00:27:26.302 [2024-11-28 08:26:08.539764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.302 [2024-11-28 08:26:08.539784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:26.302 [2024-11-28 08:26:08.544736] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43660) with pdu=0x200016eff3c8 00:27:26.302 [2024-11-28 08:26:08.545061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.302 [2024-11-28 08:26:08.545081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:26.302 [2024-11-28 08:26:08.550038] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43660) with pdu=0x200016eff3c8 00:27:26.302 [2024-11-28 08:26:08.550296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.302 [2024-11-28 08:26:08.550323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:26.302 [2024-11-28 08:26:08.555293] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43660) with pdu=0x200016eff3c8 00:27:26.302 [2024-11-28 08:26:08.555499] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.302 [2024-11-28 08:26:08.555520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:26.302 [2024-11-28 08:26:08.560197] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43660) with pdu=0x200016eff3c8 00:27:26.303 [2024-11-28 08:26:08.560519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.303 [2024-11-28 08:26:08.560540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:26.303 [2024-11-28 08:26:08.565517] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43660) with pdu=0x200016eff3c8 00:27:26.562 [2024-11-28 08:26:08.565752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.562 [2024-11-28 08:26:08.565773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:26.562 [2024-11-28 08:26:08.571352] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43660) with pdu=0x200016eff3c8 00:27:26.562 [2024-11-28 08:26:08.571633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.562 [2024-11-28 08:26:08.571653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:26.562 [2024-11-28 08:26:08.576666] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43660) with pdu=0x200016eff3c8 
00:27:26.562 [2024-11-28 08:26:08.576888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.563 [2024-11-28 08:26:08.576908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:26.563 [2024-11-28 08:26:08.580891] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43660) with pdu=0x200016eff3c8 00:27:26.563 [2024-11-28 08:26:08.581143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.563 [2024-11-28 08:26:08.581163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:26.563 [2024-11-28 08:26:08.585630] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43660) with pdu=0x200016eff3c8 00:27:26.563 [2024-11-28 08:26:08.585814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.563 [2024-11-28 08:26:08.585834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:26.563 [2024-11-28 08:26:08.590346] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43660) with pdu=0x200016eff3c8 00:27:26.563 [2024-11-28 08:26:08.590562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.563 [2024-11-28 08:26:08.590583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:26.563 [2024-11-28 08:26:08.595285] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0xd43660) with pdu=0x200016eff3c8 00:27:26.563 [2024-11-28 08:26:08.595510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.563 [2024-11-28 08:26:08.595531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:26.563 [2024-11-28 08:26:08.599901] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43660) with pdu=0x200016eff3c8 00:27:26.563 [2024-11-28 08:26:08.600116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.563 [2024-11-28 08:26:08.600136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:26.563 [2024-11-28 08:26:08.604441] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43660) with pdu=0x200016eff3c8 00:27:26.563 [2024-11-28 08:26:08.604664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.563 [2024-11-28 08:26:08.604684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:26.563 [2024-11-28 08:26:08.608998] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43660) with pdu=0x200016eff3c8 00:27:26.563 [2024-11-28 08:26:08.609247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.563 [2024-11-28 08:26:08.609267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:26.563 [2024-11-28 08:26:08.613365] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43660) with pdu=0x200016eff3c8 00:27:26.563 [2024-11-28 08:26:08.613624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.563 [2024-11-28 08:26:08.613644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:26.563 [2024-11-28 08:26:08.617688] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43660) with pdu=0x200016eff3c8 00:27:26.563 [2024-11-28 08:26:08.617874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.563 [2024-11-28 08:26:08.617895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:26.563 [2024-11-28 08:26:08.621997] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43660) with pdu=0x200016eff3c8 00:27:26.563 [2024-11-28 08:26:08.622235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.563 [2024-11-28 08:26:08.622255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:26.563 [2024-11-28 08:26:08.626530] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43660) with pdu=0x200016eff3c8 00:27:26.563 [2024-11-28 08:26:08.626744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.563 [2024-11-28 08:26:08.626764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 
00:27:26.563 [2024-11-28 08:26:08.630839] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43660) with pdu=0x200016eff3c8 00:27:26.563 [2024-11-28 08:26:08.631090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.563 [2024-11-28 08:26:08.631110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:26.563 [2024-11-28 08:26:08.635065] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43660) with pdu=0x200016eff3c8 00:27:26.563 [2024-11-28 08:26:08.635310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.563 [2024-11-28 08:26:08.635331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:26.563 [2024-11-28 08:26:08.639312] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43660) with pdu=0x200016eff3c8 00:27:26.563 [2024-11-28 08:26:08.639556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.563 [2024-11-28 08:26:08.639576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:26.563 [2024-11-28 08:26:08.643574] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43660) with pdu=0x200016eff3c8 00:27:26.563 [2024-11-28 08:26:08.643818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.563 [2024-11-28 08:26:08.643839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:26.563 [2024-11-28 08:26:08.647912] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43660) with pdu=0x200016eff3c8 00:27:26.563 [2024-11-28 08:26:08.648141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.563 [2024-11-28 08:26:08.648161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:26.563 [2024-11-28 08:26:08.652334] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43660) with pdu=0x200016eff3c8 00:27:26.563 [2024-11-28 08:26:08.652544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.563 [2024-11-28 08:26:08.652565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:26.563 [2024-11-28 08:26:08.656521] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43660) with pdu=0x200016eff3c8 00:27:26.563 [2024-11-28 08:26:08.656754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.563 [2024-11-28 08:26:08.656775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:26.563 [2024-11-28 08:26:08.660967] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43660) with pdu=0x200016eff3c8 00:27:26.563 [2024-11-28 08:26:08.661162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.563 [2024-11-28 08:26:08.661182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:26.563 [2024-11-28 08:26:08.665403] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43660) with pdu=0x200016eff3c8 00:27:26.563 [2024-11-28 08:26:08.665666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.563 [2024-11-28 08:26:08.665686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:26.563 [2024-11-28 08:26:08.669690] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43660) with pdu=0x200016eff3c8 00:27:26.563 [2024-11-28 08:26:08.669913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.564 [2024-11-28 08:26:08.669937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:26.564 [2024-11-28 08:26:08.674229] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43660) with pdu=0x200016eff3c8 00:27:26.564 [2024-11-28 08:26:08.674478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.564 [2024-11-28 08:26:08.674499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:26.564 [2024-11-28 08:26:08.679224] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43660) with pdu=0x200016eff3c8 00:27:26.564 [2024-11-28 08:26:08.679413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.564 [2024-11-28 08:26:08.679433] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:26.564 [2024-11-28 08:26:08.683589] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43660) with pdu=0x200016eff3c8 00:27:26.564 [2024-11-28 08:26:08.683788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.564 [2024-11-28 08:26:08.683808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:26.564 [2024-11-28 08:26:08.688938] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43660) with pdu=0x200016eff3c8 00:27:26.564 [2024-11-28 08:26:08.689201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.564 [2024-11-28 08:26:08.689221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:26.564 [2024-11-28 08:26:08.693962] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43660) with pdu=0x200016eff3c8 00:27:26.564 [2024-11-28 08:26:08.694161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.564 [2024-11-28 08:26:08.694181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:26.564 [2024-11-28 08:26:08.698431] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43660) with pdu=0x200016eff3c8 00:27:26.564 [2024-11-28 08:26:08.698637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:27:26.564 [2024-11-28 08:26:08.698657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:26.564 [2024-11-28 08:26:08.702876] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43660) with pdu=0x200016eff3c8 00:27:26.564 [2024-11-28 08:26:08.703123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.564 [2024-11-28 08:26:08.703144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:26.564 [2024-11-28 08:26:08.707471] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43660) with pdu=0x200016eff3c8 00:27:26.564 [2024-11-28 08:26:08.707711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.564 [2024-11-28 08:26:08.707730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:26.564 [2024-11-28 08:26:08.712022] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43660) with pdu=0x200016eff3c8 00:27:26.564 [2024-11-28 08:26:08.712227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.564 [2024-11-28 08:26:08.712248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:26.564 [2024-11-28 08:26:08.716591] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43660) with pdu=0x200016eff3c8 00:27:26.564 [2024-11-28 08:26:08.716834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22848 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.564 [2024-11-28 08:26:08.716854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:26.564 [2024-11-28 08:26:08.721156] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43660) with pdu=0x200016eff3c8 00:27:26.564 [2024-11-28 08:26:08.721338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.564 [2024-11-28 08:26:08.721357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:26.564 [2024-11-28 08:26:08.725521] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43660) with pdu=0x200016eff3c8 00:27:26.564 [2024-11-28 08:26:08.725728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.564 [2024-11-28 08:26:08.725748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:26.564 [2024-11-28 08:26:08.729784] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43660) with pdu=0x200016eff3c8 00:27:26.564 [2024-11-28 08:26:08.729988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.564 [2024-11-28 08:26:08.730008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:26.564 [2024-11-28 08:26:08.733976] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43660) with pdu=0x200016eff3c8 00:27:26.564 [2024-11-28 08:26:08.734161] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.564 [2024-11-28 08:26:08.734181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:26.564 [2024-11-28 08:26:08.739040] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43660) with pdu=0x200016eff3c8 00:27:26.564 [2024-11-28 08:26:08.739370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.564 [2024-11-28 08:26:08.739390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:26.564 [2024-11-28 08:26:08.744608] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43660) with pdu=0x200016eff3c8 00:27:26.564 [2024-11-28 08:26:08.744875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.564 [2024-11-28 08:26:08.744895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:26.564 [2024-11-28 08:26:08.750404] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43660) with pdu=0x200016eff3c8 00:27:26.564 [2024-11-28 08:26:08.750602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.564 [2024-11-28 08:26:08.750622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:26.564 [2024-11-28 08:26:08.755176] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43660) with pdu=0x200016eff3c8 00:27:26.564 [2024-11-28 08:26:08.755434] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.564 [2024-11-28 08:26:08.755455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:26.564 [2024-11-28 08:26:08.759332] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43660) with pdu=0x200016eff3c8 00:27:26.564 [2024-11-28 08:26:08.759585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.564 [2024-11-28 08:26:08.759606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:26.564 [2024-11-28 08:26:08.763812] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43660) with pdu=0x200016eff3c8 00:27:26.564 [2024-11-28 08:26:08.764050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.564 [2024-11-28 08:26:08.764070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:26.564 [2024-11-28 08:26:08.768319] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43660) with pdu=0x200016eff3c8 00:27:26.564 [2024-11-28 08:26:08.768564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.565 [2024-11-28 08:26:08.768585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:26.565 [2024-11-28 08:26:08.772734] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43660) with pdu=0x200016eff3c8 
00:27:26.565 [2024-11-28 08:26:08.772874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.565 [2024-11-28 08:26:08.772894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:26.565 [2024-11-28 08:26:08.777147] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43660) with pdu=0x200016eff3c8 00:27:26.565 [2024-11-28 08:26:08.777416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.565 [2024-11-28 08:26:08.777436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:26.565 [2024-11-28 08:26:08.781866] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43660) with pdu=0x200016eff3c8 00:27:26.565 [2024-11-28 08:26:08.782080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.565 [2024-11-28 08:26:08.782100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:26.565 6253.00 IOPS, 781.62 MiB/s [2024-11-28T07:26:08.834Z] [2024-11-28 08:26:08.787428] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43660) with pdu=0x200016eff3c8 00:27:26.565 [2024-11-28 08:26:08.787633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.565 [2024-11-28 08:26:08.787655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:26.565 [2024-11-28 08:26:08.791382] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43660) with pdu=0x200016eff3c8 00:27:26.565 [2024-11-28 08:26:08.791590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.565 [2024-11-28 08:26:08.791614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:26.565 [2024-11-28 08:26:08.795288] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43660) with pdu=0x200016eff3c8 00:27:26.565 [2024-11-28 08:26:08.795532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.565 [2024-11-28 08:26:08.795554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:26.565 [2024-11-28 08:26:08.799233] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43660) with pdu=0x200016eff3c8 00:27:26.565 [2024-11-28 08:26:08.799446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.565 [2024-11-28 08:26:08.799467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:26.565 [2024-11-28 08:26:08.803183] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43660) with pdu=0x200016eff3c8 00:27:26.565 [2024-11-28 08:26:08.803398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.565 [2024-11-28 08:26:08.803420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 
dnr:0 00:27:26.565 [2024-11-28 08:26:08.807159] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43660) with pdu=0x200016eff3c8 00:27:26.565 [2024-11-28 08:26:08.807370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.565 [2024-11-28 08:26:08.807391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:26.565 [2024-11-28 08:26:08.811093] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43660) with pdu=0x200016eff3c8 00:27:26.565 [2024-11-28 08:26:08.811306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.565 [2024-11-28 08:26:08.811327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:26.565 [2024-11-28 08:26:08.814984] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43660) with pdu=0x200016eff3c8 00:27:26.565 [2024-11-28 08:26:08.815175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.565 [2024-11-28 08:26:08.815196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:26.565 [2024-11-28 08:26:08.818837] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43660) with pdu=0x200016eff3c8 00:27:26.565 [2024-11-28 08:26:08.819057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.565 [2024-11-28 08:26:08.819078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:26.565 [2024-11-28 08:26:08.822700] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43660) with pdu=0x200016eff3c8 00:27:26.565 [2024-11-28 08:26:08.822894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.565 [2024-11-28 08:26:08.822915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:26.565 [2024-11-28 08:26:08.826804] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43660) with pdu=0x200016eff3c8 00:27:26.565 [2024-11-28 08:26:08.827028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.565 [2024-11-28 08:26:08.827049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:26.825 [2024-11-28 08:26:08.830789] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43660) with pdu=0x200016eff3c8 00:27:26.825 [2024-11-28 08:26:08.830999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.825 [2024-11-28 08:26:08.831019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:26.825 [2024-11-28 08:26:08.834713] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43660) with pdu=0x200016eff3c8 00:27:26.825 [2024-11-28 08:26:08.834923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.825 [2024-11-28 08:26:08.834945] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:26.825 [2024-11-28 08:26:08.838600] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43660) with pdu=0x200016eff3c8 00:27:26.825 [2024-11-28 08:26:08.838813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.825 [2024-11-28 08:26:08.838833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:26.825 [2024-11-28 08:26:08.842497] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43660) with pdu=0x200016eff3c8 00:27:26.825 [2024-11-28 08:26:08.842701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.825 [2024-11-28 08:26:08.842720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:26.825 [2024-11-28 08:26:08.846364] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43660) with pdu=0x200016eff3c8 00:27:26.825 [2024-11-28 08:26:08.846566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.825 [2024-11-28 08:26:08.846586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:26.825 [2024-11-28 08:26:08.850255] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43660) with pdu=0x200016eff3c8 00:27:26.825 [2024-11-28 08:26:08.850455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.825 [2024-11-28 08:26:08.850475] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:26.825 [2024-11-28 08:26:08.854123] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43660) with pdu=0x200016eff3c8 00:27:26.825 [2024-11-28 08:26:08.854366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.825 [2024-11-28 08:26:08.854386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:26.825 [2024-11-28 08:26:08.858143] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43660) with pdu=0x200016eff3c8 00:27:26.825 [2024-11-28 08:26:08.858349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.825 [2024-11-28 08:26:08.858370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:26.825 [2024-11-28 08:26:08.862689] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43660) with pdu=0x200016eff3c8 00:27:26.825 [2024-11-28 08:26:08.862853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.825 [2024-11-28 08:26:08.862873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:26.825 [2024-11-28 08:26:08.867579] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43660) with pdu=0x200016eff3c8 00:27:26.825 [2024-11-28 08:26:08.867770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:27:26.825 [2024-11-28 08:26:08.867790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:26.825 [2024-11-28 08:26:08.872169] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43660) with pdu=0x200016eff3c8 00:27:26.825 [2024-11-28 08:26:08.872372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.825 [2024-11-28 08:26:08.872392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:26.825 [2024-11-28 08:26:08.876809] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43660) with pdu=0x200016eff3c8 00:27:26.825 [2024-11-28 08:26:08.877019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.825 [2024-11-28 08:26:08.877038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:26.825 [2024-11-28 08:26:08.881607] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43660) with pdu=0x200016eff3c8 00:27:26.825 [2024-11-28 08:26:08.881793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.825 [2024-11-28 08:26:08.881813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:26.825 [2024-11-28 08:26:08.886936] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43660) with pdu=0x200016eff3c8 00:27:26.825 [2024-11-28 08:26:08.887145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5376 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.825 [2024-11-28 08:26:08.887165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:26.825 [2024-11-28 08:26:08.891752] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43660) with pdu=0x200016eff3c8 00:27:26.825 [2024-11-28 08:26:08.891958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.825 [2024-11-28 08:26:08.891978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:26.825 [2024-11-28 08:26:08.895923] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43660) with pdu=0x200016eff3c8 00:27:26.825 [2024-11-28 08:26:08.896125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.825 [2024-11-28 08:26:08.896145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:26.825 [2024-11-28 08:26:08.899937] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43660) with pdu=0x200016eff3c8 00:27:26.825 [2024-11-28 08:26:08.900173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.826 [2024-11-28 08:26:08.900200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:26.826 [2024-11-28 08:26:08.904091] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43660) with pdu=0x200016eff3c8 00:27:26.826 [2024-11-28 08:26:08.904290] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.826 [2024-11-28 08:26:08.904310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:26.826 [2024-11-28 08:26:08.908173] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43660) with pdu=0x200016eff3c8 00:27:26.826 [2024-11-28 08:26:08.908356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.826 [2024-11-28 08:26:08.908377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:26.826 [2024-11-28 08:26:08.912358] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43660) with pdu=0x200016eff3c8 00:27:26.826 [2024-11-28 08:26:08.912552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.826 [2024-11-28 08:26:08.912572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:26.826 [2024-11-28 08:26:08.916545] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43660) with pdu=0x200016eff3c8 00:27:26.826 [2024-11-28 08:26:08.916734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.826 [2024-11-28 08:26:08.916754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:26.826 [2024-11-28 08:26:08.920471] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43660) with pdu=0x200016eff3c8 00:27:26.826 [2024-11-28 08:26:08.920673] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.826 [2024-11-28 08:26:08.920693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:26.826 [2024-11-28 08:26:08.924380] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43660) with pdu=0x200016eff3c8 00:27:26.826 [2024-11-28 08:26:08.924582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.826 [2024-11-28 08:26:08.924602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:26.826 [2024-11-28 08:26:08.928249] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43660) with pdu=0x200016eff3c8 00:27:26.826 [2024-11-28 08:26:08.928455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.826 [2024-11-28 08:26:08.928475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:26.826 [2024-11-28 08:26:08.932122] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43660) with pdu=0x200016eff3c8 00:27:26.826 [2024-11-28 08:26:08.932331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.826 [2024-11-28 08:26:08.932350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:26.826 [2024-11-28 08:26:08.936072] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43660) with pdu=0x200016eff3c8 
00:27:26.826 [2024-11-28 08:26:08.936289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.826 [2024-11-28 08:26:08.936309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:26.826 [2024-11-28 08:26:08.939972] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43660) with pdu=0x200016eff3c8 00:27:26.826 [2024-11-28 08:26:08.940179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.826 [2024-11-28 08:26:08.940200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:26.826 [2024-11-28 08:26:08.944296] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43660) with pdu=0x200016eff3c8 00:27:26.826 [2024-11-28 08:26:08.944495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.826 [2024-11-28 08:26:08.944515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:26.826 [2024-11-28 08:26:08.948756] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43660) with pdu=0x200016eff3c8 00:27:26.826 [2024-11-28 08:26:08.948969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.826 [2024-11-28 08:26:08.948990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:26.826 [2024-11-28 08:26:08.953667] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0xd43660) with pdu=0x200016eff3c8
00:27:26.826 [2024-11-28 08:26:08.953860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:26.826 [2024-11-28 08:26:08.953880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:26.826 [2024-11-28 08:26:08.957912] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43660) with pdu=0x200016eff3c8
00:27:26.826 [2024-11-28 08:26:08.958108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:26.826 [2024-11-28 08:26:08.958129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
[... the same three-record pattern — a tcp.c:2233:data_crc32_calc_done *ERROR* "Data digest error" on tqpair=(0xd43660) pdu=0x200016eff3c8, the offending WRITE command (sqid:1, cid:0/1/2, nsid:1, len:32, lba varies), and its COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion (sqhd cycles 0002/0022/0042/0062) — repeats for the remaining WRITE commands from 08:26:08.962 through 08:26:09.345 ...]
00:27:27.090 [2024-11-28 08:26:09.349102] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43660) with pdu=0x200016eff3c8
00:27:27.090 [2024-11-28 08:26:09.349317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.090 [2024-11-28 08:26:09.349338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:27.090 [2024-11-28 08:26:09.353088] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43660) with pdu=0x200016eff3c8 00:27:27.090 [2024-11-28 08:26:09.353299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.090 [2024-11-28 08:26:09.353319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:27.351 [2024-11-28 08:26:09.357004] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43660) with pdu=0x200016eff3c8 00:27:27.351 [2024-11-28 08:26:09.357256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.351 [2024-11-28 08:26:09.357277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:27.351 [2024-11-28 08:26:09.361390] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43660) with pdu=0x200016eff3c8 00:27:27.351 [2024-11-28 08:26:09.361655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.351 [2024-11-28 08:26:09.361676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:27.351 [2024-11-28 08:26:09.365732] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0xd43660) with pdu=0x200016eff3c8 00:27:27.351 [2024-11-28 08:26:09.365980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.351 [2024-11-28 08:26:09.366001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:27.351 [2024-11-28 08:26:09.369929] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43660) with pdu=0x200016eff3c8 00:27:27.351 [2024-11-28 08:26:09.370138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.351 [2024-11-28 08:26:09.370158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:27.351 [2024-11-28 08:26:09.374260] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43660) with pdu=0x200016eff3c8 00:27:27.351 [2024-11-28 08:26:09.374503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.351 [2024-11-28 08:26:09.374524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:27.351 [2024-11-28 08:26:09.378824] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43660) with pdu=0x200016eff3c8 00:27:27.351 [2024-11-28 08:26:09.379080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.351 [2024-11-28 08:26:09.379100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:27.351 [2024-11-28 08:26:09.383323] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43660) with pdu=0x200016eff3c8 00:27:27.351 [2024-11-28 08:26:09.383555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.351 [2024-11-28 08:26:09.383575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:27.351 [2024-11-28 08:26:09.387735] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43660) with pdu=0x200016eff3c8 00:27:27.351 [2024-11-28 08:26:09.387968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.351 [2024-11-28 08:26:09.387988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:27.351 [2024-11-28 08:26:09.392167] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43660) with pdu=0x200016eff3c8 00:27:27.351 [2024-11-28 08:26:09.392352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.352 [2024-11-28 08:26:09.392373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:27.352 [2024-11-28 08:26:09.396591] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43660) with pdu=0x200016eff3c8 00:27:27.352 [2024-11-28 08:26:09.396858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.352 [2024-11-28 08:26:09.396879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 
dnr:0 00:27:27.352 [2024-11-28 08:26:09.401026] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43660) with pdu=0x200016eff3c8 00:27:27.352 [2024-11-28 08:26:09.401225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.352 [2024-11-28 08:26:09.401246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:27.352 [2024-11-28 08:26:09.405135] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43660) with pdu=0x200016eff3c8 00:27:27.352 [2024-11-28 08:26:09.405348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.352 [2024-11-28 08:26:09.405368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:27.352 [2024-11-28 08:26:09.409995] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43660) with pdu=0x200016eff3c8 00:27:27.352 [2024-11-28 08:26:09.410241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.352 [2024-11-28 08:26:09.410261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:27.352 [2024-11-28 08:26:09.415286] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43660) with pdu=0x200016eff3c8 00:27:27.352 [2024-11-28 08:26:09.415563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.352 [2024-11-28 08:26:09.415583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:27.352 [2024-11-28 08:26:09.420196] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43660) with pdu=0x200016eff3c8 00:27:27.352 [2024-11-28 08:26:09.420452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.352 [2024-11-28 08:26:09.420472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:27.352 [2024-11-28 08:26:09.425382] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43660) with pdu=0x200016eff3c8 00:27:27.352 [2024-11-28 08:26:09.425651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.352 [2024-11-28 08:26:09.425672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:27.352 [2024-11-28 08:26:09.430702] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43660) with pdu=0x200016eff3c8 00:27:27.352 [2024-11-28 08:26:09.430898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.352 [2024-11-28 08:26:09.430922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:27.352 [2024-11-28 08:26:09.436191] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43660) with pdu=0x200016eff3c8 00:27:27.352 [2024-11-28 08:26:09.436448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.352 [2024-11-28 08:26:09.436469] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:27.352 [2024-11-28 08:26:09.441658] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43660) with pdu=0x200016eff3c8 00:27:27.352 [2024-11-28 08:26:09.441834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.352 [2024-11-28 08:26:09.441854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:27.352 [2024-11-28 08:26:09.446980] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43660) with pdu=0x200016eff3c8 00:27:27.352 [2024-11-28 08:26:09.447244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.352 [2024-11-28 08:26:09.447265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:27.352 [2024-11-28 08:26:09.452504] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43660) with pdu=0x200016eff3c8 00:27:27.352 [2024-11-28 08:26:09.452612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.352 [2024-11-28 08:26:09.452631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:27.352 [2024-11-28 08:26:09.458222] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43660) with pdu=0x200016eff3c8 00:27:27.352 [2024-11-28 08:26:09.458536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.352 [2024-11-28 08:26:09.458557] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:27.352 [2024-11-28 08:26:09.463550] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43660) with pdu=0x200016eff3c8 00:27:27.352 [2024-11-28 08:26:09.463767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.352 [2024-11-28 08:26:09.463787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:27.352 [2024-11-28 08:26:09.469224] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43660) with pdu=0x200016eff3c8 00:27:27.352 [2024-11-28 08:26:09.469497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.352 [2024-11-28 08:26:09.469518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:27.352 [2024-11-28 08:26:09.474930] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43660) with pdu=0x200016eff3c8 00:27:27.352 [2024-11-28 08:26:09.475223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.352 [2024-11-28 08:26:09.475243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:27.352 [2024-11-28 08:26:09.480672] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43660) with pdu=0x200016eff3c8 00:27:27.352 [2024-11-28 08:26:09.480899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:27:27.352 [2024-11-28 08:26:09.480919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:27.352 [2024-11-28 08:26:09.486051] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43660) with pdu=0x200016eff3c8 00:27:27.352 [2024-11-28 08:26:09.486315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.352 [2024-11-28 08:26:09.486336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:27.352 [2024-11-28 08:26:09.491598] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43660) with pdu=0x200016eff3c8 00:27:27.352 [2024-11-28 08:26:09.491916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.352 [2024-11-28 08:26:09.491937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:27.352 [2024-11-28 08:26:09.497098] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43660) with pdu=0x200016eff3c8 00:27:27.352 [2024-11-28 08:26:09.497328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.352 [2024-11-28 08:26:09.497348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:27.352 [2024-11-28 08:26:09.502397] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43660) with pdu=0x200016eff3c8 00:27:27.352 [2024-11-28 08:26:09.502679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:21024 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.352 [2024-11-28 08:26:09.502700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:27.352 [2024-11-28 08:26:09.508001] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43660) with pdu=0x200016eff3c8 00:27:27.352 [2024-11-28 08:26:09.508215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.352 [2024-11-28 08:26:09.508236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:27.352 [2024-11-28 08:26:09.513662] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43660) with pdu=0x200016eff3c8 00:27:27.352 [2024-11-28 08:26:09.513849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.352 [2024-11-28 08:26:09.513869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:27.352 [2024-11-28 08:26:09.519168] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43660) with pdu=0x200016eff3c8 00:27:27.352 [2024-11-28 08:26:09.519434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.352 [2024-11-28 08:26:09.519455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:27.352 [2024-11-28 08:26:09.523476] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43660) with pdu=0x200016eff3c8 00:27:27.352 [2024-11-28 08:26:09.523672] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.352 [2024-11-28 08:26:09.523693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:27.352 [2024-11-28 08:26:09.527807] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43660) with pdu=0x200016eff3c8 00:27:27.352 [2024-11-28 08:26:09.528011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.352 [2024-11-28 08:26:09.528031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:27.352 [2024-11-28 08:26:09.532297] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43660) with pdu=0x200016eff3c8 00:27:27.352 [2024-11-28 08:26:09.532553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.353 [2024-11-28 08:26:09.532573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:27.353 [2024-11-28 08:26:09.536742] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43660) with pdu=0x200016eff3c8 00:27:27.353 [2024-11-28 08:26:09.536956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.353 [2024-11-28 08:26:09.536977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:27.353 [2024-11-28 08:26:09.540790] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43660) with pdu=0x200016eff3c8 00:27:27.353 [2024-11-28 08:26:09.541033] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.353 [2024-11-28 08:26:09.541054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:27.353 [2024-11-28 08:26:09.545526] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43660) with pdu=0x200016eff3c8 00:27:27.353 [2024-11-28 08:26:09.545763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.353 [2024-11-28 08:26:09.545783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:27.353 [2024-11-28 08:26:09.550744] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43660) with pdu=0x200016eff3c8 00:27:27.353 [2024-11-28 08:26:09.551089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.353 [2024-11-28 08:26:09.551110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:27.353 [2024-11-28 08:26:09.555446] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43660) with pdu=0x200016eff3c8 00:27:27.353 [2024-11-28 08:26:09.555665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.353 [2024-11-28 08:26:09.555685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:27.353 [2024-11-28 08:26:09.560618] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43660) with 
pdu=0x200016eff3c8 00:27:27.353 [2024-11-28 08:26:09.560908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.353 [2024-11-28 08:26:09.560929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:27.353 [2024-11-28 08:26:09.566375] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43660) with pdu=0x200016eff3c8 00:27:27.353 [2024-11-28 08:26:09.566575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.353 [2024-11-28 08:26:09.566600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:27.353 [2024-11-28 08:26:09.573007] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43660) with pdu=0x200016eff3c8 00:27:27.353 [2024-11-28 08:26:09.573281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.353 [2024-11-28 08:26:09.573302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:27.353 [2024-11-28 08:26:09.579903] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43660) with pdu=0x200016eff3c8 00:27:27.353 [2024-11-28 08:26:09.580100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.353 [2024-11-28 08:26:09.580121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:27.353 [2024-11-28 08:26:09.585049] tcp.c:2233:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0xd43660) with pdu=0x200016eff3c8 00:27:27.353 [2024-11-28 08:26:09.585222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.353 [2024-11-28 08:26:09.585242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:27.353 [2024-11-28 08:26:09.589855] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43660) with pdu=0x200016eff3c8 00:27:27.353 [2024-11-28 08:26:09.590077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.353 [2024-11-28 08:26:09.590097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:27.353 [2024-11-28 08:26:09.594407] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43660) with pdu=0x200016eff3c8 00:27:27.353 [2024-11-28 08:26:09.594609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.353 [2024-11-28 08:26:09.594629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:27.353 [2024-11-28 08:26:09.598420] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43660) with pdu=0x200016eff3c8 00:27:27.353 [2024-11-28 08:26:09.598618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.353 [2024-11-28 08:26:09.598638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:27.353 [2024-11-28 
08:26:09.602341] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43660) with pdu=0x200016eff3c8 00:27:27.353 [2024-11-28 08:26:09.602542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.353 [2024-11-28 08:26:09.602562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:27.353 [2024-11-28 08:26:09.606262] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43660) with pdu=0x200016eff3c8 00:27:27.353 [2024-11-28 08:26:09.606482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.353 [2024-11-28 08:26:09.606501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:27.353 [2024-11-28 08:26:09.610178] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43660) with pdu=0x200016eff3c8 00:27:27.353 [2024-11-28 08:26:09.610377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.353 [2024-11-28 08:26:09.610397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:27.353 [2024-11-28 08:26:09.614212] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43660) with pdu=0x200016eff3c8 00:27:27.353 [2024-11-28 08:26:09.614426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.353 [2024-11-28 08:26:09.614446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 
sqhd:0042 p:0 m:0 dnr:0 00:27:27.614 [2024-11-28 08:26:09.618218] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43660) with pdu=0x200016eff3c8 00:27:27.614 [2024-11-28 08:26:09.618433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.614 [2024-11-28 08:26:09.618453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:27.614 [2024-11-28 08:26:09.622170] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43660) with pdu=0x200016eff3c8 00:27:27.614 [2024-11-28 08:26:09.622382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.614 [2024-11-28 08:26:09.622402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:27.614 [2024-11-28 08:26:09.626105] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43660) with pdu=0x200016eff3c8 00:27:27.614 [2024-11-28 08:26:09.626329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.614 [2024-11-28 08:26:09.626349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:27.614 [2024-11-28 08:26:09.629997] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43660) with pdu=0x200016eff3c8 00:27:27.614 [2024-11-28 08:26:09.630198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.614 [2024-11-28 08:26:09.630219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:27.614 [2024-11-28 08:26:09.633873] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43660) with pdu=0x200016eff3c8 00:27:27.614 [2024-11-28 08:26:09.634092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.614 [2024-11-28 08:26:09.634112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:27.614 [2024-11-28 08:26:09.637750] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43660) with pdu=0x200016eff3c8 00:27:27.614 [2024-11-28 08:26:09.637976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.614 [2024-11-28 08:26:09.637996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:27.614 [2024-11-28 08:26:09.641661] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43660) with pdu=0x200016eff3c8 00:27:27.614 [2024-11-28 08:26:09.641871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.614 [2024-11-28 08:26:09.641891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:27.614 [2024-11-28 08:26:09.645537] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43660) with pdu=0x200016eff3c8 00:27:27.614 [2024-11-28 08:26:09.645739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.614 [2024-11-28 08:26:09.645759] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:27.614 [2024-11-28 08:26:09.649442] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43660) with pdu=0x200016eff3c8 00:27:27.614 [2024-11-28 08:26:09.649637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.614 [2024-11-28 08:26:09.649657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:27.614 [2024-11-28 08:26:09.653302] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43660) with pdu=0x200016eff3c8 00:27:27.614 [2024-11-28 08:26:09.653522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.614 [2024-11-28 08:26:09.653542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:27.614 [2024-11-28 08:26:09.657172] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43660) with pdu=0x200016eff3c8 00:27:27.614 [2024-11-28 08:26:09.657378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.614 [2024-11-28 08:26:09.657399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:27.614 [2024-11-28 08:26:09.661043] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43660) with pdu=0x200016eff3c8 00:27:27.614 [2024-11-28 08:26:09.661254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.614 
[2024-11-28 08:26:09.661274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:27.614 [2024-11-28 08:26:09.664909] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43660) with pdu=0x200016eff3c8 00:27:27.614 [2024-11-28 08:26:09.665118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.614 [2024-11-28 08:26:09.665138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:27.614 [2024-11-28 08:26:09.668772] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43660) with pdu=0x200016eff3c8 00:27:27.614 [2024-11-28 08:26:09.668984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.614 [2024-11-28 08:26:09.669004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:27.614 [2024-11-28 08:26:09.672881] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43660) with pdu=0x200016eff3c8 00:27:27.614 [2024-11-28 08:26:09.673059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.614 [2024-11-28 08:26:09.673079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:27.614 [2024-11-28 08:26:09.677842] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43660) with pdu=0x200016eff3c8 00:27:27.614 [2024-11-28 08:26:09.678066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23072 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.614 [2024-11-28 08:26:09.678089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:27.614 [2024-11-28 08:26:09.682677] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43660) with pdu=0x200016eff3c8 00:27:27.614 [2024-11-28 08:26:09.682861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.614 [2024-11-28 08:26:09.682881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:27.614 [2024-11-28 08:26:09.686845] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43660) with pdu=0x200016eff3c8 00:27:27.614 [2024-11-28 08:26:09.687050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.614 [2024-11-28 08:26:09.687071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:27.614 [2024-11-28 08:26:09.691131] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43660) with pdu=0x200016eff3c8 00:27:27.615 [2024-11-28 08:26:09.691331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.615 [2024-11-28 08:26:09.691351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:27.615 [2024-11-28 08:26:09.695310] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43660) with pdu=0x200016eff3c8 00:27:27.615 [2024-11-28 08:26:09.695518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:0 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.615 [2024-11-28 08:26:09.695538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:27.615 [2024-11-28 08:26:09.699342] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43660) with pdu=0x200016eff3c8 00:27:27.615 [2024-11-28 08:26:09.699555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.615 [2024-11-28 08:26:09.699575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:27.615 [2024-11-28 08:26:09.703481] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43660) with pdu=0x200016eff3c8 00:27:27.615 [2024-11-28 08:26:09.703688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.615 [2024-11-28 08:26:09.703708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:27.615 [2024-11-28 08:26:09.708163] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43660) with pdu=0x200016eff3c8 00:27:27.615 [2024-11-28 08:26:09.708363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.615 [2024-11-28 08:26:09.708383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:27.615 [2024-11-28 08:26:09.713662] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43660) with pdu=0x200016eff3c8 00:27:27.615 [2024-11-28 08:26:09.713840] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.615 [2024-11-28 08:26:09.713861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:27.615 [2024-11-28 08:26:09.718004] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43660) with pdu=0x200016eff3c8 00:27:27.615 [2024-11-28 08:26:09.718192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.615 [2024-11-28 08:26:09.718212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:27.615 [2024-11-28 08:26:09.722149] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43660) with pdu=0x200016eff3c8 00:27:27.615 [2024-11-28 08:26:09.722352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.615 [2024-11-28 08:26:09.722373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:27.615 [2024-11-28 08:26:09.726311] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43660) with pdu=0x200016eff3c8 00:27:27.615 [2024-11-28 08:26:09.726506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.615 [2024-11-28 08:26:09.726526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:27.615 [2024-11-28 08:26:09.730614] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43660) with pdu=0x200016eff3c8 00:27:27.615 
[2024-11-28 08:26:09.730808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.615 [2024-11-28 08:26:09.730828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:27.615 [2024-11-28 08:26:09.734816] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43660) with pdu=0x200016eff3c8 00:27:27.615 [2024-11-28 08:26:09.735022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.615 [2024-11-28 08:26:09.735043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:27.615 [2024-11-28 08:26:09.738807] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43660) with pdu=0x200016eff3c8 00:27:27.615 [2024-11-28 08:26:09.739018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.615 [2024-11-28 08:26:09.739038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:27.615 [2024-11-28 08:26:09.743153] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43660) with pdu=0x200016eff3c8 00:27:27.615 [2024-11-28 08:26:09.743340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.615 [2024-11-28 08:26:09.743360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:27.615 [2024-11-28 08:26:09.747129] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0xd43660) with pdu=0x200016eff3c8 00:27:27.615 [2024-11-28 08:26:09.747324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.615 [2024-11-28 08:26:09.747344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:27.615 [2024-11-28 08:26:09.751058] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43660) with pdu=0x200016eff3c8 00:27:27.615 [2024-11-28 08:26:09.751270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.615 [2024-11-28 08:26:09.751290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:27.615 [2024-11-28 08:26:09.754993] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43660) with pdu=0x200016eff3c8 00:27:27.615 [2024-11-28 08:26:09.755195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.615 [2024-11-28 08:26:09.755216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:27.615 [2024-11-28 08:26:09.759216] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43660) with pdu=0x200016eff3c8 00:27:27.615 [2024-11-28 08:26:09.759390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.615 [2024-11-28 08:26:09.759409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:27.615 [2024-11-28 08:26:09.764249] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43660) with pdu=0x200016eff3c8 00:27:27.615 [2024-11-28 08:26:09.764427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.615 [2024-11-28 08:26:09.764447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:27.615 [2024-11-28 08:26:09.769094] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43660) with pdu=0x200016eff3c8 00:27:27.615 [2024-11-28 08:26:09.769296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.615 [2024-11-28 08:26:09.769316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:27.615 [2024-11-28 08:26:09.773294] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43660) with pdu=0x200016eff3c8 00:27:27.615 [2024-11-28 08:26:09.773479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.615 [2024-11-28 08:26:09.773499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:27.615 [2024-11-28 08:26:09.777423] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43660) with pdu=0x200016eff3c8 00:27:27.615 [2024-11-28 08:26:09.777633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.615 [2024-11-28 08:26:09.777653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 
00:27:27.615 [2024-11-28 08:26:09.781572] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43660) with pdu=0x200016eff3c8 00:27:27.615 [2024-11-28 08:26:09.781764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.615 [2024-11-28 08:26:09.781784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:27.615 [2024-11-28 08:26:09.785934] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd43660) with pdu=0x200016eff3c8 00:27:27.615 [2024-11-28 08:26:09.786130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.615 [2024-11-28 08:26:09.786149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:27.615 6500.00 IOPS, 812.50 MiB/s 00:27:27.615 Latency(us) 00:27:27.615 [2024-11-28T07:26:09.884Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:27.615 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:27:27.615 nvme0n1 : 2.00 6497.43 812.18 0.00 0.00 2458.22 1802.24 12366.36 00:27:27.615 [2024-11-28T07:26:09.884Z] =================================================================================================================== 00:27:27.615 [2024-11-28T07:26:09.884Z] Total : 6497.43 812.18 0.00 0.00 2458.22 1802.24 12366.36 00:27:27.615 { 00:27:27.615 "results": [ 00:27:27.615 { 00:27:27.615 "job": "nvme0n1", 00:27:27.615 "core_mask": "0x2", 00:27:27.615 "workload": "randwrite", 00:27:27.615 "status": "finished", 00:27:27.615 "queue_depth": 16, 00:27:27.615 "io_size": 131072, 00:27:27.615 "runtime": 2.003714, 00:27:27.615 "iops": 6497.434264570692, 00:27:27.615 "mibps": 812.1792830713365, 
00:27:27.615 "io_failed": 0, 00:27:27.615 "io_timeout": 0, 00:27:27.615 "avg_latency_us": 2458.2232154342987, 00:27:27.615 "min_latency_us": 1802.24, 00:27:27.615 "max_latency_us": 12366.358260869565 00:27:27.615 } 00:27:27.615 ], 00:27:27.615 "core_count": 1 00:27:27.615 } 00:27:27.616 08:26:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:27:27.616 08:26:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:27:27.616 08:26:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:27:27.616 | .driver_specific 00:27:27.616 | .nvme_error 00:27:27.616 | .status_code 00:27:27.616 | .command_transient_transport_error' 00:27:27.616 08:26:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:27:27.874 08:26:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 420 > 0 )) 00:27:27.874 08:26:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1509021 00:27:27.874 08:26:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 1509021 ']' 00:27:27.874 08:26:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 1509021 00:27:27.874 08:26:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:27:27.874 08:26:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:27.874 08:26:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1509021 00:27:27.874 08:26:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # 
process_name=reactor_1 00:27:27.874 08:26:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:27:27.874 08:26:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1509021' 00:27:27.874 killing process with pid 1509021 00:27:27.874 08:26:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 1509021 00:27:27.874 Received shutdown signal, test time was about 2.000000 seconds 00:27:27.874 00:27:27.874 Latency(us) 00:27:27.874 [2024-11-28T07:26:10.143Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:27.874 [2024-11-28T07:26:10.143Z] =================================================================================================================== 00:27:27.874 [2024-11-28T07:26:10.143Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:27.874 08:26:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 1509021 00:27:28.133 08:26:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 1507355 00:27:28.134 08:26:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 1507355 ']' 00:27:28.134 08:26:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 1507355 00:27:28.134 08:26:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:27:28.134 08:26:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:28.134 08:26:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1507355 00:27:28.134 08:26:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:27:28.134 08:26:10 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:27:28.134 08:26:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1507355' 00:27:28.134 killing process with pid 1507355 00:27:28.134 08:26:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 1507355 00:27:28.134 08:26:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 1507355 00:27:28.393 00:27:28.393 real 0m13.790s 00:27:28.393 user 0m26.379s 00:27:28.393 sys 0m4.531s 00:27:28.393 08:26:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:28.393 08:26:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:28.393 ************************************ 00:27:28.393 END TEST nvmf_digest_error 00:27:28.393 ************************************ 00:27:28.393 08:26:10 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:27:28.393 08:26:10 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:27:28.393 08:26:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@516 -- # nvmfcleanup 00:27:28.393 08:26:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@121 -- # sync 00:27:28.393 08:26:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:28.393 08:26:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@124 -- # set +e 00:27:28.393 08:26:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:28.393 08:26:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:28.393 rmmod nvme_tcp 00:27:28.393 rmmod nvme_fabrics 00:27:28.393 rmmod nvme_keyring 00:27:28.393 08:26:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 
00:27:28.393 08:26:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@128 -- # set -e 00:27:28.393 08:26:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@129 -- # return 0 00:27:28.393 08:26:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@517 -- # '[' -n 1507355 ']' 00:27:28.393 08:26:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@518 -- # killprocess 1507355 00:27:28.393 08:26:10 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@954 -- # '[' -z 1507355 ']' 00:27:28.393 08:26:10 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@958 -- # kill -0 1507355 00:27:28.393 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (1507355) - No such process 00:27:28.393 08:26:10 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@981 -- # echo 'Process with pid 1507355 is not found' 00:27:28.393 Process with pid 1507355 is not found 00:27:28.393 08:26:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:27:28.393 08:26:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:27:28.393 08:26:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:27:28.393 08:26:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # iptr 00:27:28.393 08:26:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-save 00:27:28.393 08:26:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:27:28.394 08:26:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-restore 00:27:28.394 08:26:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:28.394 08:26:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:28.394 08:26:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:28.394 08:26:10 nvmf_tcp.nvmf_host.nvmf_digest -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:28.394 08:26:10 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:30.930 08:26:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:30.930 00:27:30.930 real 0m35.523s 00:27:30.930 user 0m54.334s 00:27:30.930 sys 0m13.320s 00:27:30.930 08:26:12 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:30.930 08:26:12 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:27:30.930 ************************************ 00:27:30.930 END TEST nvmf_digest 00:27:30.930 ************************************ 00:27:30.930 08:26:12 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]] 00:27:30.930 08:26:12 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 0 -eq 1 ]] 00:27:30.930 08:26:12 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ phy == phy ]] 00:27:30.930 08:26:12 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@47 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:27:30.930 08:26:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:27:30.930 08:26:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:30.930 08:26:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.930 ************************************ 00:27:30.930 START TEST nvmf_bdevperf 00:27:30.930 ************************************ 00:27:30.930 08:26:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:27:30.930 * Looking for test storage... 
00:27:30.930 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:30.930 08:26:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:27:30.930 08:26:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1693 -- # lcov --version 00:27:30.930 08:26:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:27:30.930 08:26:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:27:30.930 08:26:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:30.930 08:26:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:30.930 08:26:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:30.930 08:26:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:27:30.930 08:26:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:27:30.930 08:26:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:27:30.930 08:26:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:27:30.930 08:26:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:27:30.930 08:26:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:27:30.930 08:26:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:27:30.930 08:26:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:30.930 08:26:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:27:30.930 08:26:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@345 -- # : 1 00:27:30.930 08:26:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:30.930 08:26:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( 
v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:27:30.930 08:26:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:27:30.930 08:26:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=1 00:27:30.930 08:26:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:30.930 08:26:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 1 00:27:30.930 08:26:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:27:30.930 08:26:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:27:30.930 08:26:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=2 00:27:30.930 08:26:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:30.930 08:26:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 2 00:27:30.930 08:26:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:27:30.930 08:26:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:30.930 08:26:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:30.930 08:26:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # return 0 00:27:30.930 08:26:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:30.930 08:26:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:27:30.930 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:30.930 --rc genhtml_branch_coverage=1 00:27:30.930 --rc genhtml_function_coverage=1 00:27:30.930 --rc genhtml_legend=1 00:27:30.930 --rc geninfo_all_blocks=1 00:27:30.930 --rc geninfo_unexecuted_blocks=1 00:27:30.930 00:27:30.930 ' 00:27:30.930 08:26:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1706 -- 
# LCOV_OPTS=' 00:27:30.930 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:30.930 --rc genhtml_branch_coverage=1 00:27:30.930 --rc genhtml_function_coverage=1 00:27:30.930 --rc genhtml_legend=1 00:27:30.930 --rc geninfo_all_blocks=1 00:27:30.930 --rc geninfo_unexecuted_blocks=1 00:27:30.930 00:27:30.930 ' 00:27:30.930 08:26:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:27:30.930 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:30.930 --rc genhtml_branch_coverage=1 00:27:30.930 --rc genhtml_function_coverage=1 00:27:30.931 --rc genhtml_legend=1 00:27:30.931 --rc geninfo_all_blocks=1 00:27:30.931 --rc geninfo_unexecuted_blocks=1 00:27:30.931 00:27:30.931 ' 00:27:30.931 08:26:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:27:30.931 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:30.931 --rc genhtml_branch_coverage=1 00:27:30.931 --rc genhtml_function_coverage=1 00:27:30.931 --rc genhtml_legend=1 00:27:30.931 --rc geninfo_all_blocks=1 00:27:30.931 --rc geninfo_unexecuted_blocks=1 00:27:30.931 00:27:30.931 ' 00:27:30.931 08:26:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:30.931 08:26:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:27:30.931 08:26:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:30.931 08:26:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:30.931 08:26:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:30.931 08:26:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:30.931 08:26:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:30.931 08:26:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@13 -- # 
NVMF_IP_LEAST_ADDR=8 00:27:30.931 08:26:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:30.931 08:26:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:30.931 08:26:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:30.931 08:26:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:30.931 08:26:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:27:30.931 08:26:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:27:30.931 08:26:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:30.931 08:26:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:30.931 08:26:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:30.931 08:26:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:30.931 08:26:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:30.931 08:26:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@15 -- # shopt -s extglob 00:27:30.931 08:26:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:30.931 08:26:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:30.931 08:26:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:30.931 08:26:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:30.931 08:26:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:30.931 08:26:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:30.931 08:26:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@5 
-- # export PATH 00:27:30.931 08:26:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:30.931 08:26:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@51 -- # : 0 00:27:30.931 08:26:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:30.931 08:26:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:30.931 08:26:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:30.931 08:26:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:30.931 08:26:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:30.931 08:26:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:27:30.931 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:27:30.931 08:26:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:30.931 08:26:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:30.931 08:26:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:30.931 08:26:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:27:30.931 08:26:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:27:30.931 08:26:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:27:30.931 08:26:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:30.931 08:26:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:30.931 08:26:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:30.931 08:26:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:30.931 08:26:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:30.931 08:26:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:30.931 08:26:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:30.931 08:26:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:30.931 08:26:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:27:30.931 08:26:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:27:30.931 08:26:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@309 -- # xtrace_disable 00:27:30.931 08:26:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:36.201 08:26:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:36.201 08:26:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # pci_devs=() 00:27:36.201 08:26:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:36.201 08:26:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:36.201 08:26:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:36.201 08:26:18 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:36.201 08:26:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:36.201 08:26:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # net_devs=() 00:27:36.201 08:26:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:36.201 08:26:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # e810=() 00:27:36.201 08:26:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # local -ga e810 00:27:36.201 08:26:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # x722=() 00:27:36.201 08:26:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # local -ga x722 00:27:36.201 08:26:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # mlx=() 00:27:36.201 08:26:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # local -ga mlx 00:27:36.201 08:26:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:36.201 08:26:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:36.201 08:26:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:36.201 08:26:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:36.201 08:26:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:36.201 08:26:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:36.201 08:26:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:36.201 08:26:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:36.201 08:26:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:36.201 08:26:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:36.201 08:26:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:36.201 08:26:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:36.201 08:26:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:36.201 08:26:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:36.201 08:26:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:36.201 08:26:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:36.201 08:26:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:36.201 08:26:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:36.201 08:26:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:36.201 08:26:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:27:36.201 Found 0000:86:00.0 (0x8086 - 0x159b) 00:27:36.201 08:26:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:36.201 08:26:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:36.201 08:26:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:36.201 08:26:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:36.201 08:26:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:36.201 08:26:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:36.201 
08:26:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:27:36.201 Found 0000:86:00.1 (0x8086 - 0x159b) 00:27:36.201 08:26:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:36.201 08:26:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:36.201 08:26:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:36.201 08:26:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:36.201 08:26:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:36.201 08:26:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:36.201 08:26:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:36.201 08:26:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:36.201 08:26:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:36.201 08:26:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:36.201 08:26:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:36.201 08:26:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:36.201 08:26:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:36.201 08:26:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:36.201 08:26:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:36.201 08:26:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:27:36.201 Found net devices under 0000:86:00.0: cvl_0_0 00:27:36.201 08:26:18 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:36.201 08:26:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:36.201 08:26:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:36.201 08:26:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:36.201 08:26:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:36.201 08:26:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:36.201 08:26:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:36.201 08:26:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:36.201 08:26:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:27:36.201 Found net devices under 0000:86:00.1: cvl_0_1 00:27:36.201 08:26:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:36.201 08:26:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:36.201 08:26:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # is_hw=yes 00:27:36.201 08:26:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:27:36.201 08:26:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:27:36.201 08:26:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:27:36.201 08:26:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:36.201 08:26:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:36.201 08:26:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 
00:27:36.201 08:26:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:36.201 08:26:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:36.201 08:26:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:36.201 08:26:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:36.201 08:26:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:36.201 08:26:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:36.201 08:26:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:36.201 08:26:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:36.201 08:26:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:36.201 08:26:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:36.201 08:26:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:36.201 08:26:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:36.201 08:26:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:36.201 08:26:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:36.202 08:26:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:36.202 08:26:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:36.461 08:26:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo 
up 00:27:36.461 08:26:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:36.461 08:26:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:36.461 08:26:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:36.461 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:36.461 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.355 ms 00:27:36.461 00:27:36.461 --- 10.0.0.2 ping statistics --- 00:27:36.461 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:36.461 rtt min/avg/max/mdev = 0.355/0.355/0.355/0.000 ms 00:27:36.461 08:26:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:36.461 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:36.461 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.207 ms 00:27:36.461 00:27:36.461 --- 10.0.0.1 ping statistics --- 00:27:36.461 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:36.461 rtt min/avg/max/mdev = 0.207/0.207/0.207/0.000 ms 00:27:36.461 08:26:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:36.461 08:26:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@450 -- # return 0 00:27:36.461 08:26:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:36.461 08:26:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:36.461 08:26:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:27:36.461 08:26:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:27:36.461 08:26:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@493 -- # 
NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:36.461 08:26:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:27:36.461 08:26:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:27:36.462 08:26:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:27:36.462 08:26:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:27:36.462 08:26:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:36.462 08:26:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:36.462 08:26:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:36.462 08:26:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=1513026 00:27:36.462 08:26:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:27:36.462 08:26:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 1513026 00:27:36.462 08:26:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 1513026 ']' 00:27:36.462 08:26:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:36.462 08:26:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:36.462 08:26:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:36.462 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:27:36.462 08:26:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:36.462 08:26:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:36.462 [2024-11-28 08:26:18.593526] Starting SPDK v25.01-pre git sha1 27aaaa748 / DPDK 24.03.0 initialization... 00:27:36.462 [2024-11-28 08:26:18.593569] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:36.462 [2024-11-28 08:26:18.658630] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:27:36.462 [2024-11-28 08:26:18.702241] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:36.462 [2024-11-28 08:26:18.702275] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:36.462 [2024-11-28 08:26:18.702285] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:36.462 [2024-11-28 08:26:18.702293] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:36.462 [2024-11-28 08:26:18.702300] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:27:36.462 [2024-11-28 08:26:18.703805] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:27:36.462 [2024-11-28 08:26:18.703820] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:27:36.462 [2024-11-28 08:26:18.703821] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:36.721 08:26:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:36.721 08:26:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:27:36.721 08:26:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:36.721 08:26:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:36.721 08:26:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:36.721 08:26:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:36.721 08:26:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:36.721 08:26:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:36.721 08:26:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:36.721 [2024-11-28 08:26:18.849845] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:36.721 08:26:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:36.721 08:26:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:27:36.721 08:26:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:36.721 08:26:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:36.721 Malloc0 00:27:36.721 08:26:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:27:36.721 08:26:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:36.721 08:26:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:36.721 08:26:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:36.721 08:26:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:36.721 08:26:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:27:36.721 08:26:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:36.721 08:26:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:36.721 08:26:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:36.721 08:26:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:36.721 08:26:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:36.721 08:26:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:36.721 [2024-11-28 08:26:18.911858] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:36.721 08:26:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:36.721 08:26:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:27:36.721 08:26:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:27:36.721 08:26:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=() 00:27:36.721 
08:26:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config 00:27:36.721 08:26:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:27:36.721 08:26:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:36.721 { 00:27:36.721 "params": { 00:27:36.721 "name": "Nvme$subsystem", 00:27:36.721 "trtype": "$TEST_TRANSPORT", 00:27:36.721 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:36.721 "adrfam": "ipv4", 00:27:36.721 "trsvcid": "$NVMF_PORT", 00:27:36.721 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:36.721 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:36.721 "hdgst": ${hdgst:-false}, 00:27:36.721 "ddgst": ${ddgst:-false} 00:27:36.721 }, 00:27:36.721 "method": "bdev_nvme_attach_controller" 00:27:36.721 } 00:27:36.721 EOF 00:27:36.721 )") 00:27:36.721 08:26:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat 00:27:36.721 08:26:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq . 00:27:36.721 08:26:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=, 00:27:36.721 08:26:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:27:36.721 "params": { 00:27:36.721 "name": "Nvme1", 00:27:36.721 "trtype": "tcp", 00:27:36.721 "traddr": "10.0.0.2", 00:27:36.721 "adrfam": "ipv4", 00:27:36.721 "trsvcid": "4420", 00:27:36.721 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:36.721 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:36.721 "hdgst": false, 00:27:36.721 "ddgst": false 00:27:36.721 }, 00:27:36.721 "method": "bdev_nvme_attach_controller" 00:27:36.721 }' 00:27:36.721 [2024-11-28 08:26:18.958541] Starting SPDK v25.01-pre git sha1 27aaaa748 / DPDK 24.03.0 initialization... 
00:27:36.721 [2024-11-28 08:26:18.958587] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1513161 ]
00:27:37.026 [2024-11-28 08:26:19.024036] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:27:37.026 [2024-11-28 08:26:19.066200] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:27:37.026 Running I/O for 1 seconds...
00:27:38.399 10649.00 IOPS, 41.60 MiB/s
00:27:38.399 Latency(us)
00:27:38.399 [2024-11-28T07:26:20.668Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:27:38.399 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:27:38.399 Verification LBA range: start 0x0 length 0x4000
00:27:38.399 Nvme1n1 : 1.02 10737.64 41.94 0.00 0.00 11875.94 2350.75 11967.44
00:27:38.399 [2024-11-28T07:26:20.668Z] ===================================================================================================================
00:27:38.399 [2024-11-28T07:26:20.668Z] Total : 10737.64 41.94 0.00 0.00 11875.94 2350.75 11967.44
00:27:38.399 08:26:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=1513458
00:27:38.399 08:26:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3
00:27:38.399 08:26:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f
00:27:38.399 08:26:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json
00:27:38.399 08:26:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=()
00:27:38.399 08:26:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config
00:27:38.399 08:26:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:27:38.399 08:26:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:27:38.399 {
00:27:38.399 "params": {
00:27:38.399 "name": "Nvme$subsystem",
00:27:38.399 "trtype": "$TEST_TRANSPORT",
00:27:38.399 "traddr": "$NVMF_FIRST_TARGET_IP",
00:27:38.399 "adrfam": "ipv4",
00:27:38.399 "trsvcid": "$NVMF_PORT",
00:27:38.399 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:27:38.399 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:27:38.399 "hdgst": ${hdgst:-false},
00:27:38.399 "ddgst": ${ddgst:-false}
00:27:38.399 },
00:27:38.399 "method": "bdev_nvme_attach_controller"
00:27:38.399 }
00:27:38.399 EOF
00:27:38.399 )")
00:27:38.399 08:26:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat
00:27:38.399 08:26:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq .
00:27:38.399 08:26:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=,
00:27:38.399 08:26:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{
00:27:38.399 "params": {
00:27:38.399 "name": "Nvme1",
00:27:38.399 "trtype": "tcp",
00:27:38.399 "traddr": "10.0.0.2",
00:27:38.399 "adrfam": "ipv4",
00:27:38.399 "trsvcid": "4420",
00:27:38.399 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:27:38.399 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:27:38.399 "hdgst": false,
00:27:38.399 "ddgst": false
00:27:38.399 },
00:27:38.399 "method": "bdev_nvme_attach_controller"
00:27:38.399 }'
00:27:38.399 [2024-11-28 08:26:20.496759] Starting SPDK v25.01-pre git sha1 27aaaa748 / DPDK 24.03.0 initialization...
00:27:38.399 [2024-11-28 08:26:20.496809] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1513458 ] 00:27:38.399 [2024-11-28 08:26:20.558611] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:38.399 [2024-11-28 08:26:20.600677] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:38.658 Running I/O for 15 seconds... 00:27:40.975 10892.00 IOPS, 42.55 MiB/s [2024-11-28T07:26:23.506Z] 10945.00 IOPS, 42.75 MiB/s [2024-11-28T07:26:23.506Z] 08:26:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 1513026 00:27:41.237 08:26:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:27:41.237 [2024-11-28 08:26:23.475075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:92352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.237 [2024-11-28 08:26:23.475122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.237 [2024-11-28 08:26:23.475138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:92360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.237 [2024-11-28 08:26:23.475146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.237 [2024-11-28 08:26:23.475156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:92368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.237 [2024-11-28 08:26:23.475164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.237 [2024-11-28 08:26:23.475172] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:81 nsid:1 lba:92376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.237 [2024-11-28 08:26:23.475184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.237 [2024-11-28 08:26:23.475192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:92384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.237 [2024-11-28 08:26:23.475203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.237 [2024-11-28 08:26:23.475212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:92392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.237 [2024-11-28 08:26:23.475219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.237 [2024-11-28 08:26:23.475228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:92400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.237 [2024-11-28 08:26:23.475237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.237 [2024-11-28 08:26:23.475245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:92408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.237 [2024-11-28 08:26:23.475258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.237 [2024-11-28 08:26:23.475267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:92416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.237 [2024-11-28 08:26:23.475275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:27:41.237 [2024-11-28 08:26:23.475283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:92424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.237 [2024-11-28 08:26:23.475291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.237 [2024-11-28 08:26:23.475300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:92432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.237 [2024-11-28 08:26:23.475308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.237 [2024-11-28 08:26:23.475317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:92440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.237 [2024-11-28 08:26:23.475324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.237 [2024-11-28 08:26:23.475333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:92448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.237 [2024-11-28 08:26:23.475340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.237 [2024-11-28 08:26:23.475348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:92456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.237 [2024-11-28 08:26:23.475356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.237 [2024-11-28 08:26:23.475364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:92464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.237 [2024-11-28 
08:26:23.475372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.237 [2024-11-28 08:26:23.475380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:92472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.237 [2024-11-28 08:26:23.475389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.237 [2024-11-28 08:26:23.475400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:92480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.237 [2024-11-28 08:26:23.475407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.237 [2024-11-28 08:26:23.475416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:92488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.237 [2024-11-28 08:26:23.475426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.237 [2024-11-28 08:26:23.475435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:92496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.237 [2024-11-28 08:26:23.475445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.237 [2024-11-28 08:26:23.475457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:92504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.237 [2024-11-28 08:26:23.475466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.237 [2024-11-28 08:26:23.475475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:117 nsid:1 lba:92512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.237 [2024-11-28 08:26:23.475484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.237 [2024-11-28 08:26:23.475493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:93216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:41.237 [2024-11-28 08:26:23.475503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.237 [2024-11-28 08:26:23.475513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:93224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:41.237 [2024-11-28 08:26:23.475524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.237 [2024-11-28 08:26:23.475534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:93232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:41.237 [2024-11-28 08:26:23.475541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.237 [2024-11-28 08:26:23.475551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:93240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:41.237 [2024-11-28 08:26:23.475559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.237 [2024-11-28 08:26:23.475572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:93248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:41.238 [2024-11-28 08:26:23.475579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:27:41.238 [2024-11-28 08:26:23.475587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:93256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:41.238 [2024-11-28 08:26:23.475595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.238 [2024-11-28 08:26:23.475605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:93264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:41.238 [2024-11-28 08:26:23.475612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.238 [2024-11-28 08:26:23.475620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:93272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:41.238 [2024-11-28 08:26:23.475629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.238 [2024-11-28 08:26:23.475640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:93280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:41.238 [2024-11-28 08:26:23.475648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.238 [2024-11-28 08:26:23.475656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:93288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:41.238 [2024-11-28 08:26:23.475664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.238 [2024-11-28 08:26:23.475673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:93296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:41.238 [2024-11-28 08:26:23.475680] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.238 [2024-11-28 08:26:23.475688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:93304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:41.238 [2024-11-28 08:26:23.475696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.238 [2024-11-28 08:26:23.475706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:92520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.238 [2024-11-28 08:26:23.475713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.238 [2024-11-28 08:26:23.475723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:92528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.238 [2024-11-28 08:26:23.475730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.238 [2024-11-28 08:26:23.475738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:92536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.238 [2024-11-28 08:26:23.475745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.238 [2024-11-28 08:26:23.475757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:92544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.238 [2024-11-28 08:26:23.475766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.238 [2024-11-28 08:26:23.475776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 
lba:92552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.238 [2024-11-28 08:26:23.475785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.238 [2024-11-28 08:26:23.475795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:92560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.238 [2024-11-28 08:26:23.475805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.238 [2024-11-28 08:26:23.475815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:92568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.238 [2024-11-28 08:26:23.475824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.238 [2024-11-28 08:26:23.475833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:92576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.238 [2024-11-28 08:26:23.475843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.238 [2024-11-28 08:26:23.475853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:92584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.238 [2024-11-28 08:26:23.475863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.238 [2024-11-28 08:26:23.475875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:92592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.238 [2024-11-28 08:26:23.475885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.238 
[2024-11-28 08:26:23.475895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:92600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.238 [2024-11-28 08:26:23.475904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.238 [2024-11-28 08:26:23.475917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:92608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.238 [2024-11-28 08:26:23.475926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.238 [2024-11-28 08:26:23.475936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:92616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.238 [2024-11-28 08:26:23.475944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.238 [2024-11-28 08:26:23.475961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:92624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.238 [2024-11-28 08:26:23.475987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.238 [2024-11-28 08:26:23.475999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:92632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.238 [2024-11-28 08:26:23.476011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.238 [2024-11-28 08:26:23.476020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:93312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:41.238 [2024-11-28 08:26:23.476029] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.238 [2024-11-28 08:26:23.476038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:92640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.238 [2024-11-28 08:26:23.476045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.238 [2024-11-28 08:26:23.476054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:92648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.238 [2024-11-28 08:26:23.476060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.238 [2024-11-28 08:26:23.476070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:92656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.238 [2024-11-28 08:26:23.476077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.238 [2024-11-28 08:26:23.476085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:92664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.238 [2024-11-28 08:26:23.476092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.238 [2024-11-28 08:26:23.476100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:92672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.238 [2024-11-28 08:26:23.476106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.238 [2024-11-28 08:26:23.476128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 
lba:92680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.238 [2024-11-28 08:26:23.476135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.238 [2024-11-28 08:26:23.476144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:92688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.238 [2024-11-28 08:26:23.476150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.238 [2024-11-28 08:26:23.476158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:92696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.238 [2024-11-28 08:26:23.476165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.238 [2024-11-28 08:26:23.476173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:92704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.238 [2024-11-28 08:26:23.476179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.238 [2024-11-28 08:26:23.476187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:92712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.238 [2024-11-28 08:26:23.476194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.238 [2024-11-28 08:26:23.476202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:92720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.238 [2024-11-28 08:26:23.476210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.238 
[2024-11-28 08:26:23.476218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:92728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.238 [2024-11-28 08:26:23.476225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.238 [2024-11-28 08:26:23.476232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:92736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.238 [2024-11-28 08:26:23.476239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.238 [2024-11-28 08:26:23.476248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:92744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.238 [2024-11-28 08:26:23.476255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.238 [2024-11-28 08:26:23.476263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:92752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.238 [2024-11-28 08:26:23.476269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.238 [2024-11-28 08:26:23.476277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:92760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.238 [2024-11-28 08:26:23.476283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.239 [2024-11-28 08:26:23.476294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:92768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.239 [2024-11-28 08:26:23.476301] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.239 [2024-11-28 08:26:23.476310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:92776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.239 [2024-11-28 08:26:23.476318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.239 [2024-11-28 08:26:23.476326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:92784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.239 [2024-11-28 08:26:23.476332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.239 [2024-11-28 08:26:23.476340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:92792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.239 [2024-11-28 08:26:23.476347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.239 [2024-11-28 08:26:23.476356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:92800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.239 [2024-11-28 08:26:23.476363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.239 [2024-11-28 08:26:23.476371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:92808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.239 [2024-11-28 08:26:23.476377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.239 [2024-11-28 08:26:23.476385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 
lba:92816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.239 [2024-11-28 08:26:23.476392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.239 [2024-11-28 08:26:23.476400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:92824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.239 [2024-11-28 08:26:23.476407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.239 [2024-11-28 08:26:23.476415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:92832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.239 [2024-11-28 08:26:23.476421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.239 [2024-11-28 08:26:23.476429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:92840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.239 [2024-11-28 08:26:23.476435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.239 [2024-11-28 08:26:23.476443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:92848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.239 [2024-11-28 08:26:23.476451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.239 [2024-11-28 08:26:23.476459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:92856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.239 [2024-11-28 08:26:23.476466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.239 
[2024-11-28 08:26:23.476474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:92864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.239 [2024-11-28 08:26:23.476480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.239 [2024-11-28 08:26:23.476488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:92872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.239 [2024-11-28 08:26:23.476494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.239 [2024-11-28 08:26:23.476504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:92880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.239 [2024-11-28 08:26:23.476511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.239 [2024-11-28 08:26:23.476519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:92888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.239 [2024-11-28 08:26:23.476526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.239 [2024-11-28 08:26:23.476534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:92896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.239 [2024-11-28 08:26:23.476541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.239 [2024-11-28 08:26:23.476549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:92904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.239 [2024-11-28 08:26:23.476555] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.239 [2024-11-28 08:26:23.476563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:92912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.239 [2024-11-28 08:26:23.476570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.239 [2024-11-28 08:26:23.476578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:92920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.239 [2024-11-28 08:26:23.476585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.239 [2024-11-28 08:26:23.476592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:92928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.239 [2024-11-28 08:26:23.476599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.239 [2024-11-28 08:26:23.476607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:92936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.239 [2024-11-28 08:26:23.476613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.239 [2024-11-28 08:26:23.476621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:92944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.239 [2024-11-28 08:26:23.476628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.239 [2024-11-28 08:26:23.476637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 
lba:92952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.239 [2024-11-28 08:26:23.476643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.239 [2024-11-28 08:26:23.476650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:92960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.239 [2024-11-28 08:26:23.476657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.239 [2024-11-28 08:26:23.476664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:92968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.239 [2024-11-28 08:26:23.476671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.239 [2024-11-28 08:26:23.476680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:92976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.239 [2024-11-28 08:26:23.476688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.239 [2024-11-28 08:26:23.476696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:92984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.239 [2024-11-28 08:26:23.476702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.239 [2024-11-28 08:26:23.476710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:92992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.239 [2024-11-28 08:26:23.476716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.239 
[2024-11-28 08:26:23.476724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:93000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.239 [2024-11-28 08:26:23.476731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.239 [2024-11-28 08:26:23.476739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:93008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.239 [2024-11-28 08:26:23.476746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.239 [2024-11-28 08:26:23.476753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:93016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.239 [2024-11-28 08:26:23.476760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.239 [2024-11-28 08:26:23.476768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:93024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.239 [2024-11-28 08:26:23.476775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.239 [2024-11-28 08:26:23.476784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:93032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.239 [2024-11-28 08:26:23.476790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.239 [2024-11-28 08:26:23.476799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:93040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.239 [2024-11-28 08:26:23.476805] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.239 [2024-11-28 08:26:23.476813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:93048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.239 [2024-11-28 08:26:23.476819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.239 [2024-11-28 08:26:23.476828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:93056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.239 [2024-11-28 08:26:23.476836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.239 [2024-11-28 08:26:23.476844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:93064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.239 [2024-11-28 08:26:23.476850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.239 [2024-11-28 08:26:23.476858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:93072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.239 [2024-11-28 08:26:23.476864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.239 [2024-11-28 08:26:23.476872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:93080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.239 [2024-11-28 08:26:23.476880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.240 [2024-11-28 08:26:23.476888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 
lba:93088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.240 [2024-11-28 08:26:23.476900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.240 [2024-11-28 08:26:23.476908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:93096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.240 [2024-11-28 08:26:23.476915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.240 [2024-11-28 08:26:23.476923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:93104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.240 [2024-11-28 08:26:23.476929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.240 [2024-11-28 08:26:23.476937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:93112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.240 [2024-11-28 08:26:23.476944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.240 [2024-11-28 08:26:23.476957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:93120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.240 [2024-11-28 08:26:23.476964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.240 [2024-11-28 08:26:23.476971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:93128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.240 [2024-11-28 08:26:23.476978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.240 
[2024-11-28 08:26:23.476986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:93136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.240 [2024-11-28 08:26:23.476992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.240 [2024-11-28 08:26:23.477000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:93144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.240 [2024-11-28 08:26:23.477006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.240 [2024-11-28 08:26:23.477014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:93320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:41.240 [2024-11-28 08:26:23.477021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.240 [2024-11-28 08:26:23.477029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:93328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:41.240 [2024-11-28 08:26:23.477036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.240 [2024-11-28 08:26:23.477043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:93336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:41.240 [2024-11-28 08:26:23.477050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.240 [2024-11-28 08:26:23.477057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:93344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:41.240 [2024-11-28 08:26:23.477064] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.240 [2024-11-28 08:26:23.477076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:93352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:41.240 [2024-11-28 08:26:23.477083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.240 [2024-11-28 08:26:23.477091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:93360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:41.240 [2024-11-28 08:26:23.477097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.240 [2024-11-28 08:26:23.477105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:93368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:41.240 [2024-11-28 08:26:23.477113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.240 [2024-11-28 08:26:23.477122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:93152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.240 [2024-11-28 08:26:23.477128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.240 [2024-11-28 08:26:23.477136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:93160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.240 [2024-11-28 08:26:23.477144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.240 [2024-11-28 08:26:23.477152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 
lba:93168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.240 [2024-11-28 08:26:23.477158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.240 [2024-11-28 08:26:23.477167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:93176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.240 [2024-11-28 08:26:23.477174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.240 [2024-11-28 08:26:23.477182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:93184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.240 [2024-11-28 08:26:23.477189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.240 [2024-11-28 08:26:23.477196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:93192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.240 [2024-11-28 08:26:23.477203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.240 [2024-11-28 08:26:23.477211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:93200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.240 [2024-11-28 08:26:23.477217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.240 [2024-11-28 08:26:23.477224] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220f6c0 is same with the state(6) to be set 00:27:41.240 [2024-11-28 08:26:23.477232] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:41.240 [2024-11-28 08:26:23.477238] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:41.240 [2024-11-28 08:26:23.477244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:93208 len:8 PRP1 0x0 PRP2 0x0 00:27:41.240 [2024-11-28 08:26:23.477251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.240 [2024-11-28 08:26:23.480098] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:41.240 [2024-11-28 08:26:23.480152] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2219510 (9): Bad file descriptor 00:27:41.240 [2024-11-28 08:26:23.480694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:41.240 [2024-11-28 08:26:23.480711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2219510 with addr=10.0.0.2, port=4420 00:27:41.240 [2024-11-28 08:26:23.480719] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2219510 is same with the state(6) to be set 00:27:41.240 [2024-11-28 08:26:23.480893] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2219510 (9): Bad file descriptor 00:27:41.240 [2024-11-28 08:26:23.481072] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:41.240 [2024-11-28 08:26:23.481082] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:41.240 [2024-11-28 08:26:23.481090] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:41.240 [2024-11-28 08:26:23.481098] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:41.240 [2024-11-28 08:26:23.493386] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:41.240 [2024-11-28 08:26:23.493801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:41.240 [2024-11-28 08:26:23.493819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2219510 with addr=10.0.0.2, port=4420 00:27:41.240 [2024-11-28 08:26:23.493828] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2219510 is same with the state(6) to be set 00:27:41.240 [2024-11-28 08:26:23.494009] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2219510 (9): Bad file descriptor 00:27:41.240 [2024-11-28 08:26:23.494186] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:41.240 [2024-11-28 08:26:23.494196] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:41.240 [2024-11-28 08:26:23.494203] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:41.240 [2024-11-28 08:26:23.494210] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:41.501 [2024-11-28 08:26:23.506376] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:41.501 [2024-11-28 08:26:23.506807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:41.501 [2024-11-28 08:26:23.506824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2219510 with addr=10.0.0.2, port=4420 00:27:41.501 [2024-11-28 08:26:23.506831] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2219510 is same with the state(6) to be set 00:27:41.501 [2024-11-28 08:26:23.507020] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2219510 (9): Bad file descriptor 00:27:41.501 [2024-11-28 08:26:23.507195] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:41.501 [2024-11-28 08:26:23.507205] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:41.501 [2024-11-28 08:26:23.507212] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:41.501 [2024-11-28 08:26:23.507218] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:41.501 [2024-11-28 08:26:23.519311] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:41.501 [2024-11-28 08:26:23.519691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:41.501 [2024-11-28 08:26:23.519712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2219510 with addr=10.0.0.2, port=4420 00:27:41.501 [2024-11-28 08:26:23.519720] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2219510 is same with the state(6) to be set 00:27:41.502 [2024-11-28 08:26:23.519884] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2219510 (9): Bad file descriptor 00:27:41.502 [2024-11-28 08:26:23.520095] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:41.502 [2024-11-28 08:26:23.520105] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:41.502 [2024-11-28 08:26:23.520112] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:41.502 [2024-11-28 08:26:23.520120] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:41.502 [2024-11-28 08:26:23.532274] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:41.502 [2024-11-28 08:26:23.532721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:41.502 [2024-11-28 08:26:23.532766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2219510 with addr=10.0.0.2, port=4420 00:27:41.502 [2024-11-28 08:26:23.532789] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2219510 is same with the state(6) to be set 00:27:41.502 [2024-11-28 08:26:23.533390] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2219510 (9): Bad file descriptor 00:27:41.502 [2024-11-28 08:26:23.533956] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:41.502 [2024-11-28 08:26:23.533966] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:41.502 [2024-11-28 08:26:23.533973] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:41.502 [2024-11-28 08:26:23.533979] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:41.502 [2024-11-28 08:26:23.545085] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:41.502 [2024-11-28 08:26:23.545430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:41.502 [2024-11-28 08:26:23.545474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2219510 with addr=10.0.0.2, port=4420 00:27:41.502 [2024-11-28 08:26:23.545498] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2219510 is same with the state(6) to be set 00:27:41.502 [2024-11-28 08:26:23.545996] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2219510 (9): Bad file descriptor 00:27:41.502 [2024-11-28 08:26:23.546180] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:41.502 [2024-11-28 08:26:23.546190] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:41.502 [2024-11-28 08:26:23.546196] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:41.502 [2024-11-28 08:26:23.546202] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:41.502 [2024-11-28 08:26:23.557997] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:41.502 [2024-11-28 08:26:23.558428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:41.502 [2024-11-28 08:26:23.558472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2219510 with addr=10.0.0.2, port=4420 00:27:41.502 [2024-11-28 08:26:23.558495] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2219510 is same with the state(6) to be set 00:27:41.502 [2024-11-28 08:26:23.558926] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2219510 (9): Bad file descriptor 00:27:41.502 [2024-11-28 08:26:23.559125] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:41.502 [2024-11-28 08:26:23.559135] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:41.502 [2024-11-28 08:26:23.559142] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:41.502 [2024-11-28 08:26:23.559149] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:41.502 [2024-11-28 08:26:23.571367] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:41.502 [2024-11-28 08:26:23.571733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:41.502 [2024-11-28 08:26:23.571777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2219510 with addr=10.0.0.2, port=4420 00:27:41.502 [2024-11-28 08:26:23.571801] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2219510 is same with the state(6) to be set 00:27:41.502 [2024-11-28 08:26:23.572400] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2219510 (9): Bad file descriptor 00:27:41.502 [2024-11-28 08:26:23.572964] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:41.502 [2024-11-28 08:26:23.572973] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:41.502 [2024-11-28 08:26:23.572980] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:41.502 [2024-11-28 08:26:23.572987] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:41.502 [2024-11-28 08:26:23.584271] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:41.502 [2024-11-28 08:26:23.584679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:41.502 [2024-11-28 08:26:23.584696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2219510 with addr=10.0.0.2, port=4420 00:27:41.502 [2024-11-28 08:26:23.584703] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2219510 is same with the state(6) to be set 00:27:41.502 [2024-11-28 08:26:23.584866] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2219510 (9): Bad file descriptor 00:27:41.502 [2024-11-28 08:26:23.585058] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:41.502 [2024-11-28 08:26:23.585069] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:41.502 [2024-11-28 08:26:23.585075] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:41.502 [2024-11-28 08:26:23.585082] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:41.502 [2024-11-28 08:26:23.597147] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:41.502 [2024-11-28 08:26:23.597507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:41.502 [2024-11-28 08:26:23.597550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2219510 with addr=10.0.0.2, port=4420 00:27:41.502 [2024-11-28 08:26:23.597574] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2219510 is same with the state(6) to be set 00:27:41.502 [2024-11-28 08:26:23.598089] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2219510 (9): Bad file descriptor 00:27:41.502 [2024-11-28 08:26:23.598255] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:41.502 [2024-11-28 08:26:23.598265] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:41.502 [2024-11-28 08:26:23.598276] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:41.502 [2024-11-28 08:26:23.598283] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:41.502 [2024-11-28 08:26:23.609988] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:41.502 [2024-11-28 08:26:23.610414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:41.502 [2024-11-28 08:26:23.610470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2219510 with addr=10.0.0.2, port=4420 00:27:41.502 [2024-11-28 08:26:23.610494] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2219510 is same with the state(6) to be set 00:27:41.502 [2024-11-28 08:26:23.611004] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2219510 (9): Bad file descriptor 00:27:41.502 [2024-11-28 08:26:23.611196] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:41.502 [2024-11-28 08:26:23.611204] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:41.502 [2024-11-28 08:26:23.611210] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:41.502 [2024-11-28 08:26:23.611216] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:41.502 [2024-11-28 08:26:23.622962] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:41.502 [2024-11-28 08:26:23.623352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:41.502 [2024-11-28 08:26:23.623369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2219510 with addr=10.0.0.2, port=4420
00:27:41.502 [2024-11-28 08:26:23.623377] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2219510 is same with the state(6) to be set
00:27:41.502 [2024-11-28 08:26:23.623541] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2219510 (9): Bad file descriptor
00:27:41.502 [2024-11-28 08:26:23.623706] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:41.502 [2024-11-28 08:26:23.623715] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:41.502 [2024-11-28 08:26:23.623721] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:41.502 [2024-11-28 08:26:23.623727] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:41.502 [2024-11-28 08:26:23.635795] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:41.502 [2024-11-28 08:26:23.636238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:41.502 [2024-11-28 08:26:23.636284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2219510 with addr=10.0.0.2, port=4420
00:27:41.502 [2024-11-28 08:26:23.636307] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2219510 is same with the state(6) to be set
00:27:41.502 [2024-11-28 08:26:23.636857] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2219510 (9): Bad file descriptor
00:27:41.502 [2024-11-28 08:26:23.637038] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:41.502 [2024-11-28 08:26:23.637047] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:41.502 [2024-11-28 08:26:23.637054] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:41.502 [2024-11-28 08:26:23.637060] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:41.502 [2024-11-28 08:26:23.648826] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:41.503 [2024-11-28 08:26:23.649262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:41.503 [2024-11-28 08:26:23.649308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2219510 with addr=10.0.0.2, port=4420
00:27:41.503 [2024-11-28 08:26:23.649332] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2219510 is same with the state(6) to be set
00:27:41.503 [2024-11-28 08:26:23.649914] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2219510 (9): Bad file descriptor
00:27:41.503 [2024-11-28 08:26:23.650125] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:41.503 [2024-11-28 08:26:23.650134] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:41.503 [2024-11-28 08:26:23.650141] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:41.503 [2024-11-28 08:26:23.650147] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:41.503 [2024-11-28 08:26:23.662372] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:41.503 [2024-11-28 08:26:23.662792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:41.503 [2024-11-28 08:26:23.662836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2219510 with addr=10.0.0.2, port=4420
00:27:41.503 [2024-11-28 08:26:23.662859] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2219510 is same with the state(6) to be set
00:27:41.503 [2024-11-28 08:26:23.663328] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2219510 (9): Bad file descriptor
00:27:41.503 [2024-11-28 08:26:23.663504] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:41.503 [2024-11-28 08:26:23.663514] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:41.503 [2024-11-28 08:26:23.663520] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:41.503 [2024-11-28 08:26:23.663527] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:41.503 [2024-11-28 08:26:23.675282] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:41.503 [2024-11-28 08:26:23.675705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:41.503 [2024-11-28 08:26:23.675722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2219510 with addr=10.0.0.2, port=4420
00:27:41.503 [2024-11-28 08:26:23.675730] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2219510 is same with the state(6) to be set
00:27:41.503 [2024-11-28 08:26:23.675894] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2219510 (9): Bad file descriptor
00:27:41.503 [2024-11-28 08:26:23.676089] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:41.503 [2024-11-28 08:26:23.676099] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:41.503 [2024-11-28 08:26:23.676106] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:41.503 [2024-11-28 08:26:23.676113] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:41.503 [2024-11-28 08:26:23.688228] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:41.503 [2024-11-28 08:26:23.688648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:41.503 [2024-11-28 08:26:23.688664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2219510 with addr=10.0.0.2, port=4420
00:27:41.503 [2024-11-28 08:26:23.688675] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2219510 is same with the state(6) to be set
00:27:41.503 [2024-11-28 08:26:23.688840] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2219510 (9): Bad file descriptor
00:27:41.503 [2024-11-28 08:26:23.689012] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:41.503 [2024-11-28 08:26:23.689022] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:41.503 [2024-11-28 08:26:23.689029] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:41.503 [2024-11-28 08:26:23.689035] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:41.503 [2024-11-28 08:26:23.701049] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:41.503 [2024-11-28 08:26:23.701454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:41.503 [2024-11-28 08:26:23.701499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2219510 with addr=10.0.0.2, port=4420
00:27:41.503 [2024-11-28 08:26:23.701522] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2219510 is same with the state(6) to be set
00:27:41.503 [2024-11-28 08:26:23.702077] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2219510 (9): Bad file descriptor
00:27:41.503 [2024-11-28 08:26:23.702243] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:41.503 [2024-11-28 08:26:23.702251] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:41.503 [2024-11-28 08:26:23.702258] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:41.503 [2024-11-28 08:26:23.702263] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:41.503 [2024-11-28 08:26:23.713962] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:41.503 [2024-11-28 08:26:23.714382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:41.503 [2024-11-28 08:26:23.714431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2219510 with addr=10.0.0.2, port=4420
00:27:41.503 [2024-11-28 08:26:23.714454] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2219510 is same with the state(6) to be set
00:27:41.503 [2024-11-28 08:26:23.715014] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2219510 (9): Bad file descriptor
00:27:41.503 [2024-11-28 08:26:23.715180] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:41.503 [2024-11-28 08:26:23.715188] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:41.503 [2024-11-28 08:26:23.715194] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:41.503 [2024-11-28 08:26:23.715200] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:41.503 [2024-11-28 08:26:23.727115] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:41.503 [2024-11-28 08:26:23.727584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:41.503 [2024-11-28 08:26:23.727630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2219510 with addr=10.0.0.2, port=4420
00:27:41.503 [2024-11-28 08:26:23.727653] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2219510 is same with the state(6) to be set
00:27:41.503 [2024-11-28 08:26:23.728085] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2219510 (9): Bad file descriptor
00:27:41.503 [2024-11-28 08:26:23.728263] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:41.503 [2024-11-28 08:26:23.728271] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:41.503 [2024-11-28 08:26:23.728278] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:41.503 [2024-11-28 08:26:23.728284] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:41.503 [2024-11-28 08:26:23.740291] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:41.503 [2024-11-28 08:26:23.740731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:41.503 [2024-11-28 08:26:23.740751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2219510 with addr=10.0.0.2, port=4420
00:27:41.503 [2024-11-28 08:26:23.740759] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2219510 is same with the state(6) to be set
00:27:41.503 [2024-11-28 08:26:23.740938] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2219510 (9): Bad file descriptor
00:27:41.503 [2024-11-28 08:26:23.741127] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:41.503 [2024-11-28 08:26:23.741138] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:41.503 [2024-11-28 08:26:23.741146] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:41.503 [2024-11-28 08:26:23.741155] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:41.503 [2024-11-28 08:26:23.753389] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:41.503 [2024-11-28 08:26:23.753830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:41.503 [2024-11-28 08:26:23.753848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2219510 with addr=10.0.0.2, port=4420
00:27:41.503 [2024-11-28 08:26:23.753856] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2219510 is same with the state(6) to be set
00:27:41.503 [2024-11-28 08:26:23.754042] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2219510 (9): Bad file descriptor
00:27:41.503 [2024-11-28 08:26:23.754230] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:41.503 [2024-11-28 08:26:23.754240] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:41.503 [2024-11-28 08:26:23.754246] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:41.503 [2024-11-28 08:26:23.754253] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:41.503 [2024-11-28 08:26:23.766604] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:41.764 [2024-11-28 08:26:23.767049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:41.764 [2024-11-28 08:26:23.767068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2219510 with addr=10.0.0.2, port=4420
00:27:41.764 [2024-11-28 08:26:23.767076] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2219510 is same with the state(6) to be set
00:27:41.764 [2024-11-28 08:26:23.767255] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2219510 (9): Bad file descriptor
00:27:41.764 [2024-11-28 08:26:23.767435] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:41.764 [2024-11-28 08:26:23.767445] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:41.764 [2024-11-28 08:26:23.767456] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:41.764 [2024-11-28 08:26:23.767463] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:41.764 [2024-11-28 08:26:23.779546] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:41.764 [2024-11-28 08:26:23.779970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:41.764 [2024-11-28 08:26:23.779987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2219510 with addr=10.0.0.2, port=4420
00:27:41.764 [2024-11-28 08:26:23.779995] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2219510 is same with the state(6) to be set
00:27:41.764 [2024-11-28 08:26:23.780159] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2219510 (9): Bad file descriptor
00:27:41.764 [2024-11-28 08:26:23.780322] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:41.764 [2024-11-28 08:26:23.780331] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:41.764 [2024-11-28 08:26:23.780338] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:41.764 [2024-11-28 08:26:23.780344] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:41.764 [2024-11-28 08:26:23.792419] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:41.764 [2024-11-28 08:26:23.792839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:41.764 [2024-11-28 08:26:23.792856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2219510 with addr=10.0.0.2, port=4420
00:27:41.764 [2024-11-28 08:26:23.792864] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2219510 is same with the state(6) to be set
00:27:41.764 [2024-11-28 08:26:23.793053] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2219510 (9): Bad file descriptor
00:27:41.764 [2024-11-28 08:26:23.793229] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:41.764 [2024-11-28 08:26:23.793240] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:41.765 [2024-11-28 08:26:23.793246] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:41.765 [2024-11-28 08:26:23.793253] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:41.765 [2024-11-28 08:26:23.805381] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:41.765 [2024-11-28 08:26:23.805805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:41.765 [2024-11-28 08:26:23.805849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2219510 with addr=10.0.0.2, port=4420
00:27:41.765 [2024-11-28 08:26:23.805873] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2219510 is same with the state(6) to be set
00:27:41.765 [2024-11-28 08:26:23.806352] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2219510 (9): Bad file descriptor
00:27:41.765 [2024-11-28 08:26:23.806528] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:41.765 [2024-11-28 08:26:23.806538] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:41.765 [2024-11-28 08:26:23.806545] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:41.765 [2024-11-28 08:26:23.806551] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:41.765 [2024-11-28 08:26:23.818238] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:41.765 [2024-11-28 08:26:23.818600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:41.765 [2024-11-28 08:26:23.818618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2219510 with addr=10.0.0.2, port=4420
00:27:41.765 [2024-11-28 08:26:23.818626] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2219510 is same with the state(6) to be set
00:27:41.765 [2024-11-28 08:26:23.818799] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2219510 (9): Bad file descriptor
00:27:41.765 [2024-11-28 08:26:23.818979] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:41.765 [2024-11-28 08:26:23.818990] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:41.765 [2024-11-28 08:26:23.818997] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:41.765 [2024-11-28 08:26:23.819003] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:41.765 [2024-11-28 08:26:23.831291] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:41.765 [2024-11-28 08:26:23.831721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:41.765 [2024-11-28 08:26:23.831738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2219510 with addr=10.0.0.2, port=4420
00:27:41.765 [2024-11-28 08:26:23.831746] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2219510 is same with the state(6) to be set
00:27:41.765 [2024-11-28 08:26:23.831910] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2219510 (9): Bad file descriptor
00:27:41.765 [2024-11-28 08:26:23.832102] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:41.765 [2024-11-28 08:26:23.832112] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:41.765 [2024-11-28 08:26:23.832119] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:41.765 [2024-11-28 08:26:23.832126] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:41.765 [2024-11-28 08:26:23.844379] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:41.765 [2024-11-28 08:26:23.844814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:41.765 [2024-11-28 08:26:23.844831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2219510 with addr=10.0.0.2, port=4420
00:27:41.765 [2024-11-28 08:26:23.844838] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2219510 is same with the state(6) to be set
00:27:41.765 [2024-11-28 08:26:23.845017] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2219510 (9): Bad file descriptor
00:27:41.765 [2024-11-28 08:26:23.845191] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:41.765 [2024-11-28 08:26:23.845201] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:41.765 [2024-11-28 08:26:23.845208] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:41.765 [2024-11-28 08:26:23.845214] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:41.765 [2024-11-28 08:26:23.857379] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:41.765 [2024-11-28 08:26:23.857788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:41.765 [2024-11-28 08:26:23.857805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2219510 with addr=10.0.0.2, port=4420
00:27:41.765 [2024-11-28 08:26:23.857816] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2219510 is same with the state(6) to be set
00:27:41.765 [2024-11-28 08:26:23.857998] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2219510 (9): Bad file descriptor
00:27:41.765 [2024-11-28 08:26:23.858173] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:41.765 [2024-11-28 08:26:23.858183] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:41.765 [2024-11-28 08:26:23.858189] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:41.765 [2024-11-28 08:26:23.858196] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:41.765 [2024-11-28 08:26:23.870342] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:41.765 [2024-11-28 08:26:23.870721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:41.765 [2024-11-28 08:26:23.870739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2219510 with addr=10.0.0.2, port=4420
00:27:41.765 [2024-11-28 08:26:23.870748] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2219510 is same with the state(6) to be set
00:27:41.765 [2024-11-28 08:26:23.870921] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2219510 (9): Bad file descriptor
00:27:41.765 [2024-11-28 08:26:23.871104] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:41.765 [2024-11-28 08:26:23.871115] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:41.765 [2024-11-28 08:26:23.871121] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:41.765 [2024-11-28 08:26:23.871128] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:41.765 [2024-11-28 08:26:23.883334] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:41.765 [2024-11-28 08:26:23.883783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:41.765 [2024-11-28 08:26:23.883828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2219510 with addr=10.0.0.2, port=4420
00:27:41.765 [2024-11-28 08:26:23.883851] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2219510 is same with the state(6) to be set
00:27:41.765 [2024-11-28 08:26:23.884450] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2219510 (9): Bad file descriptor
00:27:41.765 [2024-11-28 08:26:23.884826] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:41.765 [2024-11-28 08:26:23.884836] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:41.765 [2024-11-28 08:26:23.884843] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:41.765 [2024-11-28 08:26:23.884849] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:41.765 9309.33 IOPS, 36.36 MiB/s [2024-11-28T07:26:24.034Z]
00:27:41.765 [2024-11-28 08:26:23.897121] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:41.765 [2024-11-28 08:26:23.897555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:41.765 [2024-11-28 08:26:23.897599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2219510 with addr=10.0.0.2, port=4420
00:27:41.765 [2024-11-28 08:26:23.897623] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2219510 is same with the state(6) to be set
00:27:41.765 [2024-11-28 08:26:23.898227] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2219510 (9): Bad file descriptor
00:27:41.765 [2024-11-28 08:26:23.898790] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:41.765 [2024-11-28 08:26:23.898801] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:41.765 [2024-11-28 08:26:23.898807] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:41.765 [2024-11-28 08:26:23.898813] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:41.765 [2024-11-28 08:26:23.910038] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:41.765 [2024-11-28 08:26:23.910394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:41.765 [2024-11-28 08:26:23.910413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2219510 with addr=10.0.0.2, port=4420
00:27:41.765 [2024-11-28 08:26:23.910421] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2219510 is same with the state(6) to be set
00:27:41.765 [2024-11-28 08:26:23.910585] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2219510 (9): Bad file descriptor
00:27:41.765 [2024-11-28 08:26:23.910749] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:41.765 [2024-11-28 08:26:23.910758] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:41.765 [2024-11-28 08:26:23.910765] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:41.765 [2024-11-28 08:26:23.910771] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:41.765 [2024-11-28 08:26:23.922996] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:41.765 [2024-11-28 08:26:23.923338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:41.765 [2024-11-28 08:26:23.923354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2219510 with addr=10.0.0.2, port=4420
00:27:41.765 [2024-11-28 08:26:23.923361] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2219510 is same with the state(6) to be set
00:27:41.765 [2024-11-28 08:26:23.923535] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2219510 (9): Bad file descriptor
00:27:41.766 [2024-11-28 08:26:23.923709] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:41.766 [2024-11-28 08:26:23.923718] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:41.766 [2024-11-28 08:26:23.923725] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:41.766 [2024-11-28 08:26:23.923732] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:41.766 [2024-11-28 08:26:23.935938] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:41.766 [2024-11-28 08:26:23.936369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:41.766 [2024-11-28 08:26:23.936413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2219510 with addr=10.0.0.2, port=4420
00:27:41.766 [2024-11-28 08:26:23.936436] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2219510 is same with the state(6) to be set
00:27:41.766 [2024-11-28 08:26:23.936884] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2219510 (9): Bad file descriptor
00:27:41.766 [2024-11-28 08:26:23.937058] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:41.766 [2024-11-28 08:26:23.937068] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:41.766 [2024-11-28 08:26:23.937078] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:41.766 [2024-11-28 08:26:23.937084] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:41.766 [2024-11-28 08:26:23.949057] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:41.766 [2024-11-28 08:26:23.949502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:41.766 [2024-11-28 08:26:23.949519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2219510 with addr=10.0.0.2, port=4420
00:27:41.766 [2024-11-28 08:26:23.949526] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2219510 is same with the state(6) to be set
00:27:41.766 [2024-11-28 08:26:23.949690] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2219510 (9): Bad file descriptor
00:27:41.766 [2024-11-28 08:26:23.949856] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:41.766 [2024-11-28 08:26:23.949865] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:41.766 [2024-11-28 08:26:23.949871] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:41.766 [2024-11-28 08:26:23.949878] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:41.766 [2024-11-28 08:26:23.961938] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:41.766 [2024-11-28 08:26:23.962366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:41.766 [2024-11-28 08:26:23.962383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2219510 with addr=10.0.0.2, port=4420
00:27:41.766 [2024-11-28 08:26:23.962390] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2219510 is same with the state(6) to be set
00:27:41.766 [2024-11-28 08:26:23.962554] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2219510 (9): Bad file descriptor
00:27:41.766 [2024-11-28 08:26:23.962719] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:41.766 [2024-11-28 08:26:23.962729] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:41.766 [2024-11-28 08:26:23.962735] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:41.766 [2024-11-28 08:26:23.962741] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:41.766 [2024-11-28 08:26:23.974858] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:41.766 [2024-11-28 08:26:23.975286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:41.766 [2024-11-28 08:26:23.975303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2219510 with addr=10.0.0.2, port=4420
00:27:41.766 [2024-11-28 08:26:23.975310] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2219510 is same with the state(6) to be set
00:27:41.766 [2024-11-28 08:26:23.975474] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2219510 (9): Bad file descriptor
00:27:41.766 [2024-11-28 08:26:23.975638] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:41.766 [2024-11-28 08:26:23.975647] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:41.766 [2024-11-28 08:26:23.975654] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:41.766 [2024-11-28 08:26:23.975660] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:41.766 [2024-11-28 08:26:23.987726] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:41.766 [2024-11-28 08:26:23.988087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:41.766 [2024-11-28 08:26:23.988106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2219510 with addr=10.0.0.2, port=4420 00:27:41.766 [2024-11-28 08:26:23.988115] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2219510 is same with the state(6) to be set 00:27:41.766 [2024-11-28 08:26:23.988279] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2219510 (9): Bad file descriptor 00:27:41.766 [2024-11-28 08:26:23.988444] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:41.766 [2024-11-28 08:26:23.988454] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:41.766 [2024-11-28 08:26:23.988461] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:41.766 [2024-11-28 08:26:23.988467] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:41.766 [2024-11-28 08:26:24.000925] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:41.766 [2024-11-28 08:26:24.001368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:41.766 [2024-11-28 08:26:24.001386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2219510 with addr=10.0.0.2, port=4420 00:27:41.766 [2024-11-28 08:26:24.001393] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2219510 is same with the state(6) to be set 00:27:41.766 [2024-11-28 08:26:24.001568] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2219510 (9): Bad file descriptor 00:27:41.766 [2024-11-28 08:26:24.001741] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:41.766 [2024-11-28 08:26:24.001751] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:41.766 [2024-11-28 08:26:24.001757] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:41.766 [2024-11-28 08:26:24.001764] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:41.766 [2024-11-28 08:26:24.013803] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:41.766 [2024-11-28 08:26:24.014235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:41.766 [2024-11-28 08:26:24.014254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2219510 with addr=10.0.0.2, port=4420 00:27:41.766 [2024-11-28 08:26:24.014262] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2219510 is same with the state(6) to be set 00:27:41.766 [2024-11-28 08:26:24.014426] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2219510 (9): Bad file descriptor 00:27:41.766 [2024-11-28 08:26:24.014591] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:41.766 [2024-11-28 08:26:24.014600] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:41.766 [2024-11-28 08:26:24.014607] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:41.766 [2024-11-28 08:26:24.014613] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:41.766 [2024-11-28 08:26:24.026843] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:41.766 [2024-11-28 08:26:24.027289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:41.766 [2024-11-28 08:26:24.027313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2219510 with addr=10.0.0.2, port=4420 00:27:41.766 [2024-11-28 08:26:24.027321] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2219510 is same with the state(6) to be set 00:27:41.766 [2024-11-28 08:26:24.027500] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2219510 (9): Bad file descriptor 00:27:41.766 [2024-11-28 08:26:24.027680] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:41.766 [2024-11-28 08:26:24.027690] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:41.766 [2024-11-28 08:26:24.027696] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:41.766 [2024-11-28 08:26:24.027703] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:42.026 [2024-11-28 08:26:24.039737] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:42.026 [2024-11-28 08:26:24.040175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.026 [2024-11-28 08:26:24.040236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2219510 with addr=10.0.0.2, port=4420 00:27:42.026 [2024-11-28 08:26:24.040259] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2219510 is same with the state(6) to be set 00:27:42.026 [2024-11-28 08:26:24.040790] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2219510 (9): Bad file descriptor 00:27:42.026 [2024-11-28 08:26:24.040960] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:42.026 [2024-11-28 08:26:24.040970] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:42.026 [2024-11-28 08:26:24.040994] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:42.026 [2024-11-28 08:26:24.041002] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:42.026 [2024-11-28 08:26:24.052703] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:42.026 [2024-11-28 08:26:24.053150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.026 [2024-11-28 08:26:24.053199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2219510 with addr=10.0.0.2, port=4420 00:27:42.026 [2024-11-28 08:26:24.053224] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2219510 is same with the state(6) to be set 00:27:42.027 [2024-11-28 08:26:24.053810] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2219510 (9): Bad file descriptor 00:27:42.027 [2024-11-28 08:26:24.054250] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:42.027 [2024-11-28 08:26:24.054261] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:42.027 [2024-11-28 08:26:24.054268] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:42.027 [2024-11-28 08:26:24.054274] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:42.027 [2024-11-28 08:26:24.065520] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:42.027 [2024-11-28 08:26:24.065915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.027 [2024-11-28 08:26:24.065973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2219510 with addr=10.0.0.2, port=4420 00:27:42.027 [2024-11-28 08:26:24.065998] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2219510 is same with the state(6) to be set 00:27:42.027 [2024-11-28 08:26:24.066592] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2219510 (9): Bad file descriptor 00:27:42.027 [2024-11-28 08:26:24.067195] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:42.027 [2024-11-28 08:26:24.067205] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:42.027 [2024-11-28 08:26:24.067212] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:42.027 [2024-11-28 08:26:24.067218] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:42.027 [2024-11-28 08:26:24.078481] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:42.027 [2024-11-28 08:26:24.078910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.027 [2024-11-28 08:26:24.078927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2219510 with addr=10.0.0.2, port=4420 00:27:42.027 [2024-11-28 08:26:24.078935] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2219510 is same with the state(6) to be set 00:27:42.027 [2024-11-28 08:26:24.079129] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2219510 (9): Bad file descriptor 00:27:42.027 [2024-11-28 08:26:24.079304] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:42.027 [2024-11-28 08:26:24.079314] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:42.027 [2024-11-28 08:26:24.079321] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:42.027 [2024-11-28 08:26:24.079327] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:42.027 [2024-11-28 08:26:24.091291] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:42.027 [2024-11-28 08:26:24.091729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.027 [2024-11-28 08:26:24.091773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2219510 with addr=10.0.0.2, port=4420 00:27:42.027 [2024-11-28 08:26:24.091796] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2219510 is same with the state(6) to be set 00:27:42.027 [2024-11-28 08:26:24.092260] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2219510 (9): Bad file descriptor 00:27:42.027 [2024-11-28 08:26:24.092427] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:42.027 [2024-11-28 08:26:24.092436] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:42.027 [2024-11-28 08:26:24.092443] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:42.027 [2024-11-28 08:26:24.092449] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:42.027 [2024-11-28 08:26:24.104221] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:42.027 [2024-11-28 08:26:24.104644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.027 [2024-11-28 08:26:24.104692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2219510 with addr=10.0.0.2, port=4420 00:27:42.027 [2024-11-28 08:26:24.104716] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2219510 is same with the state(6) to be set 00:27:42.027 [2024-11-28 08:26:24.105261] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2219510 (9): Bad file descriptor 00:27:42.027 [2024-11-28 08:26:24.105437] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:42.027 [2024-11-28 08:26:24.105447] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:42.027 [2024-11-28 08:26:24.105458] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:42.027 [2024-11-28 08:26:24.105465] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:42.027 [2024-11-28 08:26:24.117265] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:42.027 [2024-11-28 08:26:24.117630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.027 [2024-11-28 08:26:24.117647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2219510 with addr=10.0.0.2, port=4420 00:27:42.027 [2024-11-28 08:26:24.117654] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2219510 is same with the state(6) to be set 00:27:42.027 [2024-11-28 08:26:24.117818] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2219510 (9): Bad file descriptor 00:27:42.027 [2024-11-28 08:26:24.118003] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:42.027 [2024-11-28 08:26:24.118013] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:42.027 [2024-11-28 08:26:24.118020] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:42.027 [2024-11-28 08:26:24.118026] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:42.027 [2024-11-28 08:26:24.130217] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:42.027 [2024-11-28 08:26:24.130647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.027 [2024-11-28 08:26:24.130691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2219510 with addr=10.0.0.2, port=4420 00:27:42.027 [2024-11-28 08:26:24.130714] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2219510 is same with the state(6) to be set 00:27:42.027 [2024-11-28 08:26:24.131134] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2219510 (9): Bad file descriptor 00:27:42.027 [2024-11-28 08:26:24.131309] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:42.027 [2024-11-28 08:26:24.131319] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:42.027 [2024-11-28 08:26:24.131326] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:42.027 [2024-11-28 08:26:24.131332] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:42.027 [2024-11-28 08:26:24.143080] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:42.027 [2024-11-28 08:26:24.143503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.027 [2024-11-28 08:26:24.143547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2219510 with addr=10.0.0.2, port=4420 00:27:42.027 [2024-11-28 08:26:24.143570] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2219510 is same with the state(6) to be set 00:27:42.027 [2024-11-28 08:26:24.144169] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2219510 (9): Bad file descriptor 00:27:42.027 [2024-11-28 08:26:24.144348] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:42.027 [2024-11-28 08:26:24.144357] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:42.027 [2024-11-28 08:26:24.144363] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:42.027 [2024-11-28 08:26:24.144369] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:42.027 [2024-11-28 08:26:24.155916] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:42.027 [2024-11-28 08:26:24.156342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.027 [2024-11-28 08:26:24.156359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2219510 with addr=10.0.0.2, port=4420 00:27:42.027 [2024-11-28 08:26:24.156366] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2219510 is same with the state(6) to be set 00:27:42.027 [2024-11-28 08:26:24.156531] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2219510 (9): Bad file descriptor 00:27:42.027 [2024-11-28 08:26:24.156696] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:42.027 [2024-11-28 08:26:24.156705] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:42.027 [2024-11-28 08:26:24.156712] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:42.027 [2024-11-28 08:26:24.156718] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:42.027 [2024-11-28 08:26:24.168730] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:42.027 [2024-11-28 08:26:24.169155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.027 [2024-11-28 08:26:24.169172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2219510 with addr=10.0.0.2, port=4420 00:27:42.027 [2024-11-28 08:26:24.169180] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2219510 is same with the state(6) to be set 00:27:42.027 [2024-11-28 08:26:24.169344] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2219510 (9): Bad file descriptor 00:27:42.027 [2024-11-28 08:26:24.169508] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:42.027 [2024-11-28 08:26:24.169517] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:42.027 [2024-11-28 08:26:24.169523] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:42.027 [2024-11-28 08:26:24.169529] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:42.027 [2024-11-28 08:26:24.181597] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:42.027 [2024-11-28 08:26:24.181955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.027 [2024-11-28 08:26:24.181973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2219510 with addr=10.0.0.2, port=4420 00:27:42.028 [2024-11-28 08:26:24.181980] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2219510 is same with the state(6) to be set 00:27:42.028 [2024-11-28 08:26:24.182145] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2219510 (9): Bad file descriptor 00:27:42.028 [2024-11-28 08:26:24.182310] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:42.028 [2024-11-28 08:26:24.182320] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:42.028 [2024-11-28 08:26:24.182326] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:42.028 [2024-11-28 08:26:24.182333] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:42.028 [2024-11-28 08:26:24.194448] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:42.028 [2024-11-28 08:26:24.194860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.028 [2024-11-28 08:26:24.194881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2219510 with addr=10.0.0.2, port=4420 00:27:42.028 [2024-11-28 08:26:24.194889] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2219510 is same with the state(6) to be set 00:27:42.028 [2024-11-28 08:26:24.195078] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2219510 (9): Bad file descriptor 00:27:42.028 [2024-11-28 08:26:24.195252] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:42.028 [2024-11-28 08:26:24.195262] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:42.028 [2024-11-28 08:26:24.195269] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:42.028 [2024-11-28 08:26:24.195275] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:42.028 [2024-11-28 08:26:24.207374] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:42.028 [2024-11-28 08:26:24.207710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.028 [2024-11-28 08:26:24.207756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2219510 with addr=10.0.0.2, port=4420 00:27:42.028 [2024-11-28 08:26:24.207779] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2219510 is same with the state(6) to be set 00:27:42.028 [2024-11-28 08:26:24.208325] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2219510 (9): Bad file descriptor 00:27:42.028 [2024-11-28 08:26:24.208499] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:42.028 [2024-11-28 08:26:24.208509] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:42.028 [2024-11-28 08:26:24.208516] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:42.028 [2024-11-28 08:26:24.208522] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:42.028 [2024-11-28 08:26:24.220215] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:42.028 [2024-11-28 08:26:24.220654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.028 [2024-11-28 08:26:24.220698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2219510 with addr=10.0.0.2, port=4420 00:27:42.028 [2024-11-28 08:26:24.220721] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2219510 is same with the state(6) to be set 00:27:42.028 [2024-11-28 08:26:24.221319] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2219510 (9): Bad file descriptor 00:27:42.028 [2024-11-28 08:26:24.221555] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:42.028 [2024-11-28 08:26:24.221565] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:42.028 [2024-11-28 08:26:24.221572] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:42.028 [2024-11-28 08:26:24.221578] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:42.028 [2024-11-28 08:26:24.233131] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:42.028 [2024-11-28 08:26:24.233548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.028 [2024-11-28 08:26:24.233590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2219510 with addr=10.0.0.2, port=4420 00:27:42.028 [2024-11-28 08:26:24.233615] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2219510 is same with the state(6) to be set 00:27:42.028 [2024-11-28 08:26:24.234223] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2219510 (9): Bad file descriptor 00:27:42.028 [2024-11-28 08:26:24.234508] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:42.028 [2024-11-28 08:26:24.234518] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:42.028 [2024-11-28 08:26:24.234524] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:42.028 [2024-11-28 08:26:24.234531] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:42.028 [2024-11-28 08:26:24.246085] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:42.028 [2024-11-28 08:26:24.246448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:42.028 [2024-11-28 08:26:24.246467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2219510 with addr=10.0.0.2, port=4420
00:27:42.028 [2024-11-28 08:26:24.246475] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2219510 is same with the state(6) to be set
00:27:42.028 [2024-11-28 08:26:24.246649] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2219510 (9): Bad file descriptor
00:27:42.028 [2024-11-28 08:26:24.246824] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:42.028 [2024-11-28 08:26:24.246834] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:42.028 [2024-11-28 08:26:24.246840] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:42.028 [2024-11-28 08:26:24.246847] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:42.028 [2024-11-28 08:26:24.259203] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:42.028 [2024-11-28 08:26:24.259547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:42.028 [2024-11-28 08:26:24.259565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2219510 with addr=10.0.0.2, port=4420
00:27:42.028 [2024-11-28 08:26:24.259585] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2219510 is same with the state(6) to be set
00:27:42.028 [2024-11-28 08:26:24.259801] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2219510 (9): Bad file descriptor
00:27:42.028 [2024-11-28 08:26:24.259987] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:42.028 [2024-11-28 08:26:24.259997] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:42.028 [2024-11-28 08:26:24.260003] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:42.028 [2024-11-28 08:26:24.260010] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:42.028 [2024-11-28 08:26:24.272021] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:42.028 [2024-11-28 08:26:24.272424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:42.028 [2024-11-28 08:26:24.272441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2219510 with addr=10.0.0.2, port=4420
00:27:42.028 [2024-11-28 08:26:24.272448] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2219510 is same with the state(6) to be set
00:27:42.028 [2024-11-28 08:26:24.272612] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2219510 (9): Bad file descriptor
00:27:42.028 [2024-11-28 08:26:24.272776] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:42.028 [2024-11-28 08:26:24.272785] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:42.028 [2024-11-28 08:26:24.272795] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:42.028 [2024-11-28 08:26:24.272801] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:42.028 [2024-11-28 08:26:24.284911] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:42.028 [2024-11-28 08:26:24.285320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:42.028 [2024-11-28 08:26:24.285367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2219510 with addr=10.0.0.2, port=4420
00:27:42.028 [2024-11-28 08:26:24.285391] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2219510 is same with the state(6) to be set
00:27:42.028 [2024-11-28 08:26:24.285907] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2219510 (9): Bad file descriptor
00:27:42.028 [2024-11-28 08:26:24.286088] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:42.028 [2024-11-28 08:26:24.286097] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:42.028 [2024-11-28 08:26:24.286106] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:42.028 [2024-11-28 08:26:24.286113] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:42.289 [2024-11-28 08:26:24.297927] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:42.289 [2024-11-28 08:26:24.298367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:42.289 [2024-11-28 08:26:24.298425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2219510 with addr=10.0.0.2, port=4420
00:27:42.289 [2024-11-28 08:26:24.298449] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2219510 is same with the state(6) to be set
00:27:42.289 [2024-11-28 08:26:24.298993] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2219510 (9): Bad file descriptor
00:27:42.289 [2024-11-28 08:26:24.299169] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:42.289 [2024-11-28 08:26:24.299179] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:42.289 [2024-11-28 08:26:24.299185] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:42.289 [2024-11-28 08:26:24.299192] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:42.289 [2024-11-28 08:26:24.310856] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:42.289 [2024-11-28 08:26:24.311278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:42.289 [2024-11-28 08:26:24.311295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2219510 with addr=10.0.0.2, port=4420
00:27:42.289 [2024-11-28 08:26:24.311304] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2219510 is same with the state(6) to be set
00:27:42.289 [2024-11-28 08:26:24.311468] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2219510 (9): Bad file descriptor
00:27:42.289 [2024-11-28 08:26:24.311633] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:42.289 [2024-11-28 08:26:24.311642] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:42.289 [2024-11-28 08:26:24.311648] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:42.289 [2024-11-28 08:26:24.311654] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:42.289 [2024-11-28 08:26:24.323825] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:42.289 [2024-11-28 08:26:24.324246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:42.289 [2024-11-28 08:26:24.324263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2219510 with addr=10.0.0.2, port=4420
00:27:42.289 [2024-11-28 08:26:24.324270] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2219510 is same with the state(6) to be set
00:27:42.289 [2024-11-28 08:26:24.324434] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2219510 (9): Bad file descriptor
00:27:42.289 [2024-11-28 08:26:24.324599] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:42.289 [2024-11-28 08:26:24.324609] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:42.289 [2024-11-28 08:26:24.324615] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:42.289 [2024-11-28 08:26:24.324622] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:42.289 [2024-11-28 08:26:24.336725] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:42.289 [2024-11-28 08:26:24.337145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:42.289 [2024-11-28 08:26:24.337162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2219510 with addr=10.0.0.2, port=4420
00:27:42.289 [2024-11-28 08:26:24.337170] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2219510 is same with the state(6) to be set
00:27:42.289 [2024-11-28 08:26:24.337334] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2219510 (9): Bad file descriptor
00:27:42.289 [2024-11-28 08:26:24.337499] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:42.289 [2024-11-28 08:26:24.337508] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:42.289 [2024-11-28 08:26:24.337514] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:42.289 [2024-11-28 08:26:24.337520] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:42.289 [2024-11-28 08:26:24.349773] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:42.289 [2024-11-28 08:26:24.350198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:42.289 [2024-11-28 08:26:24.350216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2219510 with addr=10.0.0.2, port=4420
00:27:42.289 [2024-11-28 08:26:24.350225] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2219510 is same with the state(6) to be set
00:27:42.289 [2024-11-28 08:26:24.350398] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2219510 (9): Bad file descriptor
00:27:42.289 [2024-11-28 08:26:24.350574] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:42.289 [2024-11-28 08:26:24.350584] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:42.289 [2024-11-28 08:26:24.350590] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:42.289 [2024-11-28 08:26:24.350597] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:42.289 [2024-11-28 08:26:24.362645] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:42.289 [2024-11-28 08:26:24.363052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:42.289 [2024-11-28 08:26:24.363073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2219510 with addr=10.0.0.2, port=4420
00:27:42.289 [2024-11-28 08:26:24.363081] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2219510 is same with the state(6) to be set
00:27:42.289 [2024-11-28 08:26:24.363245] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2219510 (9): Bad file descriptor
00:27:42.289 [2024-11-28 08:26:24.363411] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:42.289 [2024-11-28 08:26:24.363421] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:42.289 [2024-11-28 08:26:24.363427] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:42.289 [2024-11-28 08:26:24.363433] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:42.289 [2024-11-28 08:26:24.375499] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:42.289 [2024-11-28 08:26:24.375924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:42.289 [2024-11-28 08:26:24.375941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2219510 with addr=10.0.0.2, port=4420
00:27:42.289 [2024-11-28 08:26:24.375953] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2219510 is same with the state(6) to be set
00:27:42.289 [2024-11-28 08:26:24.376142] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2219510 (9): Bad file descriptor
00:27:42.289 [2024-11-28 08:26:24.376317] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:42.289 [2024-11-28 08:26:24.376326] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:42.289 [2024-11-28 08:26:24.376333] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:42.289 [2024-11-28 08:26:24.376340] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:42.289 [2024-11-28 08:26:24.388560] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:42.289 [2024-11-28 08:26:24.388993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:42.289 [2024-11-28 08:26:24.389039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2219510 with addr=10.0.0.2, port=4420
00:27:42.289 [2024-11-28 08:26:24.389062] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2219510 is same with the state(6) to be set
00:27:42.289 [2024-11-28 08:26:24.389647] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2219510 (9): Bad file descriptor
00:27:42.289 [2024-11-28 08:26:24.389983] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:42.289 [2024-11-28 08:26:24.389998] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:42.289 [2024-11-28 08:26:24.390008] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:42.289 [2024-11-28 08:26:24.390017] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:42.289 [2024-11-28 08:26:24.402135] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:42.289 [2024-11-28 08:26:24.402498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:42.289 [2024-11-28 08:26:24.402516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2219510 with addr=10.0.0.2, port=4420
00:27:42.289 [2024-11-28 08:26:24.402524] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2219510 is same with the state(6) to be set
00:27:42.289 [2024-11-28 08:26:24.402707] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2219510 (9): Bad file descriptor
00:27:42.289 [2024-11-28 08:26:24.402887] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:42.289 [2024-11-28 08:26:24.402897] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:42.289 [2024-11-28 08:26:24.402904] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:42.290 [2024-11-28 08:26:24.402911] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:42.290 [2024-11-28 08:26:24.415200] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:42.290 [2024-11-28 08:26:24.415617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:42.290 [2024-11-28 08:26:24.415635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2219510 with addr=10.0.0.2, port=4420
00:27:42.290 [2024-11-28 08:26:24.415643] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2219510 is same with the state(6) to be set
00:27:42.290 [2024-11-28 08:26:24.415822] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2219510 (9): Bad file descriptor
00:27:42.290 [2024-11-28 08:26:24.416009] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:42.290 [2024-11-28 08:26:24.416019] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:42.290 [2024-11-28 08:26:24.416026] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:42.290 [2024-11-28 08:26:24.416034] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:42.290 [2024-11-28 08:26:24.428314] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:42.290 [2024-11-28 08:26:24.428663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:42.290 [2024-11-28 08:26:24.428682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2219510 with addr=10.0.0.2, port=4420
00:27:42.290 [2024-11-28 08:26:24.428690] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2219510 is same with the state(6) to be set
00:27:42.290 [2024-11-28 08:26:24.428869] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2219510 (9): Bad file descriptor
00:27:42.290 [2024-11-28 08:26:24.429055] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:42.290 [2024-11-28 08:26:24.429065] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:42.290 [2024-11-28 08:26:24.429072] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:42.290 [2024-11-28 08:26:24.429078] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:42.290 [2024-11-28 08:26:24.441335] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:42.290 [2024-11-28 08:26:24.441765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:42.290 [2024-11-28 08:26:24.441783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2219510 with addr=10.0.0.2, port=4420
00:27:42.290 [2024-11-28 08:26:24.441791] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2219510 is same with the state(6) to be set
00:27:42.290 [2024-11-28 08:26:24.441970] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2219510 (9): Bad file descriptor
00:27:42.290 [2024-11-28 08:26:24.442144] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:42.290 [2024-11-28 08:26:24.442153] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:42.290 [2024-11-28 08:26:24.442163] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:42.290 [2024-11-28 08:26:24.442170] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:42.290 [2024-11-28 08:26:24.454512] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:42.290 [2024-11-28 08:26:24.454794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:42.290 [2024-11-28 08:26:24.454813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2219510 with addr=10.0.0.2, port=4420
00:27:42.290 [2024-11-28 08:26:24.454820] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2219510 is same with the state(6) to be set
00:27:42.290 [2024-11-28 08:26:24.455003] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2219510 (9): Bad file descriptor
00:27:42.290 [2024-11-28 08:26:24.455182] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:42.290 [2024-11-28 08:26:24.455191] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:42.290 [2024-11-28 08:26:24.455198] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:42.290 [2024-11-28 08:26:24.455205] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:42.290 [2024-11-28 08:26:24.467552] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:42.290 [2024-11-28 08:26:24.467963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:42.290 [2024-11-28 08:26:24.467981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2219510 with addr=10.0.0.2, port=4420
00:27:42.290 [2024-11-28 08:26:24.467989] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2219510 is same with the state(6) to be set
00:27:42.290 [2024-11-28 08:26:24.468163] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2219510 (9): Bad file descriptor
00:27:42.290 [2024-11-28 08:26:24.468343] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:42.290 [2024-11-28 08:26:24.468352] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:42.290 [2024-11-28 08:26:24.468359] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:42.290 [2024-11-28 08:26:24.468365] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:42.290 [2024-11-28 08:26:24.480561] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:42.290 [2024-11-28 08:26:24.480848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:42.290 [2024-11-28 08:26:24.480864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2219510 with addr=10.0.0.2, port=4420
00:27:42.290 [2024-11-28 08:26:24.480873] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2219510 is same with the state(6) to be set
00:27:42.290 [2024-11-28 08:26:24.481043] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2219510 (9): Bad file descriptor
00:27:42.290 [2024-11-28 08:26:24.481208] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:42.290 [2024-11-28 08:26:24.481216] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:42.290 [2024-11-28 08:26:24.481222] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:42.290 [2024-11-28 08:26:24.481228] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:42.290 [2024-11-28 08:26:24.493669] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:42.290 [2024-11-28 08:26:24.494015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:42.290 [2024-11-28 08:26:24.494034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2219510 with addr=10.0.0.2, port=4420
00:27:42.290 [2024-11-28 08:26:24.494042] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2219510 is same with the state(6) to be set
00:27:42.290 [2024-11-28 08:26:24.494216] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2219510 (9): Bad file descriptor
00:27:42.290 [2024-11-28 08:26:24.494391] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:42.290 [2024-11-28 08:26:24.494401] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:42.290 [2024-11-28 08:26:24.494408] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:42.290 [2024-11-28 08:26:24.494415] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:42.290 [2024-11-28 08:26:24.506693] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:42.290 [2024-11-28 08:26:24.506962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:42.290 [2024-11-28 08:26:24.506980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2219510 with addr=10.0.0.2, port=4420
00:27:42.290 [2024-11-28 08:26:24.506987] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2219510 is same with the state(6) to be set
00:27:42.290 [2024-11-28 08:26:24.507151] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2219510 (9): Bad file descriptor
00:27:42.290 [2024-11-28 08:26:24.507316] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:42.290 [2024-11-28 08:26:24.507326] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:42.290 [2024-11-28 08:26:24.507332] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:42.290 [2024-11-28 08:26:24.507339] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:42.290 [2024-11-28 08:26:24.519781] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:42.290 [2024-11-28 08:26:24.520225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:42.290 [2024-11-28 08:26:24.520243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2219510 with addr=10.0.0.2, port=4420
00:27:42.290 [2024-11-28 08:26:24.520251] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2219510 is same with the state(6) to be set
00:27:42.290 [2024-11-28 08:26:24.520429] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2219510 (9): Bad file descriptor
00:27:42.290 [2024-11-28 08:26:24.520607] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:42.290 [2024-11-28 08:26:24.520618] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:42.290 [2024-11-28 08:26:24.520625] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:42.290 [2024-11-28 08:26:24.520631] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:42.290 [2024-11-28 08:26:24.532899] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:42.290 [2024-11-28 08:26:24.533232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:42.290 [2024-11-28 08:26:24.533253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2219510 with addr=10.0.0.2, port=4420
00:27:42.290 [2024-11-28 08:26:24.533262] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2219510 is same with the state(6) to be set
00:27:42.290 [2024-11-28 08:26:24.533440] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2219510 (9): Bad file descriptor
00:27:42.290 [2024-11-28 08:26:24.533620] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:42.290 [2024-11-28 08:26:24.533630] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:42.291 [2024-11-28 08:26:24.533636] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:42.291 [2024-11-28 08:26:24.533644] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:42.291 [2024-11-28 08:26:24.546094] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:42.291 [2024-11-28 08:26:24.546478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:42.291 [2024-11-28 08:26:24.546496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2219510 with addr=10.0.0.2, port=4420
00:27:42.291 [2024-11-28 08:26:24.546505] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2219510 is same with the state(6) to be set
00:27:42.291 [2024-11-28 08:26:24.546689] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2219510 (9): Bad file descriptor
00:27:42.291 [2024-11-28 08:26:24.546875] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:42.291 [2024-11-28 08:26:24.546885] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:42.291 [2024-11-28 08:26:24.546892] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:42.291 [2024-11-28 08:26:24.546899] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:42.551 [2024-11-28 08:26:24.559314] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:42.551 [2024-11-28 08:26:24.559633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:42.551 [2024-11-28 08:26:24.559652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2219510 with addr=10.0.0.2, port=4420
00:27:42.551 [2024-11-28 08:26:24.559661] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2219510 is same with the state(6) to be set
00:27:42.551 [2024-11-28 08:26:24.559846] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2219510 (9): Bad file descriptor
00:27:42.551 [2024-11-28 08:26:24.560037] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:42.551 [2024-11-28 08:26:24.560047] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:42.551 [2024-11-28 08:26:24.560055] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:42.551 [2024-11-28 08:26:24.560062] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:42.551 [2024-11-28 08:26:24.572385] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:42.551 [2024-11-28 08:26:24.572726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:42.551 [2024-11-28 08:26:24.572744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2219510 with addr=10.0.0.2, port=4420
00:27:42.551 [2024-11-28 08:26:24.572752] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2219510 is same with the state(6) to be set
00:27:42.551 [2024-11-28 08:26:24.572934] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2219510 (9): Bad file descriptor
00:27:42.551 [2024-11-28 08:26:24.573119] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:42.552 [2024-11-28 08:26:24.573129] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:42.552 [2024-11-28 08:26:24.573136] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:42.552 [2024-11-28 08:26:24.573143] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:42.552 [2024-11-28 08:26:24.585592] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:42.552 [2024-11-28 08:26:24.585991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:42.552 [2024-11-28 08:26:24.586010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2219510 with addr=10.0.0.2, port=4420
00:27:42.552 [2024-11-28 08:26:24.586018] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2219510 is same with the state(6) to be set
00:27:42.552 [2024-11-28 08:26:24.586197] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2219510 (9): Bad file descriptor
00:27:42.552 [2024-11-28 08:26:24.586376] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:42.552 [2024-11-28 08:26:24.586386] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:42.552 [2024-11-28 08:26:24.586393] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:42.552 [2024-11-28 08:26:24.586400] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:42.552 [2024-11-28 08:26:24.598647] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:42.552 [2024-11-28 08:26:24.598995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:42.552 [2024-11-28 08:26:24.599012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2219510 with addr=10.0.0.2, port=4420
00:27:42.552 [2024-11-28 08:26:24.599020] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2219510 is same with the state(6) to be set
00:27:42.552 [2024-11-28 08:26:24.599193] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2219510 (9): Bad file descriptor
00:27:42.552 [2024-11-28 08:26:24.599366] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:42.552 [2024-11-28 08:26:24.599375] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:42.552 [2024-11-28 08:26:24.599382] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:42.552 [2024-11-28 08:26:24.599389] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:42.552 [2024-11-28 08:26:24.611561] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:42.552 [2024-11-28 08:26:24.611850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.552 [2024-11-28 08:26:24.611867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2219510 with addr=10.0.0.2, port=4420 00:27:42.552 [2024-11-28 08:26:24.611875] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2219510 is same with the state(6) to be set 00:27:42.552 [2024-11-28 08:26:24.612054] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2219510 (9): Bad file descriptor 00:27:42.552 [2024-11-28 08:26:24.612244] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:42.552 [2024-11-28 08:26:24.612254] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:42.552 [2024-11-28 08:26:24.612263] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:42.552 [2024-11-28 08:26:24.612270] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:42.552 [2024-11-28 08:26:24.624515] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:42.552 [2024-11-28 08:26:24.624870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.552 [2024-11-28 08:26:24.624888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2219510 with addr=10.0.0.2, port=4420 00:27:42.552 [2024-11-28 08:26:24.624896] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2219510 is same with the state(6) to be set 00:27:42.552 [2024-11-28 08:26:24.625087] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2219510 (9): Bad file descriptor 00:27:42.552 [2024-11-28 08:26:24.625262] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:42.552 [2024-11-28 08:26:24.625272] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:42.552 [2024-11-28 08:26:24.625278] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:42.552 [2024-11-28 08:26:24.625284] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:42.552 [2024-11-28 08:26:24.637499] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:42.552 [2024-11-28 08:26:24.637847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.552 [2024-11-28 08:26:24.637865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2219510 with addr=10.0.0.2, port=4420 00:27:42.552 [2024-11-28 08:26:24.637873] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2219510 is same with the state(6) to be set 00:27:42.552 [2024-11-28 08:26:24.638062] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2219510 (9): Bad file descriptor 00:27:42.552 [2024-11-28 08:26:24.638238] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:42.552 [2024-11-28 08:26:24.638248] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:42.552 [2024-11-28 08:26:24.638255] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:42.552 [2024-11-28 08:26:24.638261] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:42.552 [2024-11-28 08:26:24.650385] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:42.552 [2024-11-28 08:26:24.650714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.552 [2024-11-28 08:26:24.650731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2219510 with addr=10.0.0.2, port=4420 00:27:42.552 [2024-11-28 08:26:24.650738] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2219510 is same with the state(6) to be set 00:27:42.552 [2024-11-28 08:26:24.650902] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2219510 (9): Bad file descriptor 00:27:42.552 [2024-11-28 08:26:24.651093] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:42.552 [2024-11-28 08:26:24.651103] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:42.552 [2024-11-28 08:26:24.651110] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:42.552 [2024-11-28 08:26:24.651117] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:42.552 [2024-11-28 08:26:24.663352] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:42.552 [2024-11-28 08:26:24.663630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.552 [2024-11-28 08:26:24.663647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2219510 with addr=10.0.0.2, port=4420 00:27:42.552 [2024-11-28 08:26:24.663655] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2219510 is same with the state(6) to be set 00:27:42.552 [2024-11-28 08:26:24.663819] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2219510 (9): Bad file descriptor 00:27:42.552 [2024-11-28 08:26:24.663989] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:42.552 [2024-11-28 08:26:24.663999] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:42.552 [2024-11-28 08:26:24.664005] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:42.552 [2024-11-28 08:26:24.664012] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:42.552 [2024-11-28 08:26:24.676364] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:42.552 [2024-11-28 08:26:24.676629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.552 [2024-11-28 08:26:24.676647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2219510 with addr=10.0.0.2, port=4420 00:27:42.552 [2024-11-28 08:26:24.676654] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2219510 is same with the state(6) to be set 00:27:42.552 [2024-11-28 08:26:24.676818] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2219510 (9): Bad file descriptor 00:27:42.552 [2024-11-28 08:26:24.676990] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:42.552 [2024-11-28 08:26:24.677000] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:42.552 [2024-11-28 08:26:24.677006] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:42.552 [2024-11-28 08:26:24.677013] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:42.552 [2024-11-28 08:26:24.689511] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:42.552 [2024-11-28 08:26:24.689871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.552 [2024-11-28 08:26:24.689888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2219510 with addr=10.0.0.2, port=4420 00:27:42.552 [2024-11-28 08:26:24.689895] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2219510 is same with the state(6) to be set 00:27:42.552 [2024-11-28 08:26:24.690065] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2219510 (9): Bad file descriptor 00:27:42.552 [2024-11-28 08:26:24.690230] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:42.552 [2024-11-28 08:26:24.690240] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:42.552 [2024-11-28 08:26:24.690246] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:42.552 [2024-11-28 08:26:24.690252] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:42.552 [2024-11-28 08:26:24.702450] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:42.552 [2024-11-28 08:26:24.702808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.552 [2024-11-28 08:26:24.702860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2219510 with addr=10.0.0.2, port=4420 00:27:42.553 [2024-11-28 08:26:24.702884] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2219510 is same with the state(6) to be set 00:27:42.553 [2024-11-28 08:26:24.703485] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2219510 (9): Bad file descriptor 00:27:42.553 [2024-11-28 08:26:24.703853] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:42.553 [2024-11-28 08:26:24.703863] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:42.553 [2024-11-28 08:26:24.703869] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:42.553 [2024-11-28 08:26:24.703875] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:42.553 [2024-11-28 08:26:24.715468] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:42.553 [2024-11-28 08:26:24.715794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.553 [2024-11-28 08:26:24.715810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2219510 with addr=10.0.0.2, port=4420 00:27:42.553 [2024-11-28 08:26:24.715818] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2219510 is same with the state(6) to be set 00:27:42.553 [2024-11-28 08:26:24.715987] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2219510 (9): Bad file descriptor 00:27:42.553 [2024-11-28 08:26:24.716153] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:42.553 [2024-11-28 08:26:24.716162] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:42.553 [2024-11-28 08:26:24.716168] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:42.553 [2024-11-28 08:26:24.716175] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:42.553 [2024-11-28 08:26:24.728338] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:42.553 [2024-11-28 08:26:24.728762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.553 [2024-11-28 08:26:24.728778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2219510 with addr=10.0.0.2, port=4420 00:27:42.553 [2024-11-28 08:26:24.728786] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2219510 is same with the state(6) to be set 00:27:42.553 [2024-11-28 08:26:24.728955] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2219510 (9): Bad file descriptor 00:27:42.553 [2024-11-28 08:26:24.729145] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:42.553 [2024-11-28 08:26:24.729154] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:42.553 [2024-11-28 08:26:24.729161] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:42.553 [2024-11-28 08:26:24.729168] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:42.553 [2024-11-28 08:26:24.741379] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:42.553 [2024-11-28 08:26:24.741709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.553 [2024-11-28 08:26:24.741726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2219510 with addr=10.0.0.2, port=4420 00:27:42.553 [2024-11-28 08:26:24.741733] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2219510 is same with the state(6) to be set 00:27:42.553 [2024-11-28 08:26:24.741901] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2219510 (9): Bad file descriptor 00:27:42.553 [2024-11-28 08:26:24.742072] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:42.553 [2024-11-28 08:26:24.742082] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:42.553 [2024-11-28 08:26:24.742088] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:42.553 [2024-11-28 08:26:24.742095] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:42.553 [2024-11-28 08:26:24.754439] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:42.553 [2024-11-28 08:26:24.754802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.553 [2024-11-28 08:26:24.754821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2219510 with addr=10.0.0.2, port=4420 00:27:42.553 [2024-11-28 08:26:24.754829] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2219510 is same with the state(6) to be set 00:27:42.553 [2024-11-28 08:26:24.755008] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2219510 (9): Bad file descriptor 00:27:42.553 [2024-11-28 08:26:24.755183] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:42.553 [2024-11-28 08:26:24.755192] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:42.553 [2024-11-28 08:26:24.755199] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:42.553 [2024-11-28 08:26:24.755206] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:42.553 [2024-11-28 08:26:24.767424] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:42.553 [2024-11-28 08:26:24.767766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.553 [2024-11-28 08:26:24.767783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2219510 with addr=10.0.0.2, port=4420 00:27:42.553 [2024-11-28 08:26:24.767791] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2219510 is same with the state(6) to be set 00:27:42.553 [2024-11-28 08:26:24.767971] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2219510 (9): Bad file descriptor 00:27:42.553 [2024-11-28 08:26:24.768146] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:42.553 [2024-11-28 08:26:24.768156] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:42.553 [2024-11-28 08:26:24.768163] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:42.553 [2024-11-28 08:26:24.768170] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:42.553 [2024-11-28 08:26:24.780669] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:42.553 [2024-11-28 08:26:24.781033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.553 [2024-11-28 08:26:24.781051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2219510 with addr=10.0.0.2, port=4420 00:27:42.553 [2024-11-28 08:26:24.781059] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2219510 is same with the state(6) to be set 00:27:42.553 [2024-11-28 08:26:24.781238] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2219510 (9): Bad file descriptor 00:27:42.553 [2024-11-28 08:26:24.781418] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:42.553 [2024-11-28 08:26:24.781428] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:42.553 [2024-11-28 08:26:24.781439] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:42.553 [2024-11-28 08:26:24.781446] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:42.553 [2024-11-28 08:26:24.793610] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:42.553 [2024-11-28 08:26:24.793942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.553 [2024-11-28 08:26:24.793964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2219510 with addr=10.0.0.2, port=4420 00:27:42.553 [2024-11-28 08:26:24.793972] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2219510 is same with the state(6) to be set 00:27:42.553 [2024-11-28 08:26:24.794137] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2219510 (9): Bad file descriptor 00:27:42.553 [2024-11-28 08:26:24.794302] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:42.553 [2024-11-28 08:26:24.794312] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:42.553 [2024-11-28 08:26:24.794318] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:42.553 [2024-11-28 08:26:24.794324] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:42.553 [2024-11-28 08:26:24.806583] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:42.553 [2024-11-28 08:26:24.806924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.553 [2024-11-28 08:26:24.806942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2219510 with addr=10.0.0.2, port=4420 00:27:42.553 [2024-11-28 08:26:24.806955] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2219510 is same with the state(6) to be set 00:27:42.553 [2024-11-28 08:26:24.807129] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2219510 (9): Bad file descriptor 00:27:42.553 [2024-11-28 08:26:24.807305] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:42.553 [2024-11-28 08:26:24.807315] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:42.553 [2024-11-28 08:26:24.807321] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:42.553 [2024-11-28 08:26:24.807327] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:42.814 [2024-11-28 08:26:24.819705] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:42.814 [2024-11-28 08:26:24.820004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.814 [2024-11-28 08:26:24.820023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2219510 with addr=10.0.0.2, port=4420 00:27:42.815 [2024-11-28 08:26:24.820032] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2219510 is same with the state(6) to be set 00:27:42.815 [2024-11-28 08:26:24.820218] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2219510 (9): Bad file descriptor 00:27:42.815 [2024-11-28 08:26:24.820383] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:42.815 [2024-11-28 08:26:24.820392] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:42.815 [2024-11-28 08:26:24.820399] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:42.815 [2024-11-28 08:26:24.820405] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:42.815 [2024-11-28 08:26:24.832639] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:42.815 [2024-11-28 08:26:24.832967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.815 [2024-11-28 08:26:24.832985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2219510 with addr=10.0.0.2, port=4420 00:27:42.815 [2024-11-28 08:26:24.832993] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2219510 is same with the state(6) to be set 00:27:42.815 [2024-11-28 08:26:24.833157] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2219510 (9): Bad file descriptor 00:27:42.815 [2024-11-28 08:26:24.833322] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:42.815 [2024-11-28 08:26:24.833332] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:42.815 [2024-11-28 08:26:24.833338] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:42.815 [2024-11-28 08:26:24.833345] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:42.815 [2024-11-28 08:26:24.845513] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:42.815 [2024-11-28 08:26:24.845940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.815 [2024-11-28 08:26:24.845961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2219510 with addr=10.0.0.2, port=4420 00:27:42.815 [2024-11-28 08:26:24.845969] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2219510 is same with the state(6) to be set 00:27:42.815 [2024-11-28 08:26:24.846134] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2219510 (9): Bad file descriptor 00:27:42.815 [2024-11-28 08:26:24.846297] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:42.815 [2024-11-28 08:26:24.846307] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:42.815 [2024-11-28 08:26:24.846313] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:42.815 [2024-11-28 08:26:24.846319] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:42.815 [2024-11-28 08:26:24.858413] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:42.815 [2024-11-28 08:26:24.858838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.815 [2024-11-28 08:26:24.858855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2219510 with addr=10.0.0.2, port=4420 00:27:42.815 [2024-11-28 08:26:24.858862] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2219510 is same with the state(6) to be set 00:27:42.815 [2024-11-28 08:26:24.859049] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2219510 (9): Bad file descriptor 00:27:42.815 [2024-11-28 08:26:24.859224] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:42.815 [2024-11-28 08:26:24.859234] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:42.815 [2024-11-28 08:26:24.859240] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:42.815 [2024-11-28 08:26:24.859247] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:42.815 [2024-11-28 08:26:24.871243] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:42.815 [2024-11-28 08:26:24.871689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.815 [2024-11-28 08:26:24.871741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2219510 with addr=10.0.0.2, port=4420 00:27:42.815 [2024-11-28 08:26:24.871765] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2219510 is same with the state(6) to be set 00:27:42.815 [2024-11-28 08:26:24.872234] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2219510 (9): Bad file descriptor 00:27:42.815 [2024-11-28 08:26:24.872409] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:42.815 [2024-11-28 08:26:24.872419] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:42.815 [2024-11-28 08:26:24.872426] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:42.815 [2024-11-28 08:26:24.872432] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:42.815 [2024-11-28 08:26:24.884142] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:42.815 [2024-11-28 08:26:24.884548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.815 [2024-11-28 08:26:24.884566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2219510 with addr=10.0.0.2, port=4420 00:27:42.815 [2024-11-28 08:26:24.884574] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2219510 is same with the state(6) to be set 00:27:42.815 [2024-11-28 08:26:24.884749] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2219510 (9): Bad file descriptor 00:27:42.815 [2024-11-28 08:26:24.884923] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:42.815 [2024-11-28 08:26:24.884933] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:42.815 [2024-11-28 08:26:24.884939] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:42.815 [2024-11-28 08:26:24.884946] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:42.815 6982.00 IOPS, 27.27 MiB/s [2024-11-28T07:26:25.084Z] [2024-11-28 08:26:24.898287] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:42.815 [2024-11-28 08:26:24.898716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.815 [2024-11-28 08:26:24.898761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2219510 with addr=10.0.0.2, port=4420 00:27:42.815 [2024-11-28 08:26:24.898785] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2219510 is same with the state(6) to be set 00:27:42.815 [2024-11-28 08:26:24.899304] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2219510 (9): Bad file descriptor 00:27:42.815 [2024-11-28 08:26:24.899479] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:42.815 [2024-11-28 08:26:24.899489] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:42.815 [2024-11-28 08:26:24.899496] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:42.815 [2024-11-28 08:26:24.899503] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
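The recurring `connect() failed, errno = 111` records above are `ECONNREFUSED` on Linux: the target host at 10.0.0.2 is reachable, but nothing is accepting connections on the NVMe/TCP port 4420, so each reconnect attempt fails immediately. A minimal sketch (plain sockets, not SPDK code; the localhost address and port are illustrative assumptions) shows how the same errno surfaces:

```python
import errno
import socket

# On Linux, errno 111 is ECONNREFUSED, matching the log's
# "connect() failed, errno = 111" from posix_sock_create.
assert errno.ECONNREFUSED == 111

def try_connect(addr: str, port: int) -> int:
    """Return 0 on success, otherwise the errno from the failed connect()."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(1.0)
        # connect_ex() returns the errno instead of raising an exception.
        return s.connect_ex((addr, port))

# With no listener on the port, this normally yields ECONNREFUSED (111),
# just as the SPDK initiator sees when the target's port 4420 is down.
rc = try_connect("127.0.0.1", 4420)
print(rc, errno.errorcode.get(rc, "OK"))
```

This only demonstrates the errno; inside SPDK the failure propagates from `posix_sock_create` up through `nvme_tcp_qpair_connect_sock` to the controller state machine, as the call chain in each log record shows.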
00:27:42.815 [2024-11-28 08:26:24.911115] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:42.815 [2024-11-28 08:26:24.911537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.815 [2024-11-28 08:26:24.911568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2219510 with addr=10.0.0.2, port=4420 00:27:42.815 [2024-11-28 08:26:24.911593] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2219510 is same with the state(6) to be set 00:27:42.815 [2024-11-28 08:26:24.912140] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2219510 (9): Bad file descriptor 00:27:42.815 [2024-11-28 08:26:24.912317] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:42.815 [2024-11-28 08:26:24.912326] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:42.815 [2024-11-28 08:26:24.912333] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:42.815 [2024-11-28 08:26:24.912340] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:42.815 [2024-11-28 08:26:24.924199] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:42.815 [2024-11-28 08:26:24.924636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.815 [2024-11-28 08:26:24.924654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2219510 with addr=10.0.0.2, port=4420 00:27:42.815 [2024-11-28 08:26:24.924661] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2219510 is same with the state(6) to be set 00:27:42.815 [2024-11-28 08:26:24.924835] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2219510 (9): Bad file descriptor 00:27:42.815 [2024-11-28 08:26:24.925016] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:42.815 [2024-11-28 08:26:24.925026] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:42.815 [2024-11-28 08:26:24.925033] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:42.815 [2024-11-28 08:26:24.925040] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:42.815 [2024-11-28 08:26:24.937215] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:42.815 [2024-11-28 08:26:24.937566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.815 [2024-11-28 08:26:24.937583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2219510 with addr=10.0.0.2, port=4420 00:27:42.815 [2024-11-28 08:26:24.937590] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2219510 is same with the state(6) to be set 00:27:42.815 [2024-11-28 08:26:24.937754] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2219510 (9): Bad file descriptor 00:27:42.815 [2024-11-28 08:26:24.937918] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:42.815 [2024-11-28 08:26:24.937927] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:42.815 [2024-11-28 08:26:24.937934] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:42.815 [2024-11-28 08:26:24.937940] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:42.816 [2024-11-28 08:26:24.950192] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:42.816 [2024-11-28 08:26:24.950622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.816 [2024-11-28 08:26:24.950639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2219510 with addr=10.0.0.2, port=4420 00:27:42.816 [2024-11-28 08:26:24.950647] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2219510 is same with the state(6) to be set 00:27:42.816 [2024-11-28 08:26:24.950820] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2219510 (9): Bad file descriptor 00:27:42.816 [2024-11-28 08:26:24.951001] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:42.816 [2024-11-28 08:26:24.951015] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:42.816 [2024-11-28 08:26:24.951022] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:42.816 [2024-11-28 08:26:24.951029] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:42.816 [2024-11-28 08:26:24.963048] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:42.816 [2024-11-28 08:26:24.963419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.816 [2024-11-28 08:26:24.963462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2219510 with addr=10.0.0.2, port=4420 00:27:42.816 [2024-11-28 08:26:24.963486] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2219510 is same with the state(6) to be set 00:27:42.816 [2024-11-28 08:26:24.964088] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2219510 (9): Bad file descriptor 00:27:42.816 [2024-11-28 08:26:24.964490] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:42.816 [2024-11-28 08:26:24.964500] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:42.816 [2024-11-28 08:26:24.964507] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:42.816 [2024-11-28 08:26:24.964512] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:42.816 [2024-11-28 08:26:24.976089] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:42.816 [2024-11-28 08:26:24.976474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.816 [2024-11-28 08:26:24.976491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2219510 with addr=10.0.0.2, port=4420 00:27:42.816 [2024-11-28 08:26:24.976499] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2219510 is same with the state(6) to be set 00:27:42.816 [2024-11-28 08:26:24.976663] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2219510 (9): Bad file descriptor 00:27:42.816 [2024-11-28 08:26:24.976830] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:42.816 [2024-11-28 08:26:24.976839] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:42.816 [2024-11-28 08:26:24.976845] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:42.816 [2024-11-28 08:26:24.976852] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:42.816 [2024-11-28 08:26:24.989149] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:42.816 [2024-11-28 08:26:24.989565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.816 [2024-11-28 08:26:24.989582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2219510 with addr=10.0.0.2, port=4420 00:27:42.816 [2024-11-28 08:26:24.989590] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2219510 is same with the state(6) to be set 00:27:42.816 [2024-11-28 08:26:24.989754] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2219510 (9): Bad file descriptor 00:27:42.816 [2024-11-28 08:26:24.989919] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:42.816 [2024-11-28 08:26:24.989928] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:42.816 [2024-11-28 08:26:24.989934] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:42.816 [2024-11-28 08:26:24.989940] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:42.816 [2024-11-28 08:26:25.002138] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:42.816 [2024-11-28 08:26:25.002501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.816 [2024-11-28 08:26:25.002547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2219510 with addr=10.0.0.2, port=4420 00:27:42.816 [2024-11-28 08:26:25.002570] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2219510 is same with the state(6) to be set 00:27:42.816 [2024-11-28 08:26:25.003168] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2219510 (9): Bad file descriptor 00:27:42.816 [2024-11-28 08:26:25.003627] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:42.816 [2024-11-28 08:26:25.003637] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:42.816 [2024-11-28 08:26:25.003643] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:42.816 [2024-11-28 08:26:25.003650] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:42.816 [2024-11-28 08:26:25.015033] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:42.816 [2024-11-28 08:26:25.015477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.816 [2024-11-28 08:26:25.015521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2219510 with addr=10.0.0.2, port=4420 00:27:42.816 [2024-11-28 08:26:25.015545] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2219510 is same with the state(6) to be set 00:27:42.816 [2024-11-28 08:26:25.016143] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2219510 (9): Bad file descriptor 00:27:42.816 [2024-11-28 08:26:25.016396] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:42.816 [2024-11-28 08:26:25.016405] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:42.816 [2024-11-28 08:26:25.016413] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:42.816 [2024-11-28 08:26:25.016419] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:42.816 [2024-11-28 08:26:25.028587] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:42.816 [2024-11-28 08:26:25.028994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.816 [2024-11-28 08:26:25.029038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2219510 with addr=10.0.0.2, port=4420 00:27:42.816 [2024-11-28 08:26:25.029062] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2219510 is same with the state(6) to be set 00:27:42.816 [2024-11-28 08:26:25.029331] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2219510 (9): Bad file descriptor 00:27:42.816 [2024-11-28 08:26:25.029506] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:42.816 [2024-11-28 08:26:25.029517] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:42.816 [2024-11-28 08:26:25.029524] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:42.816 [2024-11-28 08:26:25.029532] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:42.816 [2024-11-28 08:26:25.041794] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:42.816 [2024-11-28 08:26:25.042258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.816 [2024-11-28 08:26:25.042302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2219510 with addr=10.0.0.2, port=4420 00:27:42.816 [2024-11-28 08:26:25.042328] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2219510 is same with the state(6) to be set 00:27:42.816 [2024-11-28 08:26:25.042911] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2219510 (9): Bad file descriptor 00:27:42.816 [2024-11-28 08:26:25.043511] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:42.816 [2024-11-28 08:26:25.043538] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:42.816 [2024-11-28 08:26:25.043558] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:42.816 [2024-11-28 08:26:25.043565] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:42.816 [2024-11-28 08:26:25.054608] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:42.816 [2024-11-28 08:26:25.055034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.816 [2024-11-28 08:26:25.055053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2219510 with addr=10.0.0.2, port=4420 00:27:42.816 [2024-11-28 08:26:25.055061] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2219510 is same with the state(6) to be set 00:27:42.816 [2024-11-28 08:26:25.055226] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2219510 (9): Bad file descriptor 00:27:42.816 [2024-11-28 08:26:25.055391] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:42.816 [2024-11-28 08:26:25.055401] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:42.816 [2024-11-28 08:26:25.055407] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:42.816 [2024-11-28 08:26:25.055413] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:42.816 [2024-11-28 08:26:25.067436] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:42.816 [2024-11-28 08:26:25.067863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.816 [2024-11-28 08:26:25.067880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2219510 with addr=10.0.0.2, port=4420 00:27:42.816 [2024-11-28 08:26:25.067888] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2219510 is same with the state(6) to be set 00:27:42.816 [2024-11-28 08:26:25.068057] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2219510 (9): Bad file descriptor 00:27:42.816 [2024-11-28 08:26:25.068222] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:42.816 [2024-11-28 08:26:25.068232] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:42.816 [2024-11-28 08:26:25.068238] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:42.817 [2024-11-28 08:26:25.068244] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:43.079 [2024-11-28 08:26:25.080495] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:43.079 [2024-11-28 08:26:25.080931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.079 [2024-11-28 08:26:25.080984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2219510 with addr=10.0.0.2, port=4420 00:27:43.079 [2024-11-28 08:26:25.081011] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2219510 is same with the state(6) to be set 00:27:43.079 [2024-11-28 08:26:25.081586] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2219510 (9): Bad file descriptor 00:27:43.079 [2024-11-28 08:26:25.081761] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:43.079 [2024-11-28 08:26:25.081772] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:43.079 [2024-11-28 08:26:25.081778] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:43.079 [2024-11-28 08:26:25.081784] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:43.079 [2024-11-28 08:26:25.093442] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:43.079 [2024-11-28 08:26:25.093837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.079 [2024-11-28 08:26:25.093854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2219510 with addr=10.0.0.2, port=4420 00:27:43.079 [2024-11-28 08:26:25.093862] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2219510 is same with the state(6) to be set 00:27:43.079 [2024-11-28 08:26:25.094052] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2219510 (9): Bad file descriptor 00:27:43.079 [2024-11-28 08:26:25.094228] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:43.079 [2024-11-28 08:26:25.094237] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:43.079 [2024-11-28 08:26:25.094244] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:43.079 [2024-11-28 08:26:25.094251] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:43.079 [2024-11-28 08:26:25.106362] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:43.079 [2024-11-28 08:26:25.106789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.079 [2024-11-28 08:26:25.106826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2219510 with addr=10.0.0.2, port=4420 00:27:43.079 [2024-11-28 08:26:25.106852] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2219510 is same with the state(6) to be set 00:27:43.079 [2024-11-28 08:26:25.107414] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2219510 (9): Bad file descriptor 00:27:43.079 [2024-11-28 08:26:25.107589] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:43.079 [2024-11-28 08:26:25.107599] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:43.079 [2024-11-28 08:26:25.107606] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:43.079 [2024-11-28 08:26:25.107613] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:43.079 [2024-11-28 08:26:25.120074] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:43.079 [2024-11-28 08:26:25.120518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.079 [2024-11-28 08:26:25.120563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2219510 with addr=10.0.0.2, port=4420 00:27:43.079 [2024-11-28 08:26:25.120586] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2219510 is same with the state(6) to be set 00:27:43.079 [2024-11-28 08:26:25.120972] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2219510 (9): Bad file descriptor 00:27:43.080 [2024-11-28 08:26:25.121164] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:43.080 [2024-11-28 08:26:25.121174] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:43.080 [2024-11-28 08:26:25.121184] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:43.080 [2024-11-28 08:26:25.121191] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:43.080 [2024-11-28 08:26:25.133026] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:43.080 [2024-11-28 08:26:25.133458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.080 [2024-11-28 08:26:25.133503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2219510 with addr=10.0.0.2, port=4420 00:27:43.080 [2024-11-28 08:26:25.133527] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2219510 is same with the state(6) to be set 00:27:43.080 [2024-11-28 08:26:25.133964] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2219510 (9): Bad file descriptor 00:27:43.080 [2024-11-28 08:26:25.134154] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:43.080 [2024-11-28 08:26:25.134164] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:43.080 [2024-11-28 08:26:25.134171] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:43.080 [2024-11-28 08:26:25.134178] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:43.080 [2024-11-28 08:26:25.145971] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:43.080 [2024-11-28 08:26:25.146392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.080 [2024-11-28 08:26:25.146442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2219510 with addr=10.0.0.2, port=4420 00:27:43.080 [2024-11-28 08:26:25.146466] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2219510 is same with the state(6) to be set 00:27:43.080 [2024-11-28 08:26:25.146976] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2219510 (9): Bad file descriptor 00:27:43.080 [2024-11-28 08:26:25.147166] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:43.080 [2024-11-28 08:26:25.147176] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:43.080 [2024-11-28 08:26:25.147184] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:43.080 [2024-11-28 08:26:25.147191] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:43.080 [2024-11-28 08:26:25.158900] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:43.080 [2024-11-28 08:26:25.159325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.080 [2024-11-28 08:26:25.159342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2219510 with addr=10.0.0.2, port=4420 00:27:43.080 [2024-11-28 08:26:25.159350] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2219510 is same with the state(6) to be set 00:27:43.080 [2024-11-28 08:26:25.159515] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2219510 (9): Bad file descriptor 00:27:43.080 [2024-11-28 08:26:25.159680] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:43.080 [2024-11-28 08:26:25.159689] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:43.080 [2024-11-28 08:26:25.159695] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:43.080 [2024-11-28 08:26:25.159702] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:43.080 [2024-11-28 08:26:25.171844] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:43.080 [2024-11-28 08:26:25.172269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.080 [2024-11-28 08:26:25.172286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2219510 with addr=10.0.0.2, port=4420 00:27:43.080 [2024-11-28 08:26:25.172294] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2219510 is same with the state(6) to be set 00:27:43.080 [2024-11-28 08:26:25.172458] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2219510 (9): Bad file descriptor 00:27:43.080 [2024-11-28 08:26:25.172622] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:43.080 [2024-11-28 08:26:25.172632] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:43.080 [2024-11-28 08:26:25.172638] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:43.080 [2024-11-28 08:26:25.172644] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:43.080 [2024-11-28 08:26:25.184787] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:43.080 [2024-11-28 08:26:25.185226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.080 [2024-11-28 08:26:25.185244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2219510 with addr=10.0.0.2, port=4420 00:27:43.080 [2024-11-28 08:26:25.185251] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2219510 is same with the state(6) to be set 00:27:43.080 [2024-11-28 08:26:25.185416] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2219510 (9): Bad file descriptor 00:27:43.080 [2024-11-28 08:26:25.185580] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:43.080 [2024-11-28 08:26:25.185589] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:43.080 [2024-11-28 08:26:25.185596] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:43.080 [2024-11-28 08:26:25.185602] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:43.080 [2024-11-28 08:26:25.197745] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:43.080 [2024-11-28 08:26:25.198174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.080 [2024-11-28 08:26:25.198192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2219510 with addr=10.0.0.2, port=4420 00:27:43.080 [2024-11-28 08:26:25.198200] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2219510 is same with the state(6) to be set 00:27:43.080 [2024-11-28 08:26:25.198375] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2219510 (9): Bad file descriptor 00:27:43.080 [2024-11-28 08:26:25.198549] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:43.080 [2024-11-28 08:26:25.198559] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:43.080 [2024-11-28 08:26:25.198566] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:43.080 [2024-11-28 08:26:25.198573] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:43.080 [2024-11-28 08:26:25.210622] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:43.080 [2024-11-28 08:26:25.211052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.080 [2024-11-28 08:26:25.211107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2219510 with addr=10.0.0.2, port=4420 00:27:43.080 [2024-11-28 08:26:25.211131] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2219510 is same with the state(6) to be set 00:27:43.080 [2024-11-28 08:26:25.211715] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2219510 (9): Bad file descriptor 00:27:43.080 [2024-11-28 08:26:25.211908] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:43.080 [2024-11-28 08:26:25.211916] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:43.080 [2024-11-28 08:26:25.211922] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:43.080 [2024-11-28 08:26:25.211928] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:43.080 [2024-11-28 08:26:25.223600] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:43.080 [2024-11-28 08:26:25.224033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.080 [2024-11-28 08:26:25.224079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2219510 with addr=10.0.0.2, port=4420 00:27:43.080 [2024-11-28 08:26:25.224103] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2219510 is same with the state(6) to be set 00:27:43.080 [2024-11-28 08:26:25.224685] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2219510 (9): Bad file descriptor 00:27:43.080 [2024-11-28 08:26:25.224877] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:43.080 [2024-11-28 08:26:25.224885] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:43.080 [2024-11-28 08:26:25.224892] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:43.080 [2024-11-28 08:26:25.224898] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:43.080 [2024-11-28 08:26:25.236475] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:43.080 [2024-11-28 08:26:25.236902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.080 [2024-11-28 08:26:25.236919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2219510 with addr=10.0.0.2, port=4420 00:27:43.080 [2024-11-28 08:26:25.236926] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2219510 is same with the state(6) to be set 00:27:43.080 [2024-11-28 08:26:25.237120] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2219510 (9): Bad file descriptor 00:27:43.080 [2024-11-28 08:26:25.237296] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:43.080 [2024-11-28 08:26:25.237306] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:43.080 [2024-11-28 08:26:25.237312] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:43.080 [2024-11-28 08:26:25.237319] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:43.080 [2024-11-28 08:26:25.249320] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:43.080 [2024-11-28 08:26:25.249741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.080 [2024-11-28 08:26:25.249791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2219510 with addr=10.0.0.2, port=4420 00:27:43.081 [2024-11-28 08:26:25.249815] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2219510 is same with the state(6) to be set 00:27:43.081 [2024-11-28 08:26:25.250423] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2219510 (9): Bad file descriptor 00:27:43.081 [2024-11-28 08:26:25.251022] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:43.081 [2024-11-28 08:26:25.251048] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:43.081 [2024-11-28 08:26:25.251071] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:43.081 [2024-11-28 08:26:25.251077] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:43.081 [2024-11-28 08:26:25.262268] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:43.081 [2024-11-28 08:26:25.262701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.081 [2024-11-28 08:26:25.262745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2219510 with addr=10.0.0.2, port=4420 00:27:43.081 [2024-11-28 08:26:25.262768] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2219510 is same with the state(6) to be set 00:27:43.081 [2024-11-28 08:26:25.263251] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2219510 (9): Bad file descriptor 00:27:43.081 [2024-11-28 08:26:25.263427] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:43.081 [2024-11-28 08:26:25.263438] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:43.081 [2024-11-28 08:26:25.263444] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:43.081 [2024-11-28 08:26:25.263452] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:43.081 [2024-11-28 08:26:25.275134] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:43.081 [2024-11-28 08:26:25.275553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.081 [2024-11-28 08:26:25.275570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2219510 with addr=10.0.0.2, port=4420 00:27:43.081 [2024-11-28 08:26:25.275578] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2219510 is same with the state(6) to be set 00:27:43.081 [2024-11-28 08:26:25.275742] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2219510 (9): Bad file descriptor 00:27:43.081 [2024-11-28 08:26:25.275907] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:43.081 [2024-11-28 08:26:25.275915] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:43.081 [2024-11-28 08:26:25.275922] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:43.081 [2024-11-28 08:26:25.275928] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:43.081 [2024-11-28 08:26:25.288096] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:43.081 [2024-11-28 08:26:25.288521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.081 [2024-11-28 08:26:25.288539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2219510 with addr=10.0.0.2, port=4420 00:27:43.081 [2024-11-28 08:26:25.288547] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2219510 is same with the state(6) to be set 00:27:43.081 [2024-11-28 08:26:25.288713] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2219510 (9): Bad file descriptor 00:27:43.081 [2024-11-28 08:26:25.288877] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:43.081 [2024-11-28 08:26:25.288886] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:43.081 [2024-11-28 08:26:25.288898] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:43.081 [2024-11-28 08:26:25.288906] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:43.081 [2024-11-28 08:26:25.301217] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:43.081 [2024-11-28 08:26:25.301652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.081 [2024-11-28 08:26:25.301694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2219510 with addr=10.0.0.2, port=4420 00:27:43.081 [2024-11-28 08:26:25.301718] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2219510 is same with the state(6) to be set 00:27:43.081 [2024-11-28 08:26:25.302281] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2219510 (9): Bad file descriptor 00:27:43.081 [2024-11-28 08:26:25.302458] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:43.081 [2024-11-28 08:26:25.302468] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:43.081 [2024-11-28 08:26:25.302474] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:43.081 [2024-11-28 08:26:25.302480] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:43.081 [2024-11-28 08:26:25.314252] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:43.081 [2024-11-28 08:26:25.314679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.081 [2024-11-28 08:26:25.314723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2219510 with addr=10.0.0.2, port=4420 00:27:43.081 [2024-11-28 08:26:25.314747] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2219510 is same with the state(6) to be set 00:27:43.081 [2024-11-28 08:26:25.315262] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2219510 (9): Bad file descriptor 00:27:43.081 [2024-11-28 08:26:25.315437] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:43.081 [2024-11-28 08:26:25.315446] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:43.081 [2024-11-28 08:26:25.315452] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:43.081 [2024-11-28 08:26:25.315459] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:43.081 [2024-11-28 08:26:25.327069] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:43.081 [2024-11-28 08:26:25.327488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.081 [2024-11-28 08:26:25.327538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2219510 with addr=10.0.0.2, port=4420 00:27:43.081 [2024-11-28 08:26:25.327561] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2219510 is same with the state(6) to be set 00:27:43.081 [2024-11-28 08:26:25.328120] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2219510 (9): Bad file descriptor 00:27:43.081 [2024-11-28 08:26:25.328285] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:43.081 [2024-11-28 08:26:25.328293] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:43.081 [2024-11-28 08:26:25.328300] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:43.081 [2024-11-28 08:26:25.328306] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:43.081 [2024-11-28 08:26:25.340080] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:43.081 [2024-11-28 08:26:25.340518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.081 [2024-11-28 08:26:25.340536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2219510 with addr=10.0.0.2, port=4420 00:27:43.081 [2024-11-28 08:26:25.340545] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2219510 is same with the state(6) to be set 00:27:43.081 [2024-11-28 08:26:25.340724] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2219510 (9): Bad file descriptor 00:27:43.081 [2024-11-28 08:26:25.340903] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:43.081 [2024-11-28 08:26:25.340913] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:43.081 [2024-11-28 08:26:25.340920] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:43.081 [2024-11-28 08:26:25.340927] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:43.342 [2024-11-28 08:26:25.353190] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:43.342 [2024-11-28 08:26:25.353625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.342 [2024-11-28 08:26:25.353670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2219510 with addr=10.0.0.2, port=4420 00:27:43.342 [2024-11-28 08:26:25.353693] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2219510 is same with the state(6) to be set 00:27:43.342 [2024-11-28 08:26:25.354153] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2219510 (9): Bad file descriptor 00:27:43.342 [2024-11-28 08:26:25.354327] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:43.342 [2024-11-28 08:26:25.354336] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:43.342 [2024-11-28 08:26:25.354342] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:43.342 [2024-11-28 08:26:25.354348] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:43.342 [2024-11-28 08:26:25.366054] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:43.342 [2024-11-28 08:26:25.366480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.343 [2024-11-28 08:26:25.366498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2219510 with addr=10.0.0.2, port=4420 00:27:43.343 [2024-11-28 08:26:25.366506] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2219510 is same with the state(6) to be set 00:27:43.343 [2024-11-28 08:26:25.366670] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2219510 (9): Bad file descriptor 00:27:43.343 [2024-11-28 08:26:25.366835] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:43.343 [2024-11-28 08:26:25.366844] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:43.343 [2024-11-28 08:26:25.366850] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:43.343 [2024-11-28 08:26:25.366856] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:43.343 [2024-11-28 08:26:25.378901] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:43.343 [2024-11-28 08:26:25.379303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.343 [2024-11-28 08:26:25.379324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2219510 with addr=10.0.0.2, port=4420 00:27:43.343 [2024-11-28 08:26:25.379332] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2219510 is same with the state(6) to be set 00:27:43.343 [2024-11-28 08:26:25.379496] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2219510 (9): Bad file descriptor 00:27:43.343 [2024-11-28 08:26:25.379686] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:43.343 [2024-11-28 08:26:25.379697] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:43.343 [2024-11-28 08:26:25.379703] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:43.343 [2024-11-28 08:26:25.379710] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:43.343 [2024-11-28 08:26:25.391850] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:43.343 [2024-11-28 08:26:25.392258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.343 [2024-11-28 08:26:25.392275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2219510 with addr=10.0.0.2, port=4420 00:27:43.343 [2024-11-28 08:26:25.392283] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2219510 is same with the state(6) to be set 00:27:43.343 [2024-11-28 08:26:25.392447] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2219510 (9): Bad file descriptor 00:27:43.343 [2024-11-28 08:26:25.392611] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:43.343 [2024-11-28 08:26:25.392621] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:43.343 [2024-11-28 08:26:25.392627] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:43.343 [2024-11-28 08:26:25.392634] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:43.343 [2024-11-28 08:26:25.404692] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:43.343 [2024-11-28 08:26:25.405116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.343 [2024-11-28 08:26:25.405134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2219510 with addr=10.0.0.2, port=4420 00:27:43.343 [2024-11-28 08:26:25.405142] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2219510 is same with the state(6) to be set 00:27:43.343 [2024-11-28 08:26:25.405307] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2219510 (9): Bad file descriptor 00:27:43.343 [2024-11-28 08:26:25.405473] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:43.343 [2024-11-28 08:26:25.405482] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:43.343 [2024-11-28 08:26:25.405489] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:43.343 [2024-11-28 08:26:25.405496] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:43.343 [2024-11-28 08:26:25.417741] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:43.343 [2024-11-28 08:26:25.418180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.343 [2024-11-28 08:26:25.418198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2219510 with addr=10.0.0.2, port=4420 00:27:43.343 [2024-11-28 08:26:25.418206] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2219510 is same with the state(6) to be set 00:27:43.343 [2024-11-28 08:26:25.418384] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2219510 (9): Bad file descriptor 00:27:43.343 [2024-11-28 08:26:25.418561] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:43.343 [2024-11-28 08:26:25.418570] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:43.343 [2024-11-28 08:26:25.418577] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:43.343 [2024-11-28 08:26:25.418584] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:43.343 [2024-11-28 08:26:25.430845] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:43.343 [2024-11-28 08:26:25.431281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.343 [2024-11-28 08:26:25.431325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2219510 with addr=10.0.0.2, port=4420 00:27:43.343 [2024-11-28 08:26:25.431349] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2219510 is same with the state(6) to be set 00:27:43.343 [2024-11-28 08:26:25.431933] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2219510 (9): Bad file descriptor 00:27:43.343 [2024-11-28 08:26:25.432527] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:43.343 [2024-11-28 08:26:25.432538] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:43.343 [2024-11-28 08:26:25.432544] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:43.343 [2024-11-28 08:26:25.432551] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:43.343 [2024-11-28 08:26:25.443989] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:43.343 [2024-11-28 08:26:25.444407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.343 [2024-11-28 08:26:25.444425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2219510 with addr=10.0.0.2, port=4420 00:27:43.343 [2024-11-28 08:26:25.444433] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2219510 is same with the state(6) to be set 00:27:43.343 [2024-11-28 08:26:25.444613] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2219510 (9): Bad file descriptor 00:27:43.343 [2024-11-28 08:26:25.444793] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:43.343 [2024-11-28 08:26:25.444804] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:43.343 [2024-11-28 08:26:25.444810] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:43.343 [2024-11-28 08:26:25.444817] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:43.343 [2024-11-28 08:26:25.457045] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:43.343 [2024-11-28 08:26:25.457439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.343 [2024-11-28 08:26:25.457482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2219510 with addr=10.0.0.2, port=4420 00:27:43.343 [2024-11-28 08:26:25.457505] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2219510 is same with the state(6) to be set 00:27:43.343 [2024-11-28 08:26:25.458102] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2219510 (9): Bad file descriptor 00:27:43.343 [2024-11-28 08:26:25.458603] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:43.343 [2024-11-28 08:26:25.458612] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:43.343 [2024-11-28 08:26:25.458623] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:43.343 [2024-11-28 08:26:25.458631] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:43.343 [2024-11-28 08:26:25.470094] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:43.343 [2024-11-28 08:26:25.470507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.343 [2024-11-28 08:26:25.470524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2219510 with addr=10.0.0.2, port=4420 00:27:43.343 [2024-11-28 08:26:25.470532] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2219510 is same with the state(6) to be set 00:27:43.343 [2024-11-28 08:26:25.470706] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2219510 (9): Bad file descriptor 00:27:43.343 [2024-11-28 08:26:25.470881] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:43.343 [2024-11-28 08:26:25.470890] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:43.343 [2024-11-28 08:26:25.470898] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:43.343 [2024-11-28 08:26:25.470905] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:43.343 [2024-11-28 08:26:25.483056] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:43.343 [2024-11-28 08:26:25.483483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.343 [2024-11-28 08:26:25.483499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2219510 with addr=10.0.0.2, port=4420 00:27:43.343 [2024-11-28 08:26:25.483506] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2219510 is same with the state(6) to be set 00:27:43.343 [2024-11-28 08:26:25.483669] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2219510 (9): Bad file descriptor 00:27:43.343 [2024-11-28 08:26:25.483835] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:43.343 [2024-11-28 08:26:25.483844] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:43.343 [2024-11-28 08:26:25.483850] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:43.343 [2024-11-28 08:26:25.483857] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:43.343 [2024-11-28 08:26:25.495896] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:43.344 [2024-11-28 08:26:25.496346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.344 [2024-11-28 08:26:25.496392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2219510 with addr=10.0.0.2, port=4420 00:27:43.344 [2024-11-28 08:26:25.496416] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2219510 is same with the state(6) to be set 00:27:43.344 [2024-11-28 08:26:25.497013] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2219510 (9): Bad file descriptor 00:27:43.344 [2024-11-28 08:26:25.497600] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:43.344 [2024-11-28 08:26:25.497625] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:43.344 [2024-11-28 08:26:25.497646] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:43.344 [2024-11-28 08:26:25.497664] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:43.344 [2024-11-28 08:26:25.508787] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:43.344 [2024-11-28 08:26:25.509217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.344 [2024-11-28 08:26:25.509236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2219510 with addr=10.0.0.2, port=4420 00:27:43.344 [2024-11-28 08:26:25.509244] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2219510 is same with the state(6) to be set 00:27:43.344 [2024-11-28 08:26:25.509408] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2219510 (9): Bad file descriptor 00:27:43.344 [2024-11-28 08:26:25.509573] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:43.344 [2024-11-28 08:26:25.509582] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:43.344 [2024-11-28 08:26:25.509589] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:43.344 [2024-11-28 08:26:25.509595] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:43.344 [2024-11-28 08:26:25.521802] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:43.344 [2024-11-28 08:26:25.522235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.344 [2024-11-28 08:26:25.522280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2219510 with addr=10.0.0.2, port=4420
00:27:43.344 [2024-11-28 08:26:25.522304] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2219510 is same with the state(6) to be set
00:27:43.344 [2024-11-28 08:26:25.522751] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2219510 (9): Bad file descriptor
00:27:43.344 [2024-11-28 08:26:25.522943] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:43.344 [2024-11-28 08:26:25.522956] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:43.344 [2024-11-28 08:26:25.522963] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:43.344 [2024-11-28 08:26:25.522971] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:43.344 [2024-11-28 08:26:25.534826] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:43.344 [2024-11-28 08:26:25.535180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.344 [2024-11-28 08:26:25.535198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2219510 with addr=10.0.0.2, port=4420
00:27:43.344 [2024-11-28 08:26:25.535205] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2219510 is same with the state(6) to be set
00:27:43.344 [2024-11-28 08:26:25.535369] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2219510 (9): Bad file descriptor
00:27:43.344 [2024-11-28 08:26:25.535534] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:43.344 [2024-11-28 08:26:25.535543] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:43.344 [2024-11-28 08:26:25.535549] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:43.344 [2024-11-28 08:26:25.535556] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:43.344 [2024-11-28 08:26:25.547758] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:43.344 [2024-11-28 08:26:25.548203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.344 [2024-11-28 08:26:25.548256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2219510 with addr=10.0.0.2, port=4420
00:27:43.344 [2024-11-28 08:26:25.548281] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2219510 is same with the state(6) to be set
00:27:43.344 [2024-11-28 08:26:25.548803] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2219510 (9): Bad file descriptor
00:27:43.344 [2024-11-28 08:26:25.548985] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:43.344 [2024-11-28 08:26:25.548995] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:43.344 [2024-11-28 08:26:25.549003] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:43.344 [2024-11-28 08:26:25.549010] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:43.344 [2024-11-28 08:26:25.560968] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:43.344 [2024-11-28 08:26:25.561411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.344 [2024-11-28 08:26:25.561454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2219510 with addr=10.0.0.2, port=4420
00:27:43.344 [2024-11-28 08:26:25.561477] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2219510 is same with the state(6) to be set
00:27:43.344 [2024-11-28 08:26:25.561982] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2219510 (9): Bad file descriptor
00:27:43.344 [2024-11-28 08:26:25.562239] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:43.344 [2024-11-28 08:26:25.562252] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:43.344 [2024-11-28 08:26:25.562263] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:43.344 [2024-11-28 08:26:25.562272] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:43.344 [2024-11-28 08:26:25.574527] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:43.344 [2024-11-28 08:26:25.574977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.344 [2024-11-28 08:26:25.575023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2219510 with addr=10.0.0.2, port=4420
00:27:43.344 [2024-11-28 08:26:25.575046] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2219510 is same with the state(6) to be set
00:27:43.344 [2024-11-28 08:26:25.575630] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2219510 (9): Bad file descriptor
00:27:43.344 [2024-11-28 08:26:25.576031] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:43.344 [2024-11-28 08:26:25.576041] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:43.344 [2024-11-28 08:26:25.576048] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:43.344 [2024-11-28 08:26:25.576054] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:43.344 [2024-11-28 08:26:25.587499] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:43.344 [2024-11-28 08:26:25.587939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.344 [2024-11-28 08:26:25.587996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2219510 with addr=10.0.0.2, port=4420
00:27:43.344 [2024-11-28 08:26:25.588020] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2219510 is same with the state(6) to be set
00:27:43.344 [2024-11-28 08:26:25.588581] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2219510 (9): Bad file descriptor
00:27:43.344 [2024-11-28 08:26:25.588748] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:43.344 [2024-11-28 08:26:25.588757] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:43.344 [2024-11-28 08:26:25.588763] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:43.344 [2024-11-28 08:26:25.588770] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:43.344 [2024-11-28 08:26:25.600486] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:43.344 [2024-11-28 08:26:25.600842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.344 [2024-11-28 08:26:25.600859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2219510 with addr=10.0.0.2, port=4420
00:27:43.344 [2024-11-28 08:26:25.600867] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2219510 is same with the state(6) to be set
00:27:43.344 [2024-11-28 08:26:25.601057] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2219510 (9): Bad file descriptor
00:27:43.344 [2024-11-28 08:26:25.601231] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:43.344 [2024-11-28 08:26:25.601241] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:43.344 [2024-11-28 08:26:25.601247] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:43.344 [2024-11-28 08:26:25.601254] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:43.606 [2024-11-28 08:26:25.613603] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:43.606 [2024-11-28 08:26:25.614042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.606 [2024-11-28 08:26:25.614061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2219510 with addr=10.0.0.2, port=4420
00:27:43.606 [2024-11-28 08:26:25.614069] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2219510 is same with the state(6) to be set
00:27:43.606 [2024-11-28 08:26:25.614248] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2219510 (9): Bad file descriptor
00:27:43.606 [2024-11-28 08:26:25.614429] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:43.606 [2024-11-28 08:26:25.614439] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:43.606 [2024-11-28 08:26:25.614445] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:43.606 [2024-11-28 08:26:25.614452] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:43.606 [2024-11-28 08:26:25.626568] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:43.606 [2024-11-28 08:26:25.626992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.606 [2024-11-28 08:26:25.627010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2219510 with addr=10.0.0.2, port=4420
00:27:43.606 [2024-11-28 08:26:25.627018] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2219510 is same with the state(6) to be set
00:27:43.606 [2024-11-28 08:26:25.627183] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2219510 (9): Bad file descriptor
00:27:43.606 [2024-11-28 08:26:25.627347] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:43.606 [2024-11-28 08:26:25.627357] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:43.606 [2024-11-28 08:26:25.627368] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:43.606 [2024-11-28 08:26:25.627374] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:43.606 [2024-11-28 08:26:25.639513] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:43.606 [2024-11-28 08:26:25.639861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.606 [2024-11-28 08:26:25.639878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2219510 with addr=10.0.0.2, port=4420
00:27:43.606 [2024-11-28 08:26:25.639886] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2219510 is same with the state(6) to be set
00:27:43.606 [2024-11-28 08:26:25.640078] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2219510 (9): Bad file descriptor
00:27:43.606 [2024-11-28 08:26:25.640251] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:43.606 [2024-11-28 08:26:25.640261] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:43.606 [2024-11-28 08:26:25.640268] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:43.606 [2024-11-28 08:26:25.640274] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:43.606 [2024-11-28 08:26:25.652452] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:43.606 [2024-11-28 08:26:25.652882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.606 [2024-11-28 08:26:25.652926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2219510 with addr=10.0.0.2, port=4420
00:27:43.606 [2024-11-28 08:26:25.652962] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2219510 is same with the state(6) to be set
00:27:43.606 [2024-11-28 08:26:25.653547] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2219510 (9): Bad file descriptor
00:27:43.606 [2024-11-28 08:26:25.654140] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:43.606 [2024-11-28 08:26:25.654149] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:43.606 [2024-11-28 08:26:25.654157] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:43.606 [2024-11-28 08:26:25.654164] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:43.606 [2024-11-28 08:26:25.665402] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:43.606 [2024-11-28 08:26:25.665760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.606 [2024-11-28 08:26:25.665777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2219510 with addr=10.0.0.2, port=4420
00:27:43.606 [2024-11-28 08:26:25.665784] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2219510 is same with the state(6) to be set
00:27:43.606 [2024-11-28 08:26:25.665955] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2219510 (9): Bad file descriptor
00:27:43.606 [2024-11-28 08:26:25.666143] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:43.606 [2024-11-28 08:26:25.666152] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:43.606 [2024-11-28 08:26:25.666159] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:43.606 [2024-11-28 08:26:25.666165] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:43.606 [2024-11-28 08:26:25.678343] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:43.606 [2024-11-28 08:26:25.678770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.606 [2024-11-28 08:26:25.678787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2219510 with addr=10.0.0.2, port=4420
00:27:43.606 [2024-11-28 08:26:25.678794] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2219510 is same with the state(6) to be set
00:27:43.606 [2024-11-28 08:26:25.678964] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2219510 (9): Bad file descriptor
00:27:43.606 [2024-11-28 08:26:25.679153] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:43.606 [2024-11-28 08:26:25.679163] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:43.606 [2024-11-28 08:26:25.679169] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:43.606 [2024-11-28 08:26:25.679176] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:43.606 [2024-11-28 08:26:25.691271] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:43.606 [2024-11-28 08:26:25.691690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.606 [2024-11-28 08:26:25.691707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2219510 with addr=10.0.0.2, port=4420
00:27:43.606 [2024-11-28 08:26:25.691714] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2219510 is same with the state(6) to be set
00:27:43.606 [2024-11-28 08:26:25.691878] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2219510 (9): Bad file descriptor
00:27:43.606 [2024-11-28 08:26:25.692067] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:43.606 [2024-11-28 08:26:25.692078] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:43.606 [2024-11-28 08:26:25.692084] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:43.606 [2024-11-28 08:26:25.692091] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:43.606 [2024-11-28 08:26:25.704295] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:43.606 [2024-11-28 08:26:25.704728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.606 [2024-11-28 08:26:25.704771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2219510 with addr=10.0.0.2, port=4420
00:27:43.606 [2024-11-28 08:26:25.704795] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2219510 is same with the state(6) to be set
00:27:43.606 [2024-11-28 08:26:25.705231] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2219510 (9): Bad file descriptor
00:27:43.606 [2024-11-28 08:26:25.705407] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:43.606 [2024-11-28 08:26:25.705415] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:43.606 [2024-11-28 08:26:25.705422] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:43.606 [2024-11-28 08:26:25.705428] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:43.606 [2024-11-28 08:26:25.717210] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:43.606 [2024-11-28 08:26:25.717639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.606 [2024-11-28 08:26:25.717660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2219510 with addr=10.0.0.2, port=4420
00:27:43.606 [2024-11-28 08:26:25.717668] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2219510 is same with the state(6) to be set
00:27:43.606 [2024-11-28 08:26:25.717833] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2219510 (9): Bad file descriptor
00:27:43.607 [2024-11-28 08:26:25.718023] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:43.607 [2024-11-28 08:26:25.718034] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:43.607 [2024-11-28 08:26:25.718040] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:43.607 [2024-11-28 08:26:25.718047] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:43.607 [2024-11-28 08:26:25.730149] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:43.607 [2024-11-28 08:26:25.730572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.607 [2024-11-28 08:26:25.730616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2219510 with addr=10.0.0.2, port=4420
00:27:43.607 [2024-11-28 08:26:25.730641] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2219510 is same with the state(6) to be set
00:27:43.607 [2024-11-28 08:26:25.731161] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2219510 (9): Bad file descriptor
00:27:43.607 [2024-11-28 08:26:25.731336] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:43.607 [2024-11-28 08:26:25.731346] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:43.607 [2024-11-28 08:26:25.731353] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:43.607 [2024-11-28 08:26:25.731360] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:43.607 [2024-11-28 08:26:25.743100] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:43.607 [2024-11-28 08:26:25.743531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.607 [2024-11-28 08:26:25.743575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2219510 with addr=10.0.0.2, port=4420
00:27:43.607 [2024-11-28 08:26:25.743599] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2219510 is same with the state(6) to be set
00:27:43.607 [2024-11-28 08:26:25.744108] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2219510 (9): Bad file descriptor
00:27:43.607 [2024-11-28 08:26:25.744284] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:43.607 [2024-11-28 08:26:25.744293] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:43.607 [2024-11-28 08:26:25.744300] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:43.607 [2024-11-28 08:26:25.744306] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:43.607 [2024-11-28 08:26:25.756624] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:43.607 [2024-11-28 08:26:25.757003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.607 [2024-11-28 08:26:25.757048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2219510 with addr=10.0.0.2, port=4420
00:27:43.607 [2024-11-28 08:26:25.757071] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2219510 is same with the state(6) to be set
00:27:43.607 [2024-11-28 08:26:25.757495] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2219510 (9): Bad file descriptor
00:27:43.607 [2024-11-28 08:26:25.757665] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:43.607 [2024-11-28 08:26:25.757673] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:43.607 [2024-11-28 08:26:25.757679] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:43.607 [2024-11-28 08:26:25.757685] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:43.607 [2024-11-28 08:26:25.769501] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:43.607 [2024-11-28 08:26:25.769850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.607 [2024-11-28 08:26:25.769893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2219510 with addr=10.0.0.2, port=4420
00:27:43.607 [2024-11-28 08:26:25.769917] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2219510 is same with the state(6) to be set
00:27:43.607 [2024-11-28 08:26:25.770350] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2219510 (9): Bad file descriptor
00:27:43.607 [2024-11-28 08:26:25.770525] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:43.607 [2024-11-28 08:26:25.770535] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:43.607 [2024-11-28 08:26:25.770542] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:43.607 [2024-11-28 08:26:25.770548] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:43.607 [2024-11-28 08:26:25.782332] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:43.607 [2024-11-28 08:26:25.782752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.607 [2024-11-28 08:26:25.782804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2219510 with addr=10.0.0.2, port=4420
00:27:43.607 [2024-11-28 08:26:25.782827] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2219510 is same with the state(6) to be set
00:27:43.607 [2024-11-28 08:26:25.783386] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2219510 (9): Bad file descriptor
00:27:43.607 [2024-11-28 08:26:25.783563] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:43.607 [2024-11-28 08:26:25.783573] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:43.607 [2024-11-28 08:26:25.783580] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:43.607 [2024-11-28 08:26:25.783586] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:43.607 [2024-11-28 08:26:25.795273] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:43.607 [2024-11-28 08:26:25.795703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.607 [2024-11-28 08:26:25.795747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2219510 with addr=10.0.0.2, port=4420
00:27:43.607 [2024-11-28 08:26:25.795770] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2219510 is same with the state(6) to be set
00:27:43.607 [2024-11-28 08:26:25.796226] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2219510 (9): Bad file descriptor
00:27:43.607 [2024-11-28 08:26:25.796392] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:43.607 [2024-11-28 08:26:25.796402] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:43.607 [2024-11-28 08:26:25.796412] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:43.607 [2024-11-28 08:26:25.796419] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:43.607 [2024-11-28 08:26:25.808209] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:43.607 [2024-11-28 08:26:25.808673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.607 [2024-11-28 08:26:25.808717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2219510 with addr=10.0.0.2, port=4420
00:27:43.607 [2024-11-28 08:26:25.808740] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2219510 is same with the state(6) to be set
00:27:43.607 [2024-11-28 08:26:25.809269] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2219510 (9): Bad file descriptor
00:27:43.607 [2024-11-28 08:26:25.809446] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:43.607 [2024-11-28 08:26:25.809456] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:43.607 [2024-11-28 08:26:25.809462] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:43.608 [2024-11-28 08:26:25.809468] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:43.608 [2024-11-28 08:26:25.821426] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:43.608 [2024-11-28 08:26:25.821838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.608 [2024-11-28 08:26:25.821855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2219510 with addr=10.0.0.2, port=4420
00:27:43.608 [2024-11-28 08:26:25.821864] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2219510 is same with the state(6) to be set
00:27:43.608 [2024-11-28 08:26:25.822049] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2219510 (9): Bad file descriptor
00:27:43.608 [2024-11-28 08:26:25.822228] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:43.608 [2024-11-28 08:26:25.822238] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:43.608 [2024-11-28 08:26:25.822246] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:43.608 [2024-11-28 08:26:25.822252] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:43.608 [2024-11-28 08:26:25.834628] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:43.608 [2024-11-28 08:26:25.835065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.608 [2024-11-28 08:26:25.835083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2219510 with addr=10.0.0.2, port=4420 00:27:43.608 [2024-11-28 08:26:25.835092] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2219510 is same with the state(6) to be set 00:27:43.608 [2024-11-28 08:26:25.835271] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2219510 (9): Bad file descriptor 00:27:43.608 [2024-11-28 08:26:25.835451] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:43.608 [2024-11-28 08:26:25.835461] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:43.608 [2024-11-28 08:26:25.835468] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:43.608 [2024-11-28 08:26:25.835474] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:43.608 [2024-11-28 08:26:25.847658] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:43.608 [2024-11-28 08:26:25.848045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.608 [2024-11-28 08:26:25.848064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2219510 with addr=10.0.0.2, port=4420 00:27:43.608 [2024-11-28 08:26:25.848072] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2219510 is same with the state(6) to be set 00:27:43.608 [2024-11-28 08:26:25.848246] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2219510 (9): Bad file descriptor 00:27:43.608 [2024-11-28 08:26:25.848420] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:43.608 [2024-11-28 08:26:25.848430] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:43.608 [2024-11-28 08:26:25.848436] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:43.608 [2024-11-28 08:26:25.848443] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:43.608 [2024-11-28 08:26:25.860766] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:43.608 [2024-11-28 08:26:25.861087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.608 [2024-11-28 08:26:25.861105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2219510 with addr=10.0.0.2, port=4420 00:27:43.608 [2024-11-28 08:26:25.861112] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2219510 is same with the state(6) to be set 00:27:43.608 [2024-11-28 08:26:25.861286] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2219510 (9): Bad file descriptor 00:27:43.608 [2024-11-28 08:26:25.861463] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:43.608 [2024-11-28 08:26:25.861473] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:43.608 [2024-11-28 08:26:25.861480] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:43.608 [2024-11-28 08:26:25.861486] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:43.868 [2024-11-28 08:26:25.873901] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:43.868 [2024-11-28 08:26:25.874351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.868 [2024-11-28 08:26:25.874396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2219510 with addr=10.0.0.2, port=4420 00:27:43.868 [2024-11-28 08:26:25.874419] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2219510 is same with the state(6) to be set 00:27:43.868 [2024-11-28 08:26:25.875019] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2219510 (9): Bad file descriptor 00:27:43.868 [2024-11-28 08:26:25.875600] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:43.868 [2024-11-28 08:26:25.875613] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:43.868 [2024-11-28 08:26:25.875623] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:43.868 [2024-11-28 08:26:25.875633] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:43.868 [2024-11-28 08:26:25.887761] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:43.868 [2024-11-28 08:26:25.888179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.868 [2024-11-28 08:26:25.888200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2219510 with addr=10.0.0.2, port=4420 00:27:43.868 [2024-11-28 08:26:25.888209] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2219510 is same with the state(6) to be set 00:27:43.868 [2024-11-28 08:26:25.888383] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2219510 (9): Bad file descriptor 00:27:43.868 [2024-11-28 08:26:25.888558] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:43.868 [2024-11-28 08:26:25.888569] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:43.868 [2024-11-28 08:26:25.888575] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:43.868 [2024-11-28 08:26:25.888581] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:43.868 5585.60 IOPS, 21.82 MiB/s [2024-11-28T07:26:26.137Z] [2024-11-28 08:26:25.902027] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:43.868 [2024-11-28 08:26:25.902380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.869 [2024-11-28 08:26:25.902398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2219510 with addr=10.0.0.2, port=4420 00:27:43.869 [2024-11-28 08:26:25.902406] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2219510 is same with the state(6) to be set 00:27:43.869 [2024-11-28 08:26:25.902579] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2219510 (9): Bad file descriptor 00:27:43.869 [2024-11-28 08:26:25.902754] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:43.869 [2024-11-28 08:26:25.902764] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:43.869 [2024-11-28 08:26:25.902770] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:43.869 [2024-11-28 08:26:25.902777] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:43.869 [2024-11-28 08:26:25.914957] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:43.869 [2024-11-28 08:26:25.915336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.869 [2024-11-28 08:26:25.915353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2219510 with addr=10.0.0.2, port=4420 00:27:43.869 [2024-11-28 08:26:25.915361] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2219510 is same with the state(6) to be set 00:27:43.869 [2024-11-28 08:26:25.915525] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2219510 (9): Bad file descriptor 00:27:43.869 [2024-11-28 08:26:25.915690] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:43.869 [2024-11-28 08:26:25.915700] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:43.869 [2024-11-28 08:26:25.915706] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:43.869 [2024-11-28 08:26:25.915712] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:43.869 [2024-11-28 08:26:25.927987] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:43.869 [2024-11-28 08:26:25.928273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.869 [2024-11-28 08:26:25.928290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2219510 with addr=10.0.0.2, port=4420 00:27:43.869 [2024-11-28 08:26:25.928298] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2219510 is same with the state(6) to be set 00:27:43.869 [2024-11-28 08:26:25.928468] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2219510 (9): Bad file descriptor 00:27:43.869 [2024-11-28 08:26:25.928633] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:43.869 [2024-11-28 08:26:25.928643] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:43.869 [2024-11-28 08:26:25.928649] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:43.869 [2024-11-28 08:26:25.928655] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:43.869 [2024-11-28 08:26:25.940982] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:43.869 [2024-11-28 08:26:25.941310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.869 [2024-11-28 08:26:25.941327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2219510 with addr=10.0.0.2, port=4420 00:27:43.869 [2024-11-28 08:26:25.941335] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2219510 is same with the state(6) to be set 00:27:43.869 [2024-11-28 08:26:25.941498] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2219510 (9): Bad file descriptor 00:27:43.869 [2024-11-28 08:26:25.941663] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:43.869 [2024-11-28 08:26:25.941672] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:43.869 [2024-11-28 08:26:25.941678] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:43.869 [2024-11-28 08:26:25.941685] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:43.869 [2024-11-28 08:26:25.954009] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:43.869 [2024-11-28 08:26:25.954338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.869 [2024-11-28 08:26:25.954380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2219510 with addr=10.0.0.2, port=4420 00:27:43.869 [2024-11-28 08:26:25.954405] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2219510 is same with the state(6) to be set 00:27:43.869 [2024-11-28 08:26:25.954934] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2219510 (9): Bad file descriptor 00:27:43.869 [2024-11-28 08:26:25.955131] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:43.869 [2024-11-28 08:26:25.955142] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:43.869 [2024-11-28 08:26:25.955149] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:43.869 [2024-11-28 08:26:25.955155] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:43.869 [2024-11-28 08:26:25.967004] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:43.869 [2024-11-28 08:26:25.967292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.869 [2024-11-28 08:26:25.967310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2219510 with addr=10.0.0.2, port=4420 00:27:43.869 [2024-11-28 08:26:25.967317] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2219510 is same with the state(6) to be set 00:27:43.869 [2024-11-28 08:26:25.967491] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2219510 (9): Bad file descriptor 00:27:43.869 [2024-11-28 08:26:25.967666] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:43.869 [2024-11-28 08:26:25.967679] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:43.869 [2024-11-28 08:26:25.967686] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:43.869 [2024-11-28 08:26:25.967693] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:43.869 [2024-11-28 08:26:25.980127] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:43.869 [2024-11-28 08:26:25.980486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.869 [2024-11-28 08:26:25.980503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2219510 with addr=10.0.0.2, port=4420 00:27:43.869 [2024-11-28 08:26:25.980512] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2219510 is same with the state(6) to be set 00:27:43.869 [2024-11-28 08:26:25.980686] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2219510 (9): Bad file descriptor 00:27:43.869 [2024-11-28 08:26:25.980861] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:43.869 [2024-11-28 08:26:25.980871] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:43.869 [2024-11-28 08:26:25.980879] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:43.869 [2024-11-28 08:26:25.980885] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:43.869 [2024-11-28 08:26:25.993318] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:43.869 [2024-11-28 08:26:25.993685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.869 [2024-11-28 08:26:25.993703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2219510 with addr=10.0.0.2, port=4420 00:27:43.869 [2024-11-28 08:26:25.993712] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2219510 is same with the state(6) to be set 00:27:43.869 [2024-11-28 08:26:25.993892] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2219510 (9): Bad file descriptor 00:27:43.869 [2024-11-28 08:26:25.994081] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:43.869 [2024-11-28 08:26:25.994094] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:43.869 [2024-11-28 08:26:25.994101] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:43.869 [2024-11-28 08:26:25.994108] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:43.869 [2024-11-28 08:26:26.006413] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:43.869 [2024-11-28 08:26:26.006782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.869 [2024-11-28 08:26:26.006800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2219510 with addr=10.0.0.2, port=4420 00:27:43.869 [2024-11-28 08:26:26.006808] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2219510 is same with the state(6) to be set 00:27:43.869 [2024-11-28 08:26:26.006995] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2219510 (9): Bad file descriptor 00:27:43.869 [2024-11-28 08:26:26.007176] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:43.869 [2024-11-28 08:26:26.007186] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:43.869 [2024-11-28 08:26:26.007194] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:43.869 [2024-11-28 08:26:26.007200] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:43.869 [2024-11-28 08:26:26.019498] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:43.869 [2024-11-28 08:26:26.019911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.869 [2024-11-28 08:26:26.019928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2219510 with addr=10.0.0.2, port=4420 00:27:43.869 [2024-11-28 08:26:26.019936] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2219510 is same with the state(6) to be set 00:27:43.869 [2024-11-28 08:26:26.020123] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2219510 (9): Bad file descriptor 00:27:43.869 [2024-11-28 08:26:26.020305] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:43.870 [2024-11-28 08:26:26.020315] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:43.870 [2024-11-28 08:26:26.020322] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:43.870 [2024-11-28 08:26:26.020329] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:43.870 [2024-11-28 08:26:26.032624] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:43.870 [2024-11-28 08:26:26.033083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.870 [2024-11-28 08:26:26.033102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2219510 with addr=10.0.0.2, port=4420 00:27:43.870 [2024-11-28 08:26:26.033110] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2219510 is same with the state(6) to be set 00:27:43.870 [2024-11-28 08:26:26.033302] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2219510 (9): Bad file descriptor 00:27:43.870 [2024-11-28 08:26:26.033482] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:43.870 [2024-11-28 08:26:26.033492] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:43.870 [2024-11-28 08:26:26.033499] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:43.870 [2024-11-28 08:26:26.033506] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:43.870 [2024-11-28 08:26:26.045847] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:43.870 [2024-11-28 08:26:26.046285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.870 [2024-11-28 08:26:26.046303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2219510 with addr=10.0.0.2, port=4420 00:27:43.870 [2024-11-28 08:26:26.046311] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2219510 is same with the state(6) to be set 00:27:43.870 [2024-11-28 08:26:26.046488] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2219510 (9): Bad file descriptor 00:27:43.870 [2024-11-28 08:26:26.046667] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:43.870 [2024-11-28 08:26:26.046677] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:43.870 [2024-11-28 08:26:26.046684] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:43.870 [2024-11-28 08:26:26.046690] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:43.870 [2024-11-28 08:26:26.058970] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:43.870 [2024-11-28 08:26:26.059346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.870 [2024-11-28 08:26:26.059368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2219510 with addr=10.0.0.2, port=4420 00:27:43.870 [2024-11-28 08:26:26.059376] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2219510 is same with the state(6) to be set 00:27:43.870 [2024-11-28 08:26:26.059557] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2219510 (9): Bad file descriptor 00:27:43.870 [2024-11-28 08:26:26.059738] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:43.870 [2024-11-28 08:26:26.059748] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:43.870 [2024-11-28 08:26:26.059757] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:43.870 [2024-11-28 08:26:26.059765] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:43.870 [2024-11-28 08:26:26.072054] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:43.870 [2024-11-28 08:26:26.072496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.870 [2024-11-28 08:26:26.072515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2219510 with addr=10.0.0.2, port=4420 00:27:43.870 [2024-11-28 08:26:26.072523] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2219510 is same with the state(6) to be set 00:27:43.870 [2024-11-28 08:26:26.072702] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2219510 (9): Bad file descriptor 00:27:43.870 [2024-11-28 08:26:26.072881] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:43.870 [2024-11-28 08:26:26.072891] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:43.870 [2024-11-28 08:26:26.072898] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:43.870 [2024-11-28 08:26:26.072905] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:43.870 [2024-11-28 08:26:26.085186] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:43.870 [2024-11-28 08:26:26.085608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.870 [2024-11-28 08:26:26.085626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2219510 with addr=10.0.0.2, port=4420 00:27:43.870 [2024-11-28 08:26:26.085634] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2219510 is same with the state(6) to be set 00:27:43.870 [2024-11-28 08:26:26.085812] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2219510 (9): Bad file descriptor 00:27:43.870 [2024-11-28 08:26:26.085998] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:43.870 [2024-11-28 08:26:26.086009] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:43.870 [2024-11-28 08:26:26.086015] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:43.870 [2024-11-28 08:26:26.086022] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:43.870 [2024-11-28 08:26:26.098364] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:43.870 [2024-11-28 08:26:26.098737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.870 [2024-11-28 08:26:26.098755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2219510 with addr=10.0.0.2, port=4420 00:27:43.870 [2024-11-28 08:26:26.098764] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2219510 is same with the state(6) to be set 00:27:43.870 [2024-11-28 08:26:26.098952] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2219510 (9): Bad file descriptor 00:27:43.870 [2024-11-28 08:26:26.099133] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:43.870 [2024-11-28 08:26:26.099143] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:43.870 [2024-11-28 08:26:26.099150] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:43.870 [2024-11-28 08:26:26.099157] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:43.870 [2024-11-28 08:26:26.111442] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:43.870 [2024-11-28 08:26:26.111887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.870 [2024-11-28 08:26:26.111905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2219510 with addr=10.0.0.2, port=4420 00:27:43.870 [2024-11-28 08:26:26.111913] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2219510 is same with the state(6) to be set 00:27:43.870 [2024-11-28 08:26:26.112099] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2219510 (9): Bad file descriptor 00:27:43.870 [2024-11-28 08:26:26.112280] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:43.870 [2024-11-28 08:26:26.112290] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:43.870 [2024-11-28 08:26:26.112297] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:43.870 [2024-11-28 08:26:26.112303] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:43.870 [2024-11-28 08:26:26.124578] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:43.870 [2024-11-28 08:26:26.125020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:43.870 [2024-11-28 08:26:26.125038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2219510 with addr=10.0.0.2, port=4420
00:27:43.870 [2024-11-28 08:26:26.125046] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2219510 is same with the state(6) to be set
00:27:43.870 [2024-11-28 08:26:26.125225] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2219510 (9): Bad file descriptor
00:27:43.870 [2024-11-28 08:26:26.125406] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:43.870 [2024-11-28 08:26:26.125415] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:43.870 [2024-11-28 08:26:26.125423] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:43.870 [2024-11-28 08:26:26.125429] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:44.131 [2024-11-28 08:26:26.137730] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:44.131 [2024-11-28 08:26:26.138091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.131 [2024-11-28 08:26:26.138109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2219510 with addr=10.0.0.2, port=4420
00:27:44.132 [2024-11-28 08:26:26.138117] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2219510 is same with the state(6) to be set
00:27:44.132 [2024-11-28 08:26:26.138296] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2219510 (9): Bad file descriptor
00:27:44.132 [2024-11-28 08:26:26.138475] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:44.132 [2024-11-28 08:26:26.138488] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:44.132 [2024-11-28 08:26:26.138495] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:44.132 [2024-11-28 08:26:26.138502] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:44.132 [2024-11-28 08:26:26.150944] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:44.132 [2024-11-28 08:26:26.151369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.132 [2024-11-28 08:26:26.151386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2219510 with addr=10.0.0.2, port=4420
00:27:44.132 [2024-11-28 08:26:26.151393] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2219510 is same with the state(6) to be set
00:27:44.132 [2024-11-28 08:26:26.151572] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2219510 (9): Bad file descriptor
00:27:44.132 [2024-11-28 08:26:26.151750] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:44.132 [2024-11-28 08:26:26.151761] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:44.132 [2024-11-28 08:26:26.151767] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:44.132 [2024-11-28 08:26:26.151774] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:44.132 [2024-11-28 08:26:26.164113] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:44.132 [2024-11-28 08:26:26.164551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.132 [2024-11-28 08:26:26.164569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2219510 with addr=10.0.0.2, port=4420
00:27:44.132 [2024-11-28 08:26:26.164577] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2219510 is same with the state(6) to be set
00:27:44.132 [2024-11-28 08:26:26.164756] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2219510 (9): Bad file descriptor
00:27:44.132 [2024-11-28 08:26:26.164936] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:44.132 [2024-11-28 08:26:26.164945] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:44.132 [2024-11-28 08:26:26.164960] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:44.132 [2024-11-28 08:26:26.164967] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:44.132 [2024-11-28 08:26:26.177245] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:44.132 [2024-11-28 08:26:26.177685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.132 [2024-11-28 08:26:26.177702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2219510 with addr=10.0.0.2, port=4420
00:27:44.132 [2024-11-28 08:26:26.177710] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2219510 is same with the state(6) to be set
00:27:44.132 [2024-11-28 08:26:26.177888] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2219510 (9): Bad file descriptor
00:27:44.132 [2024-11-28 08:26:26.178074] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:44.132 [2024-11-28 08:26:26.178084] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:44.132 [2024-11-28 08:26:26.178091] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:44.132 [2024-11-28 08:26:26.178097] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:44.132 [2024-11-28 08:26:26.190375] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:44.132 [2024-11-28 08:26:26.190814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.132 [2024-11-28 08:26:26.190833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2219510 with addr=10.0.0.2, port=4420
00:27:44.132 [2024-11-28 08:26:26.190841] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2219510 is same with the state(6) to be set
00:27:44.132 [2024-11-28 08:26:26.191027] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2219510 (9): Bad file descriptor
00:27:44.132 [2024-11-28 08:26:26.191207] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:44.132 [2024-11-28 08:26:26.191217] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:44.132 [2024-11-28 08:26:26.191224] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:44.132 [2024-11-28 08:26:26.191230] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:44.132 [2024-11-28 08:26:26.203507] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:44.132 [2024-11-28 08:26:26.203877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.132 [2024-11-28 08:26:26.203895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2219510 with addr=10.0.0.2, port=4420
00:27:44.132 [2024-11-28 08:26:26.203903] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2219510 is same with the state(6) to be set
00:27:44.132 [2024-11-28 08:26:26.204085] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2219510 (9): Bad file descriptor
00:27:44.132 [2024-11-28 08:26:26.204267] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:44.132 [2024-11-28 08:26:26.204277] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:44.132 [2024-11-28 08:26:26.204284] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:44.132 [2024-11-28 08:26:26.204290] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:44.132 [2024-11-28 08:26:26.216736] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:44.132 [2024-11-28 08:26:26.217173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.132 [2024-11-28 08:26:26.217191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2219510 with addr=10.0.0.2, port=4420
00:27:44.132 [2024-11-28 08:26:26.217199] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2219510 is same with the state(6) to be set
00:27:44.132 [2024-11-28 08:26:26.217378] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2219510 (9): Bad file descriptor
00:27:44.132 [2024-11-28 08:26:26.217559] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:44.132 [2024-11-28 08:26:26.217569] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:44.132 [2024-11-28 08:26:26.217575] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:44.132 [2024-11-28 08:26:26.217582] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:44.132 [2024-11-28 08:26:26.229855] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:44.132 [2024-11-28 08:26:26.230294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.132 [2024-11-28 08:26:26.230316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2219510 with addr=10.0.0.2, port=4420
00:27:44.132 [2024-11-28 08:26:26.230324] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2219510 is same with the state(6) to be set
00:27:44.132 [2024-11-28 08:26:26.230508] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2219510 (9): Bad file descriptor
00:27:44.132 [2024-11-28 08:26:26.230694] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:44.132 [2024-11-28 08:26:26.230703] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:44.132 [2024-11-28 08:26:26.230710] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:44.132 [2024-11-28 08:26:26.230718] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:44.132 [2024-11-28 08:26:26.243080] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:44.132 [2024-11-28 08:26:26.243527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.132 [2024-11-28 08:26:26.243571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2219510 with addr=10.0.0.2, port=4420
00:27:44.132 [2024-11-28 08:26:26.243594] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2219510 is same with the state(6) to be set
00:27:44.132 [2024-11-28 08:26:26.244191] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2219510 (9): Bad file descriptor
00:27:44.132 [2024-11-28 08:26:26.244443] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:44.132 [2024-11-28 08:26:26.244452] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:44.132 [2024-11-28 08:26:26.244459] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:44.132 [2024-11-28 08:26:26.244466] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:44.132 [2024-11-28 08:26:26.256133] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:44.132 [2024-11-28 08:26:26.256569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.132 [2024-11-28 08:26:26.256613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2219510 with addr=10.0.0.2, port=4420
00:27:44.132 [2024-11-28 08:26:26.256636] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2219510 is same with the state(6) to be set
00:27:44.133 [2024-11-28 08:26:26.257232] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2219510 (9): Bad file descriptor
00:27:44.133 [2024-11-28 08:26:26.257413] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:44.133 [2024-11-28 08:26:26.257424] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:44.133 [2024-11-28 08:26:26.257430] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:44.133 [2024-11-28 08:26:26.257437] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:44.133 [2024-11-28 08:26:26.269009] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:44.133 [2024-11-28 08:26:26.269438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.133 [2024-11-28 08:26:26.269455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2219510 with addr=10.0.0.2, port=4420
00:27:44.133 [2024-11-28 08:26:26.269463] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2219510 is same with the state(6) to be set
00:27:44.133 [2024-11-28 08:26:26.269630] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2219510 (9): Bad file descriptor
00:27:44.133 [2024-11-28 08:26:26.269795] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:44.133 [2024-11-28 08:26:26.269805] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:44.133 [2024-11-28 08:26:26.269811] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:44.133 [2024-11-28 08:26:26.269818] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:44.133 [2024-11-28 08:26:26.281926] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:44.133 [2024-11-28 08:26:26.282335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.133 [2024-11-28 08:26:26.282352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2219510 with addr=10.0.0.2, port=4420
00:27:44.133 [2024-11-28 08:26:26.282360] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2219510 is same with the state(6) to be set
00:27:44.133 [2024-11-28 08:26:26.282524] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2219510 (9): Bad file descriptor
00:27:44.133 [2024-11-28 08:26:26.282690] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:44.133 [2024-11-28 08:26:26.282699] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:44.133 [2024-11-28 08:26:26.282706] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:44.133 [2024-11-28 08:26:26.282712] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:44.133 [2024-11-28 08:26:26.294773] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:44.133 [2024-11-28 08:26:26.295214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.133 [2024-11-28 08:26:26.295258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2219510 with addr=10.0.0.2, port=4420
00:27:44.133 [2024-11-28 08:26:26.295281] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2219510 is same with the state(6) to be set
00:27:44.133 [2024-11-28 08:26:26.295654] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2219510 (9): Bad file descriptor
00:27:44.133 [2024-11-28 08:26:26.295819] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:44.133 [2024-11-28 08:26:26.295828] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:44.133 [2024-11-28 08:26:26.295835] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:44.133 [2024-11-28 08:26:26.295841] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:44.133 [2024-11-28 08:26:26.307683] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:44.133 [2024-11-28 08:26:26.308085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.133 [2024-11-28 08:26:26.308102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2219510 with addr=10.0.0.2, port=4420
00:27:44.133 [2024-11-28 08:26:26.308109] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2219510 is same with the state(6) to be set
00:27:44.133 [2024-11-28 08:26:26.308274] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2219510 (9): Bad file descriptor
00:27:44.133 [2024-11-28 08:26:26.308439] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:44.133 [2024-11-28 08:26:26.308451] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:44.133 [2024-11-28 08:26:26.308458] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:44.133 [2024-11-28 08:26:26.308464] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:44.133 [2024-11-28 08:26:26.320575] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:44.133 [2024-11-28 08:26:26.320908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.133 [2024-11-28 08:26:26.320924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2219510 with addr=10.0.0.2, port=4420
00:27:44.133 [2024-11-28 08:26:26.320932] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2219510 is same with the state(6) to be set
00:27:44.133 [2024-11-28 08:26:26.321124] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2219510 (9): Bad file descriptor
00:27:44.133 [2024-11-28 08:26:26.321300] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:44.133 [2024-11-28 08:26:26.321310] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:44.133 [2024-11-28 08:26:26.321317] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:44.133 [2024-11-28 08:26:26.321324] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:44.133 [2024-11-28 08:26:26.333845] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:44.133 [2024-11-28 08:26:26.334224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.133 [2024-11-28 08:26:26.334242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2219510 with addr=10.0.0.2, port=4420
00:27:44.133 [2024-11-28 08:26:26.334249] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2219510 is same with the state(6) to be set
00:27:44.133 [2024-11-28 08:26:26.334423] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2219510 (9): Bad file descriptor
00:27:44.133 [2024-11-28 08:26:26.334599] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:44.133 [2024-11-28 08:26:26.334609] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:44.133 [2024-11-28 08:26:26.334616] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:44.133 [2024-11-28 08:26:26.334622] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:44.133 [2024-11-28 08:26:26.346884] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:44.133 [2024-11-28 08:26:26.347313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.133 [2024-11-28 08:26:26.347330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2219510 with addr=10.0.0.2, port=4420
00:27:44.133 [2024-11-28 08:26:26.347338] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2219510 is same with the state(6) to be set
00:27:44.133 [2024-11-28 08:26:26.347502] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2219510 (9): Bad file descriptor
00:27:44.133 [2024-11-28 08:26:26.347666] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:44.133 [2024-11-28 08:26:26.347676] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:44.133 [2024-11-28 08:26:26.347682] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:44.133 [2024-11-28 08:26:26.347688] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:44.133 [2024-11-28 08:26:26.359881] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:44.133 [2024-11-28 08:26:26.360267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.133 [2024-11-28 08:26:26.360285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2219510 with addr=10.0.0.2, port=4420
00:27:44.133 [2024-11-28 08:26:26.360292] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2219510 is same with the state(6) to be set
00:27:44.133 [2024-11-28 08:26:26.360456] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2219510 (9): Bad file descriptor
00:27:44.133 [2024-11-28 08:26:26.360621] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:44.133 [2024-11-28 08:26:26.360630] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:44.133 [2024-11-28 08:26:26.360636] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:44.133 [2024-11-28 08:26:26.360642] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:44.133 [2024-11-28 08:26:26.372749] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:44.133 [2024-11-28 08:26:26.373157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.133 [2024-11-28 08:26:26.373174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2219510 with addr=10.0.0.2, port=4420
00:27:44.133 [2024-11-28 08:26:26.373182] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2219510 is same with the state(6) to be set
00:27:44.133 [2024-11-28 08:26:26.373346] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2219510 (9): Bad file descriptor
00:27:44.133 [2024-11-28 08:26:26.373510] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:44.133 [2024-11-28 08:26:26.373520] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:44.133 [2024-11-28 08:26:26.373526] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:44.133 [2024-11-28 08:26:26.373532] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:44.133 [2024-11-28 08:26:26.385605] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:44.133 [2024-11-28 08:26:26.385961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.134 [2024-11-28 08:26:26.385979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2219510 with addr=10.0.0.2, port=4420
00:27:44.134 [2024-11-28 08:26:26.385986] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2219510 is same with the state(6) to be set
00:27:44.134 [2024-11-28 08:26:26.386150] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2219510 (9): Bad file descriptor
00:27:44.134 [2024-11-28 08:26:26.386315] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:44.134 [2024-11-28 08:26:26.386324] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:44.134 [2024-11-28 08:26:26.386331] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:44.134 [2024-11-28 08:26:26.386337] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:44.395 [2024-11-28 08:26:26.398744] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:44.395 [2024-11-28 08:26:26.399111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.395 [2024-11-28 08:26:26.399163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2219510 with addr=10.0.0.2, port=4420
00:27:44.395 [2024-11-28 08:26:26.399187] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2219510 is same with the state(6) to be set
00:27:44.395 [2024-11-28 08:26:26.399679] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2219510 (9): Bad file descriptor
00:27:44.395 [2024-11-28 08:26:26.399844] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:44.395 [2024-11-28 08:26:26.399854] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:44.395 [2024-11-28 08:26:26.399860] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:44.395 [2024-11-28 08:26:26.399866] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:44.395 [2024-11-28 08:26:26.411632] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:44.395 [2024-11-28 08:26:26.412034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.395 [2024-11-28 08:26:26.412052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2219510 with addr=10.0.0.2, port=4420
00:27:44.395 [2024-11-28 08:26:26.412060] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2219510 is same with the state(6) to be set
00:27:44.395 [2024-11-28 08:26:26.412223] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2219510 (9): Bad file descriptor
00:27:44.395 [2024-11-28 08:26:26.412390] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:44.395 [2024-11-28 08:26:26.412399] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:44.395 [2024-11-28 08:26:26.412405] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:44.395 [2024-11-28 08:26:26.412411] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:44.395 [2024-11-28 08:26:26.424495] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:44.395 [2024-11-28 08:26:26.424921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.395 [2024-11-28 08:26:26.424975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2219510 with addr=10.0.0.2, port=4420
00:27:44.395 [2024-11-28 08:26:26.425001] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2219510 is same with the state(6) to be set
00:27:44.395 [2024-11-28 08:26:26.425570] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2219510 (9): Bad file descriptor
00:27:44.395 [2024-11-28 08:26:26.425735] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:44.395 [2024-11-28 08:26:26.425745] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:44.395 [2024-11-28 08:26:26.425751] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:44.395 [2024-11-28 08:26:26.425757] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:44.395 [2024-11-28 08:26:26.437512] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:44.395 [2024-11-28 08:26:26.437944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.395 [2024-11-28 08:26:26.438003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2219510 with addr=10.0.0.2, port=4420
00:27:44.395 [2024-11-28 08:26:26.438027] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2219510 is same with the state(6) to be set
00:27:44.395 [2024-11-28 08:26:26.438618] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2219510 (9): Bad file descriptor
00:27:44.395 [2024-11-28 08:26:26.439121] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:44.395 [2024-11-28 08:26:26.439131] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:44.396 [2024-11-28 08:26:26.439137] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:44.396 [2024-11-28 08:26:26.439143] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:44.396 [2024-11-28 08:26:26.450426] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:44.396 [2024-11-28 08:26:26.450855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:44.396 [2024-11-28 08:26:26.450872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2219510 with addr=10.0.0.2, port=4420
00:27:44.396 [2024-11-28 08:26:26.450879] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2219510 is same with the state(6) to be set
00:27:44.396 [2024-11-28 08:26:26.451069] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2219510 (9): Bad file descriptor
00:27:44.396 [2024-11-28 08:26:26.451245] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:44.396 [2024-11-28 08:26:26.451255] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:44.396 [2024-11-28 08:26:26.451263] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:44.396 [2024-11-28 08:26:26.451269] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:44.396 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 1513026 Killed "${NVMF_APP[@]}" "$@" 00:27:44.396 08:26:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init 00:27:44.396 08:26:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:27:44.396 08:26:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:44.396 [2024-11-28 08:26:26.463592] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:44.396 08:26:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:44.396 08:26:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:44.396 [2024-11-28 08:26:26.463961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.396 [2024-11-28 08:26:26.463980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2219510 with addr=10.0.0.2, port=4420 00:27:44.396 [2024-11-28 08:26:26.463988] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2219510 is same with the state(6) to be set 00:27:44.396 [2024-11-28 08:26:26.464167] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2219510 (9): Bad file descriptor 00:27:44.396 [2024-11-28 08:26:26.464347] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:44.396 [2024-11-28 08:26:26.464356] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:44.396 [2024-11-28 08:26:26.464365] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 
00:27:44.396 [2024-11-28 08:26:26.464372] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:44.396 08:26:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=1514426 00:27:44.396 08:26:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 1514426 00:27:44.396 08:26:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:27:44.396 08:26:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 1514426 ']' 00:27:44.396 08:26:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:44.396 08:26:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:44.396 08:26:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:44.396 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:27:44.396 08:26:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:44.396 08:26:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:44.396 [2024-11-28 08:26:26.476668] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:44.396 [2024-11-28 08:26:26.477063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.396 [2024-11-28 08:26:26.477081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2219510 with addr=10.0.0.2, port=4420 00:27:44.396 [2024-11-28 08:26:26.477090] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2219510 is same with the state(6) to be set 00:27:44.396 [2024-11-28 08:26:26.477269] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2219510 (9): Bad file descriptor 00:27:44.396 [2024-11-28 08:26:26.477449] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:44.396 [2024-11-28 08:26:26.477459] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:44.396 [2024-11-28 08:26:26.477466] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:44.396 [2024-11-28 08:26:26.477474] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:44.396 [2024-11-28 08:26:26.489749] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:44.396 [2024-11-28 08:26:26.490098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.396 [2024-11-28 08:26:26.490117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2219510 with addr=10.0.0.2, port=4420 00:27:44.396 [2024-11-28 08:26:26.490125] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2219510 is same with the state(6) to be set 00:27:44.396 [2024-11-28 08:26:26.490304] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2219510 (9): Bad file descriptor 00:27:44.396 [2024-11-28 08:26:26.490483] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:44.396 [2024-11-28 08:26:26.490492] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:44.396 [2024-11-28 08:26:26.490498] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:44.396 [2024-11-28 08:26:26.490505] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:44.396 [2024-11-28 08:26:26.502970] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:44.396 [2024-11-28 08:26:26.503411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.396 [2024-11-28 08:26:26.503429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2219510 with addr=10.0.0.2, port=4420 00:27:44.396 [2024-11-28 08:26:26.503437] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2219510 is same with the state(6) to be set 00:27:44.396 [2024-11-28 08:26:26.503616] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2219510 (9): Bad file descriptor 00:27:44.396 [2024-11-28 08:26:26.503801] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:44.396 [2024-11-28 08:26:26.503811] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:44.396 [2024-11-28 08:26:26.503818] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:44.396 [2024-11-28 08:26:26.503825] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:44.396 [2024-11-28 08:26:26.516107] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:44.396 [2024-11-28 08:26:26.516539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.396 [2024-11-28 08:26:26.516556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2219510 with addr=10.0.0.2, port=4420 00:27:44.396 [2024-11-28 08:26:26.516564] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2219510 is same with the state(6) to be set 00:27:44.396 [2024-11-28 08:26:26.516738] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2219510 (9): Bad file descriptor 00:27:44.396 [2024-11-28 08:26:26.516913] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:44.396 [2024-11-28 08:26:26.516923] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:44.396 [2024-11-28 08:26:26.516929] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:44.396 [2024-11-28 08:26:26.516936] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:44.397 [2024-11-28 08:26:26.518342] Starting SPDK v25.01-pre git sha1 27aaaa748 / DPDK 24.03.0 initialization... 
00:27:44.397 [2024-11-28 08:26:26.518383] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:44.397 [2024-11-28 08:26:26.529249] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:44.397 [2024-11-28 08:26:26.529687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.397 [2024-11-28 08:26:26.529704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2219510 with addr=10.0.0.2, port=4420 00:27:44.397 [2024-11-28 08:26:26.529713] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2219510 is same with the state(6) to be set 00:27:44.397 [2024-11-28 08:26:26.529886] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2219510 (9): Bad file descriptor 00:27:44.397 [2024-11-28 08:26:26.530066] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:44.397 [2024-11-28 08:26:26.530076] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:44.397 [2024-11-28 08:26:26.530083] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:44.397 [2024-11-28 08:26:26.530090] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:44.397 [2024-11-28 08:26:26.542412] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:44.397 [2024-11-28 08:26:26.542863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.397 [2024-11-28 08:26:26.542881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2219510 with addr=10.0.0.2, port=4420 00:27:44.397 [2024-11-28 08:26:26.542889] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2219510 is same with the state(6) to be set 00:27:44.397 [2024-11-28 08:26:26.543073] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2219510 (9): Bad file descriptor 00:27:44.397 [2024-11-28 08:26:26.543257] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:44.397 [2024-11-28 08:26:26.543266] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:44.397 [2024-11-28 08:26:26.543273] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:44.397 [2024-11-28 08:26:26.543279] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:44.397 [2024-11-28 08:26:26.555510] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:44.397 [2024-11-28 08:26:26.555931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.397 [2024-11-28 08:26:26.555954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2219510 with addr=10.0.0.2, port=4420 00:27:44.397 [2024-11-28 08:26:26.555962] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2219510 is same with the state(6) to be set 00:27:44.397 [2024-11-28 08:26:26.556142] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2219510 (9): Bad file descriptor 00:27:44.397 [2024-11-28 08:26:26.556323] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:44.397 [2024-11-28 08:26:26.556332] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:44.397 [2024-11-28 08:26:26.556339] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:44.397 [2024-11-28 08:26:26.556346] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:44.397 [2024-11-28 08:26:26.568685] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:44.397 [2024-11-28 08:26:26.569121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.397 [2024-11-28 08:26:26.569140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2219510 with addr=10.0.0.2, port=4420 00:27:44.397 [2024-11-28 08:26:26.569148] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2219510 is same with the state(6) to be set 00:27:44.397 [2024-11-28 08:26:26.569323] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2219510 (9): Bad file descriptor 00:27:44.397 [2024-11-28 08:26:26.569496] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:44.397 [2024-11-28 08:26:26.569505] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:44.397 [2024-11-28 08:26:26.569512] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:44.397 [2024-11-28 08:26:26.569519] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:44.397 [2024-11-28 08:26:26.581798] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:44.397 [2024-11-28 08:26:26.582216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.397 [2024-11-28 08:26:26.582233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2219510 with addr=10.0.0.2, port=4420 00:27:44.397 [2024-11-28 08:26:26.582242] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2219510 is same with the state(6) to be set 00:27:44.397 [2024-11-28 08:26:26.582417] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2219510 (9): Bad file descriptor 00:27:44.397 [2024-11-28 08:26:26.582592] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:44.397 [2024-11-28 08:26:26.582602] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:44.397 [2024-11-28 08:26:26.582615] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:44.397 [2024-11-28 08:26:26.582622] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:44.397 [2024-11-28 08:26:26.585535] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:27:44.397 [2024-11-28 08:26:26.594995] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:44.397 [2024-11-28 08:26:26.595443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.397 [2024-11-28 08:26:26.595464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2219510 with addr=10.0.0.2, port=4420 00:27:44.397 [2024-11-28 08:26:26.595473] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2219510 is same with the state(6) to be set 00:27:44.397 [2024-11-28 08:26:26.595654] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2219510 (9): Bad file descriptor 00:27:44.397 [2024-11-28 08:26:26.595835] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:44.397 [2024-11-28 08:26:26.595846] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:44.397 [2024-11-28 08:26:26.595854] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:44.397 [2024-11-28 08:26:26.595861] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:44.397 [2024-11-28 08:26:26.608158] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:44.397 [2024-11-28 08:26:26.608538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.397 [2024-11-28 08:26:26.608556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2219510 with addr=10.0.0.2, port=4420 00:27:44.397 [2024-11-28 08:26:26.608565] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2219510 is same with the state(6) to be set 00:27:44.397 [2024-11-28 08:26:26.608738] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2219510 (9): Bad file descriptor 00:27:44.397 [2024-11-28 08:26:26.608914] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:44.397 [2024-11-28 08:26:26.608925] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:44.397 [2024-11-28 08:26:26.608931] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:44.397 [2024-11-28 08:26:26.608938] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:44.397 [2024-11-28 08:26:26.621138] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:44.397 [2024-11-28 08:26:26.621575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.397 [2024-11-28 08:26:26.621593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2219510 with addr=10.0.0.2, port=4420 00:27:44.398 [2024-11-28 08:26:26.621602] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2219510 is same with the state(6) to be set 00:27:44.398 [2024-11-28 08:26:26.621776] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2219510 (9): Bad file descriptor 00:27:44.398 [2024-11-28 08:26:26.621957] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:44.398 [2024-11-28 08:26:26.621967] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:44.398 [2024-11-28 08:26:26.621974] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:44.398 [2024-11-28 08:26:26.621998] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:44.398 [2024-11-28 08:26:26.629086] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:44.398 [2024-11-28 08:26:26.629113] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:44.398 [2024-11-28 08:26:26.629120] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:44.398 [2024-11-28 08:26:26.629125] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:27:44.398 [2024-11-28 08:26:26.629131] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:44.398 [2024-11-28 08:26:26.630484] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:27:44.398 [2024-11-28 08:26:26.630568] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:27:44.398 [2024-11-28 08:26:26.630570] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:44.398 [2024-11-28 08:26:26.634209] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:44.398 [2024-11-28 08:26:26.634664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.398 [2024-11-28 08:26:26.634684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2219510 with addr=10.0.0.2, port=4420 00:27:44.398 [2024-11-28 08:26:26.634694] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2219510 is same with the state(6) to be set 00:27:44.398 [2024-11-28 08:26:26.634875] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2219510 (9): Bad file descriptor 00:27:44.398 [2024-11-28 08:26:26.635062] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:44.398 [2024-11-28 08:26:26.635073] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:44.398 [2024-11-28 08:26:26.635080] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:44.398 [2024-11-28 08:26:26.635087] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:44.398 [2024-11-28 08:26:26.647370] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:44.398 [2024-11-28 08:26:26.647830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.398 [2024-11-28 08:26:26.647852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2219510 with addr=10.0.0.2, port=4420 00:27:44.398 [2024-11-28 08:26:26.647861] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2219510 is same with the state(6) to be set 00:27:44.398 [2024-11-28 08:26:26.648047] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2219510 (9): Bad file descriptor 00:27:44.398 [2024-11-28 08:26:26.648228] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:44.398 [2024-11-28 08:26:26.648238] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:44.398 [2024-11-28 08:26:26.648246] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:44.398 [2024-11-28 08:26:26.648254] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:44.659 [2024-11-28 08:26:26.660533] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:44.659 [2024-11-28 08:26:26.660986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.659 [2024-11-28 08:26:26.661008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2219510 with addr=10.0.0.2, port=4420 00:27:44.659 [2024-11-28 08:26:26.661017] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2219510 is same with the state(6) to be set 00:27:44.659 [2024-11-28 08:26:26.661197] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2219510 (9): Bad file descriptor 00:27:44.659 [2024-11-28 08:26:26.661383] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:44.659 [2024-11-28 08:26:26.661392] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:44.659 [2024-11-28 08:26:26.661400] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:44.659 [2024-11-28 08:26:26.661408] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:44.660 [2024-11-28 08:26:26.673694] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:44.660 [2024-11-28 08:26:26.674159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.660 [2024-11-28 08:26:26.674180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2219510 with addr=10.0.0.2, port=4420 00:27:44.660 [2024-11-28 08:26:26.674188] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2219510 is same with the state(6) to be set 00:27:44.660 [2024-11-28 08:26:26.674369] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2219510 (9): Bad file descriptor 00:27:44.660 [2024-11-28 08:26:26.674550] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:44.660 [2024-11-28 08:26:26.674560] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:44.660 [2024-11-28 08:26:26.674568] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:44.660 [2024-11-28 08:26:26.674576] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:44.660 [2024-11-28 08:26:26.686860] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:44.660 [2024-11-28 08:26:26.687322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.660 [2024-11-28 08:26:26.687344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2219510 with addr=10.0.0.2, port=4420 00:27:44.660 [2024-11-28 08:26:26.687353] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2219510 is same with the state(6) to be set 00:27:44.660 [2024-11-28 08:26:26.687533] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2219510 (9): Bad file descriptor 00:27:44.660 [2024-11-28 08:26:26.687714] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:44.660 [2024-11-28 08:26:26.687725] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:44.660 [2024-11-28 08:26:26.687732] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:44.660 [2024-11-28 08:26:26.687740] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:44.660 [2024-11-28 08:26:26.700038] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:44.660 [2024-11-28 08:26:26.700481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.660 [2024-11-28 08:26:26.700500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2219510 with addr=10.0.0.2, port=4420 00:27:44.660 [2024-11-28 08:26:26.700508] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2219510 is same with the state(6) to be set 00:27:44.660 [2024-11-28 08:26:26.700688] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2219510 (9): Bad file descriptor 00:27:44.660 [2024-11-28 08:26:26.700868] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:44.660 [2024-11-28 08:26:26.700878] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:44.660 [2024-11-28 08:26:26.700891] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:44.660 [2024-11-28 08:26:26.700899] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:44.660 [2024-11-28 08:26:26.713189] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:44.660 [2024-11-28 08:26:26.713631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.660 [2024-11-28 08:26:26.713649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2219510 with addr=10.0.0.2, port=4420 00:27:44.660 [2024-11-28 08:26:26.713657] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2219510 is same with the state(6) to be set 00:27:44.660 [2024-11-28 08:26:26.713836] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2219510 (9): Bad file descriptor 00:27:44.660 [2024-11-28 08:26:26.714021] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:44.660 [2024-11-28 08:26:26.714032] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:44.660 [2024-11-28 08:26:26.714038] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:44.660 [2024-11-28 08:26:26.714045] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:44.660 [2024-11-28 08:26:26.726317] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:44.660 [2024-11-28 08:26:26.726760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.660 [2024-11-28 08:26:26.726778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2219510 with addr=10.0.0.2, port=4420 00:27:44.660 [2024-11-28 08:26:26.726787] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2219510 is same with the state(6) to be set 00:27:44.660 [2024-11-28 08:26:26.726972] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2219510 (9): Bad file descriptor 00:27:44.660 08:26:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:44.660 [2024-11-28 08:26:26.727153] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:44.660 [2024-11-28 08:26:26.727164] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:44.660 [2024-11-28 08:26:26.727170] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:44.660 [2024-11-28 08:26:26.727177] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:44.660 08:26:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:27:44.660 08:26:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:44.660 08:26:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:44.660 08:26:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:44.660 [2024-11-28 08:26:26.739479] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:44.660 [2024-11-28 08:26:26.739923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.660 [2024-11-28 08:26:26.739942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2219510 with addr=10.0.0.2, port=4420 00:27:44.660 [2024-11-28 08:26:26.739957] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2219510 is same with the state(6) to be set 00:27:44.660 [2024-11-28 08:26:26.740139] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2219510 (9): Bad file descriptor 00:27:44.660 [2024-11-28 08:26:26.740320] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:44.660 [2024-11-28 08:26:26.740333] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:44.660 [2024-11-28 08:26:26.740341] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:44.660 [2024-11-28 08:26:26.740348] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:44.660 [2024-11-28 08:26:26.752632] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:44.660 [2024-11-28 08:26:26.753005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.660 [2024-11-28 08:26:26.753024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2219510 with addr=10.0.0.2, port=4420 00:27:44.660 [2024-11-28 08:26:26.753032] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2219510 is same with the state(6) to be set 00:27:44.660 [2024-11-28 08:26:26.753211] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2219510 (9): Bad file descriptor 00:27:44.660 [2024-11-28 08:26:26.753393] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:44.660 [2024-11-28 08:26:26.753404] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:44.660 [2024-11-28 08:26:26.753411] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:44.660 [2024-11-28 08:26:26.753418] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:44.660 08:26:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:44.660 08:26:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:44.660 08:26:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:44.660 08:26:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:44.660 [2024-11-28 08:26:26.765700] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:44.661 [2024-11-28 08:26:26.765994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.661 [2024-11-28 08:26:26.766013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2219510 with addr=10.0.0.2, port=4420 00:27:44.661 [2024-11-28 08:26:26.766021] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2219510 is same with the state(6) to be set 00:27:44.661 [2024-11-28 08:26:26.766200] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2219510 (9): Bad file descriptor 00:27:44.661 [2024-11-28 08:26:26.766380] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:44.661 [2024-11-28 08:26:26.766390] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:44.661 [2024-11-28 08:26:26.766397] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:44.661 [2024-11-28 08:26:26.766404] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:44.661 [2024-11-28 08:26:26.771428] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:44.661 08:26:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:44.661 08:26:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:27:44.661 08:26:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:44.661 08:26:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:44.661 [2024-11-28 08:26:26.778853] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:44.661 [2024-11-28 08:26:26.779157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.661 [2024-11-28 08:26:26.779179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2219510 with addr=10.0.0.2, port=4420 00:27:44.661 [2024-11-28 08:26:26.779187] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2219510 is same with the state(6) to be set 00:27:44.661 [2024-11-28 08:26:26.779366] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2219510 (9): Bad file descriptor 00:27:44.661 [2024-11-28 08:26:26.779546] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:44.661 [2024-11-28 08:26:26.779556] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:44.661 [2024-11-28 08:26:26.779563] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:44.661 [2024-11-28 08:26:26.779569] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
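The target-side setup threaded through the trace above is a standard SPDK sequence: create a TCP transport, back it with a 64 MiB malloc bdev, create a subsystem, then attach the namespace and a listener. Reconstructed from the `rpc_cmd` calls as a plain shell sequence (the `scripts/rpc.py` path is an assumption for a generic SPDK checkout; the log wraps these in autotest's `rpc_cmd` helper):

```shell
#!/usr/bin/env bash
# Reconstructed from the rpc_cmd invocations in the trace; requires a
# running SPDK nvmf_tgt. RPC path is an assumption, not from the log.
RPC=scripts/rpc.py

$RPC nvmf_create_transport -t tcp -o -u 8192           # TCP transport, 8192 B in-capsule data
$RPC bdev_malloc_create 64 512 -b Malloc0              # 64 MiB bdev, 512 B blocks
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
     -a -s SPDK00000000000001                          # allow any host, set serial
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
     -t tcp -a 10.0.0.2 -s 4420                        # the 10.0.0.2:4420 the host dials
```

Once the last command runs, the log prints "NVMe/TCP Target Listening on 10.0.0.2 port 4420" and the host's connect attempts stop being refused.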
00:27:44.661 [2024-11-28 08:26:26.792029] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:44.661 [2024-11-28 08:26:26.792471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.661 [2024-11-28 08:26:26.792488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2219510 with addr=10.0.0.2, port=4420 00:27:44.661 [2024-11-28 08:26:26.792496] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2219510 is same with the state(6) to be set 00:27:44.661 [2024-11-28 08:26:26.792675] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2219510 (9): Bad file descriptor 00:27:44.661 [2024-11-28 08:26:26.792856] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:44.661 [2024-11-28 08:26:26.792866] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:44.661 [2024-11-28 08:26:26.792872] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:44.661 [2024-11-28 08:26:26.792879] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:44.661 [2024-11-28 08:26:26.805178] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:44.661 [2024-11-28 08:26:26.805599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.661 [2024-11-28 08:26:26.805617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2219510 with addr=10.0.0.2, port=4420 00:27:44.661 [2024-11-28 08:26:26.805626] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2219510 is same with the state(6) to be set 00:27:44.661 [2024-11-28 08:26:26.805805] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2219510 (9): Bad file descriptor 00:27:44.661 [2024-11-28 08:26:26.805990] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:44.661 [2024-11-28 08:26:26.806001] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:44.661 [2024-11-28 08:26:26.806008] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:44.661 [2024-11-28 08:26:26.806015] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:44.661 Malloc0 00:27:44.661 08:26:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:44.661 08:26:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:44.661 08:26:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:44.661 08:26:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:44.661 [2024-11-28 08:26:26.818287] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:44.661 [2024-11-28 08:26:26.818734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.661 [2024-11-28 08:26:26.818751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2219510 with addr=10.0.0.2, port=4420 00:27:44.661 [2024-11-28 08:26:26.818759] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2219510 is same with the state(6) to be set 00:27:44.661 [2024-11-28 08:26:26.818937] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2219510 (9): Bad file descriptor 00:27:44.661 [2024-11-28 08:26:26.819125] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:44.661 [2024-11-28 08:26:26.819136] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:44.661 [2024-11-28 08:26:26.819142] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:44.661 [2024-11-28 08:26:26.819149] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:44.661 08:26:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:44.661 08:26:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:27:44.661 08:26:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:44.661 08:26:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:44.661 08:26:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:44.661 08:26:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:44.661 08:26:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:44.661 08:26:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:44.661 [2024-11-28 08:26:26.831411] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:44.661 [2024-11-28 08:26:26.831424] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:44.661 08:26:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:44.661 08:26:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 1513458 00:27:44.661 [2024-11-28 08:26:26.898365] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller successful. 
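Each failed block above is one pass of the same cycle: `connect()` refused with errno 111, the qpair torn down ("Bad file descriptor" on flush), the controller marked failed, and the reset rescheduled, until the target's listener finally appears and the reset succeeds. A self-contained sketch of that retry-until-listening shape, where the fake `try_connect` stands in for the real TCP connect and "comes up" on the 4th attempt:

```shell
#!/usr/bin/env bash
# Retry-until-ready loop in the spirit of the reconnect cycle above.
# try_connect is a stand-in: it refuses (like errno 111 / ECONNREFUSED)
# until the simulated listener is up from the 4th call onward.
attempts=0
try_connect() {
    attempts=$((attempts + 1))
    [ "$attempts" -ge 4 ]
}

reconnect() {
    local max=$1 i
    for ((i = 1; i <= max; i++)); do
        if try_connect; then
            echo "connected after $i attempts"
            return 0
        fi
        # real code would log "connect() failed, errno = 111" and back off
    done
    echo "controller reinitialization failed" >&2
    return 1
}

reconnect 10
```

The real driver additionally bounds the loop with a reconnect delay and a controller-loss timeout; this sketch only shows the success path the log eventually reaches ("Resetting controller successful").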
00:27:46.043 4656.00 IOPS, 18.19 MiB/s [2024-11-28T07:26:29.249Z] 5543.43 IOPS, 21.65 MiB/s [2024-11-28T07:26:30.185Z] 6216.62 IOPS, 24.28 MiB/s [2024-11-28T07:26:31.124Z] 6752.11 IOPS, 26.38 MiB/s [2024-11-28T07:26:32.061Z] 7155.70 IOPS, 27.95 MiB/s [2024-11-28T07:26:32.999Z] 7500.00 IOPS, 29.30 MiB/s [2024-11-28T07:26:33.938Z] 7775.58 IOPS, 30.37 MiB/s [2024-11-28T07:26:35.316Z] 8010.23 IOPS, 31.29 MiB/s [2024-11-28T07:26:36.254Z] 8206.29 IOPS, 32.06 MiB/s 00:27:53.985 Latency(us) 00:27:53.985 [2024-11-28T07:26:36.254Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:53.985 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:27:53.985 Verification LBA range: start 0x0 length 0x4000 00:27:53.985 Nvme1n1 : 15.01 8389.50 32.77 10777.97 0.00 6657.47 448.78 23934.89 00:27:53.985 [2024-11-28T07:26:36.254Z] =================================================================================================================== 00:27:53.985 [2024-11-28T07:26:36.254Z] Total : 8389.50 32.77 10777.97 0.00 6657.47 448.78 23934.89 00:27:53.985 08:26:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync 00:27:53.985 08:26:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:53.985 08:26:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:53.985 08:26:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:53.985 08:26:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:53.985 08:26:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:27:53.985 08:26:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini 00:27:53.985 08:26:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@516 -- # nvmfcleanup 00:27:53.985 08:26:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@121 -- # sync 00:27:53.985 08:26:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:53.985 08:26:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@124 -- # set +e 00:27:53.985 08:26:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:53.985 08:26:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:53.985 rmmod nvme_tcp 00:27:53.985 rmmod nvme_fabrics 00:27:53.985 rmmod nvme_keyring 00:27:53.985 08:26:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:53.985 08:26:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@128 -- # set -e 00:27:53.985 08:26:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@129 -- # return 0 00:27:53.985 08:26:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@517 -- # '[' -n 1514426 ']' 00:27:53.985 08:26:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@518 -- # killprocess 1514426 00:27:53.985 08:26:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@954 -- # '[' -z 1514426 ']' 00:27:53.985 08:26:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@958 -- # kill -0 1514426 00:27:53.985 08:26:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@959 -- # uname 00:27:53.985 08:26:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:53.985 08:26:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1514426 00:27:53.985 08:26:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:27:53.985 08:26:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:27:53.985 08:26:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1514426' 00:27:53.985 killing process with pid 1514426 00:27:53.985 
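The bdevperf summary a few lines up reports both IOPS and MiB/s for the same 4096-byte IO size, so the two columns of each row differ by a fixed factor (4096 / 2^20, i.e. divide IOPS by 256). A quick awk check of the final Nvme1n1 row, using the value from the table:

```shell
#!/usr/bin/env bash
# MiB/s = IOPS * io_size / 2^20; with 4096-byte IOs that is IOPS / 256.
iops=8389.50     # final Nvme1n1 summary row in the log above
mibps=$(awk -v iops="$iops" 'BEGIN { printf "%.2f", iops * 4096 / 1048576 }')
echo "$mibps"    # agrees with the 32.77 MiB/s column
```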
08:26:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@973 -- # kill 1514426 00:27:53.985 08:26:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@978 -- # wait 1514426 00:27:54.245 08:26:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:27:54.245 08:26:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:27:54.245 08:26:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:27:54.245 08:26:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@297 -- # iptr 00:27:54.245 08:26:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # iptables-restore 00:27:54.245 08:26:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # iptables-save 00:27:54.245 08:26:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:27:54.245 08:26:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:54.245 08:26:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:54.245 08:26:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:54.245 08:26:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:54.245 08:26:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:56.784 08:26:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:56.784 00:27:56.784 real 0m25.781s 00:27:56.784 user 1m0.640s 00:27:56.784 sys 0m6.613s 00:27:56.784 08:26:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:56.784 08:26:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:56.784 ************************************ 00:27:56.784 END TEST nvmf_bdevperf 00:27:56.784 
************************************ 00:27:56.784 08:26:38 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@48 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:27:56.784 08:26:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:27:56.784 08:26:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:56.785 08:26:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:56.785 ************************************ 00:27:56.785 START TEST nvmf_target_disconnect 00:27:56.785 ************************************ 00:27:56.785 08:26:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:27:56.785 * Looking for test storage... 00:27:56.785 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:56.785 08:26:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:27:56.785 08:26:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1693 -- # lcov --version 00:27:56.785 08:26:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:27:56.785 08:26:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:27:56.785 08:26:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:56.785 08:26:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:56.785 08:26:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:56.785 08:26:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:27:56.785 08:26:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
scripts/common.sh@336 -- # read -ra ver1 00:27:56.785 08:26:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:27:56.785 08:26:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:27:56.785 08:26:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:27:56.785 08:26:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:27:56.785 08:26:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:27:56.785 08:26:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:56.785 08:26:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@344 -- # case "$op" in 00:27:56.785 08:26:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@345 -- # : 1 00:27:56.785 08:26:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:56.785 08:26:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:56.785 08:26:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # decimal 1 00:27:56.785 08:26:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=1 00:27:56.785 08:26:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:56.785 08:26:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 1 00:27:56.785 08:26:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:27:56.785 08:26:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # decimal 2 00:27:56.785 08:26:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=2 00:27:56.785 08:26:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:56.785 08:26:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 2 00:27:56.785 08:26:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:27:56.785 08:26:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:56.785 08:26:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:56.785 08:26:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # return 0 00:27:56.785 08:26:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:56.785 08:26:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:27:56.785 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:56.785 --rc genhtml_branch_coverage=1 00:27:56.785 --rc genhtml_function_coverage=1 00:27:56.785 --rc genhtml_legend=1 00:27:56.785 --rc geninfo_all_blocks=1 00:27:56.785 --rc geninfo_unexecuted_blocks=1 
00:27:56.785 00:27:56.785 ' 00:27:56.785 08:26:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:27:56.785 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:56.785 --rc genhtml_branch_coverage=1 00:27:56.785 --rc genhtml_function_coverage=1 00:27:56.785 --rc genhtml_legend=1 00:27:56.785 --rc geninfo_all_blocks=1 00:27:56.785 --rc geninfo_unexecuted_blocks=1 00:27:56.785 00:27:56.785 ' 00:27:56.785 08:26:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:27:56.785 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:56.785 --rc genhtml_branch_coverage=1 00:27:56.785 --rc genhtml_function_coverage=1 00:27:56.785 --rc genhtml_legend=1 00:27:56.785 --rc geninfo_all_blocks=1 00:27:56.785 --rc geninfo_unexecuted_blocks=1 00:27:56.785 00:27:56.785 ' 00:27:56.785 08:26:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:27:56.785 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:56.785 --rc genhtml_branch_coverage=1 00:27:56.785 --rc genhtml_function_coverage=1 00:27:56.785 --rc genhtml_legend=1 00:27:56.785 --rc geninfo_all_blocks=1 00:27:56.785 --rc geninfo_unexecuted_blocks=1 00:27:56.785 00:27:56.785 ' 00:27:56.785 08:26:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:56.785 08:26:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # uname -s 00:27:56.785 08:26:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:56.785 08:26:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:56.785 08:26:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:56.785 08:26:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@11 -- # 
NVMF_THIRD_PORT=4422 00:27:56.785 08:26:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:56.785 08:26:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:56.785 08:26:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:56.785 08:26:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:56.785 08:26:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:56.785 08:26:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:56.785 08:26:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:27:56.785 08:26:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:27:56.785 08:26:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:56.785 08:26:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:56.786 08:26:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:56.786 08:26:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:56.786 08:26:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:56.786 08:26:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:27:56.786 08:26:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:56.786 08:26:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:56.786 08:26:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:56.786 08:26:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:56.786 08:26:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:56.786 08:26:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:56.786 08:26:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:27:56.786 08:26:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:56.786 08:26:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@51 -- # : 0 00:27:56.786 08:26:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:56.786 08:26:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:56.786 08:26:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:56.786 08:26:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:56.786 08:26:38 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:56.786 08:26:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:27:56.786 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:27:56.786 08:26:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:56.786 08:26:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:56.786 08:26:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:56.786 08:26:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:27:56.786 08:26:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:27:56.786 08:26:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:27:56.786 08:26:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:27:56.786 08:26:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:56.786 08:26:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:56.786 08:26:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:56.786 08:26:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:56.786 08:26:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:56.786 08:26:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:56.786 08:26:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> 
/dev/null' 00:27:56.786 08:26:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:56.786 08:26:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:27:56.786 08:26:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:27:56.786 08:26:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:27:56.786 08:26:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:28:02.066 08:26:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:02.066 08:26:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:28:02.066 08:26:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:02.066 08:26:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:02.066 08:26:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:02.066 08:26:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:02.066 08:26:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:02.066 08:26:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:28:02.066 08:26:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:02.066 08:26:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # e810=() 00:28:02.066 08:26:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:28:02.066 08:26:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # x722=() 00:28:02.066 08:26:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:28:02.066 
08:26:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:28:02.066 08:26:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:28:02.066 08:26:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:02.066 08:26:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:02.066 08:26:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:02.066 08:26:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:02.066 08:26:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:02.066 08:26:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:02.066 08:26:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:02.066 08:26:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:02.066 08:26:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:02.066 08:26:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:02.066 08:26:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:02.066 08:26:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:02.066 08:26:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:02.066 08:26:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:02.066 08:26:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:02.066 08:26:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:02.066 08:26:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:02.066 08:26:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:02.066 08:26:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:02.066 08:26:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:28:02.066 Found 0000:86:00.0 (0x8086 - 0x159b) 00:28:02.066 08:26:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:02.066 08:26:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:02.066 08:26:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:02.066 08:26:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:02.066 08:26:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:02.066 08:26:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:02.066 08:26:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:28:02.066 Found 0000:86:00.1 (0x8086 - 0x159b) 00:28:02.066 08:26:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:02.066 08:26:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:02.066 08:26:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == 
\0\x\1\0\1\7 ]] 00:28:02.066 08:26:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:02.066 08:26:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:02.066 08:26:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:02.066 08:26:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:02.066 08:26:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:02.066 08:26:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:02.066 08:26:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:02.066 08:26:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:02.066 08:26:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:02.066 08:26:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:02.066 08:26:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:02.066 08:26:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:02.066 08:26:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:28:02.066 Found net devices under 0000:86:00.0: cvl_0_0 00:28:02.066 08:26:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:02.066 08:26:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:02.066 08:26:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:28:02.066 08:26:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:02.066 08:26:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:02.066 08:26:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:02.066 08:26:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:02.066 08:26:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:02.066 08:26:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:28:02.066 Found net devices under 0000:86:00.1: cvl_0_1 00:28:02.066 08:26:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:02.066 08:26:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:02.066 08:26:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # is_hw=yes 00:28:02.066 08:26:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:02.066 08:26:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:02.066 08:26:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:02.066 08:26:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:02.066 08:26:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:02.066 08:26:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:02.066 08:26:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:02.066 08:26:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:02.066 08:26:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:02.066 08:26:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:02.066 08:26:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:02.066 08:26:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:02.066 08:26:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:02.066 08:26:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:02.066 08:26:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:02.066 08:26:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:02.066 08:26:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:02.066 08:26:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:02.327 08:26:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:02.327 08:26:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:02.327 08:26:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:02.327 08:26:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:02.327 08:26:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:02.327 08:26:44 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:02.327 08:26:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:02.327 08:26:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:02.327 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:02.327 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.268 ms 00:28:02.327 00:28:02.327 --- 10.0.0.2 ping statistics --- 00:28:02.327 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:02.327 rtt min/avg/max/mdev = 0.268/0.268/0.268/0.000 ms 00:28:02.327 08:26:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:02.327 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:02.327 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.146 ms 00:28:02.327 00:28:02.327 --- 10.0.0.1 ping statistics --- 00:28:02.327 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:02.327 rtt min/avg/max/mdev = 0.146/0.146/0.146/0.000 ms 00:28:02.327 08:26:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:02.327 08:26:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@450 -- # return 0 00:28:02.327 08:26:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:02.327 08:26:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:02.327 08:26:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:02.327 08:26:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:02.327 08:26:44 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:02.327 08:26:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:02.327 08:26:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:02.327 08:26:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:28:02.327 08:26:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:28:02.327 08:26:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:02.327 08:26:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:28:02.587 ************************************ 00:28:02.587 START TEST nvmf_target_disconnect_tc1 00:28:02.587 ************************************ 00:28:02.587 08:26:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1129 -- # nvmf_target_disconnect_tc1 00:28:02.587 08:26:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:02.587 08:26:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@652 -- # local es=0 00:28:02.587 08:26:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:02.587 08:26:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- 
common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:28:02.587 08:26:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:02.587 08:26:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:28:02.587 08:26:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:02.587 08:26:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:28:02.587 08:26:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:02.587 08:26:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:28:02.587 08:26:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:28:02.587 08:26:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:02.587 [2024-11-28 08:26:44.707594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.587 [2024-11-28 08:26:44.707714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1bac0 with 
addr=10.0.0.2, port=4420 00:28:02.587 [2024-11-28 08:26:44.707775] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:28:02.587 [2024-11-28 08:26:44.707802] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:28:02.587 [2024-11-28 08:26:44.707824] nvme.c: 951:spdk_nvme_probe_ext: *ERROR*: Create probe context failed 00:28:02.587 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:28:02.587 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:28:02.587 Initializing NVMe Controllers 00:28:02.587 08:26:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@655 -- # es=1 00:28:02.587 08:26:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:28:02.587 08:26:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:28:02.587 08:26:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:28:02.587 00:28:02.587 real 0m0.111s 00:28:02.587 user 0m0.050s 00:28:02.587 sys 0m0.061s 00:28:02.587 08:26:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:02.587 08:26:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:02.587 ************************************ 00:28:02.587 END TEST nvmf_target_disconnect_tc1 00:28:02.587 ************************************ 00:28:02.587 08:26:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:28:02.587 08:26:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:28:02.587 08:26:44 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:02.587 08:26:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:28:02.587 ************************************ 00:28:02.587 START TEST nvmf_target_disconnect_tc2 00:28:02.587 ************************************ 00:28:02.587 08:26:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1129 -- # nvmf_target_disconnect_tc2 00:28:02.587 08:26:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:28:02.587 08:26:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:28:02.587 08:26:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:02.587 08:26:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:02.587 08:26:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:02.587 08:26:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=1519586 00:28:02.587 08:26:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 1519586 00:28:02.587 08:26:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:28:02.587 08:26:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # '[' -z 1519586 ']' 00:28:02.587 08:26:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:02.588 08:26:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:02.588 08:26:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:02.588 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:02.588 08:26:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:02.588 08:26:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:02.588 [2024-11-28 08:26:44.849725] Starting SPDK v25.01-pre git sha1 27aaaa748 / DPDK 24.03.0 initialization... 00:28:02.588 [2024-11-28 08:26:44.849772] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:02.847 [2024-11-28 08:26:44.931685] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:02.847 [2024-11-28 08:26:44.973955] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:02.847 [2024-11-28 08:26:44.974010] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:02.847 [2024-11-28 08:26:44.974017] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:02.847 [2024-11-28 08:26:44.974024] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:02.847 [2024-11-28 08:26:44.974030] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:28:02.847 [2024-11-28 08:26:44.975581] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:28:02.847 [2024-11-28 08:26:44.975686] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:28:02.847 [2024-11-28 08:26:44.975796] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:28:02.847 [2024-11-28 08:26:44.975797] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:28:03.784 08:26:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:03.784 08:26:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@868 -- # return 0 00:28:03.784 08:26:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:03.784 08:26:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:03.784 08:26:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:03.784 08:26:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:03.784 08:26:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:28:03.784 08:26:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:03.784 08:26:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:03.784 Malloc0 00:28:03.784 08:26:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:03.784 08:26:45 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:28:03.784 08:26:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:03.784 08:26:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:03.784 [2024-11-28 08:26:45.766352] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:03.784 08:26:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:03.784 08:26:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:03.784 08:26:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:03.784 08:26:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:03.784 08:26:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:03.784 08:26:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:28:03.784 08:26:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:03.784 08:26:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:03.784 08:26:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:03.784 08:26:45 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:03.784 08:26:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:03.784 08:26:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:03.784 [2024-11-28 08:26:45.798623] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:03.784 08:26:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:03.784 08:26:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:28:03.784 08:26:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:03.784 08:26:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:03.784 08:26:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:03.784 08:26:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=1519639 00:28:03.784 08:26:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:28:03.784 08:26:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:05.703 08:26:47 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 1519586 00:28:05.703 08:26:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:28:05.703 Write completed with error (sct=0, sc=8) 00:28:05.703 starting I/O failed 00:28:05.703 Read completed with error (sct=0, sc=8) 00:28:05.703 starting I/O failed 00:28:05.703 Read completed with error (sct=0, sc=8) 00:28:05.703 starting I/O failed 00:28:05.703 Read completed with error (sct=0, sc=8) 00:28:05.704 starting I/O failed 00:28:05.704 Read completed with error (sct=0, sc=8) 00:28:05.704 starting I/O failed 00:28:05.704 Write completed with error (sct=0, sc=8) 00:28:05.704 starting I/O failed 00:28:05.704 Write completed with error (sct=0, sc=8) 00:28:05.704 starting I/O failed 00:28:05.704 Read completed with error (sct=0, sc=8) 00:28:05.704 starting I/O failed 00:28:05.704 Write completed with error (sct=0, sc=8) 00:28:05.704 starting I/O failed 00:28:05.704 Write completed with error (sct=0, sc=8) 00:28:05.704 starting I/O failed 00:28:05.704 Read completed with error (sct=0, sc=8) 00:28:05.704 starting I/O failed 00:28:05.704 Read completed with error (sct=0, sc=8) 00:28:05.704 starting I/O failed 00:28:05.704 Write completed with error (sct=0, sc=8) 00:28:05.704 starting I/O failed 00:28:05.704 Read completed with error (sct=0, sc=8) 00:28:05.704 starting I/O failed 00:28:05.704 Write completed with error (sct=0, sc=8) 00:28:05.704 starting I/O failed 00:28:05.704 Write completed with error (sct=0, sc=8) 00:28:05.704 starting I/O failed 00:28:05.704 Read completed with error (sct=0, sc=8) 00:28:05.704 starting I/O failed 00:28:05.704 Read completed with error (sct=0, sc=8) 00:28:05.704 starting I/O failed 00:28:05.704 Read completed with error (sct=0, sc=8) 00:28:05.704 starting I/O failed 00:28:05.704 Read completed with error (sct=0, sc=8) 00:28:05.704 starting I/O failed 00:28:05.704 
Write completed with error (sct=0, sc=8) 00:28:05.704 starting I/O failed 00:28:05.704 Read completed with error (sct=0, sc=8) 00:28:05.704 starting I/O failed 00:28:05.704 Read completed with error (sct=0, sc=8) 00:28:05.704 starting I/O failed 00:28:05.704 Read completed with error (sct=0, sc=8) 00:28:05.704 starting I/O failed 00:28:05.704 Write completed with error (sct=0, sc=8) 00:28:05.704 starting I/O failed 00:28:05.704 Read completed with error (sct=0, sc=8) 00:28:05.704 starting I/O failed 00:28:05.704 Read completed with error (sct=0, sc=8) 00:28:05.704 starting I/O failed 00:28:05.704 Write completed with error (sct=0, sc=8) 00:28:05.704 starting I/O failed 00:28:05.704 Write completed with error (sct=0, sc=8) 00:28:05.704 starting I/O failed 00:28:05.704 Write completed with error (sct=0, sc=8) 00:28:05.704 starting I/O failed 00:28:05.704 Read completed with error (sct=0, sc=8) 00:28:05.704 starting I/O failed 00:28:05.704 Write completed with error (sct=0, sc=8) 00:28:05.704 starting I/O failed 00:28:05.704 Read completed with error (sct=0, sc=8) 00:28:05.704 starting I/O failed 00:28:05.704 Read completed with error (sct=0, sc=8) 00:28:05.704 starting I/O failed 00:28:05.704 Write completed with error (sct=0, sc=8) 00:28:05.704 [2024-11-28 08:26:47.827570] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:28:05.704 starting I/O failed 00:28:05.704 Read completed with error (sct=0, sc=8) 00:28:05.704 starting I/O failed 00:28:05.704 Write completed with error (sct=0, sc=8) 00:28:05.704 starting I/O failed 00:28:05.704 Write completed with error (sct=0, sc=8) 00:28:05.704 starting I/O failed 00:28:05.704 Write completed with error (sct=0, sc=8) 00:28:05.704 starting I/O failed 00:28:05.704 Write completed with error (sct=0, sc=8) 00:28:05.704 starting I/O failed 00:28:05.704 Read completed with error (sct=0, sc=8) 00:28:05.704 starting I/O 
failed 00:28:05.704 Read completed with error (sct=0, sc=8) 00:28:05.704 starting I/O failed 00:28:05.704 Read completed with error (sct=0, sc=8) 00:28:05.704 starting I/O failed 00:28:05.704 Write completed with error (sct=0, sc=8) 00:28:05.704 starting I/O failed 00:28:05.704 Write completed with error (sct=0, sc=8) 00:28:05.704 starting I/O failed 00:28:05.704 Write completed with error (sct=0, sc=8) 00:28:05.704 starting I/O failed 00:28:05.704 Read completed with error (sct=0, sc=8) 00:28:05.704 starting I/O failed 00:28:05.704 Write completed with error (sct=0, sc=8) 00:28:05.704 starting I/O failed 00:28:05.704 Write completed with error (sct=0, sc=8) 00:28:05.704 starting I/O failed 00:28:05.704 Read completed with error (sct=0, sc=8) 00:28:05.704 starting I/O failed 00:28:05.704 Write completed with error (sct=0, sc=8) 00:28:05.704 starting I/O failed 00:28:05.704 Write completed with error (sct=0, sc=8) 00:28:05.704 starting I/O failed 00:28:05.704 Read completed with error (sct=0, sc=8) 00:28:05.704 starting I/O failed 00:28:05.704 Read completed with error (sct=0, sc=8) 00:28:05.704 starting I/O failed 00:28:05.704 Read completed with error (sct=0, sc=8) 00:28:05.704 starting I/O failed 00:28:05.704 Read completed with error (sct=0, sc=8) 00:28:05.704 starting I/O failed 00:28:05.704 Write completed with error (sct=0, sc=8) 00:28:05.704 starting I/O failed 00:28:05.704 Read completed with error (sct=0, sc=8) 00:28:05.704 starting I/O failed 00:28:05.704 Write completed with error (sct=0, sc=8) 00:28:05.704 starting I/O failed 00:28:05.704 Write completed with error (sct=0, sc=8) 00:28:05.704 starting I/O failed 00:28:05.704 Write completed with error (sct=0, sc=8) 00:28:05.704 starting I/O failed 00:28:05.704 Read completed with error (sct=0, sc=8) 00:28:05.704 starting I/O failed 00:28:05.704 Write completed with error (sct=0, sc=8) 00:28:05.704 starting I/O failed 00:28:05.704 Read completed with error (sct=0, sc=8) 00:28:05.704 starting I/O failed 
00:28:05.704 [2024-11-28 08:26:47.827776] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:05.704 Read completed with error (sct=0, sc=8) 00:28:05.704 starting I/O failed 00:28:05.704 Read completed with error (sct=0, sc=8) 00:28:05.704 starting I/O failed 00:28:05.704 Read completed with error (sct=0, sc=8) 00:28:05.704 starting I/O failed 00:28:05.704 Write completed with error (sct=0, sc=8) 00:28:05.704 starting I/O failed 00:28:05.704 Read completed with error (sct=0, sc=8) 00:28:05.704 starting I/O failed 00:28:05.704 Write completed with error (sct=0, sc=8) 00:28:05.704 starting I/O failed 00:28:05.704 Write completed with error (sct=0, sc=8) 00:28:05.704 starting I/O failed 00:28:05.704 Read completed with error (sct=0, sc=8) 00:28:05.704 starting I/O failed 00:28:05.704 Write completed with error (sct=0, sc=8) 00:28:05.704 starting I/O failed 00:28:05.704 Read completed with error (sct=0, sc=8) 00:28:05.704 starting I/O failed 00:28:05.704 Write completed with error (sct=0, sc=8) 00:28:05.704 starting I/O failed 00:28:05.704 Read completed with error (sct=0, sc=8) 00:28:05.704 starting I/O failed 00:28:05.704 Write completed with error (sct=0, sc=8) 00:28:05.704 starting I/O failed 00:28:05.704 Write completed with error (sct=0, sc=8) 00:28:05.704 starting I/O failed 00:28:05.704 Write completed with error (sct=0, sc=8) 00:28:05.704 starting I/O failed 00:28:05.704 Write completed with error (sct=0, sc=8) 00:28:05.704 starting I/O failed 00:28:05.704 Write completed with error (sct=0, sc=8) 00:28:05.704 starting I/O failed 00:28:05.704 Write completed with error (sct=0, sc=8) 00:28:05.704 starting I/O failed 00:28:05.704 Write completed with error (sct=0, sc=8) 00:28:05.704 starting I/O failed 00:28:05.704 Read completed with error (sct=0, sc=8) 00:28:05.704 starting I/O failed 00:28:05.704 Write completed with error (sct=0, sc=8) 
00:28:05.704 starting I/O failed 00:28:05.704 Read completed with error (sct=0, sc=8) 00:28:05.704 starting I/O failed 00:28:05.704 Write completed with error (sct=0, sc=8) 00:28:05.704 starting I/O failed 00:28:05.704 Write completed with error (sct=0, sc=8) 00:28:05.704 starting I/O failed 00:28:05.704 Write completed with error (sct=0, sc=8) 00:28:05.704 starting I/O failed 00:28:05.704 Read completed with error (sct=0, sc=8) 00:28:05.704 starting I/O failed 00:28:05.704 Read completed with error (sct=0, sc=8) 00:28:05.704 starting I/O failed 00:28:05.704 Read completed with error (sct=0, sc=8) 00:28:05.704 starting I/O failed 00:28:05.704 Read completed with error (sct=0, sc=8) 00:28:05.704 starting I/O failed 00:28:05.704 Read completed with error (sct=0, sc=8) 00:28:05.704 starting I/O failed 00:28:05.704 Read completed with error (sct=0, sc=8) 00:28:05.704 starting I/O failed 00:28:05.704 Write completed with error (sct=0, sc=8) 00:28:05.704 starting I/O failed 00:28:05.704 [2024-11-28 08:26:47.827978] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:05.704 Read completed with error (sct=0, sc=8) 00:28:05.704 starting I/O failed 00:28:05.704 Write completed with error (sct=0, sc=8) 00:28:05.704 starting I/O failed 00:28:05.704 Write completed with error (sct=0, sc=8) 00:28:05.704 starting I/O failed 00:28:05.704 Write completed with error (sct=0, sc=8) 00:28:05.704 starting I/O failed 00:28:05.704 Write completed with error (sct=0, sc=8) 00:28:05.704 starting I/O failed 00:28:05.704 Read completed with error (sct=0, sc=8) 00:28:05.704 starting I/O failed 00:28:05.704 Read completed with error (sct=0, sc=8) 00:28:05.704 starting I/O failed 00:28:05.704 Write completed with error (sct=0, sc=8) 00:28:05.704 starting I/O failed 00:28:05.704 Read completed with error (sct=0, sc=8) 00:28:05.704 starting I/O failed 00:28:05.704 Write completed with 
error (sct=0, sc=8) 00:28:05.704 starting I/O failed 00:28:05.704 Read completed with error (sct=0, sc=8) 00:28:05.704 starting I/O failed 00:28:05.704 Write completed with error (sct=0, sc=8) 00:28:05.704 starting I/O failed 00:28:05.704 Read completed with error (sct=0, sc=8) 00:28:05.704 starting I/O failed 00:28:05.704 Write completed with error (sct=0, sc=8) 00:28:05.704 starting I/O failed 00:28:05.704 Read completed with error (sct=0, sc=8) 00:28:05.704 starting I/O failed 00:28:05.704 Write completed with error (sct=0, sc=8) 00:28:05.704 starting I/O failed 00:28:05.705 Read completed with error (sct=0, sc=8) 00:28:05.705 starting I/O failed 00:28:05.705 Read completed with error (sct=0, sc=8) 00:28:05.705 starting I/O failed 00:28:05.705 Write completed with error (sct=0, sc=8) 00:28:05.705 starting I/O failed 00:28:05.705 Write completed with error (sct=0, sc=8) 00:28:05.705 starting I/O failed 00:28:05.705 Write completed with error (sct=0, sc=8) 00:28:05.705 starting I/O failed 00:28:05.705 Read completed with error (sct=0, sc=8) 00:28:05.705 starting I/O failed 00:28:05.705 Read completed with error (sct=0, sc=8) 00:28:05.705 starting I/O failed 00:28:05.705 Write completed with error (sct=0, sc=8) 00:28:05.705 starting I/O failed 00:28:05.705 Write completed with error (sct=0, sc=8) 00:28:05.705 starting I/O failed 00:28:05.705 Write completed with error (sct=0, sc=8) 00:28:05.705 starting I/O failed 00:28:05.705 Write completed with error (sct=0, sc=8) 00:28:05.705 starting I/O failed 00:28:05.705 Read completed with error (sct=0, sc=8) 00:28:05.705 starting I/O failed 00:28:05.705 Read completed with error (sct=0, sc=8) 00:28:05.705 starting I/O failed 00:28:05.705 Read completed with error (sct=0, sc=8) 00:28:05.705 starting I/O failed 00:28:05.705 Read completed with error (sct=0, sc=8) 00:28:05.705 starting I/O failed 00:28:05.705 Read completed with error (sct=0, sc=8) 00:28:05.705 starting I/O failed 00:28:05.705 [2024-11-28 08:26:47.828172] 
nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:05.705 [2024-11-28 08:26:47.828359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.705 [2024-11-28 08:26:47.828387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420 00:28:05.705 qpair failed and we were unable to recover it. 00:28:05.705 [2024-11-28 08:26:47.828643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.705 [2024-11-28 08:26:47.828656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420 00:28:05.705 qpair failed and we were unable to recover it. 00:28:05.705 [2024-11-28 08:26:47.828767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.705 [2024-11-28 08:26:47.828790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:05.705 qpair failed and we were unable to recover it. 00:28:05.705 [2024-11-28 08:26:47.828962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.705 [2024-11-28 08:26:47.829000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:05.705 qpair failed and we were unable to recover it. 00:28:05.705 [2024-11-28 08:26:47.829278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.705 [2024-11-28 08:26:47.829313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:05.705 qpair failed and we were unable to recover it. 
00:28:05.705 [2024-11-28 08:26:47.829441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.705 [2024-11-28 08:26:47.829481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:05.705 qpair failed and we were unable to recover it. 00:28:05.705 [2024-11-28 08:26:47.829630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.705 [2024-11-28 08:26:47.829642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:05.705 qpair failed and we were unable to recover it. 00:28:05.705 [2024-11-28 08:26:47.829789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.705 [2024-11-28 08:26:47.829822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:05.705 qpair failed and we were unable to recover it. 00:28:05.705 [2024-11-28 08:26:47.829962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.705 [2024-11-28 08:26:47.829998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:05.705 qpair failed and we were unable to recover it. 00:28:05.705 [2024-11-28 08:26:47.830134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.705 [2024-11-28 08:26:47.830168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:05.705 qpair failed and we were unable to recover it. 
00:28:05.705 [2024-11-28 08:26:47.830305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.705 [2024-11-28 08:26:47.830338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:05.705 qpair failed and we were unable to recover it. 00:28:05.705 [2024-11-28 08:26:47.830468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.705 [2024-11-28 08:26:47.830481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:05.705 qpair failed and we were unable to recover it. 00:28:05.705 [2024-11-28 08:26:47.830678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.705 [2024-11-28 08:26:47.830710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:05.705 qpair failed and we were unable to recover it. 00:28:05.705 [2024-11-28 08:26:47.830903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.705 [2024-11-28 08:26:47.830936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:05.705 qpair failed and we were unable to recover it. 00:28:05.705 [2024-11-28 08:26:47.831149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.705 [2024-11-28 08:26:47.831184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:05.705 qpair failed and we were unable to recover it. 
00:28:05.705 [2024-11-28 08:26:47.831289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.705 [2024-11-28 08:26:47.831303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:05.705 qpair failed and we were unable to recover it. 00:28:05.705 [2024-11-28 08:26:47.831526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.705 [2024-11-28 08:26:47.831560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:05.705 qpair failed and we were unable to recover it. 00:28:05.705 [2024-11-28 08:26:47.831841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.705 [2024-11-28 08:26:47.831875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:05.705 qpair failed and we were unable to recover it. 00:28:05.705 [2024-11-28 08:26:47.832019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.705 [2024-11-28 08:26:47.832032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:05.705 qpair failed and we were unable to recover it. 00:28:05.705 [2024-11-28 08:26:47.832181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.705 [2024-11-28 08:26:47.832193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:05.705 qpair failed and we were unable to recover it. 
00:28:05.705 [2024-11-28 08:26:47.832428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.705 [2024-11-28 08:26:47.832461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:05.705 qpair failed and we were unable to recover it. 00:28:05.705 [2024-11-28 08:26:47.832662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.705 [2024-11-28 08:26:47.832696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:05.705 qpair failed and we were unable to recover it. 00:28:05.705 [2024-11-28 08:26:47.832835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.705 [2024-11-28 08:26:47.832868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:05.705 qpair failed and we were unable to recover it. 00:28:05.705 [2024-11-28 08:26:47.832985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.705 [2024-11-28 08:26:47.832997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:05.705 qpair failed and we were unable to recover it. 00:28:05.705 [2024-11-28 08:26:47.833107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.705 [2024-11-28 08:26:47.833119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:05.705 qpair failed and we were unable to recover it. 
00:28:05.705 [2024-11-28 08:26:47.833331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.705 [2024-11-28 08:26:47.833363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:05.705 qpair failed and we were unable to recover it. 00:28:05.705 [2024-11-28 08:26:47.833556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.705 [2024-11-28 08:26:47.833590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:05.705 qpair failed and we were unable to recover it. 00:28:05.705 [2024-11-28 08:26:47.833807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.705 [2024-11-28 08:26:47.833841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:05.705 qpair failed and we were unable to recover it. 00:28:05.705 [2024-11-28 08:26:47.834124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.705 [2024-11-28 08:26:47.834136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:05.705 qpair failed and we were unable to recover it. 00:28:05.705 [2024-11-28 08:26:47.834231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.705 [2024-11-28 08:26:47.834242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:05.705 qpair failed and we were unable to recover it. 
00:28:05.705 [2024-11-28 08:26:47.834335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.705 [2024-11-28 08:26:47.834346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:05.705 qpair failed and we were unable to recover it. 00:28:05.705 [2024-11-28 08:26:47.834552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.705 [2024-11-28 08:26:47.834565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:05.705 qpair failed and we were unable to recover it. 00:28:05.706 [2024-11-28 08:26:47.834652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.706 [2024-11-28 08:26:47.834662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:05.706 qpair failed and we were unable to recover it. 00:28:05.706 [2024-11-28 08:26:47.834860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.706 [2024-11-28 08:26:47.834893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:05.706 qpair failed and we were unable to recover it. 00:28:05.706 [2024-11-28 08:26:47.835092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.706 [2024-11-28 08:26:47.835128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:05.706 qpair failed and we were unable to recover it. 
00:28:05.706 [2024-11-28 08:26:47.835313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.706 [2024-11-28 08:26:47.835326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:05.706 qpair failed and we were unable to recover it. 00:28:05.706 [2024-11-28 08:26:47.835405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.706 [2024-11-28 08:26:47.835416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:05.706 qpair failed and we were unable to recover it. 00:28:05.706 [2024-11-28 08:26:47.835494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.706 [2024-11-28 08:26:47.835506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:05.706 qpair failed and we were unable to recover it. 00:28:05.706 [2024-11-28 08:26:47.835635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.706 [2024-11-28 08:26:47.835647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:05.706 qpair failed and we were unable to recover it. 00:28:05.706 [2024-11-28 08:26:47.835796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.706 [2024-11-28 08:26:47.835808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:05.706 qpair failed and we were unable to recover it. 
00:28:05.706 [2024-11-28 08:26:47.835891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.706 [2024-11-28 08:26:47.835923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:05.706 qpair failed and we were unable to recover it. 00:28:05.706 [2024-11-28 08:26:47.836075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.706 [2024-11-28 08:26:47.836110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:05.706 qpair failed and we were unable to recover it. 00:28:05.706 [2024-11-28 08:26:47.836323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.706 [2024-11-28 08:26:47.836384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:05.706 qpair failed and we were unable to recover it. 00:28:05.706 [2024-11-28 08:26:47.836544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.706 [2024-11-28 08:26:47.836565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:05.706 qpair failed and we were unable to recover it. 00:28:05.706 [2024-11-28 08:26:47.836741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.706 [2024-11-28 08:26:47.836776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:05.706 qpair failed and we were unable to recover it. 
00:28:05.706 [2024-11-28 08:26:47.836963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.706 [2024-11-28 08:26:47.836998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:05.706 qpair failed and we were unable to recover it. 00:28:05.706 [2024-11-28 08:26:47.837203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.706 [2024-11-28 08:26:47.837236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:05.706 qpair failed and we were unable to recover it. 00:28:05.706 [2024-11-28 08:26:47.837483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.706 [2024-11-28 08:26:47.837498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:05.706 qpair failed and we were unable to recover it. 00:28:05.706 [2024-11-28 08:26:47.837579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.706 [2024-11-28 08:26:47.837594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:05.706 qpair failed and we were unable to recover it. 00:28:05.706 [2024-11-28 08:26:47.837764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.706 [2024-11-28 08:26:47.837799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:05.706 qpair failed and we were unable to recover it. 
00:28:05.706 [2024-11-28 08:26:47.838005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.706 [2024-11-28 08:26:47.838040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:05.706 qpair failed and we were unable to recover it. 00:28:05.706 [2024-11-28 08:26:47.838171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.706 [2024-11-28 08:26:47.838204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:05.706 qpair failed and we were unable to recover it. 00:28:05.706 [2024-11-28 08:26:47.838331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.706 [2024-11-28 08:26:47.838364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:05.706 qpair failed and we were unable to recover it. 00:28:05.706 [2024-11-28 08:26:47.838550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.706 [2024-11-28 08:26:47.838583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:05.706 qpair failed and we were unable to recover it. 00:28:05.706 [2024-11-28 08:26:47.838835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.706 [2024-11-28 08:26:47.838868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:05.706 qpair failed and we were unable to recover it. 
00:28:05.706 [2024-11-28 08:26:47.839050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.706 [2024-11-28 08:26:47.839084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:05.706 qpair failed and we were unable to recover it. 00:28:05.706 [2024-11-28 08:26:47.839220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.706 [2024-11-28 08:26:47.839253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:05.706 qpair failed and we were unable to recover it. 00:28:05.706 [2024-11-28 08:26:47.839499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.706 [2024-11-28 08:26:47.839515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:05.706 qpair failed and we were unable to recover it. 00:28:05.706 [2024-11-28 08:26:47.839607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.706 [2024-11-28 08:26:47.839621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:05.706 qpair failed and we were unable to recover it. 00:28:05.706 [2024-11-28 08:26:47.839728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.706 [2024-11-28 08:26:47.839742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:05.706 qpair failed and we were unable to recover it. 
00:28:05.706 [2024-11-28 08:26:47.839830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.706 [2024-11-28 08:26:47.839845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:05.706 qpair failed and we were unable to recover it. 00:28:05.706 [2024-11-28 08:26:47.839991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.706 [2024-11-28 08:26:47.840007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:05.706 qpair failed and we were unable to recover it. 00:28:05.706 [2024-11-28 08:26:47.840167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.706 [2024-11-28 08:26:47.840182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:05.706 qpair failed and we were unable to recover it. 00:28:05.706 [2024-11-28 08:26:47.840270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.706 [2024-11-28 08:26:47.840285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:05.706 qpair failed and we were unable to recover it. 00:28:05.706 [2024-11-28 08:26:47.840362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.706 [2024-11-28 08:26:47.840376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:05.706 qpair failed and we were unable to recover it. 
00:28:05.708 [2024-11-28 08:26:47.849919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.708 [2024-11-28 08:26:47.849933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:05.708 qpair failed and we were unable to recover it.
00:28:05.709 [2024-11-28 08:26:47.862753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.709 [2024-11-28 08:26:47.862786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:05.709 qpair failed and we were unable to recover it. 00:28:05.709 [2024-11-28 08:26:47.862983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.709 [2024-11-28 08:26:47.862995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:05.709 qpair failed and we were unable to recover it. 00:28:05.709 [2024-11-28 08:26:47.863143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.709 [2024-11-28 08:26:47.863175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:05.709 qpair failed and we were unable to recover it. 00:28:05.710 [2024-11-28 08:26:47.863450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.710 [2024-11-28 08:26:47.863483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:05.710 qpair failed and we were unable to recover it. 00:28:05.710 [2024-11-28 08:26:47.863672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.710 [2024-11-28 08:26:47.863706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:05.710 qpair failed and we were unable to recover it. 
00:28:05.710 [2024-11-28 08:26:47.863977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.710 [2024-11-28 08:26:47.864012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:05.710 qpair failed and we were unable to recover it. 00:28:05.710 [2024-11-28 08:26:47.864114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.710 [2024-11-28 08:26:47.864125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:05.710 qpair failed and we were unable to recover it. 00:28:05.710 [2024-11-28 08:26:47.864291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.710 [2024-11-28 08:26:47.864323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:05.710 qpair failed and we were unable to recover it. 00:28:05.710 [2024-11-28 08:26:47.864447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.710 [2024-11-28 08:26:47.864480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:05.710 qpair failed and we were unable to recover it. 00:28:05.710 [2024-11-28 08:26:47.864722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.710 [2024-11-28 08:26:47.864756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:05.710 qpair failed and we were unable to recover it. 
00:28:05.710 [2024-11-28 08:26:47.864890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.710 [2024-11-28 08:26:47.864923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:05.710 qpair failed and we were unable to recover it. 00:28:05.710 [2024-11-28 08:26:47.865065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.710 [2024-11-28 08:26:47.865099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:05.710 qpair failed and we were unable to recover it. 00:28:05.710 [2024-11-28 08:26:47.865297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.710 [2024-11-28 08:26:47.865331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:05.710 qpair failed and we were unable to recover it. 00:28:05.710 [2024-11-28 08:26:47.865541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.710 [2024-11-28 08:26:47.865554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:05.710 qpair failed and we were unable to recover it. 00:28:05.710 [2024-11-28 08:26:47.865641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.710 [2024-11-28 08:26:47.865678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:05.710 qpair failed and we were unable to recover it. 
00:28:05.710 [2024-11-28 08:26:47.865812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.710 [2024-11-28 08:26:47.865851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:05.710 qpair failed and we were unable to recover it. 00:28:05.710 [2024-11-28 08:26:47.866055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.710 [2024-11-28 08:26:47.866091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:05.710 qpair failed and we were unable to recover it. 00:28:05.710 [2024-11-28 08:26:47.866286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.710 [2024-11-28 08:26:47.866298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:05.710 qpair failed and we were unable to recover it. 00:28:05.710 [2024-11-28 08:26:47.866528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.710 [2024-11-28 08:26:47.866561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:05.710 qpair failed and we were unable to recover it. 00:28:05.710 [2024-11-28 08:26:47.866694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.710 [2024-11-28 08:26:47.866728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:05.710 qpair failed and we were unable to recover it. 
00:28:05.710 [2024-11-28 08:26:47.866923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.710 [2024-11-28 08:26:47.866968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:05.710 qpair failed and we were unable to recover it. 00:28:05.710 [2024-11-28 08:26:47.867087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.710 [2024-11-28 08:26:47.867121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:05.710 qpair failed and we were unable to recover it. 00:28:05.710 [2024-11-28 08:26:47.867307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.710 [2024-11-28 08:26:47.867342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:05.710 qpair failed and we were unable to recover it. 00:28:05.710 [2024-11-28 08:26:47.867520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.710 [2024-11-28 08:26:47.867552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:05.710 qpair failed and we were unable to recover it. 00:28:05.710 [2024-11-28 08:26:47.867751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.710 [2024-11-28 08:26:47.867784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:05.710 qpair failed and we were unable to recover it. 
00:28:05.710 [2024-11-28 08:26:47.867970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.710 [2024-11-28 08:26:47.868006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:05.710 qpair failed and we were unable to recover it. 00:28:05.710 [2024-11-28 08:26:47.868297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.710 [2024-11-28 08:26:47.868330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:05.710 qpair failed and we were unable to recover it. 00:28:05.710 [2024-11-28 08:26:47.868575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.710 [2024-11-28 08:26:47.868609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:05.710 qpair failed and we were unable to recover it. 00:28:05.710 [2024-11-28 08:26:47.868785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.710 [2024-11-28 08:26:47.868818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:05.710 qpair failed and we were unable to recover it. 00:28:05.710 [2024-11-28 08:26:47.869105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.710 [2024-11-28 08:26:47.869141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:05.710 qpair failed and we were unable to recover it. 
00:28:05.710 [2024-11-28 08:26:47.869331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.710 [2024-11-28 08:26:47.869365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:05.710 qpair failed and we were unable to recover it. 00:28:05.710 [2024-11-28 08:26:47.869572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.710 [2024-11-28 08:26:47.869584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:05.710 qpair failed and we were unable to recover it. 00:28:05.710 [2024-11-28 08:26:47.869764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.710 [2024-11-28 08:26:47.869796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:05.710 qpair failed and we were unable to recover it. 00:28:05.710 [2024-11-28 08:26:47.869929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.710 [2024-11-28 08:26:47.870003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:05.710 qpair failed and we were unable to recover it. 00:28:05.710 [2024-11-28 08:26:47.870223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.710 [2024-11-28 08:26:47.870257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:05.710 qpair failed and we were unable to recover it. 
00:28:05.710 [2024-11-28 08:26:47.870494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.710 [2024-11-28 08:26:47.870505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:05.710 qpair failed and we were unable to recover it. 00:28:05.710 [2024-11-28 08:26:47.870652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.710 [2024-11-28 08:26:47.870664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:05.710 qpair failed and we were unable to recover it. 00:28:05.710 [2024-11-28 08:26:47.870743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.710 [2024-11-28 08:26:47.870753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:05.710 qpair failed and we were unable to recover it. 00:28:05.710 [2024-11-28 08:26:47.870962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.710 [2024-11-28 08:26:47.870997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:05.710 qpair failed and we were unable to recover it. 00:28:05.710 [2024-11-28 08:26:47.871182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.710 [2024-11-28 08:26:47.871215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:05.710 qpair failed and we were unable to recover it. 
00:28:05.710 [2024-11-28 08:26:47.871388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.710 [2024-11-28 08:26:47.871421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:05.711 qpair failed and we were unable to recover it. 00:28:05.711 [2024-11-28 08:26:47.871548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.711 [2024-11-28 08:26:47.871582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:05.711 qpair failed and we were unable to recover it. 00:28:05.711 [2024-11-28 08:26:47.871697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.711 [2024-11-28 08:26:47.871729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:05.711 qpair failed and we were unable to recover it. 00:28:05.711 [2024-11-28 08:26:47.871946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.711 [2024-11-28 08:26:47.871992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:05.711 qpair failed and we were unable to recover it. 00:28:05.711 [2024-11-28 08:26:47.872181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.711 [2024-11-28 08:26:47.872213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:05.711 qpair failed and we were unable to recover it. 
00:28:05.711 [2024-11-28 08:26:47.872458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.711 [2024-11-28 08:26:47.872491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:05.711 qpair failed and we were unable to recover it. 00:28:05.711 [2024-11-28 08:26:47.872618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.711 [2024-11-28 08:26:47.872651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:05.711 qpair failed and we were unable to recover it. 00:28:05.711 [2024-11-28 08:26:47.872777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.711 [2024-11-28 08:26:47.872810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:05.711 qpair failed and we were unable to recover it. 00:28:05.711 [2024-11-28 08:26:47.873002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.711 [2024-11-28 08:26:47.873048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:05.711 qpair failed and we were unable to recover it. 00:28:05.711 [2024-11-28 08:26:47.873276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.711 [2024-11-28 08:26:47.873288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:05.711 qpair failed and we were unable to recover it. 
00:28:05.711 [2024-11-28 08:26:47.873437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.711 [2024-11-28 08:26:47.873449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:05.711 qpair failed and we were unable to recover it. 00:28:05.711 [2024-11-28 08:26:47.873621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.711 [2024-11-28 08:26:47.873655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:05.711 qpair failed and we were unable to recover it. 00:28:05.711 [2024-11-28 08:26:47.873922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.711 [2024-11-28 08:26:47.873986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:05.711 qpair failed and we were unable to recover it. 00:28:05.711 [2024-11-28 08:26:47.874172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.711 [2024-11-28 08:26:47.874206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:05.711 qpair failed and we were unable to recover it. 00:28:05.711 [2024-11-28 08:26:47.874385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.711 [2024-11-28 08:26:47.874418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:05.711 qpair failed and we were unable to recover it. 
00:28:05.711 [2024-11-28 08:26:47.874607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.711 [2024-11-28 08:26:47.874646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:05.711 qpair failed and we were unable to recover it. 00:28:05.711 [2024-11-28 08:26:47.874894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.711 [2024-11-28 08:26:47.874927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:05.711 qpair failed and we were unable to recover it. 00:28:05.711 [2024-11-28 08:26:47.875081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.711 [2024-11-28 08:26:47.875115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:05.711 qpair failed and we were unable to recover it. 00:28:05.711 [2024-11-28 08:26:47.875387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.711 [2024-11-28 08:26:47.875421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:05.711 qpair failed and we were unable to recover it. 00:28:05.711 [2024-11-28 08:26:47.875598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.711 [2024-11-28 08:26:47.875632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:05.711 qpair failed and we were unable to recover it. 
00:28:05.711 [2024-11-28 08:26:47.875770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.711 [2024-11-28 08:26:47.875803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:05.711 qpair failed and we were unable to recover it. 00:28:05.711 [2024-11-28 08:26:47.876075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.711 [2024-11-28 08:26:47.876116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:05.711 qpair failed and we were unable to recover it. 00:28:05.711 [2024-11-28 08:26:47.876260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.711 [2024-11-28 08:26:47.876272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:05.711 qpair failed and we were unable to recover it. 00:28:05.711 [2024-11-28 08:26:47.876477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.711 [2024-11-28 08:26:47.876510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:05.711 qpair failed and we were unable to recover it. 00:28:05.711 [2024-11-28 08:26:47.876658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.711 [2024-11-28 08:26:47.876692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:05.711 qpair failed and we were unable to recover it. 
00:28:05.711 [2024-11-28 08:26:47.876892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.711 [2024-11-28 08:26:47.876925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:05.711 qpair failed and we were unable to recover it. 00:28:05.711 [2024-11-28 08:26:47.877057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.711 [2024-11-28 08:26:47.877092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:05.711 qpair failed and we were unable to recover it. 00:28:05.711 [2024-11-28 08:26:47.877368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.711 [2024-11-28 08:26:47.877402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:05.711 qpair failed and we were unable to recover it. 00:28:05.711 [2024-11-28 08:26:47.877663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.711 [2024-11-28 08:26:47.877697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:05.711 qpair failed and we were unable to recover it. 00:28:05.711 [2024-11-28 08:26:47.877972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.711 [2024-11-28 08:26:47.878008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:05.711 qpair failed and we were unable to recover it. 
00:28:05.711 [2024-11-28 08:26:47.878279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.711 [2024-11-28 08:26:47.878313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:05.711 qpair failed and we were unable to recover it. 00:28:05.711 [2024-11-28 08:26:47.878496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.711 [2024-11-28 08:26:47.878530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:05.711 qpair failed and we were unable to recover it. 00:28:05.711 [2024-11-28 08:26:47.878722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.711 [2024-11-28 08:26:47.878756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:05.711 qpair failed and we were unable to recover it. 00:28:05.711 [2024-11-28 08:26:47.878967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.711 [2024-11-28 08:26:47.879002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:05.711 qpair failed and we were unable to recover it. 00:28:05.711 [2024-11-28 08:26:47.879213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.711 [2024-11-28 08:26:47.879246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:05.711 qpair failed and we were unable to recover it. 
00:28:05.711 [2024-11-28 08:26:47.879453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.711 [2024-11-28 08:26:47.879486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:05.711 qpair failed and we were unable to recover it.
[... the same connect()/qpair-failure triple for tqpair=0x7f6c34000b90 repeats, with timestamps 08:26:47.879665 through 08:26:47.902027, all with errno = 111 against 10.0.0.2:4420 ...]
00:28:05.715 [2024-11-28 08:26:47.902257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.715 [2024-11-28 08:26:47.902291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:05.715 qpair failed and we were unable to recover it.
00:28:05.715 [2024-11-28 08:26:47.902476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.715 [2024-11-28 08:26:47.902510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:05.715 qpair failed and we were unable to recover it. 00:28:05.715 [2024-11-28 08:26:47.902696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.715 [2024-11-28 08:26:47.902730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:05.715 qpair failed and we were unable to recover it. 00:28:05.715 [2024-11-28 08:26:47.902926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.715 [2024-11-28 08:26:47.902968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:05.715 qpair failed and we were unable to recover it. 00:28:05.715 [2024-11-28 08:26:47.903214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.715 [2024-11-28 08:26:47.903247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:05.715 qpair failed and we were unable to recover it. 00:28:05.715 [2024-11-28 08:26:47.903433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.715 [2024-11-28 08:26:47.903466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:05.715 qpair failed and we were unable to recover it. 
00:28:05.715 [2024-11-28 08:26:47.903704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.715 [2024-11-28 08:26:47.903738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:05.715 qpair failed and we were unable to recover it. 00:28:05.715 [2024-11-28 08:26:47.903942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.715 [2024-11-28 08:26:47.903985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:05.715 qpair failed and we were unable to recover it. 00:28:05.715 [2024-11-28 08:26:47.904252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.715 [2024-11-28 08:26:47.904285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:05.715 qpair failed and we were unable to recover it. 00:28:05.715 [2024-11-28 08:26:47.904413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.715 [2024-11-28 08:26:47.904446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:05.715 qpair failed and we were unable to recover it. 00:28:05.715 [2024-11-28 08:26:47.904659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.715 [2024-11-28 08:26:47.904692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:05.715 qpair failed and we were unable to recover it. 
00:28:05.715 [2024-11-28 08:26:47.904968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.715 [2024-11-28 08:26:47.905003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:05.715 qpair failed and we were unable to recover it. 00:28:05.715 [2024-11-28 08:26:47.905202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.715 [2024-11-28 08:26:47.905214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:05.715 qpair failed and we were unable to recover it. 00:28:05.715 [2024-11-28 08:26:47.905313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.715 [2024-11-28 08:26:47.905323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:05.715 qpair failed and we were unable to recover it. 00:28:05.715 [2024-11-28 08:26:47.905509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.715 [2024-11-28 08:26:47.905542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:05.715 qpair failed and we were unable to recover it. 00:28:05.715 [2024-11-28 08:26:47.905742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.715 [2024-11-28 08:26:47.905776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:05.715 qpair failed and we were unable to recover it. 
00:28:05.715 [2024-11-28 08:26:47.905982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.715 [2024-11-28 08:26:47.906018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:05.715 qpair failed and we were unable to recover it. 00:28:05.715 [2024-11-28 08:26:47.906284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.715 [2024-11-28 08:26:47.906296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:05.715 qpair failed and we were unable to recover it. 00:28:05.715 [2024-11-28 08:26:47.906511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.715 [2024-11-28 08:26:47.906545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:05.715 qpair failed and we were unable to recover it. 00:28:05.715 [2024-11-28 08:26:47.906682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.715 [2024-11-28 08:26:47.906716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:05.715 qpair failed and we were unable to recover it. 00:28:05.715 [2024-11-28 08:26:47.906911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.715 [2024-11-28 08:26:47.906945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:05.715 qpair failed and we were unable to recover it. 
00:28:05.715 [2024-11-28 08:26:47.907140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.715 [2024-11-28 08:26:47.907174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:05.715 qpair failed and we were unable to recover it. 00:28:05.715 [2024-11-28 08:26:47.907297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.715 [2024-11-28 08:26:47.907331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:05.715 qpair failed and we were unable to recover it. 00:28:05.715 [2024-11-28 08:26:47.907539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.715 [2024-11-28 08:26:47.907572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:05.715 qpair failed and we were unable to recover it. 00:28:05.715 [2024-11-28 08:26:47.907761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.715 [2024-11-28 08:26:47.907794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:05.715 qpair failed and we were unable to recover it. 00:28:05.715 [2024-11-28 08:26:47.908001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.715 [2024-11-28 08:26:47.908037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:05.715 qpair failed and we were unable to recover it. 
00:28:05.715 [2024-11-28 08:26:47.908311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.715 [2024-11-28 08:26:47.908325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:05.715 qpair failed and we were unable to recover it. 00:28:05.715 [2024-11-28 08:26:47.908422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.715 [2024-11-28 08:26:47.908432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:05.715 qpair failed and we were unable to recover it. 00:28:05.715 [2024-11-28 08:26:47.908522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.715 [2024-11-28 08:26:47.908534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:05.715 qpair failed and we were unable to recover it. 00:28:05.715 [2024-11-28 08:26:47.908629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.715 [2024-11-28 08:26:47.908662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:05.715 qpair failed and we were unable to recover it. 00:28:05.715 [2024-11-28 08:26:47.908836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.715 [2024-11-28 08:26:47.908870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:05.715 qpair failed and we were unable to recover it. 
00:28:05.715 [2024-11-28 08:26:47.909140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.715 [2024-11-28 08:26:47.909174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:05.715 qpair failed and we were unable to recover it. 00:28:05.715 [2024-11-28 08:26:47.909338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.715 [2024-11-28 08:26:47.909350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:05.715 qpair failed and we were unable to recover it. 00:28:05.715 [2024-11-28 08:26:47.909431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.715 [2024-11-28 08:26:47.909442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:05.715 qpair failed and we were unable to recover it. 00:28:05.715 [2024-11-28 08:26:47.909531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.715 [2024-11-28 08:26:47.909542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:05.715 qpair failed and we were unable to recover it. 00:28:05.715 [2024-11-28 08:26:47.909685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.715 [2024-11-28 08:26:47.909697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:05.715 qpair failed and we were unable to recover it. 
00:28:05.715 [2024-11-28 08:26:47.909782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.715 [2024-11-28 08:26:47.909792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:05.716 qpair failed and we were unable to recover it. 00:28:05.716 [2024-11-28 08:26:47.909936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.716 [2024-11-28 08:26:47.909998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:05.716 qpair failed and we were unable to recover it. 00:28:05.716 [2024-11-28 08:26:47.910125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.716 [2024-11-28 08:26:47.910160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:05.716 qpair failed and we were unable to recover it. 00:28:05.716 [2024-11-28 08:26:47.910341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.716 [2024-11-28 08:26:47.910373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:05.716 qpair failed and we were unable to recover it. 00:28:05.716 [2024-11-28 08:26:47.910509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.716 [2024-11-28 08:26:47.910551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:05.716 qpair failed and we were unable to recover it. 
00:28:05.716 [2024-11-28 08:26:47.910644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.716 [2024-11-28 08:26:47.910656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:05.716 qpair failed and we were unable to recover it. 00:28:05.716 [2024-11-28 08:26:47.910746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.716 [2024-11-28 08:26:47.910757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:05.716 qpair failed and we were unable to recover it. 00:28:05.716 [2024-11-28 08:26:47.910841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.716 [2024-11-28 08:26:47.910851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:05.716 qpair failed and we were unable to recover it. 00:28:05.716 [2024-11-28 08:26:47.910932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.716 [2024-11-28 08:26:47.910944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:05.716 qpair failed and we were unable to recover it. 00:28:05.716 [2024-11-28 08:26:47.911097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.716 [2024-11-28 08:26:47.911109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:05.716 qpair failed and we were unable to recover it. 
00:28:05.716 [2024-11-28 08:26:47.911258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.716 [2024-11-28 08:26:47.911270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:05.716 qpair failed and we were unable to recover it. 00:28:05.716 [2024-11-28 08:26:47.911415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.716 [2024-11-28 08:26:47.911427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:05.716 qpair failed and we were unable to recover it. 00:28:05.716 [2024-11-28 08:26:47.911503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.716 [2024-11-28 08:26:47.911514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:05.716 qpair failed and we were unable to recover it. 00:28:05.716 [2024-11-28 08:26:47.911592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.716 [2024-11-28 08:26:47.911602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:05.716 qpair failed and we were unable to recover it. 00:28:05.716 [2024-11-28 08:26:47.911739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.716 [2024-11-28 08:26:47.911751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:05.716 qpair failed and we were unable to recover it. 
00:28:05.716 [2024-11-28 08:26:47.911834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.716 [2024-11-28 08:26:47.911844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:05.716 qpair failed and we were unable to recover it. 00:28:05.716 [2024-11-28 08:26:47.911913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.716 [2024-11-28 08:26:47.911923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:05.716 qpair failed and we were unable to recover it. 00:28:05.716 [2024-11-28 08:26:47.912059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.716 [2024-11-28 08:26:47.912096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:05.716 qpair failed and we were unable to recover it. 00:28:05.716 [2024-11-28 08:26:47.912208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.716 [2024-11-28 08:26:47.912247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:05.716 qpair failed and we were unable to recover it. 00:28:05.716 [2024-11-28 08:26:47.912375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.716 [2024-11-28 08:26:47.912408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:05.716 qpair failed and we were unable to recover it. 
00:28:05.716 [2024-11-28 08:26:47.912528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.716 [2024-11-28 08:26:47.912559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:05.716 qpair failed and we were unable to recover it. 00:28:05.716 [2024-11-28 08:26:47.912702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.716 [2024-11-28 08:26:47.912736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:05.716 qpair failed and we were unable to recover it. 00:28:05.716 [2024-11-28 08:26:47.912845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.716 [2024-11-28 08:26:47.912881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:05.716 qpair failed and we were unable to recover it. 00:28:05.716 [2024-11-28 08:26:47.913111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.716 [2024-11-28 08:26:47.913145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:05.716 qpair failed and we were unable to recover it. 00:28:05.716 [2024-11-28 08:26:47.913340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.716 [2024-11-28 08:26:47.913372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:05.716 qpair failed and we were unable to recover it. 
00:28:05.716 [2024-11-28 08:26:47.913490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.716 [2024-11-28 08:26:47.913523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:05.716 qpair failed and we were unable to recover it. 00:28:05.716 [2024-11-28 08:26:47.913782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.716 [2024-11-28 08:26:47.913798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:05.716 qpair failed and we were unable to recover it. 00:28:05.716 [2024-11-28 08:26:47.913973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.716 [2024-11-28 08:26:47.913990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:05.716 qpair failed and we were unable to recover it. 00:28:05.716 [2024-11-28 08:26:47.914143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.716 [2024-11-28 08:26:47.914159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:05.716 qpair failed and we were unable to recover it. 00:28:05.716 [2024-11-28 08:26:47.914308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.716 [2024-11-28 08:26:47.914323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:05.716 qpair failed and we were unable to recover it. 
00:28:05.716 [2024-11-28 08:26:47.914475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.716 [2024-11-28 08:26:47.914507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:05.716 qpair failed and we were unable to recover it. 00:28:05.716 [2024-11-28 08:26:47.914697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.716 [2024-11-28 08:26:47.914729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:05.716 qpair failed and we were unable to recover it. 00:28:05.716 [2024-11-28 08:26:47.914923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.716 [2024-11-28 08:26:47.914965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:05.716 qpair failed and we were unable to recover it. 00:28:05.716 [2024-11-28 08:26:47.915163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.716 [2024-11-28 08:26:47.915195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:05.716 qpair failed and we were unable to recover it. 00:28:05.716 [2024-11-28 08:26:47.915439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.716 [2024-11-28 08:26:47.915471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:05.716 qpair failed and we were unable to recover it. 
00:28:05.716 [2024-11-28 08:26:47.915594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.716 [2024-11-28 08:26:47.915626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:05.716 qpair failed and we were unable to recover it. 00:28:05.716 [2024-11-28 08:26:47.915764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.716 [2024-11-28 08:26:47.915797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:05.716 qpair failed and we were unable to recover it. 00:28:05.716 [2024-11-28 08:26:47.916000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.716 [2024-11-28 08:26:47.916032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:05.716 qpair failed and we were unable to recover it. 00:28:05.716 [2024-11-28 08:26:47.916282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.716 [2024-11-28 08:26:47.916314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:05.716 qpair failed and we were unable to recover it. 00:28:05.717 [2024-11-28 08:26:47.916492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.717 [2024-11-28 08:26:47.916525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:05.717 qpair failed and we were unable to recover it. 
00:28:05.717 [2024-11-28 08:26:47.916713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.717 [2024-11-28 08:26:47.916729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:05.717 qpair failed and we were unable to recover it. 00:28:05.717 [2024-11-28 08:26:47.916887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.717 [2024-11-28 08:26:47.916920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:05.717 qpair failed and we were unable to recover it. 00:28:05.717 [2024-11-28 08:26:47.917120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.717 [2024-11-28 08:26:47.917154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:05.717 qpair failed and we were unable to recover it. 00:28:05.717 [2024-11-28 08:26:47.917347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.717 [2024-11-28 08:26:47.917380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:05.717 qpair failed and we were unable to recover it. 00:28:05.717 [2024-11-28 08:26:47.917560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.717 [2024-11-28 08:26:47.917598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:05.717 qpair failed and we were unable to recover it. 
00:28:05.717 [2024-11-28 08:26:47.917783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.717 [2024-11-28 08:26:47.917816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:05.717 qpair failed and we were unable to recover it. 00:28:05.717 [2024-11-28 08:26:47.917995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.717 [2024-11-28 08:26:47.918029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:05.717 qpair failed and we were unable to recover it. 00:28:05.717 [2024-11-28 08:26:47.918325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.717 [2024-11-28 08:26:47.918358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:05.717 qpair failed and we were unable to recover it. 00:28:05.717 [2024-11-28 08:26:47.918579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.717 [2024-11-28 08:26:47.918611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:05.717 qpair failed and we were unable to recover it. 00:28:05.717 [2024-11-28 08:26:47.918733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.717 [2024-11-28 08:26:47.918766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:05.717 qpair failed and we were unable to recover it. 
00:28:05.717 [2024-11-28 08:26:47.919028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.717 [2024-11-28 08:26:47.919061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:05.717 qpair failed and we were unable to recover it. 00:28:05.717 [2024-11-28 08:26:47.919250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.717 [2024-11-28 08:26:47.919282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:05.717 qpair failed and we were unable to recover it. 00:28:05.717 [2024-11-28 08:26:47.919423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.717 [2024-11-28 08:26:47.919454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:05.717 qpair failed and we were unable to recover it. 00:28:05.717 [2024-11-28 08:26:47.919643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.717 [2024-11-28 08:26:47.919675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:05.717 qpair failed and we were unable to recover it. 00:28:05.717 [2024-11-28 08:26:47.919870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.717 [2024-11-28 08:26:47.919902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:05.717 qpair failed and we were unable to recover it. 
00:28:05.717 [2024-11-28 08:26:47.920166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.717 [2024-11-28 08:26:47.920199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:05.717 qpair failed and we were unable to recover it. 00:28:05.717 [2024-11-28 08:26:47.920328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.717 [2024-11-28 08:26:47.920361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:05.717 qpair failed and we were unable to recover it. 00:28:05.717 [2024-11-28 08:26:47.920467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.717 [2024-11-28 08:26:47.920482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:05.717 qpair failed and we were unable to recover it. 00:28:05.717 [2024-11-28 08:26:47.920632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.717 [2024-11-28 08:26:47.920668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:05.717 qpair failed and we were unable to recover it. 00:28:05.717 [2024-11-28 08:26:47.920794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.717 [2024-11-28 08:26:47.920828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:05.717 qpair failed and we were unable to recover it. 
00:28:05.717 [2024-11-28 08:26:47.920940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.717 [2024-11-28 08:26:47.920981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:05.717 qpair failed and we were unable to recover it. 00:28:05.717 [2024-11-28 08:26:47.921173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.717 [2024-11-28 08:26:47.921206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:05.717 qpair failed and we were unable to recover it. 00:28:05.717 [2024-11-28 08:26:47.921440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.717 [2024-11-28 08:26:47.921457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:05.717 qpair failed and we were unable to recover it. 00:28:05.717 [2024-11-28 08:26:47.921622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.717 [2024-11-28 08:26:47.921638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:05.717 qpair failed and we were unable to recover it. 00:28:05.717 [2024-11-28 08:26:47.921797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.717 [2024-11-28 08:26:47.921829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:05.717 qpair failed and we were unable to recover it. 
00:28:05.717 [2024-11-28 08:26:47.922011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.717 [2024-11-28 08:26:47.922045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:05.717 qpair failed and we were unable to recover it. 00:28:05.717 [2024-11-28 08:26:47.922223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.717 [2024-11-28 08:26:47.922255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:05.717 qpair failed and we were unable to recover it. 00:28:05.717 [2024-11-28 08:26:47.922457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.717 [2024-11-28 08:26:47.922474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:05.717 qpair failed and we were unable to recover it. 00:28:05.717 [2024-11-28 08:26:47.922629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.717 [2024-11-28 08:26:47.922661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:05.717 qpair failed and we were unable to recover it. 00:28:05.717 [2024-11-28 08:26:47.922935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.717 [2024-11-28 08:26:47.922977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:05.717 qpair failed and we were unable to recover it. 
00:28:05.717 [2024-11-28 08:26:47.923250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.717 [2024-11-28 08:26:47.923283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:05.717 qpair failed and we were unable to recover it. 00:28:05.717 [2024-11-28 08:26:47.923396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.717 [2024-11-28 08:26:47.923441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:05.717 qpair failed and we were unable to recover it. 00:28:05.717 [2024-11-28 08:26:47.923704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.717 [2024-11-28 08:26:47.923736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:05.717 qpair failed and we were unable to recover it. 00:28:05.718 [2024-11-28 08:26:47.923981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.718 [2024-11-28 08:26:47.924015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:05.718 qpair failed and we were unable to recover it. 00:28:05.718 [2024-11-28 08:26:47.924232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.718 [2024-11-28 08:26:47.924264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:05.718 qpair failed and we were unable to recover it. 
00:28:05.718 [2024-11-28 08:26:47.924405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.718 [2024-11-28 08:26:47.924438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:05.718 qpair failed and we were unable to recover it. 00:28:05.718 [2024-11-28 08:26:47.924637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.718 [2024-11-28 08:26:47.924669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:05.718 qpair failed and we were unable to recover it. 00:28:05.718 [2024-11-28 08:26:47.924926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.718 [2024-11-28 08:26:47.924967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:05.718 qpair failed and we were unable to recover it. 00:28:05.718 [2024-11-28 08:26:47.925236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.718 [2024-11-28 08:26:47.925268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:05.718 qpair failed and we were unable to recover it. 00:28:05.718 [2024-11-28 08:26:47.925442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.718 [2024-11-28 08:26:47.925475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:05.718 qpair failed and we were unable to recover it. 
00:28:05.718 [2024-11-28 08:26:47.925681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.718 [2024-11-28 08:26:47.925712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:05.718 qpair failed and we were unable to recover it. 00:28:05.718 [2024-11-28 08:26:47.925906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.718 [2024-11-28 08:26:47.925939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:05.718 qpair failed and we were unable to recover it. 00:28:05.718 [2024-11-28 08:26:47.926074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.718 [2024-11-28 08:26:47.926108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:05.718 qpair failed and we were unable to recover it. 00:28:05.718 [2024-11-28 08:26:47.926352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.718 [2024-11-28 08:26:47.926384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:05.718 qpair failed and we were unable to recover it. 00:28:05.718 [2024-11-28 08:26:47.926652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.718 [2024-11-28 08:26:47.926683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:05.718 qpair failed and we were unable to recover it. 
00:28:05.718 [2024-11-28 08:26:47.926845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.718 [2024-11-28 08:26:47.926919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420 00:28:05.718 qpair failed and we were unable to recover it. 00:28:05.718 [2024-11-28 08:26:47.927120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.718 [2024-11-28 08:26:47.927159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:05.718 qpair failed and we were unable to recover it. 00:28:05.718 [2024-11-28 08:26:47.927302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.718 [2024-11-28 08:26:47.927335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:05.718 qpair failed and we were unable to recover it. 00:28:05.718 [2024-11-28 08:26:47.927523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.718 [2024-11-28 08:26:47.927555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:05.718 qpair failed and we were unable to recover it. 00:28:05.718 [2024-11-28 08:26:47.927799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.718 [2024-11-28 08:26:47.927832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:05.718 qpair failed and we were unable to recover it. 
00:28:05.718 [2024-11-28 08:26:47.927967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.718 [2024-11-28 08:26:47.928003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:05.718 qpair failed and we were unable to recover it. 00:28:05.718 [2024-11-28 08:26:47.928205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.718 [2024-11-28 08:26:47.928247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:05.718 qpair failed and we were unable to recover it. 00:28:05.718 [2024-11-28 08:26:47.928343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.718 [2024-11-28 08:26:47.928354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:05.718 qpair failed and we were unable to recover it. 00:28:05.718 [2024-11-28 08:26:47.928484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.718 [2024-11-28 08:26:47.928495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:05.718 qpair failed and we were unable to recover it. 00:28:05.718 [2024-11-28 08:26:47.928721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.718 [2024-11-28 08:26:47.928755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:05.718 qpair failed and we were unable to recover it. 
00:28:05.718 [2024-11-28 08:26:47.928870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.718 [2024-11-28 08:26:47.928903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:05.718 qpair failed and we were unable to recover it. 00:28:05.718 [2024-11-28 08:26:47.929095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.718 [2024-11-28 08:26:47.929130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:05.718 qpair failed and we were unable to recover it. 00:28:05.718 [2024-11-28 08:26:47.929328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.718 [2024-11-28 08:26:47.929362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:05.718 qpair failed and we were unable to recover it. 00:28:05.718 [2024-11-28 08:26:47.929488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.718 [2024-11-28 08:26:47.929526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:05.718 qpair failed and we were unable to recover it. 00:28:05.718 [2024-11-28 08:26:47.929738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.718 [2024-11-28 08:26:47.929749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:05.718 qpair failed and we were unable to recover it. 
00:28:05.718 [2024-11-28 08:26:47.929906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.718 [2024-11-28 08:26:47.929940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:05.718 qpair failed and we were unable to recover it. 00:28:05.718 [2024-11-28 08:26:47.930076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.718 [2024-11-28 08:26:47.930110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:05.718 qpair failed and we were unable to recover it. 00:28:05.718 [2024-11-28 08:26:47.930308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.718 [2024-11-28 08:26:47.930342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:05.718 qpair failed and we were unable to recover it. 00:28:05.718 [2024-11-28 08:26:47.930527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.718 [2024-11-28 08:26:47.930539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:05.718 qpair failed and we were unable to recover it. 00:28:05.718 [2024-11-28 08:26:47.930746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.718 [2024-11-28 08:26:47.930779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:05.718 qpair failed and we were unable to recover it. 
00:28:05.718 [2024-11-28 08:26:47.930970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.718 [2024-11-28 08:26:47.931005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:05.718 qpair failed and we were unable to recover it. 00:28:05.718 [2024-11-28 08:26:47.931204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.718 [2024-11-28 08:26:47.931238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:05.718 qpair failed and we were unable to recover it. 00:28:05.718 [2024-11-28 08:26:47.931505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.718 [2024-11-28 08:26:47.931517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:05.718 qpair failed and we were unable to recover it. 00:28:05.718 [2024-11-28 08:26:47.931713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.718 [2024-11-28 08:26:47.931745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:05.718 qpair failed and we were unable to recover it. 00:28:05.718 [2024-11-28 08:26:47.931873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.718 [2024-11-28 08:26:47.931907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:05.718 qpair failed and we were unable to recover it. 
00:28:05.718 [2024-11-28 08:26:47.932202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.718 [2024-11-28 08:26:47.932236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:05.719 qpair failed and we were unable to recover it. 00:28:05.719 [2024-11-28 08:26:47.932382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.719 [2024-11-28 08:26:47.932427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:05.719 qpair failed and we were unable to recover it. 00:28:05.719 [2024-11-28 08:26:47.932599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.719 [2024-11-28 08:26:47.932611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:05.719 qpair failed and we were unable to recover it. 00:28:05.719 [2024-11-28 08:26:47.932776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.719 [2024-11-28 08:26:47.932809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:05.719 qpair failed and we were unable to recover it. 00:28:05.719 [2024-11-28 08:26:47.933005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.719 [2024-11-28 08:26:47.933040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:05.719 qpair failed and we were unable to recover it. 
00:28:05.719 [2024-11-28 08:26:47.933225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.719 [2024-11-28 08:26:47.933257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:05.719 qpair failed and we were unable to recover it. 00:28:05.719 [2024-11-28 08:26:47.933499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.719 [2024-11-28 08:26:47.933511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:05.719 qpair failed and we were unable to recover it. 00:28:05.719 [2024-11-28 08:26:47.933732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.719 [2024-11-28 08:26:47.933744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:05.719 qpair failed and we were unable to recover it. 00:28:05.719 [2024-11-28 08:26:47.933820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.719 [2024-11-28 08:26:47.933831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:05.719 qpair failed and we were unable to recover it. 00:28:05.719 [2024-11-28 08:26:47.934060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.719 [2024-11-28 08:26:47.934096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:05.719 qpair failed and we were unable to recover it. 
00:28:05.719 [2024-11-28 08:26:47.934272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.719 [2024-11-28 08:26:47.934305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:05.719 qpair failed and we were unable to recover it. 00:28:05.719 [2024-11-28 08:26:47.934511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.719 [2024-11-28 08:26:47.934545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:05.719 qpair failed and we were unable to recover it. 00:28:05.719 [2024-11-28 08:26:47.934725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.719 [2024-11-28 08:26:47.934736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:05.719 qpair failed and we were unable to recover it. 00:28:05.719 [2024-11-28 08:26:47.934899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.719 [2024-11-28 08:26:47.934932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:05.719 qpair failed and we were unable to recover it. 00:28:05.719 [2024-11-28 08:26:47.935194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.719 [2024-11-28 08:26:47.935228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:05.719 qpair failed and we were unable to recover it. 
00:28:05.719 [2024-11-28 08:26:47.935485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.719 [2024-11-28 08:26:47.935558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420 00:28:05.719 qpair failed and we were unable to recover it. 00:28:05.719 [2024-11-28 08:26:47.935870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.719 [2024-11-28 08:26:47.935907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420 00:28:05.719 qpair failed and we were unable to recover it. 00:28:05.719 [2024-11-28 08:26:47.936250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.719 [2024-11-28 08:26:47.936287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420 00:28:05.719 qpair failed and we were unable to recover it. 00:28:05.719 [2024-11-28 08:26:47.936536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.719 [2024-11-28 08:26:47.936573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420 00:28:05.719 qpair failed and we were unable to recover it. 00:28:05.719 [2024-11-28 08:26:47.936831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.719 [2024-11-28 08:26:47.936875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420 00:28:05.719 qpair failed and we were unable to recover it. 
00:28:05.719 [2024-11-28 08:26:47.937018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.719 [2024-11-28 08:26:47.937054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420 00:28:05.719 qpair failed and we were unable to recover it. 00:28:05.719 [2024-11-28 08:26:47.937250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.719 [2024-11-28 08:26:47.937283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420 00:28:05.719 qpair failed and we were unable to recover it. 00:28:05.719 [2024-11-28 08:26:47.937463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.719 [2024-11-28 08:26:47.937496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420 00:28:05.719 qpair failed and we were unable to recover it. 00:28:05.719 [2024-11-28 08:26:47.937678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.719 [2024-11-28 08:26:47.937710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420 00:28:05.719 qpair failed and we were unable to recover it. 00:28:05.719 [2024-11-28 08:26:47.937898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.719 [2024-11-28 08:26:47.937931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420 00:28:05.719 qpair failed and we were unable to recover it. 
00:28:05.719 [2024-11-28 08:26:47.938188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.719 [2024-11-28 08:26:47.938221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420 00:28:05.719 qpair failed and we were unable to recover it. 00:28:05.719 [2024-11-28 08:26:47.938492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.719 [2024-11-28 08:26:47.938508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420 00:28:05.719 qpair failed and we were unable to recover it. 00:28:05.719 [2024-11-28 08:26:47.938738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.719 [2024-11-28 08:26:47.938754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420 00:28:05.719 qpair failed and we were unable to recover it. 00:28:05.719 [2024-11-28 08:26:47.938928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.719 [2024-11-28 08:26:47.938952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420 00:28:05.719 qpair failed and we were unable to recover it. 00:28:05.719 [2024-11-28 08:26:47.939059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.719 [2024-11-28 08:26:47.939091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420 00:28:05.719 qpair failed and we were unable to recover it. 
00:28:05.719 [2024-11-28 08:26:47.939341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.719 [2024-11-28 08:26:47.939374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420 00:28:05.719 qpair failed and we were unable to recover it. 00:28:05.719 [2024-11-28 08:26:47.939608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.719 [2024-11-28 08:26:47.939640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420 00:28:05.719 qpair failed and we were unable to recover it. 00:28:05.719 [2024-11-28 08:26:47.939763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.719 [2024-11-28 08:26:47.939795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420 00:28:05.719 qpair failed and we were unable to recover it. 00:28:05.719 [2024-11-28 08:26:47.940064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.719 [2024-11-28 08:26:47.940099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420 00:28:05.719 qpair failed and we were unable to recover it. 00:28:05.719 [2024-11-28 08:26:47.940301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.719 [2024-11-28 08:26:47.940333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420 00:28:05.719 qpair failed and we were unable to recover it. 
00:28:05.719 [2024-11-28 08:26:47.940518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.719 [2024-11-28 08:26:47.940550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420
00:28:05.719 qpair failed and we were unable to recover it.
00:28:05.719 [2024-11-28 08:26:47.940846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.719 [2024-11-28 08:26:47.940878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420
00:28:05.719 qpair failed and we were unable to recover it.
00:28:05.719 [2024-11-28 08:26:47.941059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.719 [2024-11-28 08:26:47.941093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420
00:28:05.719 qpair failed and we were unable to recover it.
00:28:05.719 [2024-11-28 08:26:47.941216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.719 [2024-11-28 08:26:47.941249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420
00:28:05.719 qpair failed and we were unable to recover it.
00:28:05.720 [2024-11-28 08:26:47.941366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.720 [2024-11-28 08:26:47.941380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:05.720 qpair failed and we were unable to recover it.
00:28:05.720 [2024-11-28 08:26:47.941516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.720 [2024-11-28 08:26:47.941528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:05.720 qpair failed and we were unable to recover it.
00:28:05.720 [2024-11-28 08:26:47.941629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.720 [2024-11-28 08:26:47.941640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:05.720 qpair failed and we were unable to recover it.
00:28:05.720 [2024-11-28 08:26:47.941771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.720 [2024-11-28 08:26:47.941783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:05.720 qpair failed and we were unable to recover it.
00:28:05.720 [2024-11-28 08:26:47.941926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.720 [2024-11-28 08:26:47.941937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:05.720 qpair failed and we were unable to recover it.
00:28:05.720 [2024-11-28 08:26:47.942101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.720 [2024-11-28 08:26:47.942136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:05.720 qpair failed and we were unable to recover it.
00:28:05.720 [2024-11-28 08:26:47.942277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.720 [2024-11-28 08:26:47.942309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:05.720 qpair failed and we were unable to recover it.
00:28:05.720 [2024-11-28 08:26:47.942451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.720 [2024-11-28 08:26:47.942484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:05.720 qpair failed and we were unable to recover it.
00:28:05.720 [2024-11-28 08:26:47.942681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.720 [2024-11-28 08:26:47.942714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:05.720 qpair failed and we were unable to recover it.
00:28:05.720 [2024-11-28 08:26:47.942911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.720 [2024-11-28 08:26:47.942943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:05.720 qpair failed and we were unable to recover it.
00:28:05.720 [2024-11-28 08:26:47.943096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.720 [2024-11-28 08:26:47.943130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:05.720 qpair failed and we were unable to recover it.
00:28:05.720 [2024-11-28 08:26:47.943310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.720 [2024-11-28 08:26:47.943343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:05.720 qpair failed and we were unable to recover it.
00:28:05.720 [2024-11-28 08:26:47.943603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.720 [2024-11-28 08:26:47.943636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:05.720 qpair failed and we were unable to recover it.
00:28:05.720 [2024-11-28 08:26:47.943857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.720 [2024-11-28 08:26:47.943868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:05.720 qpair failed and we were unable to recover it.
00:28:05.720 [2024-11-28 08:26:47.944014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.720 [2024-11-28 08:26:47.944026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:05.720 qpair failed and we were unable to recover it.
00:28:05.720 [2024-11-28 08:26:47.944159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.720 [2024-11-28 08:26:47.944199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:05.720 qpair failed and we were unable to recover it.
00:28:05.720 [2024-11-28 08:26:47.944384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.720 [2024-11-28 08:26:47.944417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:05.720 qpair failed and we were unable to recover it.
00:28:05.720 [2024-11-28 08:26:47.944679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.720 [2024-11-28 08:26:47.944726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:05.720 qpair failed and we were unable to recover it.
00:28:05.720 [2024-11-28 08:26:47.944804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.720 [2024-11-28 08:26:47.944815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:05.720 qpair failed and we were unable to recover it.
00:28:05.720 [2024-11-28 08:26:47.945029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.720 [2024-11-28 08:26:47.945064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:05.720 qpair failed and we were unable to recover it.
00:28:05.720 [2024-11-28 08:26:47.945188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.720 [2024-11-28 08:26:47.945221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:05.720 qpair failed and we were unable to recover it.
00:28:05.720 [2024-11-28 08:26:47.945493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.720 [2024-11-28 08:26:47.945526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:05.720 qpair failed and we were unable to recover it.
00:28:05.720 [2024-11-28 08:26:47.945765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.720 [2024-11-28 08:26:47.945777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:05.720 qpair failed and we were unable to recover it.
00:28:05.720 [2024-11-28 08:26:47.945931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.720 [2024-11-28 08:26:47.945991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:05.720 qpair failed and we were unable to recover it.
00:28:05.720 [2024-11-28 08:26:47.946197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.720 [2024-11-28 08:26:47.946230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:05.720 qpair failed and we were unable to recover it.
00:28:05.720 [2024-11-28 08:26:47.946436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.720 [2024-11-28 08:26:47.946470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:05.720 qpair failed and we were unable to recover it.
00:28:05.720 [2024-11-28 08:26:47.946686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.720 [2024-11-28 08:26:47.946719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:05.720 qpair failed and we were unable to recover it.
00:28:05.720 [2024-11-28 08:26:47.946963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.720 [2024-11-28 08:26:47.946997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:05.720 qpair failed and we were unable to recover it.
00:28:05.720 [2024-11-28 08:26:47.947185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.720 [2024-11-28 08:26:47.947218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:05.720 qpair failed and we were unable to recover it.
00:28:05.720 [2024-11-28 08:26:47.947455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.720 [2024-11-28 08:26:47.947471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:05.720 qpair failed and we were unable to recover it.
00:28:05.720 [2024-11-28 08:26:47.947652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.720 [2024-11-28 08:26:47.947685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:05.720 qpair failed and we were unable to recover it.
00:28:05.720 [2024-11-28 08:26:47.947874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.720 [2024-11-28 08:26:47.947907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:05.720 qpair failed and we were unable to recover it.
00:28:05.720 [2024-11-28 08:26:47.948039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.720 [2024-11-28 08:26:47.948073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:05.720 qpair failed and we were unable to recover it.
00:28:05.720 [2024-11-28 08:26:47.948271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.720 [2024-11-28 08:26:47.948316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:05.720 qpair failed and we were unable to recover it.
00:28:05.720 [2024-11-28 08:26:47.948483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.720 [2024-11-28 08:26:47.948495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:05.720 qpair failed and we were unable to recover it.
00:28:05.720 [2024-11-28 08:26:47.948590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.720 [2024-11-28 08:26:47.948623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:05.720 qpair failed and we were unable to recover it.
00:28:05.720 [2024-11-28 08:26:47.948748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.720 [2024-11-28 08:26:47.948782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:05.720 qpair failed and we were unable to recover it.
00:28:05.720 [2024-11-28 08:26:47.948974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.720 [2024-11-28 08:26:47.949009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:05.721 qpair failed and we were unable to recover it.
00:28:05.721 [2024-11-28 08:26:47.949154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.721 [2024-11-28 08:26:47.949187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:05.721 qpair failed and we were unable to recover it.
00:28:05.721 [2024-11-28 08:26:47.949433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.721 [2024-11-28 08:26:47.949466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:05.721 qpair failed and we were unable to recover it.
00:28:05.721 [2024-11-28 08:26:47.949657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.721 [2024-11-28 08:26:47.949699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:05.721 qpair failed and we were unable to recover it.
00:28:05.721 [2024-11-28 08:26:47.949778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.721 [2024-11-28 08:26:47.949789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:05.721 qpair failed and we were unable to recover it.
00:28:05.721 [2024-11-28 08:26:47.949953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.721 [2024-11-28 08:26:47.949999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:05.721 qpair failed and we were unable to recover it.
00:28:05.721 [2024-11-28 08:26:47.950142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.721 [2024-11-28 08:26:47.950176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:05.721 qpair failed and we were unable to recover it.
00:28:05.721 [2024-11-28 08:26:47.950371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.721 [2024-11-28 08:26:47.950404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:05.721 qpair failed and we were unable to recover it.
00:28:05.721 [2024-11-28 08:26:47.950538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.721 [2024-11-28 08:26:47.950579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:05.721 qpair failed and we were unable to recover it.
00:28:05.721 [2024-11-28 08:26:47.950779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.721 [2024-11-28 08:26:47.950791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:05.721 qpair failed and we were unable to recover it.
00:28:05.721 [2024-11-28 08:26:47.950890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.721 [2024-11-28 08:26:47.950900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:05.721 qpair failed and we were unable to recover it.
00:28:05.721 [2024-11-28 08:26:47.951034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.721 [2024-11-28 08:26:47.951046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:05.721 qpair failed and we were unable to recover it.
00:28:05.721 [2024-11-28 08:26:47.951286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.721 [2024-11-28 08:26:47.951318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:05.721 qpair failed and we were unable to recover it.
00:28:05.721 [2024-11-28 08:26:47.951568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.721 [2024-11-28 08:26:47.951600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:05.721 qpair failed and we were unable to recover it.
00:28:05.721 [2024-11-28 08:26:47.951794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.721 [2024-11-28 08:26:47.951826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:05.721 qpair failed and we were unable to recover it.
00:28:05.721 [2024-11-28 08:26:47.952031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.721 [2024-11-28 08:26:47.952067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:05.721 qpair failed and we were unable to recover it.
00:28:05.721 [2024-11-28 08:26:47.952275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.721 [2024-11-28 08:26:47.952309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:05.721 qpair failed and we were unable to recover it.
00:28:05.721 [2024-11-28 08:26:47.952500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.721 [2024-11-28 08:26:47.952533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:05.721 qpair failed and we were unable to recover it.
00:28:05.721 [2024-11-28 08:26:47.952780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.721 [2024-11-28 08:26:47.952813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:05.721 qpair failed and we were unable to recover it.
00:28:05.721 [2024-11-28 08:26:47.952929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.721 [2024-11-28 08:26:47.952974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:05.721 qpair failed and we were unable to recover it.
00:28:05.721 [2024-11-28 08:26:47.953271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.721 [2024-11-28 08:26:47.953304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:05.721 qpair failed and we were unable to recover it.
00:28:05.721 [2024-11-28 08:26:47.953543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.721 [2024-11-28 08:26:47.953556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:05.721 qpair failed and we were unable to recover it.
00:28:05.721 [2024-11-28 08:26:47.953729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.721 [2024-11-28 08:26:47.953763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:05.721 qpair failed and we were unable to recover it.
00:28:05.721 [2024-11-28 08:26:47.953988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.721 [2024-11-28 08:26:47.954022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:05.721 qpair failed and we were unable to recover it.
00:28:05.721 [2024-11-28 08:26:47.954215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.721 [2024-11-28 08:26:47.954248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:05.721 qpair failed and we were unable to recover it.
00:28:05.721 [2024-11-28 08:26:47.954533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.721 [2024-11-28 08:26:47.954545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:05.721 qpair failed and we were unable to recover it.
00:28:05.721 [2024-11-28 08:26:47.954680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.721 [2024-11-28 08:26:47.954692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:05.721 qpair failed and we were unable to recover it.
00:28:05.721 [2024-11-28 08:26:47.954827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.721 [2024-11-28 08:26:47.954839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:05.721 qpair failed and we were unable to recover it.
00:28:05.721 [2024-11-28 08:26:47.954980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.721 [2024-11-28 08:26:47.954993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:05.721 qpair failed and we were unable to recover it.
00:28:05.721 [2024-11-28 08:26:47.955202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.721 [2024-11-28 08:26:47.955214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:05.721 qpair failed and we were unable to recover it.
00:28:05.721 [2024-11-28 08:26:47.955354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.721 [2024-11-28 08:26:47.955366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:05.721 qpair failed and we were unable to recover it.
00:28:05.721 [2024-11-28 08:26:47.955508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.721 [2024-11-28 08:26:47.955520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:05.721 qpair failed and we were unable to recover it.
00:28:05.721 [2024-11-28 08:26:47.955583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.721 [2024-11-28 08:26:47.955596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:05.721 qpair failed and we were unable to recover it.
00:28:05.721 [2024-11-28 08:26:47.955750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.721 [2024-11-28 08:26:47.955783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:05.721 qpair failed and we were unable to recover it.
00:28:05.721 [2024-11-28 08:26:47.955967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.721 [2024-11-28 08:26:47.956001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:05.721 qpair failed and we were unable to recover it.
00:28:05.721 [2024-11-28 08:26:47.956190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.721 [2024-11-28 08:26:47.956224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:05.721 qpair failed and we were unable to recover it.
00:28:05.721 [2024-11-28 08:26:47.956487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.721 [2024-11-28 08:26:47.956520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:05.721 qpair failed and we were unable to recover it.
00:28:05.721 [2024-11-28 08:26:47.956704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.721 [2024-11-28 08:26:47.956716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:05.721 qpair failed and we were unable to recover it.
00:28:05.721 [2024-11-28 08:26:47.956850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.722 [2024-11-28 08:26:47.956862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:05.722 qpair failed and we were unable to recover it.
00:28:05.722 [2024-11-28 08:26:47.957092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.722 [2024-11-28 08:26:47.957104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:05.722 qpair failed and we were unable to recover it.
00:28:05.722 [2024-11-28 08:26:47.957207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.722 [2024-11-28 08:26:47.957218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:05.722 qpair failed and we were unable to recover it.
00:28:05.722 [2024-11-28 08:26:47.957363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.722 [2024-11-28 08:26:47.957377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:05.722 qpair failed and we were unable to recover it.
00:28:05.722 [2024-11-28 08:26:47.957578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.722 [2024-11-28 08:26:47.957590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:05.722 qpair failed and we were unable to recover it.
00:28:05.722 [2024-11-28 08:26:47.957725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.722 [2024-11-28 08:26:47.957737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:05.722 qpair failed and we were unable to recover it.
00:28:05.722 [2024-11-28 08:26:47.957832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.722 [2024-11-28 08:26:47.957843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:05.722 qpair failed and we were unable to recover it.
00:28:05.722 [2024-11-28 08:26:47.957974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.722 [2024-11-28 08:26:47.957987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:05.722 qpair failed and we were unable to recover it.
00:28:05.722 [2024-11-28 08:26:47.958167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.722 [2024-11-28 08:26:47.958201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:05.722 qpair failed and we were unable to recover it.
00:28:05.722 [2024-11-28 08:26:47.958328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.722 [2024-11-28 08:26:47.958362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:05.722 qpair failed and we were unable to recover it.
00:28:05.722 [2024-11-28 08:26:47.958550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.722 [2024-11-28 08:26:47.958583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:05.722 qpair failed and we were unable to recover it.
00:28:05.722 [2024-11-28 08:26:47.958722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.722 [2024-11-28 08:26:47.958755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:05.722 qpair failed and we were unable to recover it.
00:28:05.722 [2024-11-28 08:26:47.959011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.722 [2024-11-28 08:26:47.959045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:05.722 qpair failed and we were unable to recover it.
00:28:05.722 [2024-11-28 08:26:47.959173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.722 [2024-11-28 08:26:47.959206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:05.722 qpair failed and we were unable to recover it.
00:28:05.722 [2024-11-28 08:26:47.959465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.722 [2024-11-28 08:26:47.959477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:05.722 qpair failed and we were unable to recover it.
00:28:05.722 [2024-11-28 08:26:47.959646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.722 [2024-11-28 08:26:47.959658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:05.722 qpair failed and we were unable to recover it.
00:28:05.722 [2024-11-28 08:26:47.959807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.722 [2024-11-28 08:26:47.959819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:05.722 qpair failed and we were unable to recover it. 00:28:06.013 [2024-11-28 08:26:47.959978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.013 [2024-11-28 08:26:47.959991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.013 qpair failed and we were unable to recover it. 00:28:06.013 [2024-11-28 08:26:47.960078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.013 [2024-11-28 08:26:47.960089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.013 qpair failed and we were unable to recover it. 00:28:06.013 [2024-11-28 08:26:47.960243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.013 [2024-11-28 08:26:47.960255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.013 qpair failed and we were unable to recover it. 00:28:06.013 [2024-11-28 08:26:47.960409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.013 [2024-11-28 08:26:47.960421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.013 qpair failed and we were unable to recover it. 
00:28:06.013 [2024-11-28 08:26:47.960561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.013 [2024-11-28 08:26:47.960575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.013 qpair failed and we were unable to recover it. 00:28:06.013 [2024-11-28 08:26:47.960649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.013 [2024-11-28 08:26:47.960660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.013 qpair failed and we were unable to recover it. 00:28:06.013 [2024-11-28 08:26:47.960738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.013 [2024-11-28 08:26:47.960749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.013 qpair failed and we were unable to recover it. 00:28:06.013 [2024-11-28 08:26:47.960894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.013 [2024-11-28 08:26:47.960904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.013 qpair failed and we were unable to recover it. 00:28:06.013 [2024-11-28 08:26:47.960971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.013 [2024-11-28 08:26:47.960982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.013 qpair failed and we were unable to recover it. 
00:28:06.013 [2024-11-28 08:26:47.961123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.013 [2024-11-28 08:26:47.961134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.013 qpair failed and we were unable to recover it. 00:28:06.013 [2024-11-28 08:26:47.961213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.013 [2024-11-28 08:26:47.961224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.013 qpair failed and we were unable to recover it. 00:28:06.013 [2024-11-28 08:26:47.961374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.013 [2024-11-28 08:26:47.961387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.013 qpair failed and we were unable to recover it. 00:28:06.013 [2024-11-28 08:26:47.961547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.013 [2024-11-28 08:26:47.961559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.013 qpair failed and we were unable to recover it. 00:28:06.013 [2024-11-28 08:26:47.961646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.013 [2024-11-28 08:26:47.961657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.013 qpair failed and we were unable to recover it. 
00:28:06.013 [2024-11-28 08:26:47.961736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.013 [2024-11-28 08:26:47.961747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.013 qpair failed and we were unable to recover it. 00:28:06.013 [2024-11-28 08:26:47.961831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.014 [2024-11-28 08:26:47.961842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.014 qpair failed and we were unable to recover it. 00:28:06.014 [2024-11-28 08:26:47.961917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.014 [2024-11-28 08:26:47.961928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.014 qpair failed and we were unable to recover it. 00:28:06.014 [2024-11-28 08:26:47.962008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.014 [2024-11-28 08:26:47.962020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.014 qpair failed and we were unable to recover it. 00:28:06.014 [2024-11-28 08:26:47.962113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.014 [2024-11-28 08:26:47.962125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.014 qpair failed and we were unable to recover it. 
00:28:06.014 [2024-11-28 08:26:47.962211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.014 [2024-11-28 08:26:47.962222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.014 qpair failed and we were unable to recover it. 00:28:06.014 [2024-11-28 08:26:47.962447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.014 [2024-11-28 08:26:47.962460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.014 qpair failed and we were unable to recover it. 00:28:06.014 [2024-11-28 08:26:47.962592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.014 [2024-11-28 08:26:47.962604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.014 qpair failed and we were unable to recover it. 00:28:06.014 [2024-11-28 08:26:47.962691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.014 [2024-11-28 08:26:47.962702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.014 qpair failed and we were unable to recover it. 00:28:06.014 [2024-11-28 08:26:47.962776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.014 [2024-11-28 08:26:47.962787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.014 qpair failed and we were unable to recover it. 
00:28:06.014 [2024-11-28 08:26:47.962856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.014 [2024-11-28 08:26:47.962868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.014 qpair failed and we were unable to recover it. 00:28:06.014 [2024-11-28 08:26:47.963003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.014 [2024-11-28 08:26:47.963015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.014 qpair failed and we were unable to recover it. 00:28:06.014 [2024-11-28 08:26:47.963172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.014 [2024-11-28 08:26:47.963184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.014 qpair failed and we were unable to recover it. 00:28:06.014 [2024-11-28 08:26:47.963351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.014 [2024-11-28 08:26:47.963385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.014 qpair failed and we were unable to recover it. 00:28:06.014 [2024-11-28 08:26:47.963566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.014 [2024-11-28 08:26:47.963599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.014 qpair failed and we were unable to recover it. 
00:28:06.014 [2024-11-28 08:26:47.963846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.014 [2024-11-28 08:26:47.963879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.014 qpair failed and we were unable to recover it. 00:28:06.014 [2024-11-28 08:26:47.964011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.014 [2024-11-28 08:26:47.964046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.014 qpair failed and we were unable to recover it. 00:28:06.014 [2024-11-28 08:26:47.964189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.014 [2024-11-28 08:26:47.964222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.014 qpair failed and we were unable to recover it. 00:28:06.014 [2024-11-28 08:26:47.964485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.014 [2024-11-28 08:26:47.964497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.014 qpair failed and we were unable to recover it. 00:28:06.014 [2024-11-28 08:26:47.964651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.014 [2024-11-28 08:26:47.964664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.014 qpair failed and we were unable to recover it. 
00:28:06.014 [2024-11-28 08:26:47.964893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.014 [2024-11-28 08:26:47.964926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.014 qpair failed and we were unable to recover it. 00:28:06.014 [2024-11-28 08:26:47.965061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.014 [2024-11-28 08:26:47.965095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.014 qpair failed and we were unable to recover it. 00:28:06.014 [2024-11-28 08:26:47.965250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.014 [2024-11-28 08:26:47.965283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.014 qpair failed and we were unable to recover it. 00:28:06.014 [2024-11-28 08:26:47.965413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.014 [2024-11-28 08:26:47.965426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.014 qpair failed and we were unable to recover it. 00:28:06.014 [2024-11-28 08:26:47.965656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.014 [2024-11-28 08:26:47.965689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.014 qpair failed and we were unable to recover it. 
00:28:06.014 [2024-11-28 08:26:47.965932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.014 [2024-11-28 08:26:47.965987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.014 qpair failed and we were unable to recover it. 00:28:06.014 [2024-11-28 08:26:47.966181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.014 [2024-11-28 08:26:47.966213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.014 qpair failed and we were unable to recover it. 00:28:06.014 [2024-11-28 08:26:47.966432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.014 [2024-11-28 08:26:47.966444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.014 qpair failed and we were unable to recover it. 00:28:06.014 [2024-11-28 08:26:47.966551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.014 [2024-11-28 08:26:47.966583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.014 qpair failed and we were unable to recover it. 00:28:06.014 [2024-11-28 08:26:47.966713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.014 [2024-11-28 08:26:47.966746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.014 qpair failed and we were unable to recover it. 
00:28:06.014 [2024-11-28 08:26:47.966934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.014 [2024-11-28 08:26:47.966986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.014 qpair failed and we were unable to recover it. 00:28:06.014 [2024-11-28 08:26:47.967166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.015 [2024-11-28 08:26:47.967199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.015 qpair failed and we were unable to recover it. 00:28:06.015 [2024-11-28 08:26:47.967332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.015 [2024-11-28 08:26:47.967365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.015 qpair failed and we were unable to recover it. 00:28:06.015 [2024-11-28 08:26:47.967501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.015 [2024-11-28 08:26:47.967535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.015 qpair failed and we were unable to recover it. 00:28:06.015 [2024-11-28 08:26:47.967651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.015 [2024-11-28 08:26:47.967663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.015 qpair failed and we were unable to recover it. 
00:28:06.015 [2024-11-28 08:26:47.967831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.015 [2024-11-28 08:26:47.967843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.015 qpair failed and we were unable to recover it. 00:28:06.015 [2024-11-28 08:26:47.967940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.015 [2024-11-28 08:26:47.967956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.015 qpair failed and we were unable to recover it. 00:28:06.015 [2024-11-28 08:26:47.968031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.015 [2024-11-28 08:26:47.968042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.015 qpair failed and we were unable to recover it. 00:28:06.015 [2024-11-28 08:26:47.968211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.015 [2024-11-28 08:26:47.968223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.015 qpair failed and we were unable to recover it. 00:28:06.015 [2024-11-28 08:26:47.968307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.015 [2024-11-28 08:26:47.968318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.015 qpair failed and we were unable to recover it. 
00:28:06.015 [2024-11-28 08:26:47.968392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.015 [2024-11-28 08:26:47.968403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.015 qpair failed and we were unable to recover it. 00:28:06.015 [2024-11-28 08:26:47.968547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.015 [2024-11-28 08:26:47.968559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.015 qpair failed and we were unable to recover it. 00:28:06.015 [2024-11-28 08:26:47.968648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.015 [2024-11-28 08:26:47.968659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.015 qpair failed and we were unable to recover it. 00:28:06.015 [2024-11-28 08:26:47.968797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.015 [2024-11-28 08:26:47.968809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.015 qpair failed and we were unable to recover it. 00:28:06.015 [2024-11-28 08:26:47.969036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.015 [2024-11-28 08:26:47.969071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.015 qpair failed and we were unable to recover it. 
00:28:06.015 [2024-11-28 08:26:47.969351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.015 [2024-11-28 08:26:47.969389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.015 qpair failed and we were unable to recover it. 00:28:06.015 [2024-11-28 08:26:47.969551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.015 [2024-11-28 08:26:47.969563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.015 qpair failed and we were unable to recover it. 00:28:06.015 [2024-11-28 08:26:47.969705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.015 [2024-11-28 08:26:47.969717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.015 qpair failed and we were unable to recover it. 00:28:06.015 [2024-11-28 08:26:47.969802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.015 [2024-11-28 08:26:47.969813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.015 qpair failed and we were unable to recover it. 00:28:06.015 [2024-11-28 08:26:47.969980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.015 [2024-11-28 08:26:47.969992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.015 qpair failed and we were unable to recover it. 
00:28:06.015 [2024-11-28 08:26:47.970123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.015 [2024-11-28 08:26:47.970135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.015 qpair failed and we were unable to recover it. 00:28:06.015 [2024-11-28 08:26:47.970336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.015 [2024-11-28 08:26:47.970349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.015 qpair failed and we were unable to recover it. 00:28:06.015 [2024-11-28 08:26:47.970552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.015 [2024-11-28 08:26:47.970563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.015 qpair failed and we were unable to recover it. 00:28:06.015 [2024-11-28 08:26:47.970710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.015 [2024-11-28 08:26:47.970724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.015 qpair failed and we were unable to recover it. 00:28:06.015 [2024-11-28 08:26:47.970862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.015 [2024-11-28 08:26:47.970874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.015 qpair failed and we were unable to recover it. 
00:28:06.015 [2024-11-28 08:26:47.971074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.015 [2024-11-28 08:26:47.971087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.015 qpair failed and we were unable to recover it. 00:28:06.015 [2024-11-28 08:26:47.971265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.015 [2024-11-28 08:26:47.971277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.015 qpair failed and we were unable to recover it. 00:28:06.015 [2024-11-28 08:26:47.971418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.015 [2024-11-28 08:26:47.971430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.015 qpair failed and we were unable to recover it. 00:28:06.015 [2024-11-28 08:26:47.971507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.015 [2024-11-28 08:26:47.971517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.015 qpair failed and we were unable to recover it. 00:28:06.015 [2024-11-28 08:26:47.971581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.015 [2024-11-28 08:26:47.971592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.016 qpair failed and we were unable to recover it. 
00:28:06.016 [2024-11-28 08:26:47.971725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.016 [2024-11-28 08:26:47.971735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.016 qpair failed and we were unable to recover it. 00:28:06.016 [2024-11-28 08:26:47.971957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.016 [2024-11-28 08:26:47.971970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.016 qpair failed and we were unable to recover it. 00:28:06.016 [2024-11-28 08:26:47.972062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.016 [2024-11-28 08:26:47.972073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.016 qpair failed and we were unable to recover it. 00:28:06.016 [2024-11-28 08:26:47.972208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.016 [2024-11-28 08:26:47.972221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.016 qpair failed and we were unable to recover it. 00:28:06.016 [2024-11-28 08:26:47.972444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.016 [2024-11-28 08:26:47.972456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.016 qpair failed and we were unable to recover it. 
00:28:06.016 [2024-11-28 08:26:47.972545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.016 [2024-11-28 08:26:47.972556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.016 qpair failed and we were unable to recover it. 00:28:06.016 [2024-11-28 08:26:47.972637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.016 [2024-11-28 08:26:47.972648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.016 qpair failed and we were unable to recover it. 00:28:06.016 [2024-11-28 08:26:47.972714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.016 [2024-11-28 08:26:47.972725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.016 qpair failed and we were unable to recover it. 00:28:06.016 [2024-11-28 08:26:47.972825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.016 [2024-11-28 08:26:47.972836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.016 qpair failed and we were unable to recover it. 00:28:06.016 [2024-11-28 08:26:47.972980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.016 [2024-11-28 08:26:47.972993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.016 qpair failed and we were unable to recover it. 
00:28:06.016 [2024-11-28 08:26:47.973081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.016 [2024-11-28 08:26:47.973094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.016 qpair failed and we were unable to recover it.
00:28:06.020 (the three messages above repeated for every subsequent connection attempt through [2024-11-28 08:26:47.988021], always with errno = 111 for tqpair=0x7f6c34000b90, addr=10.0.0.2, port=4420)
00:28:06.020 [2024-11-28 08:26:47.988098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.020 [2024-11-28 08:26:47.988109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.020 qpair failed and we were unable to recover it. 00:28:06.020 [2024-11-28 08:26:47.988178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.020 [2024-11-28 08:26:47.988189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.020 qpair failed and we were unable to recover it. 00:28:06.020 [2024-11-28 08:26:47.988336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.020 [2024-11-28 08:26:47.988347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.020 qpair failed and we were unable to recover it. 00:28:06.020 [2024-11-28 08:26:47.988436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.020 [2024-11-28 08:26:47.988447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.020 qpair failed and we were unable to recover it. 00:28:06.020 [2024-11-28 08:26:47.988579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.020 [2024-11-28 08:26:47.988590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.020 qpair failed and we were unable to recover it. 
00:28:06.020 [2024-11-28 08:26:47.988732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.020 [2024-11-28 08:26:47.988746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.020 qpair failed and we were unable to recover it. 00:28:06.020 [2024-11-28 08:26:47.988813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.020 [2024-11-28 08:26:47.988824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.020 qpair failed and we were unable to recover it. 00:28:06.020 [2024-11-28 08:26:47.988906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.020 [2024-11-28 08:26:47.988918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.020 qpair failed and we were unable to recover it. 00:28:06.020 [2024-11-28 08:26:47.989058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.020 [2024-11-28 08:26:47.989070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.020 qpair failed and we were unable to recover it. 00:28:06.020 [2024-11-28 08:26:47.989153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.020 [2024-11-28 08:26:47.989166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.020 qpair failed and we were unable to recover it. 
00:28:06.020 [2024-11-28 08:26:47.989319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.020 [2024-11-28 08:26:47.989331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.020 qpair failed and we were unable to recover it. 00:28:06.020 [2024-11-28 08:26:47.989466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.020 [2024-11-28 08:26:47.989477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.020 qpair failed and we were unable to recover it. 00:28:06.020 [2024-11-28 08:26:47.989556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.020 [2024-11-28 08:26:47.989567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.020 qpair failed and we were unable to recover it. 00:28:06.020 [2024-11-28 08:26:47.989713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.020 [2024-11-28 08:26:47.989725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.020 qpair failed and we were unable to recover it. 00:28:06.020 [2024-11-28 08:26:47.989796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.020 [2024-11-28 08:26:47.989810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.021 qpair failed and we were unable to recover it. 
00:28:06.021 [2024-11-28 08:26:47.989882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.021 [2024-11-28 08:26:47.989892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.021 qpair failed and we were unable to recover it. 00:28:06.021 [2024-11-28 08:26:47.990034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.021 [2024-11-28 08:26:47.990046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.021 qpair failed and we were unable to recover it. 00:28:06.021 [2024-11-28 08:26:47.990246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.021 [2024-11-28 08:26:47.990257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.021 qpair failed and we were unable to recover it. 00:28:06.021 [2024-11-28 08:26:47.990401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.021 [2024-11-28 08:26:47.990412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.021 qpair failed and we were unable to recover it. 00:28:06.021 [2024-11-28 08:26:47.990491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.021 [2024-11-28 08:26:47.990503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.021 qpair failed and we were unable to recover it. 
00:28:06.021 [2024-11-28 08:26:47.990677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.021 [2024-11-28 08:26:47.990689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.021 qpair failed and we were unable to recover it. 00:28:06.021 [2024-11-28 08:26:47.990755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.021 [2024-11-28 08:26:47.990766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.021 qpair failed and we were unable to recover it. 00:28:06.021 [2024-11-28 08:26:47.990987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.021 [2024-11-28 08:26:47.990999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.021 qpair failed and we were unable to recover it. 00:28:06.021 [2024-11-28 08:26:47.991090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.021 [2024-11-28 08:26:47.991101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.021 qpair failed and we were unable to recover it. 00:28:06.021 [2024-11-28 08:26:47.991175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.021 [2024-11-28 08:26:47.991186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.021 qpair failed and we were unable to recover it. 
00:28:06.021 [2024-11-28 08:26:47.991259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.021 [2024-11-28 08:26:47.991270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.021 qpair failed and we were unable to recover it. 00:28:06.021 [2024-11-28 08:26:47.991483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.021 [2024-11-28 08:26:47.991495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.021 qpair failed and we were unable to recover it. 00:28:06.021 [2024-11-28 08:26:47.991567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.021 [2024-11-28 08:26:47.991579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.021 qpair failed and we were unable to recover it. 00:28:06.021 [2024-11-28 08:26:47.991725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.021 [2024-11-28 08:26:47.991737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.021 qpair failed and we were unable to recover it. 00:28:06.021 [2024-11-28 08:26:47.991891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.021 [2024-11-28 08:26:47.991903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.021 qpair failed and we were unable to recover it. 
00:28:06.021 [2024-11-28 08:26:47.992092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.021 [2024-11-28 08:26:47.992104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.021 qpair failed and we were unable to recover it. 00:28:06.021 [2024-11-28 08:26:47.992172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.021 [2024-11-28 08:26:47.992183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.021 qpair failed and we were unable to recover it. 00:28:06.021 [2024-11-28 08:26:47.992272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.021 [2024-11-28 08:26:47.992284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.021 qpair failed and we were unable to recover it. 00:28:06.021 [2024-11-28 08:26:47.992414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.021 [2024-11-28 08:26:47.992425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.021 qpair failed and we were unable to recover it. 00:28:06.021 [2024-11-28 08:26:47.992565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.021 [2024-11-28 08:26:47.992578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.021 qpair failed and we were unable to recover it. 
00:28:06.021 [2024-11-28 08:26:47.992641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.021 [2024-11-28 08:26:47.992652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.021 qpair failed and we were unable to recover it. 00:28:06.021 [2024-11-28 08:26:47.992740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.021 [2024-11-28 08:26:47.992752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.021 qpair failed and we were unable to recover it. 00:28:06.021 [2024-11-28 08:26:47.992898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.021 [2024-11-28 08:26:47.992910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.021 qpair failed and we were unable to recover it. 00:28:06.021 [2024-11-28 08:26:47.993064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.021 [2024-11-28 08:26:47.993077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.021 qpair failed and we were unable to recover it. 00:28:06.021 [2024-11-28 08:26:47.993220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.021 [2024-11-28 08:26:47.993232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.021 qpair failed and we were unable to recover it. 
00:28:06.021 [2024-11-28 08:26:47.993393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.021 [2024-11-28 08:26:47.993405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.021 qpair failed and we were unable to recover it. 00:28:06.021 [2024-11-28 08:26:47.993561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.021 [2024-11-28 08:26:47.993573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.021 qpair failed and we were unable to recover it. 00:28:06.021 [2024-11-28 08:26:47.993672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.021 [2024-11-28 08:26:47.993684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.021 qpair failed and we were unable to recover it. 00:28:06.021 [2024-11-28 08:26:47.993759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.022 [2024-11-28 08:26:47.993771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.022 qpair failed and we were unable to recover it. 00:28:06.022 [2024-11-28 08:26:47.993915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.022 [2024-11-28 08:26:47.993926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.022 qpair failed and we were unable to recover it. 
00:28:06.022 [2024-11-28 08:26:47.994016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.022 [2024-11-28 08:26:47.994030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.022 qpair failed and we were unable to recover it. 00:28:06.022 [2024-11-28 08:26:47.994110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.022 [2024-11-28 08:26:47.994122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.022 qpair failed and we were unable to recover it. 00:28:06.022 [2024-11-28 08:26:47.994216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.022 [2024-11-28 08:26:47.994227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.022 qpair failed and we were unable to recover it. 00:28:06.022 [2024-11-28 08:26:47.994307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.022 [2024-11-28 08:26:47.994319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.022 qpair failed and we were unable to recover it. 00:28:06.022 [2024-11-28 08:26:47.994454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.022 [2024-11-28 08:26:47.994466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.022 qpair failed and we were unable to recover it. 
00:28:06.022 [2024-11-28 08:26:47.994548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.022 [2024-11-28 08:26:47.994560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.022 qpair failed and we were unable to recover it. 00:28:06.022 [2024-11-28 08:26:47.994643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.022 [2024-11-28 08:26:47.994655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.022 qpair failed and we were unable to recover it. 00:28:06.022 [2024-11-28 08:26:47.994791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.022 [2024-11-28 08:26:47.994802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.022 qpair failed and we were unable to recover it. 00:28:06.022 [2024-11-28 08:26:47.994867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.022 [2024-11-28 08:26:47.994878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.022 qpair failed and we were unable to recover it. 00:28:06.022 [2024-11-28 08:26:47.995028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.022 [2024-11-28 08:26:47.995041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.022 qpair failed and we were unable to recover it. 
00:28:06.022 [2024-11-28 08:26:47.995174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.022 [2024-11-28 08:26:47.995186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.022 qpair failed and we were unable to recover it. 00:28:06.022 [2024-11-28 08:26:47.995384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.022 [2024-11-28 08:26:47.995395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.022 qpair failed and we were unable to recover it. 00:28:06.022 [2024-11-28 08:26:47.995472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.022 [2024-11-28 08:26:47.995484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.022 qpair failed and we were unable to recover it. 00:28:06.022 [2024-11-28 08:26:47.995558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.022 [2024-11-28 08:26:47.995570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.022 qpair failed and we were unable to recover it. 00:28:06.022 [2024-11-28 08:26:47.995640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.022 [2024-11-28 08:26:47.995652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.022 qpair failed and we were unable to recover it. 
00:28:06.022 [2024-11-28 08:26:47.995743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.022 [2024-11-28 08:26:47.995754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.022 qpair failed and we were unable to recover it. 00:28:06.022 [2024-11-28 08:26:47.995999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.022 [2024-11-28 08:26:47.996012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.022 qpair failed and we were unable to recover it. 00:28:06.022 [2024-11-28 08:26:47.996097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.022 [2024-11-28 08:26:47.996109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.022 qpair failed and we were unable to recover it. 00:28:06.022 [2024-11-28 08:26:47.996176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.022 [2024-11-28 08:26:47.996186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.022 qpair failed and we were unable to recover it. 00:28:06.022 [2024-11-28 08:26:47.996260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.022 [2024-11-28 08:26:47.996272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.022 qpair failed and we were unable to recover it. 
00:28:06.022 [2024-11-28 08:26:47.996406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.022 [2024-11-28 08:26:47.996418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.022 qpair failed and we were unable to recover it. 00:28:06.022 [2024-11-28 08:26:47.996593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.022 [2024-11-28 08:26:47.996605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.022 qpair failed and we were unable to recover it. 00:28:06.022 [2024-11-28 08:26:47.996690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.022 [2024-11-28 08:26:47.996702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.022 qpair failed and we were unable to recover it. 00:28:06.022 [2024-11-28 08:26:47.996845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.022 [2024-11-28 08:26:47.996856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.022 qpair failed and we were unable to recover it. 00:28:06.022 [2024-11-28 08:26:47.997005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.022 [2024-11-28 08:26:47.997017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.022 qpair failed and we were unable to recover it. 
00:28:06.022 [2024-11-28 08:26:47.997162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.022 [2024-11-28 08:26:47.997174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.022 qpair failed and we were unable to recover it. 00:28:06.023 [2024-11-28 08:26:47.997418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.023 [2024-11-28 08:26:47.997430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.023 qpair failed and we were unable to recover it. 00:28:06.023 [2024-11-28 08:26:47.997511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.023 [2024-11-28 08:26:47.997523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.023 qpair failed and we were unable to recover it. 00:28:06.023 [2024-11-28 08:26:47.997598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.023 [2024-11-28 08:26:47.997610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.023 qpair failed and we were unable to recover it. 00:28:06.023 [2024-11-28 08:26:47.997774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.023 [2024-11-28 08:26:47.997786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.023 qpair failed and we were unable to recover it. 
00:28:06.023 [2024-11-28 08:26:47.997928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.023 [2024-11-28 08:26:47.997940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.023 qpair failed and we were unable to recover it.
[The same two-line failure pair — posix.c:1054:posix_sock_create connect() failed, errno = 111, followed by nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock reporting a sock connection error with addr=10.0.0.2, port=4420 and "qpair failed and we were unable to recover it." — repeats continuously from 08:26:47.997 through 08:26:48.019, alternating between tqpair=0x7f6c34000b90 and tqpair=0x7f6c3c000b90. The repeated entries are elided here.]
00:28:06.027 [2024-11-28 08:26:48.019634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.027 [2024-11-28 08:26:48.019647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.027 qpair failed and we were unable to recover it. 00:28:06.027 [2024-11-28 08:26:48.019800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.027 [2024-11-28 08:26:48.019833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.027 qpair failed and we were unable to recover it. 00:28:06.027 [2024-11-28 08:26:48.020043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.027 [2024-11-28 08:26:48.020081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:06.027 qpair failed and we were unable to recover it. 00:28:06.027 [2024-11-28 08:26:48.020329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.027 [2024-11-28 08:26:48.020361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:06.027 qpair failed and we were unable to recover it. 00:28:06.027 [2024-11-28 08:26:48.020553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.027 [2024-11-28 08:26:48.020585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:06.027 qpair failed and we were unable to recover it. 
00:28:06.027 [2024-11-28 08:26:48.020761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.027 [2024-11-28 08:26:48.020794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:06.027 qpair failed and we were unable to recover it. 00:28:06.027 [2024-11-28 08:26:48.021000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.027 [2024-11-28 08:26:48.021017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:06.027 qpair failed and we were unable to recover it. 00:28:06.027 [2024-11-28 08:26:48.021233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.027 [2024-11-28 08:26:48.021265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:06.027 qpair failed and we were unable to recover it. 00:28:06.027 [2024-11-28 08:26:48.021509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.027 [2024-11-28 08:26:48.021541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:06.027 qpair failed and we were unable to recover it. 00:28:06.027 [2024-11-28 08:26:48.021653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.027 [2024-11-28 08:26:48.021685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:06.027 qpair failed and we were unable to recover it. 
00:28:06.027 [2024-11-28 08:26:48.021859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.027 [2024-11-28 08:26:48.021875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:06.027 qpair failed and we were unable to recover it. 00:28:06.027 [2024-11-28 08:26:48.022082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.027 [2024-11-28 08:26:48.022099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:06.027 qpair failed and we were unable to recover it. 00:28:06.027 [2024-11-28 08:26:48.022363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.027 [2024-11-28 08:26:48.022396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:06.027 qpair failed and we were unable to recover it. 00:28:06.027 [2024-11-28 08:26:48.022582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.027 [2024-11-28 08:26:48.022615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:06.027 qpair failed and we were unable to recover it. 00:28:06.027 [2024-11-28 08:26:48.022887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.027 [2024-11-28 08:26:48.022919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:06.027 qpair failed and we were unable to recover it. 
00:28:06.027 [2024-11-28 08:26:48.023175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.027 [2024-11-28 08:26:48.023259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.027 qpair failed and we were unable to recover it. 00:28:06.027 [2024-11-28 08:26:48.023568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.027 [2024-11-28 08:26:48.023606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420 00:28:06.027 qpair failed and we were unable to recover it. 00:28:06.027 [2024-11-28 08:26:48.023761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.027 [2024-11-28 08:26:48.023779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420 00:28:06.027 qpair failed and we were unable to recover it. 00:28:06.027 [2024-11-28 08:26:48.023876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.027 [2024-11-28 08:26:48.023889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.027 qpair failed and we were unable to recover it. 00:28:06.027 [2024-11-28 08:26:48.024109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.027 [2024-11-28 08:26:48.024121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.027 qpair failed and we were unable to recover it. 
00:28:06.027 [2024-11-28 08:26:48.024259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.027 [2024-11-28 08:26:48.024271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.027 qpair failed and we were unable to recover it. 00:28:06.027 [2024-11-28 08:26:48.024419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.027 [2024-11-28 08:26:48.024431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.027 qpair failed and we were unable to recover it. 00:28:06.028 [2024-11-28 08:26:48.024633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.028 [2024-11-28 08:26:48.024658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.028 qpair failed and we were unable to recover it. 00:28:06.028 [2024-11-28 08:26:48.024871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.028 [2024-11-28 08:26:48.024903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.028 qpair failed and we were unable to recover it. 00:28:06.028 [2024-11-28 08:26:48.025046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.028 [2024-11-28 08:26:48.025079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.028 qpair failed and we were unable to recover it. 
00:28:06.028 [2024-11-28 08:26:48.025261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.028 [2024-11-28 08:26:48.025294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.028 qpair failed and we were unable to recover it. 00:28:06.028 [2024-11-28 08:26:48.025562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.028 [2024-11-28 08:26:48.025594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.028 qpair failed and we were unable to recover it. 00:28:06.028 [2024-11-28 08:26:48.025841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.028 [2024-11-28 08:26:48.025873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.028 qpair failed and we were unable to recover it. 00:28:06.028 [2024-11-28 08:26:48.026013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.028 [2024-11-28 08:26:48.026048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.028 qpair failed and we were unable to recover it. 00:28:06.028 [2024-11-28 08:26:48.026258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.028 [2024-11-28 08:26:48.026291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.028 qpair failed and we were unable to recover it. 
00:28:06.028 [2024-11-28 08:26:48.026412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.028 [2024-11-28 08:26:48.026444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.028 qpair failed and we were unable to recover it. 00:28:06.028 [2024-11-28 08:26:48.026691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.028 [2024-11-28 08:26:48.026723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.028 qpair failed and we were unable to recover it. 00:28:06.028 [2024-11-28 08:26:48.026856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.028 [2024-11-28 08:26:48.026868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.028 qpair failed and we were unable to recover it. 00:28:06.028 [2024-11-28 08:26:48.026953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.028 [2024-11-28 08:26:48.026964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.028 qpair failed and we were unable to recover it. 00:28:06.028 [2024-11-28 08:26:48.027100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.028 [2024-11-28 08:26:48.027132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.028 qpair failed and we were unable to recover it. 
00:28:06.028 [2024-11-28 08:26:48.027247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.028 [2024-11-28 08:26:48.027280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.028 qpair failed and we were unable to recover it. 00:28:06.028 [2024-11-28 08:26:48.027497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.028 [2024-11-28 08:26:48.027529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.028 qpair failed and we were unable to recover it. 00:28:06.028 [2024-11-28 08:26:48.027639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.028 [2024-11-28 08:26:48.027651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.028 qpair failed and we were unable to recover it. 00:28:06.028 [2024-11-28 08:26:48.027744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.028 [2024-11-28 08:26:48.027755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.028 qpair failed and we were unable to recover it. 00:28:06.028 [2024-11-28 08:26:48.027895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.028 [2024-11-28 08:26:48.027928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.028 qpair failed and we were unable to recover it. 
00:28:06.028 [2024-11-28 08:26:48.028199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.028 [2024-11-28 08:26:48.028232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.028 qpair failed and we were unable to recover it. 00:28:06.028 [2024-11-28 08:26:48.028365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.028 [2024-11-28 08:26:48.028398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.028 qpair failed and we were unable to recover it. 00:28:06.028 [2024-11-28 08:26:48.028529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.028 [2024-11-28 08:26:48.028548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420 00:28:06.028 qpair failed and we were unable to recover it. 00:28:06.028 [2024-11-28 08:26:48.028723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.028 [2024-11-28 08:26:48.028739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420 00:28:06.028 qpair failed and we were unable to recover it. 00:28:06.028 [2024-11-28 08:26:48.028830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.028 [2024-11-28 08:26:48.028845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420 00:28:06.028 qpair failed and we were unable to recover it. 
00:28:06.028 [2024-11-28 08:26:48.028935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.028 [2024-11-28 08:26:48.028955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420 00:28:06.028 qpair failed and we were unable to recover it. 00:28:06.028 [2024-11-28 08:26:48.029169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.028 [2024-11-28 08:26:48.029185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420 00:28:06.028 qpair failed and we were unable to recover it. 00:28:06.028 [2024-11-28 08:26:48.029304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.028 [2024-11-28 08:26:48.029336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420 00:28:06.028 qpair failed and we were unable to recover it. 00:28:06.028 [2024-11-28 08:26:48.029536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.028 [2024-11-28 08:26:48.029569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420 00:28:06.028 qpair failed and we were unable to recover it. 00:28:06.028 [2024-11-28 08:26:48.029863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.028 [2024-11-28 08:26:48.029897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420 00:28:06.028 qpair failed and we were unable to recover it. 
00:28:06.028 [2024-11-28 08:26:48.030109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.028 [2024-11-28 08:26:48.030143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420 00:28:06.028 qpair failed and we were unable to recover it. 00:28:06.028 [2024-11-28 08:26:48.030274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.028 [2024-11-28 08:26:48.030307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420 00:28:06.028 qpair failed and we were unable to recover it. 00:28:06.028 [2024-11-28 08:26:48.030431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.028 [2024-11-28 08:26:48.030448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420 00:28:06.028 qpair failed and we were unable to recover it. 00:28:06.029 [2024-11-28 08:26:48.030693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.029 [2024-11-28 08:26:48.030724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420 00:28:06.029 qpair failed and we were unable to recover it. 00:28:06.029 [2024-11-28 08:26:48.030903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.029 [2024-11-28 08:26:48.030936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420 00:28:06.029 qpair failed and we were unable to recover it. 
00:28:06.029 [2024-11-28 08:26:48.031218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.029 [2024-11-28 08:26:48.031260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420 00:28:06.029 qpair failed and we were unable to recover it. 00:28:06.029 [2024-11-28 08:26:48.031453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.029 [2024-11-28 08:26:48.031485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420 00:28:06.029 qpair failed and we were unable to recover it. 00:28:06.029 [2024-11-28 08:26:48.031670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.029 [2024-11-28 08:26:48.031703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420 00:28:06.029 qpair failed and we were unable to recover it. 00:28:06.029 [2024-11-28 08:26:48.031903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.029 [2024-11-28 08:26:48.031936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420 00:28:06.029 qpair failed and we were unable to recover it. 00:28:06.029 [2024-11-28 08:26:48.032145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.029 [2024-11-28 08:26:48.032178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420 00:28:06.029 qpair failed and we were unable to recover it. 
00:28:06.029 [2024-11-28 08:26:48.032363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.029 [2024-11-28 08:26:48.032395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420 00:28:06.029 qpair failed and we were unable to recover it. 00:28:06.029 [2024-11-28 08:26:48.032571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.029 [2024-11-28 08:26:48.032587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420 00:28:06.029 qpair failed and we were unable to recover it. 00:28:06.029 [2024-11-28 08:26:48.032678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.029 [2024-11-28 08:26:48.032724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420 00:28:06.029 qpair failed and we were unable to recover it. 00:28:06.029 [2024-11-28 08:26:48.032853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.029 [2024-11-28 08:26:48.032886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420 00:28:06.029 qpair failed and we were unable to recover it. 00:28:06.029 [2024-11-28 08:26:48.033095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.029 [2024-11-28 08:26:48.033131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420 00:28:06.029 qpair failed and we were unable to recover it. 
00:28:06.029 [2024-11-28 08:26:48.033345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.029 [2024-11-28 08:26:48.033377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420 00:28:06.029 qpair failed and we were unable to recover it. 00:28:06.029 [2024-11-28 08:26:48.033566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.029 [2024-11-28 08:26:48.033598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420 00:28:06.029 qpair failed and we were unable to recover it. 00:28:06.029 [2024-11-28 08:26:48.033778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.029 [2024-11-28 08:26:48.033811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420 00:28:06.029 qpair failed and we were unable to recover it. 00:28:06.029 [2024-11-28 08:26:48.034005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.029 [2024-11-28 08:26:48.034022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420 00:28:06.029 qpair failed and we were unable to recover it. 00:28:06.029 [2024-11-28 08:26:48.034168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.029 [2024-11-28 08:26:48.034185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420 00:28:06.029 qpair failed and we were unable to recover it. 
00:28:06.029 [2024-11-28 08:26:48.034333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.029 [2024-11-28 08:26:48.034349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420 00:28:06.029 qpair failed and we were unable to recover it. 00:28:06.029 [2024-11-28 08:26:48.034490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.029 [2024-11-28 08:26:48.034506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420 00:28:06.029 qpair failed and we were unable to recover it. 00:28:06.029 [2024-11-28 08:26:48.034711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.029 [2024-11-28 08:26:48.034727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420 00:28:06.029 qpair failed and we were unable to recover it. 00:28:06.029 [2024-11-28 08:26:48.034903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.029 [2024-11-28 08:26:48.034937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420 00:28:06.029 qpair failed and we were unable to recover it. 00:28:06.029 [2024-11-28 08:26:48.035067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.029 [2024-11-28 08:26:48.035100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420 00:28:06.029 qpair failed and we were unable to recover it. 
00:28:06.029 [2024-11-28 08:26:48.035293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.029 [2024-11-28 08:26:48.035325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420 00:28:06.029 qpair failed and we were unable to recover it.
00:28:06.029 [... the same error triple — posix.c:1054:posix_sock_create "connect() failed, errno = 111" (ECONNREFUSED), nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock "sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420", and "qpair failed and we were unable to recover it." — repeats continuously from 08:26:48.035507 through 08:26:48.058533 (log timestamps 00:28:06.029-00:28:06.033) ...]
00:28:06.033 [2024-11-28 08:26:48.058720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.033 [2024-11-28 08:26:48.058752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420 00:28:06.033 qpair failed and we were unable to recover it. 00:28:06.033 [2024-11-28 08:26:48.058883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.033 [2024-11-28 08:26:48.058916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420 00:28:06.033 qpair failed and we were unable to recover it. 00:28:06.033 [2024-11-28 08:26:48.059045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.033 [2024-11-28 08:26:48.059080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420 00:28:06.033 qpair failed and we were unable to recover it. 00:28:06.034 [2024-11-28 08:26:48.059261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.034 [2024-11-28 08:26:48.059294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420 00:28:06.034 qpair failed and we were unable to recover it. 00:28:06.034 [2024-11-28 08:26:48.059474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.034 [2024-11-28 08:26:48.059507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420 00:28:06.034 qpair failed and we were unable to recover it. 
00:28:06.034 [2024-11-28 08:26:48.059642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.034 [2024-11-28 08:26:48.059659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420 00:28:06.034 qpair failed and we were unable to recover it. 00:28:06.034 [2024-11-28 08:26:48.059823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.034 [2024-11-28 08:26:48.059839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420 00:28:06.034 qpair failed and we were unable to recover it. 00:28:06.034 [2024-11-28 08:26:48.060070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.034 [2024-11-28 08:26:48.060086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420 00:28:06.034 qpair failed and we were unable to recover it. 00:28:06.034 [2024-11-28 08:26:48.060243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.034 [2024-11-28 08:26:48.060259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420 00:28:06.034 qpair failed and we were unable to recover it. 00:28:06.034 [2024-11-28 08:26:48.060361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.034 [2024-11-28 08:26:48.060377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420 00:28:06.034 qpair failed and we were unable to recover it. 
00:28:06.034 [2024-11-28 08:26:48.060530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.034 [2024-11-28 08:26:48.060545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420 00:28:06.034 qpair failed and we were unable to recover it. 00:28:06.034 [2024-11-28 08:26:48.060693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.034 [2024-11-28 08:26:48.060709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420 00:28:06.034 qpair failed and we were unable to recover it. 00:28:06.034 [2024-11-28 08:26:48.060897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.034 [2024-11-28 08:26:48.060935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420 00:28:06.034 qpair failed and we were unable to recover it. 00:28:06.034 [2024-11-28 08:26:48.061208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.034 [2024-11-28 08:26:48.061242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420 00:28:06.034 qpair failed and we were unable to recover it. 00:28:06.034 [2024-11-28 08:26:48.061441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.034 [2024-11-28 08:26:48.061474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420 00:28:06.034 qpair failed and we were unable to recover it. 
00:28:06.034 [2024-11-28 08:26:48.061651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.034 [2024-11-28 08:26:48.061683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420 00:28:06.034 qpair failed and we were unable to recover it. 00:28:06.034 [2024-11-28 08:26:48.061801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.034 [2024-11-28 08:26:48.061841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420 00:28:06.034 qpair failed and we were unable to recover it. 00:28:06.034 [2024-11-28 08:26:48.062060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.034 [2024-11-28 08:26:48.062087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420 00:28:06.034 qpair failed and we were unable to recover it. 00:28:06.034 [2024-11-28 08:26:48.062325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.034 [2024-11-28 08:26:48.062342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420 00:28:06.034 qpair failed and we were unable to recover it. 00:28:06.034 [2024-11-28 08:26:48.062552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.034 [2024-11-28 08:26:48.062568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420 00:28:06.034 qpair failed and we were unable to recover it. 
00:28:06.034 [2024-11-28 08:26:48.062720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.034 [2024-11-28 08:26:48.062736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420 00:28:06.034 qpair failed and we were unable to recover it. 00:28:06.034 [2024-11-28 08:26:48.062930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.034 [2024-11-28 08:26:48.062971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420 00:28:06.034 qpair failed and we were unable to recover it. 00:28:06.034 [2024-11-28 08:26:48.063094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.034 [2024-11-28 08:26:48.063127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420 00:28:06.034 qpair failed and we were unable to recover it. 00:28:06.034 [2024-11-28 08:26:48.063260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.034 [2024-11-28 08:26:48.063293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420 00:28:06.034 qpair failed and we were unable to recover it. 00:28:06.034 [2024-11-28 08:26:48.063566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.034 [2024-11-28 08:26:48.063599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420 00:28:06.034 qpair failed and we were unable to recover it. 
00:28:06.034 [2024-11-28 08:26:48.063727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.034 [2024-11-28 08:26:48.063760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420 00:28:06.034 qpair failed and we were unable to recover it. 00:28:06.034 [2024-11-28 08:26:48.063968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.034 [2024-11-28 08:26:48.064013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420 00:28:06.034 qpair failed and we were unable to recover it. 00:28:06.034 [2024-11-28 08:26:48.064249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.034 [2024-11-28 08:26:48.064265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420 00:28:06.034 qpair failed and we were unable to recover it. 00:28:06.034 [2024-11-28 08:26:48.064423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.034 [2024-11-28 08:26:48.064439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420 00:28:06.034 qpair failed and we were unable to recover it. 00:28:06.034 [2024-11-28 08:26:48.064545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.034 [2024-11-28 08:26:48.064561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420 00:28:06.034 qpair failed and we were unable to recover it. 
00:28:06.034 [2024-11-28 08:26:48.064645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.034 [2024-11-28 08:26:48.064659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420 00:28:06.034 qpair failed and we were unable to recover it. 00:28:06.034 [2024-11-28 08:26:48.064888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.034 [2024-11-28 08:26:48.064904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420 00:28:06.034 qpair failed and we were unable to recover it. 00:28:06.034 [2024-11-28 08:26:48.065050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.034 [2024-11-28 08:26:48.065067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420 00:28:06.034 qpair failed and we were unable to recover it. 00:28:06.034 [2024-11-28 08:26:48.065279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.035 [2024-11-28 08:26:48.065313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420 00:28:06.035 qpair failed and we were unable to recover it. 00:28:06.035 [2024-11-28 08:26:48.065498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.035 [2024-11-28 08:26:48.065531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420 00:28:06.035 qpair failed and we were unable to recover it. 
00:28:06.035 [2024-11-28 08:26:48.065639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.035 [2024-11-28 08:26:48.065671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420 00:28:06.035 qpair failed and we were unable to recover it. 00:28:06.035 [2024-11-28 08:26:48.065892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.035 [2024-11-28 08:26:48.065925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420 00:28:06.035 qpair failed and we were unable to recover it. 00:28:06.035 [2024-11-28 08:26:48.066140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.035 [2024-11-28 08:26:48.066173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420 00:28:06.035 qpair failed and we were unable to recover it. 00:28:06.035 [2024-11-28 08:26:48.066388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.035 [2024-11-28 08:26:48.066421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420 00:28:06.035 qpair failed and we were unable to recover it. 00:28:06.035 [2024-11-28 08:26:48.066620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.035 [2024-11-28 08:26:48.066636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420 00:28:06.035 qpair failed and we were unable to recover it. 
00:28:06.035 [2024-11-28 08:26:48.066795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.035 [2024-11-28 08:26:48.066827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420 00:28:06.035 qpair failed and we were unable to recover it. 00:28:06.035 [2024-11-28 08:26:48.067024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.035 [2024-11-28 08:26:48.067059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420 00:28:06.035 qpair failed and we were unable to recover it. 00:28:06.035 [2024-11-28 08:26:48.067241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.035 [2024-11-28 08:26:48.067273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420 00:28:06.035 qpair failed and we were unable to recover it. 00:28:06.035 [2024-11-28 08:26:48.067394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.035 [2024-11-28 08:26:48.067426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420 00:28:06.035 qpair failed and we were unable to recover it. 00:28:06.035 [2024-11-28 08:26:48.067553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.035 [2024-11-28 08:26:48.067586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420 00:28:06.035 qpair failed and we were unable to recover it. 
00:28:06.035 [2024-11-28 08:26:48.067691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.035 [2024-11-28 08:26:48.067723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420 00:28:06.035 qpair failed and we were unable to recover it. 00:28:06.035 [2024-11-28 08:26:48.067905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.035 [2024-11-28 08:26:48.067938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420 00:28:06.035 qpair failed and we were unable to recover it. 00:28:06.035 [2024-11-28 08:26:48.068192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.035 [2024-11-28 08:26:48.068225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420 00:28:06.035 qpair failed and we were unable to recover it. 00:28:06.035 [2024-11-28 08:26:48.068354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.035 [2024-11-28 08:26:48.068386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420 00:28:06.035 qpair failed and we were unable to recover it. 00:28:06.035 [2024-11-28 08:26:48.068627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.035 [2024-11-28 08:26:48.068660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420 00:28:06.035 qpair failed and we were unable to recover it. 
00:28:06.035 [2024-11-28 08:26:48.068922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.035 [2024-11-28 08:26:48.068965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420 00:28:06.035 qpair failed and we were unable to recover it. 00:28:06.035 [2024-11-28 08:26:48.069087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.035 [2024-11-28 08:26:48.069121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420 00:28:06.035 qpair failed and we were unable to recover it. 00:28:06.035 [2024-11-28 08:26:48.069248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.035 [2024-11-28 08:26:48.069286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420 00:28:06.035 qpair failed and we were unable to recover it. 00:28:06.035 [2024-11-28 08:26:48.069529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.035 [2024-11-28 08:26:48.069563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420 00:28:06.035 qpair failed and we were unable to recover it. 00:28:06.035 [2024-11-28 08:26:48.069751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.035 [2024-11-28 08:26:48.069785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420 00:28:06.035 qpair failed and we were unable to recover it. 
00:28:06.035 [2024-11-28 08:26:48.070039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.035 [2024-11-28 08:26:48.070056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420 00:28:06.035 qpair failed and we were unable to recover it. 00:28:06.035 [2024-11-28 08:26:48.070148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.035 [2024-11-28 08:26:48.070163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420 00:28:06.035 qpair failed and we were unable to recover it. 00:28:06.035 [2024-11-28 08:26:48.070333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.035 [2024-11-28 08:26:48.070365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420 00:28:06.035 qpair failed and we were unable to recover it. 00:28:06.035 [2024-11-28 08:26:48.070483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.035 [2024-11-28 08:26:48.070516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420 00:28:06.035 qpair failed and we were unable to recover it. 00:28:06.035 [2024-11-28 08:26:48.070650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.035 [2024-11-28 08:26:48.070684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420 00:28:06.035 qpair failed and we were unable to recover it. 
00:28:06.035 [2024-11-28 08:26:48.070827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.035 [2024-11-28 08:26:48.070860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420 00:28:06.035 qpair failed and we were unable to recover it. 00:28:06.035 [2024-11-28 08:26:48.071051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.035 [2024-11-28 08:26:48.071069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420 00:28:06.035 qpair failed and we were unable to recover it. 00:28:06.035 [2024-11-28 08:26:48.071170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.035 [2024-11-28 08:26:48.071186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420 00:28:06.035 qpair failed and we were unable to recover it. 00:28:06.036 [2024-11-28 08:26:48.071278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.036 [2024-11-28 08:26:48.071294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420 00:28:06.036 qpair failed and we were unable to recover it. 00:28:06.036 [2024-11-28 08:26:48.071369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.036 [2024-11-28 08:26:48.071384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420 00:28:06.036 qpair failed and we were unable to recover it. 
00:28:06.036 [2024-11-28 08:26:48.071576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.036 [2024-11-28 08:26:48.071609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420 00:28:06.036 qpair failed and we were unable to recover it. 00:28:06.036 [2024-11-28 08:26:48.071799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.036 [2024-11-28 08:26:48.071832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420 00:28:06.036 qpair failed and we were unable to recover it. 00:28:06.036 [2024-11-28 08:26:48.071965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.036 [2024-11-28 08:26:48.071999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420 00:28:06.036 qpair failed and we were unable to recover it. 00:28:06.036 [2024-11-28 08:26:48.072197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.036 [2024-11-28 08:26:48.072230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420 00:28:06.036 qpair failed and we were unable to recover it. 00:28:06.036 [2024-11-28 08:26:48.072352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.036 [2024-11-28 08:26:48.072386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420 00:28:06.036 qpair failed and we were unable to recover it. 
00:28:06.036 [2024-11-28 08:26:48.072631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.036 [2024-11-28 08:26:48.072664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420 00:28:06.036 qpair failed and we were unable to recover it. 00:28:06.036 [2024-11-28 08:26:48.072791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.036 [2024-11-28 08:26:48.072829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420 00:28:06.036 qpair failed and we were unable to recover it. 00:28:06.036 [2024-11-28 08:26:48.072912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.036 [2024-11-28 08:26:48.072926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420 00:28:06.036 qpair failed and we were unable to recover it. 00:28:06.036 [2024-11-28 08:26:48.073006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.036 [2024-11-28 08:26:48.073021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420 00:28:06.036 qpair failed and we were unable to recover it. 00:28:06.036 [2024-11-28 08:26:48.073174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.036 [2024-11-28 08:26:48.073212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:06.036 qpair failed and we were unable to recover it. 
00:28:06.036 [2024-11-28 08:26:48.073436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.036 [2024-11-28 08:26:48.073455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420
00:28:06.036 qpair failed and we were unable to recover it.
00:28:06.036 [2024-11-28 08:26:48.073640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.036 [2024-11-28 08:26:48.073673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420
00:28:06.036 qpair failed and we were unable to recover it.
00:28:06.036 [2024-11-28 08:26:48.073966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.036 [2024-11-28 08:26:48.074002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420
00:28:06.036 qpair failed and we were unable to recover it.
00:28:06.036 [2024-11-28 08:26:48.074194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.036 [2024-11-28 08:26:48.074227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420
00:28:06.036 qpair failed and we were unable to recover it.
00:28:06.036 [2024-11-28 08:26:48.074452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.036 [2024-11-28 08:26:48.074484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420
00:28:06.036 qpair failed and we were unable to recover it.
00:28:06.036 [2024-11-28 08:26:48.074629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.036 [2024-11-28 08:26:48.074645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420
00:28:06.036 qpair failed and we were unable to recover it.
00:28:06.036 [2024-11-28 08:26:48.074819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.036 [2024-11-28 08:26:48.074850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420
00:28:06.036 qpair failed and we were unable to recover it.
00:28:06.036 [2024-11-28 08:26:48.075080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.036 [2024-11-28 08:26:48.075115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420
00:28:06.036 qpair failed and we were unable to recover it.
00:28:06.036 [2024-11-28 08:26:48.075303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.036 [2024-11-28 08:26:48.075336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420
00:28:06.036 qpair failed and we were unable to recover it.
00:28:06.036 [2024-11-28 08:26:48.075531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.036 [2024-11-28 08:26:48.075563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420
00:28:06.036 qpair failed and we were unable to recover it.
00:28:06.036 [2024-11-28 08:26:48.075781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.036 [2024-11-28 08:26:48.075813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420
00:28:06.036 qpair failed and we were unable to recover it.
00:28:06.036 [2024-11-28 08:26:48.075941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.036 [2024-11-28 08:26:48.075961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420
00:28:06.036 qpair failed and we were unable to recover it.
00:28:06.036 [2024-11-28 08:26:48.076055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.036 [2024-11-28 08:26:48.076070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420
00:28:06.036 qpair failed and we were unable to recover it.
00:28:06.036 [2024-11-28 08:26:48.076220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.036 [2024-11-28 08:26:48.076236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420
00:28:06.036 qpair failed and we were unable to recover it.
00:28:06.036 [2024-11-28 08:26:48.076413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.036 [2024-11-28 08:26:48.076429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420
00:28:06.036 qpair failed and we were unable to recover it.
00:28:06.036 [2024-11-28 08:26:48.076518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.036 [2024-11-28 08:26:48.076533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420
00:28:06.036 qpair failed and we were unable to recover it.
00:28:06.036 [2024-11-28 08:26:48.076632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.036 [2024-11-28 08:26:48.076646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420
00:28:06.036 qpair failed and we were unable to recover it.
00:28:06.036 [2024-11-28 08:26:48.076798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.037 [2024-11-28 08:26:48.076836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420
00:28:06.037 qpair failed and we were unable to recover it.
00:28:06.037 [2024-11-28 08:26:48.077062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.037 [2024-11-28 08:26:48.077097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420
00:28:06.037 qpair failed and we were unable to recover it.
00:28:06.037 [2024-11-28 08:26:48.077274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.037 [2024-11-28 08:26:48.077306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420
00:28:06.037 qpair failed and we were unable to recover it.
00:28:06.037 [2024-11-28 08:26:48.077509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.037 [2024-11-28 08:26:48.077541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420
00:28:06.037 qpair failed and we were unable to recover it.
00:28:06.037 [2024-11-28 08:26:48.077677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.037 [2024-11-28 08:26:48.077709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420
00:28:06.037 qpair failed and we were unable to recover it.
00:28:06.037 [2024-11-28 08:26:48.077822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.037 [2024-11-28 08:26:48.077853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420
00:28:06.037 qpair failed and we were unable to recover it.
00:28:06.037 [2024-11-28 08:26:48.078019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.037 [2024-11-28 08:26:48.078036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420
00:28:06.037 qpair failed and we were unable to recover it.
00:28:06.037 [2024-11-28 08:26:48.078227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.037 [2024-11-28 08:26:48.078243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420
00:28:06.037 qpair failed and we were unable to recover it.
00:28:06.037 [2024-11-28 08:26:48.078331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.037 [2024-11-28 08:26:48.078346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420
00:28:06.037 qpair failed and we were unable to recover it.
00:28:06.037 [2024-11-28 08:26:48.078437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.037 [2024-11-28 08:26:48.078451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420
00:28:06.037 qpair failed and we were unable to recover it.
00:28:06.037 [2024-11-28 08:26:48.078557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.037 [2024-11-28 08:26:48.078574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420
00:28:06.037 qpair failed and we were unable to recover it.
00:28:06.037 [2024-11-28 08:26:48.078806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.037 [2024-11-28 08:26:48.078821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420
00:28:06.037 qpair failed and we were unable to recover it.
00:28:06.037 [2024-11-28 08:26:48.078978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.037 [2024-11-28 08:26:48.078995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420
00:28:06.037 qpair failed and we were unable to recover it.
00:28:06.037 [2024-11-28 08:26:48.079086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.037 [2024-11-28 08:26:48.079115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420
00:28:06.037 qpair failed and we were unable to recover it.
00:28:06.037 [2024-11-28 08:26:48.079373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.037 [2024-11-28 08:26:48.079407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420
00:28:06.037 qpair failed and we were unable to recover it.
00:28:06.037 [2024-11-28 08:26:48.079541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.037 [2024-11-28 08:26:48.079573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420
00:28:06.037 qpair failed and we were unable to recover it.
00:28:06.037 [2024-11-28 08:26:48.079712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.037 [2024-11-28 08:26:48.079745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420
00:28:06.037 qpair failed and we were unable to recover it.
00:28:06.037 [2024-11-28 08:26:48.079885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.037 [2024-11-28 08:26:48.079918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420
00:28:06.037 qpair failed and we were unable to recover it.
00:28:06.037 [2024-11-28 08:26:48.080139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.037 [2024-11-28 08:26:48.080176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420
00:28:06.037 qpair failed and we were unable to recover it.
00:28:06.037 [2024-11-28 08:26:48.080469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.037 [2024-11-28 08:26:48.080507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420
00:28:06.037 qpair failed and we were unable to recover it.
00:28:06.037 [2024-11-28 08:26:48.080755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.037 [2024-11-28 08:26:48.080788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420
00:28:06.037 qpair failed and we were unable to recover it.
00:28:06.037 [2024-11-28 08:26:48.080915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.037 [2024-11-28 08:26:48.080963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420
00:28:06.037 qpair failed and we were unable to recover it.
00:28:06.037 [2024-11-28 08:26:48.081137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.037 [2024-11-28 08:26:48.081153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420
00:28:06.037 qpair failed and we were unable to recover it.
00:28:06.037 [2024-11-28 08:26:48.081241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.038 [2024-11-28 08:26:48.081256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420
00:28:06.038 qpair failed and we were unable to recover it.
00:28:06.038 [2024-11-28 08:26:48.081345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.038 [2024-11-28 08:26:48.081360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420
00:28:06.038 qpair failed and we were unable to recover it.
00:28:06.038 [2024-11-28 08:26:48.081444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.038 [2024-11-28 08:26:48.081458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420
00:28:06.038 qpair failed and we were unable to recover it.
00:28:06.038 [2024-11-28 08:26:48.081594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.038 [2024-11-28 08:26:48.081609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420
00:28:06.038 qpair failed and we were unable to recover it.
00:28:06.038 [2024-11-28 08:26:48.081770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.038 [2024-11-28 08:26:48.081803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420
00:28:06.038 qpair failed and we were unable to recover it.
00:28:06.038 [2024-11-28 08:26:48.082001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.038 [2024-11-28 08:26:48.082036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420
00:28:06.038 qpair failed and we were unable to recover it.
00:28:06.038 [2024-11-28 08:26:48.082252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.038 [2024-11-28 08:26:48.082285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420
00:28:06.038 qpair failed and we were unable to recover it.
00:28:06.038 [2024-11-28 08:26:48.082499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.038 [2024-11-28 08:26:48.082533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420
00:28:06.038 qpair failed and we were unable to recover it.
00:28:06.038 [2024-11-28 08:26:48.082732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.038 [2024-11-28 08:26:48.082765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420
00:28:06.038 qpair failed and we were unable to recover it.
00:28:06.038 [2024-11-28 08:26:48.082899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.038 [2024-11-28 08:26:48.082933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420
00:28:06.038 qpair failed and we were unable to recover it.
00:28:06.038 [2024-11-28 08:26:48.083155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.038 [2024-11-28 08:26:48.083189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420
00:28:06.038 qpair failed and we were unable to recover it.
00:28:06.038 [2024-11-28 08:26:48.083445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.038 [2024-11-28 08:26:48.083479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420
00:28:06.038 qpair failed and we were unable to recover it.
00:28:06.038 [2024-11-28 08:26:48.083602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.038 [2024-11-28 08:26:48.083634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420
00:28:06.038 qpair failed and we were unable to recover it.
00:28:06.038 [2024-11-28 08:26:48.083812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.038 [2024-11-28 08:26:48.083846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420
00:28:06.038 qpair failed and we were unable to recover it.
00:28:06.038 [2024-11-28 08:26:48.083975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.038 [2024-11-28 08:26:48.083992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420
00:28:06.038 qpair failed and we were unable to recover it.
00:28:06.038 [2024-11-28 08:26:48.084098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.038 [2024-11-28 08:26:48.084114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420
00:28:06.038 qpair failed and we were unable to recover it.
00:28:06.038 [2024-11-28 08:26:48.084264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.038 [2024-11-28 08:26:48.084280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420
00:28:06.038 qpair failed and we were unable to recover it.
00:28:06.038 [2024-11-28 08:26:48.084364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.038 [2024-11-28 08:26:48.084402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420
00:28:06.038 qpair failed and we were unable to recover it.
00:28:06.038 [2024-11-28 08:26:48.084612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.038 [2024-11-28 08:26:48.084644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420
00:28:06.038 qpair failed and we were unable to recover it.
00:28:06.038 [2024-11-28 08:26:48.084782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.038 [2024-11-28 08:26:48.084817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420
00:28:06.038 qpair failed and we were unable to recover it.
00:28:06.038 [2024-11-28 08:26:48.085016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.038 [2024-11-28 08:26:48.085051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420
00:28:06.038 qpair failed and we were unable to recover it.
00:28:06.038 [2024-11-28 08:26:48.085316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.038 [2024-11-28 08:26:48.085348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420
00:28:06.038 qpair failed and we were unable to recover it.
00:28:06.038 [2024-11-28 08:26:48.085638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.038 [2024-11-28 08:26:48.085671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420
00:28:06.038 qpair failed and we were unable to recover it.
00:28:06.038 [2024-11-28 08:26:48.085860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.038 [2024-11-28 08:26:48.085877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420
00:28:06.038 qpair failed and we were unable to recover it.
00:28:06.038 [2024-11-28 08:26:48.086033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.038 [2024-11-28 08:26:48.086067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420
00:28:06.038 qpair failed and we were unable to recover it.
00:28:06.038 [2024-11-28 08:26:48.086209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.038 [2024-11-28 08:26:48.086242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420
00:28:06.038 qpair failed and we were unable to recover it.
00:28:06.038 [2024-11-28 08:26:48.086381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.038 [2024-11-28 08:26:48.086414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420
00:28:06.038 qpair failed and we were unable to recover it.
00:28:06.038 [2024-11-28 08:26:48.086539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.038 [2024-11-28 08:26:48.086572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420
00:28:06.038 qpair failed and we were unable to recover it.
00:28:06.038 [2024-11-28 08:26:48.086756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.038 [2024-11-28 08:26:48.086789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420
00:28:06.038 qpair failed and we were unable to recover it.
00:28:06.038 [2024-11-28 08:26:48.086914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.038 [2024-11-28 08:26:48.086931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420
00:28:06.038 qpair failed and we were unable to recover it.
00:28:06.038 [2024-11-28 08:26:48.087159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.039 [2024-11-28 08:26:48.087193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420
00:28:06.039 qpair failed and we were unable to recover it.
00:28:06.039 [2024-11-28 08:26:48.087378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.039 [2024-11-28 08:26:48.087417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420
00:28:06.039 qpair failed and we were unable to recover it.
00:28:06.039 [2024-11-28 08:26:48.087557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.039 [2024-11-28 08:26:48.087590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420
00:28:06.039 qpair failed and we were unable to recover it.
00:28:06.039 [2024-11-28 08:26:48.087699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.039 [2024-11-28 08:26:48.087715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420
00:28:06.039 qpair failed and we were unable to recover it.
00:28:06.039 [2024-11-28 08:26:48.087864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.039 [2024-11-28 08:26:48.087881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420
00:28:06.039 qpair failed and we were unable to recover it.
00:28:06.039 [2024-11-28 08:26:48.087960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.039 [2024-11-28 08:26:48.087975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420
00:28:06.039 qpair failed and we were unable to recover it.
00:28:06.039 [2024-11-28 08:26:48.088064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.039 [2024-11-28 08:26:48.088080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420
00:28:06.039 qpair failed and we were unable to recover it.
00:28:06.039 [2024-11-28 08:26:48.088178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.039 [2024-11-28 08:26:48.088210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420
00:28:06.039 qpair failed and we were unable to recover it.
00:28:06.039 [2024-11-28 08:26:48.088456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.039 [2024-11-28 08:26:48.088489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420
00:28:06.039 qpair failed and we were unable to recover it.
00:28:06.039 [2024-11-28 08:26:48.088733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.039 [2024-11-28 08:26:48.088765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420
00:28:06.039 qpair failed and we were unable to recover it.
00:28:06.039 [2024-11-28 08:26:48.088900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.039 [2024-11-28 08:26:48.088933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420
00:28:06.039 qpair failed and we were unable to recover it.
00:28:06.039 [2024-11-28 08:26:48.089130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.039 [2024-11-28 08:26:48.089164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420
00:28:06.039 qpair failed and we were unable to recover it.
00:28:06.039 [2024-11-28 08:26:48.089281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.039 [2024-11-28 08:26:48.089315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420
00:28:06.039 qpair failed and we were unable to recover it.
00:28:06.039 [2024-11-28 08:26:48.089563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.039 [2024-11-28 08:26:48.089595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420
00:28:06.039 qpair failed and we were unable to recover it.
00:28:06.039 [2024-11-28 08:26:48.089737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.039 [2024-11-28 08:26:48.089770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420
00:28:06.039 qpair failed and we were unable to recover it.
00:28:06.039 [2024-11-28 08:26:48.090001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.039 [2024-11-28 08:26:48.090018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420
00:28:06.039 qpair failed and we were unable to recover it.
00:28:06.039 [2024-11-28 08:26:48.090121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.039 [2024-11-28 08:26:48.090137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420
00:28:06.039 qpair failed and we were unable to recover it.
00:28:06.039 [2024-11-28 08:26:48.090318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.039 [2024-11-28 08:26:48.090334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420
00:28:06.039 qpair failed and we were unable to recover it.
00:28:06.039 [2024-11-28 08:26:48.090476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.039 [2024-11-28 08:26:48.090510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420
00:28:06.039 qpair failed and we were unable to recover it.
00:28:06.039 [2024-11-28 08:26:48.090631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.039 [2024-11-28 08:26:48.090664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420
00:28:06.039 qpair failed and we were unable to recover it.
00:28:06.039 [2024-11-28 08:26:48.090873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.039 [2024-11-28 08:26:48.090905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420
00:28:06.039 qpair failed and we were unable to recover it.
00:28:06.039 [2024-11-28 08:26:48.091051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.039 [2024-11-28 08:26:48.091068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420
00:28:06.039 qpair failed and we were unable to recover it.
00:28:06.039 [2024-11-28 08:26:48.091221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.039 [2024-11-28 08:26:48.091237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420
00:28:06.039 qpair failed and we were unable to recover it.
00:28:06.039 [2024-11-28 08:26:48.091400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.039 [2024-11-28 08:26:48.091416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420
00:28:06.039 qpair failed and we were unable to recover it.
00:28:06.039 [2024-11-28 08:26:48.091511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.039 [2024-11-28 08:26:48.091526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420
00:28:06.039 qpair failed and we were unable to recover it.
00:28:06.039 [2024-11-28 08:26:48.091684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.039 [2024-11-28 08:26:48.091700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420
00:28:06.039 qpair failed and we were unable to recover it.
00:28:06.039 [2024-11-28 08:26:48.091856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.039 [2024-11-28 08:26:48.091873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420
00:28:06.039 qpair failed and we were unable to recover it.
00:28:06.039 [2024-11-28 08:26:48.092033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.039 [2024-11-28 08:26:48.092050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420
00:28:06.039 qpair failed and we were unable to recover it.
00:28:06.039 [2024-11-28 08:26:48.092161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.039 [2024-11-28 08:26:48.092177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420
00:28:06.039 qpair failed and we were unable to recover it.
00:28:06.039 [2024-11-28 08:26:48.092323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.039 [2024-11-28 08:26:48.092339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420
00:28:06.039 qpair failed and we were unable to recover it.
00:28:06.040 [2024-11-28 08:26:48.092422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.040 [2024-11-28 08:26:48.092437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420
00:28:06.040 qpair failed and we were unable to recover it.
00:28:06.040 [2024-11-28 08:26:48.092587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.040 [2024-11-28 08:26:48.092604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420
00:28:06.040 qpair failed and we were unable to recover it.
00:28:06.040 [2024-11-28 08:26:48.092752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.040 [2024-11-28 08:26:48.092768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420
00:28:06.040 qpair failed and we were unable to recover it.
00:28:06.040 [2024-11-28 08:26:48.092981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.040 [2024-11-28 08:26:48.092998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420
00:28:06.040 qpair failed and we were unable to recover it.
00:28:06.040 [2024-11-28 08:26:48.093237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.040 [2024-11-28 08:26:48.093271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420
00:28:06.040 qpair failed and we were unable to recover it.
00:28:06.040 [2024-11-28 08:26:48.093405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.040 [2024-11-28 08:26:48.093439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420
00:28:06.040 qpair failed and we were unable to recover it.
00:28:06.040 [2024-11-28 08:26:48.093565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.040 [2024-11-28 08:26:48.093598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420
00:28:06.040 qpair failed and we were unable to recover it.
00:28:06.040 [2024-11-28 08:26:48.093856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.040 [2024-11-28 08:26:48.093888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420
00:28:06.040 qpair failed and we were unable to recover it.
00:28:06.040 [2024-11-28 08:26:48.094096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.040 [2024-11-28 08:26:48.094131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420
00:28:06.040 qpair failed and we were unable to recover it.
00:28:06.040 [2024-11-28 08:26:48.094270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.040 [2024-11-28 08:26:48.094301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420
00:28:06.040 qpair failed and we were unable to recover it.
00:28:06.040 [2024-11-28 08:26:48.094483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.040 [2024-11-28 08:26:48.094515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420
00:28:06.040 qpair failed and we were unable to recover it.
00:28:06.040 [2024-11-28 08:26:48.094644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.040 [2024-11-28 08:26:48.094677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420
00:28:06.040 qpair failed and we were unable to recover it.
00:28:06.040 [2024-11-28 08:26:48.094867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.040 [2024-11-28 08:26:48.094885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420
00:28:06.040 qpair failed and we were unable to recover it.
00:28:06.040 [2024-11-28 08:26:48.094968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.040 [2024-11-28 08:26:48.094984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420
00:28:06.040 qpair failed and we were unable to recover it.
00:28:06.040 [2024-11-28 08:26:48.095190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.040 [2024-11-28 08:26:48.095206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420
00:28:06.040 qpair failed and we were unable to recover it.
00:28:06.040 [2024-11-28 08:26:48.095411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.040 [2024-11-28 08:26:48.095427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.040 qpair failed and we were unable to recover it. 00:28:06.040 [2024-11-28 08:26:48.095586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.040 [2024-11-28 08:26:48.095603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.040 qpair failed and we were unable to recover it. 00:28:06.040 [2024-11-28 08:26:48.095779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.040 [2024-11-28 08:26:48.095795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.040 qpair failed and we were unable to recover it. 00:28:06.040 [2024-11-28 08:26:48.095956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.040 [2024-11-28 08:26:48.095972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.040 qpair failed and we were unable to recover it. 00:28:06.040 [2024-11-28 08:26:48.096132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.040 [2024-11-28 08:26:48.096166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.040 qpair failed and we were unable to recover it. 
00:28:06.040 [2024-11-28 08:26:48.096358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.040 [2024-11-28 08:26:48.096389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.040 qpair failed and we were unable to recover it. 00:28:06.040 [2024-11-28 08:26:48.096575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.040 [2024-11-28 08:26:48.096608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.040 qpair failed and we were unable to recover it. 00:28:06.040 [2024-11-28 08:26:48.096743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.040 [2024-11-28 08:26:48.096760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.040 qpair failed and we were unable to recover it. 00:28:06.040 [2024-11-28 08:26:48.096933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.040 [2024-11-28 08:26:48.096974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.040 qpair failed and we were unable to recover it. 00:28:06.040 [2024-11-28 08:26:48.097112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.040 [2024-11-28 08:26:48.097144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.040 qpair failed and we were unable to recover it. 
00:28:06.040 [2024-11-28 08:26:48.097327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.040 [2024-11-28 08:26:48.097359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.040 qpair failed and we were unable to recover it. 00:28:06.040 [2024-11-28 08:26:48.097513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.040 [2024-11-28 08:26:48.097546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.040 qpair failed and we were unable to recover it. 00:28:06.040 [2024-11-28 08:26:48.097737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.040 [2024-11-28 08:26:48.097769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.040 qpair failed and we were unable to recover it. 00:28:06.040 [2024-11-28 08:26:48.097892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.040 [2024-11-28 08:26:48.097925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.041 qpair failed and we were unable to recover it. 00:28:06.041 [2024-11-28 08:26:48.098058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.041 [2024-11-28 08:26:48.098092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.041 qpair failed and we were unable to recover it. 
00:28:06.041 [2024-11-28 08:26:48.098351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.041 [2024-11-28 08:26:48.098384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.041 qpair failed and we were unable to recover it. 00:28:06.041 [2024-11-28 08:26:48.098608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.041 [2024-11-28 08:26:48.098643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.041 qpair failed and we were unable to recover it. 00:28:06.041 [2024-11-28 08:26:48.098820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.041 [2024-11-28 08:26:48.098836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.041 qpair failed and we were unable to recover it. 00:28:06.041 [2024-11-28 08:26:48.098993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.041 [2024-11-28 08:26:48.099010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.041 qpair failed and we were unable to recover it. 00:28:06.041 [2024-11-28 08:26:48.099098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.041 [2024-11-28 08:26:48.099113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.041 qpair failed and we were unable to recover it. 
00:28:06.041 [2024-11-28 08:26:48.099283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.041 [2024-11-28 08:26:48.099300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.041 qpair failed and we were unable to recover it. 00:28:06.041 [2024-11-28 08:26:48.099464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.041 [2024-11-28 08:26:48.099480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.041 qpair failed and we were unable to recover it. 00:28:06.041 [2024-11-28 08:26:48.099638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.041 [2024-11-28 08:26:48.099655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.041 qpair failed and we were unable to recover it. 00:28:06.041 [2024-11-28 08:26:48.099797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.041 [2024-11-28 08:26:48.099814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.041 qpair failed and we were unable to recover it. 00:28:06.041 [2024-11-28 08:26:48.099936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.041 [2024-11-28 08:26:48.099985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.041 qpair failed and we were unable to recover it. 
00:28:06.041 [2024-11-28 08:26:48.100253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.041 [2024-11-28 08:26:48.100286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.041 qpair failed and we were unable to recover it. 00:28:06.041 [2024-11-28 08:26:48.100412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.041 [2024-11-28 08:26:48.100444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.041 qpair failed and we were unable to recover it. 00:28:06.041 [2024-11-28 08:26:48.100554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.041 [2024-11-28 08:26:48.100587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.041 qpair failed and we were unable to recover it. 00:28:06.041 [2024-11-28 08:26:48.100725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.041 [2024-11-28 08:26:48.100757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.041 qpair failed and we were unable to recover it. 00:28:06.041 [2024-11-28 08:26:48.100886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.041 [2024-11-28 08:26:48.100918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.041 qpair failed and we were unable to recover it. 
00:28:06.041 [2024-11-28 08:26:48.101066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.041 [2024-11-28 08:26:48.101101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.041 qpair failed and we were unable to recover it. 00:28:06.041 [2024-11-28 08:26:48.101342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.041 [2024-11-28 08:26:48.101374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.041 qpair failed and we were unable to recover it. 00:28:06.041 [2024-11-28 08:26:48.101567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.041 [2024-11-28 08:26:48.101600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.041 qpair failed and we were unable to recover it. 00:28:06.041 [2024-11-28 08:26:48.101718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.041 [2024-11-28 08:26:48.101751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.041 qpair failed and we were unable to recover it. 00:28:06.041 [2024-11-28 08:26:48.101966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.041 [2024-11-28 08:26:48.102000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.041 qpair failed and we were unable to recover it. 
00:28:06.041 [2024-11-28 08:26:48.102193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.041 [2024-11-28 08:26:48.102225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.041 qpair failed and we were unable to recover it. 00:28:06.041 [2024-11-28 08:26:48.102409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.041 [2024-11-28 08:26:48.102425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.041 qpair failed and we were unable to recover it. 00:28:06.041 [2024-11-28 08:26:48.102508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.041 [2024-11-28 08:26:48.102523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.041 qpair failed and we were unable to recover it. 00:28:06.041 [2024-11-28 08:26:48.102634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.041 [2024-11-28 08:26:48.102665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.041 qpair failed and we were unable to recover it. 00:28:06.041 [2024-11-28 08:26:48.102818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.041 [2024-11-28 08:26:48.102831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.041 qpair failed and we were unable to recover it. 
00:28:06.041 [2024-11-28 08:26:48.102991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.041 [2024-11-28 08:26:48.103004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.041 qpair failed and we were unable to recover it. 00:28:06.041 [2024-11-28 08:26:48.103209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.041 [2024-11-28 08:26:48.103221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.041 qpair failed and we were unable to recover it. 00:28:06.041 [2024-11-28 08:26:48.103439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.041 [2024-11-28 08:26:48.103450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.041 qpair failed and we were unable to recover it. 00:28:06.041 [2024-11-28 08:26:48.103542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.041 [2024-11-28 08:26:48.103553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.041 qpair failed and we were unable to recover it. 00:28:06.041 [2024-11-28 08:26:48.103635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.042 [2024-11-28 08:26:48.103646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.042 qpair failed and we were unable to recover it. 
00:28:06.042 [2024-11-28 08:26:48.103796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.042 [2024-11-28 08:26:48.103807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.042 qpair failed and we were unable to recover it. 00:28:06.042 [2024-11-28 08:26:48.103873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.042 [2024-11-28 08:26:48.103883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.042 qpair failed and we were unable to recover it. 00:28:06.042 [2024-11-28 08:26:48.104043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.042 [2024-11-28 08:26:48.104079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.042 qpair failed and we were unable to recover it. 00:28:06.042 [2024-11-28 08:26:48.104226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.042 [2024-11-28 08:26:48.104260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.042 qpair failed and we were unable to recover it. 00:28:06.042 [2024-11-28 08:26:48.104452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.042 [2024-11-28 08:26:48.104484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.042 qpair failed and we were unable to recover it. 
00:28:06.042 [2024-11-28 08:26:48.104667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.042 [2024-11-28 08:26:48.104700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.042 qpair failed and we were unable to recover it. 00:28:06.042 [2024-11-28 08:26:48.104818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.042 [2024-11-28 08:26:48.104861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.042 qpair failed and we were unable to recover it. 00:28:06.042 [2024-11-28 08:26:48.104996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.042 [2024-11-28 08:26:48.105031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.042 qpair failed and we were unable to recover it. 00:28:06.042 [2024-11-28 08:26:48.105228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.042 [2024-11-28 08:26:48.105262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.042 qpair failed and we were unable to recover it. 00:28:06.042 [2024-11-28 08:26:48.105381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.042 [2024-11-28 08:26:48.105416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.042 qpair failed and we were unable to recover it. 
00:28:06.042 [2024-11-28 08:26:48.105684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.042 [2024-11-28 08:26:48.105717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.042 qpair failed and we were unable to recover it. 00:28:06.042 [2024-11-28 08:26:48.105907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.042 [2024-11-28 08:26:48.105941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.042 qpair failed and we were unable to recover it. 00:28:06.042 [2024-11-28 08:26:48.106145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.042 [2024-11-28 08:26:48.106178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.042 qpair failed and we were unable to recover it. 00:28:06.042 [2024-11-28 08:26:48.106382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.042 [2024-11-28 08:26:48.106415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.042 qpair failed and we were unable to recover it. 00:28:06.042 [2024-11-28 08:26:48.106557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.042 [2024-11-28 08:26:48.106589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.042 qpair failed and we were unable to recover it. 
00:28:06.042 [2024-11-28 08:26:48.106770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.042 [2024-11-28 08:26:48.106804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.042 qpair failed and we were unable to recover it. 00:28:06.042 [2024-11-28 08:26:48.106934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.042 [2024-11-28 08:26:48.106976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.042 qpair failed and we were unable to recover it. 00:28:06.042 [2024-11-28 08:26:48.107182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.042 [2024-11-28 08:26:48.107215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.042 qpair failed and we were unable to recover it. 00:28:06.042 [2024-11-28 08:26:48.107410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.042 [2024-11-28 08:26:48.107444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.042 qpair failed and we were unable to recover it. 00:28:06.042 [2024-11-28 08:26:48.107569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.042 [2024-11-28 08:26:48.107601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.042 qpair failed and we were unable to recover it. 
00:28:06.042 [2024-11-28 08:26:48.107790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.042 [2024-11-28 08:26:48.107823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.042 qpair failed and we were unable to recover it. 00:28:06.042 [2024-11-28 08:26:48.108072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.042 [2024-11-28 08:26:48.108107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.042 qpair failed and we were unable to recover it. 00:28:06.042 [2024-11-28 08:26:48.108380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.042 [2024-11-28 08:26:48.108413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.042 qpair failed and we were unable to recover it. 00:28:06.042 [2024-11-28 08:26:48.108590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.042 [2024-11-28 08:26:48.108623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.042 qpair failed and we were unable to recover it. 00:28:06.042 [2024-11-28 08:26:48.108806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.042 [2024-11-28 08:26:48.108817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.042 qpair failed and we were unable to recover it. 
00:28:06.042 [2024-11-28 08:26:48.108955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.042 [2024-11-28 08:26:48.108968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.042 qpair failed and we were unable to recover it. 00:28:06.042 [2024-11-28 08:26:48.109179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.042 [2024-11-28 08:26:48.109213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.042 qpair failed and we were unable to recover it. 00:28:06.042 [2024-11-28 08:26:48.109343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.042 [2024-11-28 08:26:48.109377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.042 qpair failed and we were unable to recover it. 00:28:06.043 [2024-11-28 08:26:48.109555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.043 [2024-11-28 08:26:48.109588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.043 qpair failed and we were unable to recover it. 00:28:06.043 [2024-11-28 08:26:48.109770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.043 [2024-11-28 08:26:48.109802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.043 qpair failed and we were unable to recover it. 
00:28:06.043 [2024-11-28 08:26:48.109936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.043 [2024-11-28 08:26:48.109979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.043 qpair failed and we were unable to recover it.
00:28:06.043 [2024-11-28 08:26:48.110517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.043 [2024-11-28 08:26:48.110554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420
00:28:06.043 qpair failed and we were unable to recover it.
[... the same three-line sequence — posix.c:1054:posix_sock_create connect() failure (errno = 111), nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock error for tqpair=0x991be0 or tqpair=0x7f6c34000b90 (addr=10.0.0.2, port=4420), then "qpair failed and we were unable to recover it." — repeats continuously from 08:26:48.110 through 08:26:48.132 ...]
00:28:06.046 [2024-11-28 08:26:48.129100] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x99fb20 is same with the state(6) to be set
00:28:06.047 [2024-11-28 08:26:48.132215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.047 [2024-11-28 08:26:48.132248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.047 qpair failed and we were unable to recover it. 00:28:06.047 [2024-11-28 08:26:48.132382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.047 [2024-11-28 08:26:48.132415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.047 qpair failed and we were unable to recover it. 00:28:06.047 [2024-11-28 08:26:48.132691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.047 [2024-11-28 08:26:48.132725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.047 qpair failed and we were unable to recover it. 00:28:06.047 [2024-11-28 08:26:48.132902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.047 [2024-11-28 08:26:48.132940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.047 qpair failed and we were unable to recover it. 00:28:06.047 [2024-11-28 08:26:48.133076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.047 [2024-11-28 08:26:48.133089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.047 qpair failed and we were unable to recover it. 
00:28:06.047 [2024-11-28 08:26:48.133232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.047 [2024-11-28 08:26:48.133264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.047 qpair failed and we were unable to recover it. 00:28:06.047 [2024-11-28 08:26:48.133402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.047 [2024-11-28 08:26:48.133435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.047 qpair failed and we were unable to recover it. 00:28:06.047 [2024-11-28 08:26:48.133682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.047 [2024-11-28 08:26:48.133716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.047 qpair failed and we were unable to recover it. 00:28:06.047 [2024-11-28 08:26:48.133873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.047 [2024-11-28 08:26:48.133885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.047 qpair failed and we were unable to recover it. 00:28:06.047 [2024-11-28 08:26:48.133986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.047 [2024-11-28 08:26:48.134066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:06.047 qpair failed and we were unable to recover it. 
00:28:06.047 [2024-11-28 08:26:48.134375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.047 [2024-11-28 08:26:48.134411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.047 qpair failed and we were unable to recover it. 00:28:06.047 [2024-11-28 08:26:48.134685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.047 [2024-11-28 08:26:48.134727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.047 qpair failed and we were unable to recover it. 00:28:06.047 [2024-11-28 08:26:48.134868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.047 [2024-11-28 08:26:48.134884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.047 qpair failed and we were unable to recover it. 00:28:06.047 [2024-11-28 08:26:48.135033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.047 [2024-11-28 08:26:48.135049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.047 qpair failed and we were unable to recover it. 00:28:06.047 [2024-11-28 08:26:48.135269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.047 [2024-11-28 08:26:48.135302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.047 qpair failed and we were unable to recover it. 
00:28:06.047 [2024-11-28 08:26:48.135424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.047 [2024-11-28 08:26:48.135457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.047 qpair failed and we were unable to recover it. 00:28:06.047 [2024-11-28 08:26:48.135643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.047 [2024-11-28 08:26:48.135676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.047 qpair failed and we were unable to recover it. 00:28:06.047 [2024-11-28 08:26:48.135912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.047 [2024-11-28 08:26:48.135944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.047 qpair failed and we were unable to recover it. 00:28:06.047 [2024-11-28 08:26:48.136201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.047 [2024-11-28 08:26:48.136234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.047 qpair failed and we were unable to recover it. 00:28:06.047 [2024-11-28 08:26:48.136421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.047 [2024-11-28 08:26:48.136453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.047 qpair failed and we were unable to recover it. 
00:28:06.047 [2024-11-28 08:26:48.136641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.047 [2024-11-28 08:26:48.136673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.047 qpair failed and we were unable to recover it. 00:28:06.047 [2024-11-28 08:26:48.136810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.047 [2024-11-28 08:26:48.136844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.047 qpair failed and we were unable to recover it. 00:28:06.047 [2024-11-28 08:26:48.137049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.047 [2024-11-28 08:26:48.137065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.047 qpair failed and we were unable to recover it. 00:28:06.047 [2024-11-28 08:26:48.137267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.047 [2024-11-28 08:26:48.137299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.047 qpair failed and we were unable to recover it. 00:28:06.047 [2024-11-28 08:26:48.137485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.047 [2024-11-28 08:26:48.137517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.047 qpair failed and we were unable to recover it. 
00:28:06.047 [2024-11-28 08:26:48.137712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.047 [2024-11-28 08:26:48.137745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.047 qpair failed and we were unable to recover it. 00:28:06.047 [2024-11-28 08:26:48.137856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.047 [2024-11-28 08:26:48.137872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.047 qpair failed and we were unable to recover it. 00:28:06.047 [2024-11-28 08:26:48.138066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.047 [2024-11-28 08:26:48.138098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.047 qpair failed and we were unable to recover it. 00:28:06.047 [2024-11-28 08:26:48.138304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.047 [2024-11-28 08:26:48.138336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.047 qpair failed and we were unable to recover it. 00:28:06.047 [2024-11-28 08:26:48.138592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.048 [2024-11-28 08:26:48.138624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.048 qpair failed and we were unable to recover it. 
00:28:06.048 [2024-11-28 08:26:48.138881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.048 [2024-11-28 08:26:48.138897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.048 qpair failed and we were unable to recover it. 00:28:06.048 [2024-11-28 08:26:48.139101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.048 [2024-11-28 08:26:48.139118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.048 qpair failed and we were unable to recover it. 00:28:06.048 [2024-11-28 08:26:48.139281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.048 [2024-11-28 08:26:48.139296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.048 qpair failed and we were unable to recover it. 00:28:06.048 [2024-11-28 08:26:48.139374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.048 [2024-11-28 08:26:48.139389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.048 qpair failed and we were unable to recover it. 00:28:06.048 [2024-11-28 08:26:48.139490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.048 [2024-11-28 08:26:48.139529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.048 qpair failed and we were unable to recover it. 
00:28:06.048 [2024-11-28 08:26:48.139721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.048 [2024-11-28 08:26:48.139754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.048 qpair failed and we were unable to recover it. 00:28:06.048 [2024-11-28 08:26:48.139989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.048 [2024-11-28 08:26:48.140039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.048 qpair failed and we were unable to recover it. 00:28:06.048 [2024-11-28 08:26:48.140140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.048 [2024-11-28 08:26:48.140156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.048 qpair failed and we were unable to recover it. 00:28:06.048 [2024-11-28 08:26:48.140249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.048 [2024-11-28 08:26:48.140264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.048 qpair failed and we were unable to recover it. 00:28:06.048 [2024-11-28 08:26:48.140356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.048 [2024-11-28 08:26:48.140373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.048 qpair failed and we were unable to recover it. 
00:28:06.048 [2024-11-28 08:26:48.140521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.048 [2024-11-28 08:26:48.140536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.048 qpair failed and we were unable to recover it. 00:28:06.048 [2024-11-28 08:26:48.140628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.048 [2024-11-28 08:26:48.140661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.048 qpair failed and we were unable to recover it. 00:28:06.048 [2024-11-28 08:26:48.140802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.048 [2024-11-28 08:26:48.140835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.048 qpair failed and we were unable to recover it. 00:28:06.048 [2024-11-28 08:26:48.140960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.048 [2024-11-28 08:26:48.140994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.048 qpair failed and we were unable to recover it. 00:28:06.048 [2024-11-28 08:26:48.141137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.048 [2024-11-28 08:26:48.141169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.048 qpair failed and we were unable to recover it. 
00:28:06.048 [2024-11-28 08:26:48.141349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.048 [2024-11-28 08:26:48.141383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.048 qpair failed and we were unable to recover it. 00:28:06.048 [2024-11-28 08:26:48.141493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.048 [2024-11-28 08:26:48.141525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.048 qpair failed and we were unable to recover it. 00:28:06.048 [2024-11-28 08:26:48.141772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.048 [2024-11-28 08:26:48.141806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.048 qpair failed and we were unable to recover it. 00:28:06.048 [2024-11-28 08:26:48.142048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.048 [2024-11-28 08:26:48.142066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.048 qpair failed and we were unable to recover it. 00:28:06.048 [2024-11-28 08:26:48.142220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.048 [2024-11-28 08:26:48.142236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.048 qpair failed and we were unable to recover it. 
00:28:06.048 [2024-11-28 08:26:48.142327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.048 [2024-11-28 08:26:48.142342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.048 qpair failed and we were unable to recover it. 00:28:06.048 [2024-11-28 08:26:48.142542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.048 [2024-11-28 08:26:48.142581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.048 qpair failed and we were unable to recover it. 00:28:06.048 [2024-11-28 08:26:48.142705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.048 [2024-11-28 08:26:48.142738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.048 qpair failed and we were unable to recover it. 00:28:06.048 [2024-11-28 08:26:48.142869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.048 [2024-11-28 08:26:48.142902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.048 qpair failed and we were unable to recover it. 00:28:06.048 [2024-11-28 08:26:48.143097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.048 [2024-11-28 08:26:48.143131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.048 qpair failed and we were unable to recover it. 
00:28:06.048 [2024-11-28 08:26:48.143382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.048 [2024-11-28 08:26:48.143417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.048 qpair failed and we were unable to recover it. 00:28:06.048 [2024-11-28 08:26:48.143603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.048 [2024-11-28 08:26:48.143636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.049 qpair failed and we were unable to recover it. 00:28:06.049 [2024-11-28 08:26:48.143822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.049 [2024-11-28 08:26:48.143856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.049 qpair failed and we were unable to recover it. 00:28:06.049 [2024-11-28 08:26:48.143969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.049 [2024-11-28 08:26:48.144005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.049 qpair failed and we were unable to recover it. 00:28:06.049 [2024-11-28 08:26:48.144118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.049 [2024-11-28 08:26:48.144151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.049 qpair failed and we were unable to recover it. 
00:28:06.049 [2024-11-28 08:26:48.144271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.049 [2024-11-28 08:26:48.144305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.049 qpair failed and we were unable to recover it. 00:28:06.049 [2024-11-28 08:26:48.144442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.049 [2024-11-28 08:26:48.144476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.049 qpair failed and we were unable to recover it. 00:28:06.049 [2024-11-28 08:26:48.144675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.049 [2024-11-28 08:26:48.144708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.049 qpair failed and we were unable to recover it. 00:28:06.049 [2024-11-28 08:26:48.144905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.049 [2024-11-28 08:26:48.144945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.049 qpair failed and we were unable to recover it. 00:28:06.049 [2024-11-28 08:26:48.145152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.049 [2024-11-28 08:26:48.145186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.049 qpair failed and we were unable to recover it. 
00:28:06.049 [2024-11-28 08:26:48.145321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.049 [2024-11-28 08:26:48.145353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.049 qpair failed and we were unable to recover it. 00:28:06.049 [2024-11-28 08:26:48.145554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.049 [2024-11-28 08:26:48.145587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.049 qpair failed and we were unable to recover it. 00:28:06.049 [2024-11-28 08:26:48.145691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.049 [2024-11-28 08:26:48.145703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.049 qpair failed and we were unable to recover it. 00:28:06.049 [2024-11-28 08:26:48.145792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.049 [2024-11-28 08:26:48.145803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.049 qpair failed and we were unable to recover it. 00:28:06.049 [2024-11-28 08:26:48.145869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.049 [2024-11-28 08:26:48.145880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.049 qpair failed and we were unable to recover it. 
00:28:06.049 [2024-11-28 08:26:48.145968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.049 [2024-11-28 08:26:48.145981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.049 qpair failed and we were unable to recover it. 00:28:06.049 [2024-11-28 08:26:48.146208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.049 [2024-11-28 08:26:48.146243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.049 qpair failed and we were unable to recover it. 00:28:06.049 [2024-11-28 08:26:48.146417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.049 [2024-11-28 08:26:48.146449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.049 qpair failed and we were unable to recover it. 00:28:06.049 [2024-11-28 08:26:48.146570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.049 [2024-11-28 08:26:48.146604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.049 qpair failed and we were unable to recover it. 00:28:06.049 [2024-11-28 08:26:48.146793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.049 [2024-11-28 08:26:48.146827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.049 qpair failed and we were unable to recover it. 
00:28:06.049 [2024-11-28 08:26:48.146972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.049 [2024-11-28 08:26:48.147007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.049 qpair failed and we were unable to recover it. 00:28:06.049 [2024-11-28 08:26:48.147211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.049 [2024-11-28 08:26:48.147243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.049 qpair failed and we were unable to recover it. 00:28:06.049 [2024-11-28 08:26:48.147442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.049 [2024-11-28 08:26:48.147476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.049 qpair failed and we were unable to recover it. 00:28:06.049 [2024-11-28 08:26:48.147662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.049 [2024-11-28 08:26:48.147697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.049 qpair failed and we were unable to recover it. 00:28:06.049 [2024-11-28 08:26:48.147965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.049 [2024-11-28 08:26:48.148000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.049 qpair failed and we were unable to recover it. 
00:28:06.049 [2024-11-28 08:26:48.148152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.049 [2024-11-28 08:26:48.148183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.049 qpair failed and we were unable to recover it. 00:28:06.049 [2024-11-28 08:26:48.148316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.049 [2024-11-28 08:26:48.148348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.049 qpair failed and we were unable to recover it. 00:28:06.049 [2024-11-28 08:26:48.148535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.049 [2024-11-28 08:26:48.148567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.049 qpair failed and we were unable to recover it. 00:28:06.050 [2024-11-28 08:26:48.148840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.050 [2024-11-28 08:26:48.148875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.050 qpair failed and we were unable to recover it. 00:28:06.050 [2024-11-28 08:26:48.149124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.050 [2024-11-28 08:26:48.149158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.050 qpair failed and we were unable to recover it. 
00:28:06.050 [2024-11-28 08:26:48.149345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.050 [2024-11-28 08:26:48.149379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.050 qpair failed and we were unable to recover it. 00:28:06.050 [2024-11-28 08:26:48.149507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.050 [2024-11-28 08:26:48.149540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.050 qpair failed and we were unable to recover it. 00:28:06.050 [2024-11-28 08:26:48.149663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.050 [2024-11-28 08:26:48.149696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.050 qpair failed and we were unable to recover it. 00:28:06.050 [2024-11-28 08:26:48.149825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.050 [2024-11-28 08:26:48.149859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.050 qpair failed and we were unable to recover it. 00:28:06.050 [2024-11-28 08:26:48.150109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.050 [2024-11-28 08:26:48.150145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.050 qpair failed and we were unable to recover it. 
00:28:06.050 [2024-11-28 08:26:48.150389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.050 [2024-11-28 08:26:48.150402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.050 qpair failed and we were unable to recover it. 00:28:06.050 [2024-11-28 08:26:48.150491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.050 [2024-11-28 08:26:48.150523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.050 qpair failed and we were unable to recover it. 00:28:06.050 [2024-11-28 08:26:48.150700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.050 [2024-11-28 08:26:48.150733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.050 qpair failed and we were unable to recover it. 00:28:06.050 [2024-11-28 08:26:48.150990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.050 [2024-11-28 08:26:48.151003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.050 qpair failed and we were unable to recover it. 00:28:06.050 [2024-11-28 08:26:48.151069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.050 [2024-11-28 08:26:48.151079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.050 qpair failed and we were unable to recover it. 
00:28:06.050 [2024-11-28 08:26:48.151161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.050 [2024-11-28 08:26:48.151172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.050 qpair failed and we were unable to recover it. 00:28:06.050 [2024-11-28 08:26:48.151238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.050 [2024-11-28 08:26:48.151250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.050 qpair failed and we were unable to recover it. 00:28:06.050 [2024-11-28 08:26:48.151427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.050 [2024-11-28 08:26:48.151459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.050 qpair failed and we were unable to recover it. 00:28:06.050 [2024-11-28 08:26:48.151742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.050 [2024-11-28 08:26:48.151774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.050 qpair failed and we were unable to recover it. 00:28:06.050 [2024-11-28 08:26:48.151889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.050 [2024-11-28 08:26:48.151900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.050 qpair failed and we were unable to recover it. 
00:28:06.050 [2024-11-28 08:26:48.152045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.050 [2024-11-28 08:26:48.152058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.050 qpair failed and we were unable to recover it. 00:28:06.050 [2024-11-28 08:26:48.152161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.050 [2024-11-28 08:26:48.152171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.050 qpair failed and we were unable to recover it. 00:28:06.050 [2024-11-28 08:26:48.152417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.050 [2024-11-28 08:26:48.152451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.050 qpair failed and we were unable to recover it. 00:28:06.050 [2024-11-28 08:26:48.152653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.050 [2024-11-28 08:26:48.152691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.050 qpair failed and we were unable to recover it. 00:28:06.050 [2024-11-28 08:26:48.152830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.050 [2024-11-28 08:26:48.152863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.050 qpair failed and we were unable to recover it. 
00:28:06.050 [2024-11-28 08:26:48.152994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.050 [2024-11-28 08:26:48.153025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.050 qpair failed and we were unable to recover it. 00:28:06.050 [2024-11-28 08:26:48.153110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.050 [2024-11-28 08:26:48.153121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.050 qpair failed and we were unable to recover it. 00:28:06.050 [2024-11-28 08:26:48.153297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.050 [2024-11-28 08:26:48.153331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.050 qpair failed and we were unable to recover it. 00:28:06.050 [2024-11-28 08:26:48.153512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.050 [2024-11-28 08:26:48.153544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.050 qpair failed and we were unable to recover it. 00:28:06.050 [2024-11-28 08:26:48.153730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.050 [2024-11-28 08:26:48.153762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.050 qpair failed and we were unable to recover it. 
00:28:06.050 [2024-11-28 08:26:48.153893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.050 [2024-11-28 08:26:48.153926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.050 qpair failed and we were unable to recover it. 00:28:06.050 [2024-11-28 08:26:48.154225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.050 [2024-11-28 08:26:48.154259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.051 qpair failed and we were unable to recover it. 00:28:06.051 [2024-11-28 08:26:48.154467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.051 [2024-11-28 08:26:48.154500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.051 qpair failed and we were unable to recover it. 00:28:06.051 [2024-11-28 08:26:48.154627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.051 [2024-11-28 08:26:48.154661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.051 qpair failed and we were unable to recover it. 00:28:06.051 [2024-11-28 08:26:48.154837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.051 [2024-11-28 08:26:48.154870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.051 qpair failed and we were unable to recover it. 
00:28:06.051 [2024-11-28 08:26:48.155057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.051 [2024-11-28 08:26:48.155091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.051 qpair failed and we were unable to recover it. 00:28:06.051 [2024-11-28 08:26:48.155224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.051 [2024-11-28 08:26:48.155257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.051 qpair failed and we were unable to recover it. 00:28:06.051 [2024-11-28 08:26:48.155468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.051 [2024-11-28 08:26:48.155503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.051 qpair failed and we were unable to recover it. 00:28:06.051 [2024-11-28 08:26:48.155709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.051 [2024-11-28 08:26:48.155743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.051 qpair failed and we were unable to recover it. 00:28:06.051 [2024-11-28 08:26:48.155921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.051 [2024-11-28 08:26:48.155934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.051 qpair failed and we were unable to recover it. 
00:28:06.051 [2024-11-28 08:26:48.156018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.051 [2024-11-28 08:26:48.156029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.051 qpair failed and we were unable to recover it. 00:28:06.051 [2024-11-28 08:26:48.156123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.051 [2024-11-28 08:26:48.156135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.051 qpair failed and we were unable to recover it. 00:28:06.051 [2024-11-28 08:26:48.156327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.051 [2024-11-28 08:26:48.156362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.051 qpair failed and we were unable to recover it. 00:28:06.051 [2024-11-28 08:26:48.156491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.051 [2024-11-28 08:26:48.156523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.051 qpair failed and we were unable to recover it. 00:28:06.051 [2024-11-28 08:26:48.156716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.051 [2024-11-28 08:26:48.156750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.051 qpair failed and we were unable to recover it. 
00:28:06.051 [2024-11-28 08:26:48.156883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.051 [2024-11-28 08:26:48.156896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.051 qpair failed and we were unable to recover it. 00:28:06.051 [2024-11-28 08:26:48.157032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.051 [2024-11-28 08:26:48.157044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.051 qpair failed and we were unable to recover it. 00:28:06.051 [2024-11-28 08:26:48.157264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.051 [2024-11-28 08:26:48.157298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.051 qpair failed and we were unable to recover it. 00:28:06.051 [2024-11-28 08:26:48.157432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.051 [2024-11-28 08:26:48.157465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.051 qpair failed and we were unable to recover it. 00:28:06.051 [2024-11-28 08:26:48.157590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.051 [2024-11-28 08:26:48.157622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.051 qpair failed and we were unable to recover it. 
00:28:06.051 [2024-11-28 08:26:48.157837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.051 [2024-11-28 08:26:48.157871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.051 qpair failed and we were unable to recover it. 00:28:06.051 [2024-11-28 08:26:48.158021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.051 [2024-11-28 08:26:48.158033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.051 qpair failed and we were unable to recover it. 00:28:06.051 [2024-11-28 08:26:48.158167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.051 [2024-11-28 08:26:48.158181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.051 qpair failed and we were unable to recover it. 00:28:06.051 [2024-11-28 08:26:48.158314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.051 [2024-11-28 08:26:48.158327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.051 qpair failed and we were unable to recover it. 00:28:06.051 [2024-11-28 08:26:48.158476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.051 [2024-11-28 08:26:48.158508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.051 qpair failed and we were unable to recover it. 
00:28:06.051 [2024-11-28 08:26:48.158696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.051 [2024-11-28 08:26:48.158730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.051 qpair failed and we were unable to recover it. 00:28:06.051 [2024-11-28 08:26:48.158858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.051 [2024-11-28 08:26:48.158891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.051 qpair failed and we were unable to recover it. 00:28:06.051 [2024-11-28 08:26:48.159164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.051 [2024-11-28 08:26:48.159178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.051 qpair failed and we were unable to recover it. 00:28:06.051 [2024-11-28 08:26:48.159349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.051 [2024-11-28 08:26:48.159362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.051 qpair failed and we were unable to recover it. 00:28:06.051 [2024-11-28 08:26:48.159507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.051 [2024-11-28 08:26:48.159520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.051 qpair failed and we were unable to recover it. 
00:28:06.051 [2024-11-28 08:26:48.159655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.051 [2024-11-28 08:26:48.159687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.051 qpair failed and we were unable to recover it. 00:28:06.052 [2024-11-28 08:26:48.159812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.052 [2024-11-28 08:26:48.159844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.052 qpair failed and we were unable to recover it. 00:28:06.052 [2024-11-28 08:26:48.160036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.052 [2024-11-28 08:26:48.160070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.052 qpair failed and we were unable to recover it. 00:28:06.052 [2024-11-28 08:26:48.160252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.052 [2024-11-28 08:26:48.160289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.052 qpair failed and we were unable to recover it. 00:28:06.052 [2024-11-28 08:26:48.160407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.052 [2024-11-28 08:26:48.160420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.052 qpair failed and we were unable to recover it. 
00:28:06.052 [2024-11-28 08:26:48.160589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.052 [2024-11-28 08:26:48.160601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.052 qpair failed and we were unable to recover it. 00:28:06.052 [2024-11-28 08:26:48.160745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.052 [2024-11-28 08:26:48.160756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.052 qpair failed and we were unable to recover it. 00:28:06.052 [2024-11-28 08:26:48.160911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.052 [2024-11-28 08:26:48.160924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.052 qpair failed and we were unable to recover it. 00:28:06.052 [2024-11-28 08:26:48.161135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.052 [2024-11-28 08:26:48.161148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.052 qpair failed and we were unable to recover it. 00:28:06.052 [2024-11-28 08:26:48.161288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.052 [2024-11-28 08:26:48.161319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.052 qpair failed and we were unable to recover it. 
00:28:06.052 [2024-11-28 08:26:48.161444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.052 [2024-11-28 08:26:48.161478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.052 qpair failed and we were unable to recover it. 00:28:06.052 [2024-11-28 08:26:48.161700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.052 [2024-11-28 08:26:48.161732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.052 qpair failed and we were unable to recover it. 00:28:06.052 [2024-11-28 08:26:48.161867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.052 [2024-11-28 08:26:48.161898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.052 qpair failed and we were unable to recover it. 00:28:06.052 [2024-11-28 08:26:48.162102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.052 [2024-11-28 08:26:48.162115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.052 qpair failed and we were unable to recover it. 00:28:06.052 [2024-11-28 08:26:48.162330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.052 [2024-11-28 08:26:48.162362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.052 qpair failed and we were unable to recover it. 
00:28:06.052 [2024-11-28 08:26:48.162494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.052 [2024-11-28 08:26:48.162527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.052 qpair failed and we were unable to recover it. 00:28:06.052 [2024-11-28 08:26:48.162650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.052 [2024-11-28 08:26:48.162682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.052 qpair failed and we were unable to recover it. 00:28:06.052 [2024-11-28 08:26:48.162895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.052 [2024-11-28 08:26:48.162928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.052 qpair failed and we were unable to recover it. 00:28:06.052 [2024-11-28 08:26:48.163139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.052 [2024-11-28 08:26:48.163172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.052 qpair failed and we were unable to recover it. 00:28:06.052 [2024-11-28 08:26:48.163379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.052 [2024-11-28 08:26:48.163416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.052 qpair failed and we were unable to recover it. 
00:28:06.052 [2024-11-28 08:26:48.163620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.052 [2024-11-28 08:26:48.163652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.052 qpair failed and we were unable to recover it. 00:28:06.052 [2024-11-28 08:26:48.163862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.052 [2024-11-28 08:26:48.163874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.052 qpair failed and we were unable to recover it. 00:28:06.052 [2024-11-28 08:26:48.164079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.052 [2024-11-28 08:26:48.164092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.052 qpair failed and we were unable to recover it. 00:28:06.052 [2024-11-28 08:26:48.164241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.052 [2024-11-28 08:26:48.164253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.052 qpair failed and we were unable to recover it. 00:28:06.052 [2024-11-28 08:26:48.164426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.052 [2024-11-28 08:26:48.164438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.052 qpair failed and we were unable to recover it. 
00:28:06.052 [2024-11-28 08:26:48.164640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.052 [2024-11-28 08:26:48.164675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.052 qpair failed and we were unable to recover it. 00:28:06.052 [2024-11-28 08:26:48.164847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.052 [2024-11-28 08:26:48.164860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.052 qpair failed and we were unable to recover it. 00:28:06.052 [2024-11-28 08:26:48.165032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.052 [2024-11-28 08:26:48.165067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.052 qpair failed and we were unable to recover it. 00:28:06.052 [2024-11-28 08:26:48.165329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.052 [2024-11-28 08:26:48.165363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.052 qpair failed and we were unable to recover it. 00:28:06.052 [2024-11-28 08:26:48.165488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.052 [2024-11-28 08:26:48.165520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.052 qpair failed and we were unable to recover it. 
00:28:06.052 [2024-11-28 08:26:48.165656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.052 [2024-11-28 08:26:48.165690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.052 qpair failed and we were unable to recover it. 00:28:06.052 [2024-11-28 08:26:48.165870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.052 [2024-11-28 08:26:48.165901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.052 qpair failed and we were unable to recover it. 00:28:06.052 [2024-11-28 08:26:48.166055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.052 [2024-11-28 08:26:48.166078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.052 qpair failed and we were unable to recover it. 00:28:06.053 [2024-11-28 08:26:48.166174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.053 [2024-11-28 08:26:48.166186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.053 qpair failed and we were unable to recover it. 00:28:06.053 [2024-11-28 08:26:48.166318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.053 [2024-11-28 08:26:48.166331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.053 qpair failed and we were unable to recover it. 
00:28:06.053 [2024-11-28 08:26:48.166493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.053 [2024-11-28 08:26:48.166526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.053 qpair failed and we were unable to recover it. 00:28:06.053 [2024-11-28 08:26:48.166705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.053 [2024-11-28 08:26:48.166738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.053 qpair failed and we were unable to recover it. 00:28:06.053 [2024-11-28 08:26:48.166929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.053 [2024-11-28 08:26:48.166974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.053 qpair failed and we were unable to recover it. 00:28:06.053 [2024-11-28 08:26:48.167080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.053 [2024-11-28 08:26:48.167092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.053 qpair failed and we were unable to recover it. 00:28:06.053 [2024-11-28 08:26:48.167160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.053 [2024-11-28 08:26:48.167171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.053 qpair failed and we were unable to recover it. 
00:28:06.053 [2024-11-28 08:26:48.167233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.053 [2024-11-28 08:26:48.167244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.053 qpair failed and we were unable to recover it.
00:28:06.053 [2024-11-28 08:26:48.167323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.053 [2024-11-28 08:26:48.167334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.053 qpair failed and we were unable to recover it.
00:28:06.053 [2024-11-28 08:26:48.167472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.053 [2024-11-28 08:26:48.167484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.053 qpair failed and we were unable to recover it.
00:28:06.053 [2024-11-28 08:26:48.167567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.053 [2024-11-28 08:26:48.167581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.053 qpair failed and we were unable to recover it.
00:28:06.053 [2024-11-28 08:26:48.167656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.053 [2024-11-28 08:26:48.167666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.053 qpair failed and we were unable to recover it.
00:28:06.053 [2024-11-28 08:26:48.167813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.053 [2024-11-28 08:26:48.167846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.053 qpair failed and we were unable to recover it.
00:28:06.053 [2024-11-28 08:26:48.167994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.053 [2024-11-28 08:26:48.168028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.053 qpair failed and we were unable to recover it.
00:28:06.053 [2024-11-28 08:26:48.168210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.053 [2024-11-28 08:26:48.168243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.053 qpair failed and we were unable to recover it.
00:28:06.053 [2024-11-28 08:26:48.168377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.053 [2024-11-28 08:26:48.168410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.053 qpair failed and we were unable to recover it.
00:28:06.053 [2024-11-28 08:26:48.168525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.053 [2024-11-28 08:26:48.168557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.053 qpair failed and we were unable to recover it.
00:28:06.053 [2024-11-28 08:26:48.168764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.053 [2024-11-28 08:26:48.168797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.053 qpair failed and we were unable to recover it.
00:28:06.053 [2024-11-28 08:26:48.169069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.053 [2024-11-28 08:26:48.169107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.053 qpair failed and we were unable to recover it.
00:28:06.053 [2024-11-28 08:26:48.169326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.053 [2024-11-28 08:26:48.169358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.053 qpair failed and we were unable to recover it.
00:28:06.053 [2024-11-28 08:26:48.169547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.053 [2024-11-28 08:26:48.169581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.053 qpair failed and we were unable to recover it.
00:28:06.053 [2024-11-28 08:26:48.169797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.053 [2024-11-28 08:26:48.169834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.053 qpair failed and we were unable to recover it.
00:28:06.053 [2024-11-28 08:26:48.170045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.053 [2024-11-28 08:26:48.170058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.053 qpair failed and we were unable to recover it.
00:28:06.053 [2024-11-28 08:26:48.170243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.053 [2024-11-28 08:26:48.170278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.053 qpair failed and we were unable to recover it.
00:28:06.053 [2024-11-28 08:26:48.170475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.053 [2024-11-28 08:26:48.170508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.053 qpair failed and we were unable to recover it.
00:28:06.053 [2024-11-28 08:26:48.170692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.053 [2024-11-28 08:26:48.170727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.053 qpair failed and we were unable to recover it.
00:28:06.053 [2024-11-28 08:26:48.170921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.053 [2024-11-28 08:26:48.170965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.053 qpair failed and we were unable to recover it.
00:28:06.053 [2024-11-28 08:26:48.171083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.053 [2024-11-28 08:26:48.171113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.053 qpair failed and we were unable to recover it.
00:28:06.053 [2024-11-28 08:26:48.171194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.053 [2024-11-28 08:26:48.171205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.053 qpair failed and we were unable to recover it.
00:28:06.053 [2024-11-28 08:26:48.171292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.053 [2024-11-28 08:26:48.171303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.053 qpair failed and we were unable to recover it.
00:28:06.053 [2024-11-28 08:26:48.171456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.053 [2024-11-28 08:26:48.171470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.054 qpair failed and we were unable to recover it.
00:28:06.054 [2024-11-28 08:26:48.171629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.054 [2024-11-28 08:26:48.171661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.054 qpair failed and we were unable to recover it.
00:28:06.054 [2024-11-28 08:26:48.171912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.054 [2024-11-28 08:26:48.171960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.054 qpair failed and we were unable to recover it.
00:28:06.054 [2024-11-28 08:26:48.172171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.054 [2024-11-28 08:26:48.172204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.054 qpair failed and we were unable to recover it.
00:28:06.054 [2024-11-28 08:26:48.172314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.054 [2024-11-28 08:26:48.172345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.054 qpair failed and we were unable to recover it.
00:28:06.054 [2024-11-28 08:26:48.172524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.054 [2024-11-28 08:26:48.172558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.054 qpair failed and we were unable to recover it.
00:28:06.054 [2024-11-28 08:26:48.172745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.054 [2024-11-28 08:26:48.172778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.054 qpair failed and we were unable to recover it.
00:28:06.054 [2024-11-28 08:26:48.173020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.054 [2024-11-28 08:26:48.173096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420
00:28:06.054 qpair failed and we were unable to recover it.
00:28:06.054 [2024-11-28 08:26:48.173201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.054 [2024-11-28 08:26:48.173218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420
00:28:06.054 qpair failed and we were unable to recover it.
00:28:06.054 [2024-11-28 08:26:48.173388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.054 [2024-11-28 08:26:48.173422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420
00:28:06.054 qpair failed and we were unable to recover it.
00:28:06.054 [2024-11-28 08:26:48.173613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.054 [2024-11-28 08:26:48.173646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420
00:28:06.054 qpair failed and we were unable to recover it.
00:28:06.054 [2024-11-28 08:26:48.173843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.054 [2024-11-28 08:26:48.173876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420
00:28:06.054 qpair failed and we were unable to recover it.
00:28:06.054 [2024-11-28 08:26:48.174013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.054 [2024-11-28 08:26:48.174031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420
00:28:06.054 qpair failed and we were unable to recover it.
00:28:06.054 [2024-11-28 08:26:48.174180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.054 [2024-11-28 08:26:48.174195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420
00:28:06.054 qpair failed and we were unable to recover it.
00:28:06.054 [2024-11-28 08:26:48.174334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.054 [2024-11-28 08:26:48.174350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420
00:28:06.054 qpair failed and we were unable to recover it.
00:28:06.054 [2024-11-28 08:26:48.174443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.054 [2024-11-28 08:26:48.174459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420
00:28:06.054 qpair failed and we were unable to recover it.
00:28:06.054 [2024-11-28 08:26:48.174553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.054 [2024-11-28 08:26:48.174570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420
00:28:06.054 qpair failed and we were unable to recover it.
00:28:06.054 [2024-11-28 08:26:48.174640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.054 [2024-11-28 08:26:48.174655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420
00:28:06.054 qpair failed and we were unable to recover it.
00:28:06.054 [2024-11-28 08:26:48.174754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.054 [2024-11-28 08:26:48.174787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420
00:28:06.054 qpair failed and we were unable to recover it.
00:28:06.054 [2024-11-28 08:26:48.174918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.054 [2024-11-28 08:26:48.174963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420
00:28:06.054 qpair failed and we were unable to recover it.
00:28:06.054 [2024-11-28 08:26:48.175144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.054 [2024-11-28 08:26:48.175177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420
00:28:06.054 qpair failed and we were unable to recover it.
00:28:06.054 [2024-11-28 08:26:48.175293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.054 [2024-11-28 08:26:48.175309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420
00:28:06.054 qpair failed and we were unable to recover it.
00:28:06.054 [2024-11-28 08:26:48.175448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.054 [2024-11-28 08:26:48.175463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420
00:28:06.055 qpair failed and we were unable to recover it.
00:28:06.055 [2024-11-28 08:26:48.175545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.055 [2024-11-28 08:26:48.175561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420
00:28:06.055 qpair failed and we were unable to recover it.
00:28:06.055 [2024-11-28 08:26:48.175745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.055 [2024-11-28 08:26:48.175776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420
00:28:06.055 qpair failed and we were unable to recover it.
00:28:06.055 [2024-11-28 08:26:48.175888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.055 [2024-11-28 08:26:48.175921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420
00:28:06.055 qpair failed and we were unable to recover it.
00:28:06.055 [2024-11-28 08:26:48.176204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.055 [2024-11-28 08:26:48.176238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420
00:28:06.055 qpair failed and we were unable to recover it.
00:28:06.055 [2024-11-28 08:26:48.176399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.055 [2024-11-28 08:26:48.176415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420
00:28:06.055 qpair failed and we were unable to recover it.
00:28:06.055 [2024-11-28 08:26:48.176582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.055 [2024-11-28 08:26:48.176614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420
00:28:06.055 qpair failed and we were unable to recover it.
00:28:06.055 [2024-11-28 08:26:48.176850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.055 [2024-11-28 08:26:48.176883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420
00:28:06.055 qpair failed and we were unable to recover it.
00:28:06.055 [2024-11-28 08:26:48.177010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.055 [2024-11-28 08:26:48.177045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420
00:28:06.055 qpair failed and we were unable to recover it.
00:28:06.055 [2024-11-28 08:26:48.177276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.055 [2024-11-28 08:26:48.177308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420
00:28:06.055 qpair failed and we were unable to recover it.
00:28:06.055 [2024-11-28 08:26:48.177480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.055 [2024-11-28 08:26:48.177496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420
00:28:06.055 qpair failed and we were unable to recover it.
00:28:06.055 [2024-11-28 08:26:48.177595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.055 [2024-11-28 08:26:48.177628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420
00:28:06.055 qpair failed and we were unable to recover it.
00:28:06.055 [2024-11-28 08:26:48.177769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.055 [2024-11-28 08:26:48.177807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420
00:28:06.055 qpair failed and we were unable to recover it.
00:28:06.055 [2024-11-28 08:26:48.178079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.055 [2024-11-28 08:26:48.178114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420
00:28:06.055 qpair failed and we were unable to recover it.
00:28:06.055 [2024-11-28 08:26:48.178252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.055 [2024-11-28 08:26:48.178269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420
00:28:06.055 qpair failed and we were unable to recover it.
00:28:06.055 [2024-11-28 08:26:48.178361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.055 [2024-11-28 08:26:48.178377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420
00:28:06.055 qpair failed and we were unable to recover it.
00:28:06.055 [2024-11-28 08:26:48.178462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.055 [2024-11-28 08:26:48.178477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420
00:28:06.055 qpair failed and we were unable to recover it.
00:28:06.055 [2024-11-28 08:26:48.178720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.055 [2024-11-28 08:26:48.178753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420
00:28:06.055 qpair failed and we were unable to recover it.
00:28:06.055 [2024-11-28 08:26:48.178931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.055 [2024-11-28 08:26:48.178972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420
00:28:06.055 qpair failed and we were unable to recover it.
00:28:06.055 [2024-11-28 08:26:48.179113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.055 [2024-11-28 08:26:48.179146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420
00:28:06.055 qpair failed and we were unable to recover it.
00:28:06.055 [2024-11-28 08:26:48.179267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.055 [2024-11-28 08:26:48.179284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420
00:28:06.055 qpair failed and we were unable to recover it.
00:28:06.055 [2024-11-28 08:26:48.179545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.055 [2024-11-28 08:26:48.179577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420
00:28:06.055 qpair failed and we were unable to recover it.
00:28:06.055 [2024-11-28 08:26:48.179760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.055 [2024-11-28 08:26:48.179793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420
00:28:06.055 qpair failed and we were unable to recover it.
00:28:06.055 [2024-11-28 08:26:48.180035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.055 [2024-11-28 08:26:48.180052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420
00:28:06.055 qpair failed and we were unable to recover it.
00:28:06.055 [2024-11-28 08:26:48.180173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.055 [2024-11-28 08:26:48.180190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420
00:28:06.055 qpair failed and we were unable to recover it.
00:28:06.055 [2024-11-28 08:26:48.180266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.055 [2024-11-28 08:26:48.180280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420
00:28:06.055 qpair failed and we were unable to recover it.
00:28:06.055 [2024-11-28 08:26:48.180370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.055 [2024-11-28 08:26:48.180386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420
00:28:06.056 qpair failed and we were unable to recover it.
00:28:06.056 [2024-11-28 08:26:48.180577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.056 [2024-11-28 08:26:48.180609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420
00:28:06.056 qpair failed and we were unable to recover it.
00:28:06.056 [2024-11-28 08:26:48.180748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.056 [2024-11-28 08:26:48.180781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420
00:28:06.056 qpair failed and we were unable to recover it.
00:28:06.056 [2024-11-28 08:26:48.180953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.056 [2024-11-28 08:26:48.180971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420
00:28:06.056 qpair failed and we were unable to recover it.
00:28:06.056 [2024-11-28 08:26:48.181076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.056 [2024-11-28 08:26:48.181092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420
00:28:06.056 qpair failed and we were unable to recover it.
00:28:06.056 [2024-11-28 08:26:48.181265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.056 [2024-11-28 08:26:48.181307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420
00:28:06.056 qpair failed and we were unable to recover it.
00:28:06.056 [2024-11-28 08:26:48.181485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.056 [2024-11-28 08:26:48.181517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420
00:28:06.056 qpair failed and we were unable to recover it.
00:28:06.056 [2024-11-28 08:26:48.181697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.056 [2024-11-28 08:26:48.181730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420
00:28:06.056 qpair failed and we were unable to recover it.
00:28:06.056 [2024-11-28 08:26:48.181932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.056 [2024-11-28 08:26:48.181977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420
00:28:06.056 qpair failed and we were unable to recover it.
00:28:06.056 [2024-11-28 08:26:48.182090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.056 [2024-11-28 08:26:48.182124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420
00:28:06.056 qpair failed and we were unable to recover it.
00:28:06.056 [2024-11-28 08:26:48.182250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.056 [2024-11-28 08:26:48.182284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420
00:28:06.056 qpair failed and we were unable to recover it.
00:28:06.056 [2024-11-28 08:26:48.182493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.056 [2024-11-28 08:26:48.182526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420
00:28:06.056 qpair failed and we were unable to recover it.
00:28:06.056 [2024-11-28 08:26:48.182667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.056 [2024-11-28 08:26:48.182699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420
00:28:06.056 qpair failed and we were unable to recover it.
00:28:06.056 [2024-11-28 08:26:48.182825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.056 [2024-11-28 08:26:48.182864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420
00:28:06.056 qpair failed and we were unable to recover it.
00:28:06.056 [2024-11-28 08:26:48.183044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.056 [2024-11-28 08:26:48.183077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420
00:28:06.056 qpair failed and we were unable to recover it.
00:28:06.056 [2024-11-28 08:26:48.183206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.056 [2024-11-28 08:26:48.183239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420
00:28:06.056 qpair failed and we were unable to recover it.
00:28:06.056 [2024-11-28 08:26:48.183457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.056 [2024-11-28 08:26:48.183491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420
00:28:06.056 qpair failed and we were unable to recover it.
00:28:06.056 [2024-11-28 08:26:48.183676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.056 [2024-11-28 08:26:48.183709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420
00:28:06.056 qpair failed and we were unable to recover it.
00:28:06.056 [2024-11-28 08:26:48.183892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.056 [2024-11-28 08:26:48.183926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420
00:28:06.056 qpair failed and we were unable to recover it.
00:28:06.056 [2024-11-28 08:26:48.184066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.056 [2024-11-28 08:26:48.184100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420
00:28:06.056 qpair failed and we were unable to recover it.
00:28:06.056 [2024-11-28 08:26:48.184241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.056 [2024-11-28 08:26:48.184274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420
00:28:06.056 qpair failed and we were unable to recover it.
00:28:06.056 [2024-11-28 08:26:48.184395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.056 [2024-11-28 08:26:48.184428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420
00:28:06.056 qpair failed and we were unable to recover it.
00:28:06.056 [2024-11-28 08:26:48.184556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.056 [2024-11-28 08:26:48.184589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420
00:28:06.056 qpair failed and we were unable to recover it.
00:28:06.056 [2024-11-28 08:26:48.184790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.056 [2024-11-28 08:26:48.184823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420
00:28:06.056 qpair failed and we were unable to recover it.
00:28:06.056 [2024-11-28 08:26:48.185021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.056 [2024-11-28 08:26:48.185054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.056 qpair failed and we were unable to recover it. 00:28:06.056 [2024-11-28 08:26:48.185177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.056 [2024-11-28 08:26:48.185192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.056 qpair failed and we were unable to recover it. 00:28:06.056 [2024-11-28 08:26:48.185437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.056 [2024-11-28 08:26:48.185469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.056 qpair failed and we were unable to recover it. 00:28:06.056 [2024-11-28 08:26:48.185646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.056 [2024-11-28 08:26:48.185720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420 00:28:06.056 qpair failed and we were unable to recover it. 00:28:06.056 [2024-11-28 08:26:48.185925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.056 [2024-11-28 08:26:48.185939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.056 qpair failed and we were unable to recover it. 
00:28:06.056 [2024-11-28 08:26:48.186126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.056 [2024-11-28 08:26:48.186159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.056 qpair failed and we were unable to recover it. 00:28:06.057 [2024-11-28 08:26:48.186296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.057 [2024-11-28 08:26:48.186328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.057 qpair failed and we were unable to recover it. 00:28:06.057 [2024-11-28 08:26:48.186452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.057 [2024-11-28 08:26:48.186484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.057 qpair failed and we were unable to recover it. 00:28:06.057 [2024-11-28 08:26:48.186731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.057 [2024-11-28 08:26:48.186763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.057 qpair failed and we were unable to recover it. 00:28:06.057 [2024-11-28 08:26:48.186883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.057 [2024-11-28 08:26:48.186916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.057 qpair failed and we were unable to recover it. 
00:28:06.057 [2024-11-28 08:26:48.187071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.057 [2024-11-28 08:26:48.187111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420 00:28:06.057 qpair failed and we were unable to recover it. 00:28:06.057 [2024-11-28 08:26:48.187307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.057 [2024-11-28 08:26:48.187340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420 00:28:06.057 qpair failed and we were unable to recover it. 00:28:06.057 [2024-11-28 08:26:48.187526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.057 [2024-11-28 08:26:48.187559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420 00:28:06.057 qpair failed and we were unable to recover it. 00:28:06.057 [2024-11-28 08:26:48.187842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.057 [2024-11-28 08:26:48.187875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420 00:28:06.057 qpair failed and we were unable to recover it. 00:28:06.057 [2024-11-28 08:26:48.188144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.057 [2024-11-28 08:26:48.188179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420 00:28:06.057 qpair failed and we were unable to recover it. 
00:28:06.057 [2024-11-28 08:26:48.188368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.057 [2024-11-28 08:26:48.188401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420 00:28:06.057 qpair failed and we were unable to recover it. 00:28:06.057 [2024-11-28 08:26:48.188529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.057 [2024-11-28 08:26:48.188572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420 00:28:06.057 qpair failed and we were unable to recover it. 00:28:06.057 [2024-11-28 08:26:48.188774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.057 [2024-11-28 08:26:48.188806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420 00:28:06.057 qpair failed and we were unable to recover it. 00:28:06.057 [2024-11-28 08:26:48.189017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.057 [2024-11-28 08:26:48.189052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420 00:28:06.057 qpair failed and we were unable to recover it. 00:28:06.057 [2024-11-28 08:26:48.189195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.057 [2024-11-28 08:26:48.189228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420 00:28:06.057 qpair failed and we were unable to recover it. 
00:28:06.057 [2024-11-28 08:26:48.189399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.057 [2024-11-28 08:26:48.189415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420 00:28:06.057 qpair failed and we were unable to recover it. 00:28:06.057 [2024-11-28 08:26:48.189500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.057 [2024-11-28 08:26:48.189515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420 00:28:06.057 qpair failed and we were unable to recover it. 00:28:06.057 [2024-11-28 08:26:48.189625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.057 [2024-11-28 08:26:48.189640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420 00:28:06.057 qpair failed and we were unable to recover it. 00:28:06.057 [2024-11-28 08:26:48.189839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.057 [2024-11-28 08:26:48.189872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420 00:28:06.057 qpair failed and we were unable to recover it. 00:28:06.057 [2024-11-28 08:26:48.190055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.057 [2024-11-28 08:26:48.190090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420 00:28:06.057 qpair failed and we were unable to recover it. 
00:28:06.057 [2024-11-28 08:26:48.190293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.057 [2024-11-28 08:26:48.190325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420 00:28:06.057 qpair failed and we were unable to recover it. 00:28:06.057 [2024-11-28 08:26:48.190465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.057 [2024-11-28 08:26:48.190498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420 00:28:06.057 qpair failed and we were unable to recover it. 00:28:06.057 [2024-11-28 08:26:48.190694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.057 [2024-11-28 08:26:48.190727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420 00:28:06.057 qpair failed and we were unable to recover it. 00:28:06.057 [2024-11-28 08:26:48.190852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.057 [2024-11-28 08:26:48.190886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420 00:28:06.057 qpair failed and we were unable to recover it. 00:28:06.057 [2024-11-28 08:26:48.191083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.057 [2024-11-28 08:26:48.191119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420 00:28:06.057 qpair failed and we were unable to recover it. 
00:28:06.057 [2024-11-28 08:26:48.191317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.057 [2024-11-28 08:26:48.191335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420 00:28:06.057 qpair failed and we were unable to recover it. 00:28:06.057 [2024-11-28 08:26:48.191497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.057 [2024-11-28 08:26:48.191529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420 00:28:06.057 qpair failed and we were unable to recover it. 00:28:06.057 [2024-11-28 08:26:48.191742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.057 [2024-11-28 08:26:48.191773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420 00:28:06.057 qpair failed and we were unable to recover it. 00:28:06.057 [2024-11-28 08:26:48.192049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.057 [2024-11-28 08:26:48.192065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420 00:28:06.057 qpair failed and we were unable to recover it. 00:28:06.057 [2024-11-28 08:26:48.192144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.057 [2024-11-28 08:26:48.192159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420 00:28:06.057 qpair failed and we were unable to recover it. 
00:28:06.057 [2024-11-28 08:26:48.192264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.058 [2024-11-28 08:26:48.192280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420 00:28:06.058 qpair failed and we were unable to recover it. 00:28:06.058 [2024-11-28 08:26:48.192378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.058 [2024-11-28 08:26:48.192392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420 00:28:06.058 qpair failed and we were unable to recover it. 00:28:06.058 [2024-11-28 08:26:48.192481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.058 [2024-11-28 08:26:48.192495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420 00:28:06.058 qpair failed and we were unable to recover it. 00:28:06.058 [2024-11-28 08:26:48.192654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.058 [2024-11-28 08:26:48.192687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420 00:28:06.058 qpair failed and we were unable to recover it. 00:28:06.058 [2024-11-28 08:26:48.192810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.058 [2024-11-28 08:26:48.192842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420 00:28:06.058 qpair failed and we were unable to recover it. 
00:28:06.058 [2024-11-28 08:26:48.193091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.058 [2024-11-28 08:26:48.193125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420 00:28:06.058 qpair failed and we were unable to recover it. 00:28:06.058 [2024-11-28 08:26:48.193313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.058 [2024-11-28 08:26:48.193346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420 00:28:06.058 qpair failed and we were unable to recover it. 00:28:06.058 [2024-11-28 08:26:48.193467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.058 [2024-11-28 08:26:48.193500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420 00:28:06.058 qpair failed and we were unable to recover it. 00:28:06.058 [2024-11-28 08:26:48.193609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.058 [2024-11-28 08:26:48.193623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.058 qpair failed and we were unable to recover it. 00:28:06.058 [2024-11-28 08:26:48.193775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.058 [2024-11-28 08:26:48.193787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.058 qpair failed and we were unable to recover it. 
00:28:06.058 [2024-11-28 08:26:48.193980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.058 [2024-11-28 08:26:48.194013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.058 qpair failed and we were unable to recover it. 00:28:06.058 [2024-11-28 08:26:48.194140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.058 [2024-11-28 08:26:48.194173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.058 qpair failed and we were unable to recover it. 00:28:06.058 [2024-11-28 08:26:48.194369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.058 [2024-11-28 08:26:48.194401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.058 qpair failed and we were unable to recover it. 00:28:06.058 [2024-11-28 08:26:48.194611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.058 [2024-11-28 08:26:48.194645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.058 qpair failed and we were unable to recover it. 00:28:06.058 [2024-11-28 08:26:48.194764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.058 [2024-11-28 08:26:48.194797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.058 qpair failed and we were unable to recover it. 
00:28:06.058 [2024-11-28 08:26:48.194978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.058 [2024-11-28 08:26:48.194991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.058 qpair failed and we were unable to recover it. 00:28:06.058 [2024-11-28 08:26:48.195217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.058 [2024-11-28 08:26:48.195250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.058 qpair failed and we were unable to recover it. 00:28:06.058 [2024-11-28 08:26:48.195391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.058 [2024-11-28 08:26:48.195424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.058 qpair failed and we were unable to recover it. 00:28:06.058 [2024-11-28 08:26:48.195555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.058 [2024-11-28 08:26:48.195586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.058 qpair failed and we were unable to recover it. 00:28:06.058 [2024-11-28 08:26:48.195777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.058 [2024-11-28 08:26:48.195809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.058 qpair failed and we were unable to recover it. 
00:28:06.058 [2024-11-28 08:26:48.196056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.058 [2024-11-28 08:26:48.196090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.058 qpair failed and we were unable to recover it. 00:28:06.058 [2024-11-28 08:26:48.196200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.058 [2024-11-28 08:26:48.196215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.058 qpair failed and we were unable to recover it. 00:28:06.058 [2024-11-28 08:26:48.196297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.058 [2024-11-28 08:26:48.196307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.058 qpair failed and we were unable to recover it. 00:28:06.058 [2024-11-28 08:26:48.196461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.058 [2024-11-28 08:26:48.196493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.058 qpair failed and we were unable to recover it. 00:28:06.058 [2024-11-28 08:26:48.196764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.058 [2024-11-28 08:26:48.196797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.058 qpair failed and we were unable to recover it. 
00:28:06.058 [2024-11-28 08:26:48.196926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.058 [2024-11-28 08:26:48.196969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.058 qpair failed and we were unable to recover it. 00:28:06.058 [2024-11-28 08:26:48.197101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.058 [2024-11-28 08:26:48.197113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.058 qpair failed and we were unable to recover it. 00:28:06.058 [2024-11-28 08:26:48.197258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.058 [2024-11-28 08:26:48.197269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.058 qpair failed and we were unable to recover it. 00:28:06.058 [2024-11-28 08:26:48.197433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.058 [2024-11-28 08:26:48.197446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.058 qpair failed and we were unable to recover it. 00:28:06.058 [2024-11-28 08:26:48.197544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.058 [2024-11-28 08:26:48.197577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.058 qpair failed and we were unable to recover it. 
00:28:06.058 [2024-11-28 08:26:48.197738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.058 [2024-11-28 08:26:48.197810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:06.058 qpair failed and we were unable to recover it. 00:28:06.058 [2024-11-28 08:26:48.198080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.058 [2024-11-28 08:26:48.198119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.058 qpair failed and we were unable to recover it. 00:28:06.058 [2024-11-28 08:26:48.198207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.059 [2024-11-28 08:26:48.198223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.059 qpair failed and we were unable to recover it. 00:28:06.059 [2024-11-28 08:26:48.198385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.059 [2024-11-28 08:26:48.198430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.059 qpair failed and we were unable to recover it. 00:28:06.059 [2024-11-28 08:26:48.198612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.059 [2024-11-28 08:26:48.198644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.059 qpair failed and we were unable to recover it. 
00:28:06.059 [2024-11-28 08:26:48.198782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.059 [2024-11-28 08:26:48.198814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.059 qpair failed and we were unable to recover it. 00:28:06.059 [2024-11-28 08:26:48.198945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.059 [2024-11-28 08:26:48.198988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.059 qpair failed and we were unable to recover it. 00:28:06.059 [2024-11-28 08:26:48.199167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.059 [2024-11-28 08:26:48.199184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.059 qpair failed and we were unable to recover it. 00:28:06.059 [2024-11-28 08:26:48.199277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.059 [2024-11-28 08:26:48.199320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.059 qpair failed and we were unable to recover it. 00:28:06.059 [2024-11-28 08:26:48.199480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.059 [2024-11-28 08:26:48.199513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.059 qpair failed and we were unable to recover it. 
00:28:06.059 [2024-11-28 08:26:48.199759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.059 [2024-11-28 08:26:48.199792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.059 qpair failed and we were unable to recover it. 00:28:06.059 [2024-11-28 08:26:48.199984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.059 [2024-11-28 08:26:48.200000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.059 qpair failed and we were unable to recover it. 00:28:06.059 [2024-11-28 08:26:48.200241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.059 [2024-11-28 08:26:48.200275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.059 qpair failed and we were unable to recover it. 00:28:06.059 [2024-11-28 08:26:48.200413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.059 [2024-11-28 08:26:48.200446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.059 qpair failed and we were unable to recover it. 00:28:06.059 [2024-11-28 08:26:48.200621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.059 [2024-11-28 08:26:48.200656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.059 qpair failed and we were unable to recover it. 
00:28:06.059 [2024-11-28 08:26:48.200777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.059 [2024-11-28 08:26:48.200809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.059 qpair failed and we were unable to recover it. 00:28:06.059 [2024-11-28 08:26:48.200932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.059 [2024-11-28 08:26:48.200958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.059 qpair failed and we were unable to recover it. 00:28:06.059 [2024-11-28 08:26:48.201115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.059 [2024-11-28 08:26:48.201161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.059 qpair failed and we were unable to recover it. 00:28:06.059 [2024-11-28 08:26:48.201405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.059 [2024-11-28 08:26:48.201443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.059 qpair failed and we were unable to recover it. 00:28:06.059 [2024-11-28 08:26:48.201628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.059 [2024-11-28 08:26:48.201663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.059 qpair failed and we were unable to recover it. 
00:28:06.059 [2024-11-28 08:26:48.201909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.059 [2024-11-28 08:26:48.201941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420
00:28:06.059 qpair failed and we were unable to recover it.
00:28:06.059 [2024-11-28 08:26:48.202199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.059 [2024-11-28 08:26:48.202234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420
00:28:06.059 qpair failed and we were unable to recover it.
00:28:06.059 [2024-11-28 08:26:48.202504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.059 [2024-11-28 08:26:48.202540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420
00:28:06.059 qpair failed and we were unable to recover it.
00:28:06.059 [2024-11-28 08:26:48.202679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.059 [2024-11-28 08:26:48.202712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420
00:28:06.059 qpair failed and we were unable to recover it.
00:28:06.059 [2024-11-28 08:26:48.202967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.059 [2024-11-28 08:26:48.203002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420
00:28:06.059 qpair failed and we were unable to recover it.
00:28:06.059 [2024-11-28 08:26:48.203146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.059 [2024-11-28 08:26:48.203163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420
00:28:06.059 qpair failed and we were unable to recover it.
00:28:06.059 [2024-11-28 08:26:48.203376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.059 [2024-11-28 08:26:48.203393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420
00:28:06.059 qpair failed and we were unable to recover it.
00:28:06.059 [2024-11-28 08:26:48.203502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.059 [2024-11-28 08:26:48.203518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420
00:28:06.059 qpair failed and we were unable to recover it.
00:28:06.059 [2024-11-28 08:26:48.203758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.059 [2024-11-28 08:26:48.203790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420
00:28:06.059 qpair failed and we were unable to recover it.
00:28:06.059 [2024-11-28 08:26:48.203974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.059 [2024-11-28 08:26:48.204009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420
00:28:06.059 qpair failed and we were unable to recover it.
00:28:06.059 [2024-11-28 08:26:48.204206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.059 [2024-11-28 08:26:48.204242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420
00:28:06.059 qpair failed and we were unable to recover it.
00:28:06.059 [2024-11-28 08:26:48.204419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.060 [2024-11-28 08:26:48.204436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420
00:28:06.060 qpair failed and we were unable to recover it.
00:28:06.060 [2024-11-28 08:26:48.204584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.060 [2024-11-28 08:26:48.204600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420
00:28:06.060 qpair failed and we were unable to recover it.
00:28:06.060 [2024-11-28 08:26:48.204779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.060 [2024-11-28 08:26:48.204817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.060 qpair failed and we were unable to recover it.
00:28:06.060 [2024-11-28 08:26:48.205053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.060 [2024-11-28 08:26:48.205086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.060 qpair failed and we were unable to recover it.
00:28:06.060 [2024-11-28 08:26:48.205221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.060 [2024-11-28 08:26:48.205254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.060 qpair failed and we were unable to recover it.
00:28:06.060 [2024-11-28 08:26:48.205446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.060 [2024-11-28 08:26:48.205457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.060 qpair failed and we were unable to recover it.
00:28:06.060 [2024-11-28 08:26:48.205542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.060 [2024-11-28 08:26:48.205580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.060 qpair failed and we were unable to recover it.
00:28:06.060 [2024-11-28 08:26:48.205774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.060 [2024-11-28 08:26:48.205806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.060 qpair failed and we were unable to recover it.
00:28:06.060 [2024-11-28 08:26:48.206004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.060 [2024-11-28 08:26:48.206039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.060 qpair failed and we were unable to recover it.
00:28:06.060 [2024-11-28 08:26:48.206227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.060 [2024-11-28 08:26:48.206251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.060 qpair failed and we were unable to recover it.
00:28:06.060 [2024-11-28 08:26:48.206322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.060 [2024-11-28 08:26:48.206334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.060 qpair failed and we were unable to recover it.
00:28:06.060 [2024-11-28 08:26:48.206408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.060 [2024-11-28 08:26:48.206418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.060 qpair failed and we were unable to recover it.
00:28:06.060 [2024-11-28 08:26:48.206570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.060 [2024-11-28 08:26:48.206582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.060 qpair failed and we were unable to recover it.
00:28:06.060 [2024-11-28 08:26:48.206722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.060 [2024-11-28 08:26:48.206756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.060 qpair failed and we were unable to recover it.
00:28:06.060 [2024-11-28 08:26:48.206935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.060 [2024-11-28 08:26:48.206988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.060 qpair failed and we were unable to recover it.
00:28:06.060 [2024-11-28 08:26:48.207186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.060 [2024-11-28 08:26:48.207219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.060 qpair failed and we were unable to recover it.
00:28:06.060 [2024-11-28 08:26:48.207352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.060 [2024-11-28 08:26:48.207384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.060 qpair failed and we were unable to recover it.
00:28:06.060 [2024-11-28 08:26:48.207510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.060 [2024-11-28 08:26:48.207544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.060 qpair failed and we were unable to recover it.
00:28:06.060 [2024-11-28 08:26:48.207669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.060 [2024-11-28 08:26:48.207703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.060 qpair failed and we were unable to recover it.
00:28:06.060 [2024-11-28 08:26:48.207907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.060 [2024-11-28 08:26:48.207938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.060 qpair failed and we were unable to recover it.
00:28:06.060 [2024-11-28 08:26:48.208137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.060 [2024-11-28 08:26:48.208169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.060 qpair failed and we were unable to recover it.
00:28:06.060 [2024-11-28 08:26:48.208365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.060 [2024-11-28 08:26:48.208397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.060 qpair failed and we were unable to recover it.
00:28:06.060 [2024-11-28 08:26:48.208527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.060 [2024-11-28 08:26:48.208561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.060 qpair failed and we were unable to recover it.
00:28:06.060 [2024-11-28 08:26:48.208751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.060 [2024-11-28 08:26:48.208784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.060 qpair failed and we were unable to recover it.
00:28:06.060 [2024-11-28 08:26:48.208922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.060 [2024-11-28 08:26:48.208934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.061 qpair failed and we were unable to recover it.
00:28:06.061 [2024-11-28 08:26:48.209032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.061 [2024-11-28 08:26:48.209045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.061 qpair failed and we were unable to recover it.
00:28:06.061 [2024-11-28 08:26:48.209201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.061 [2024-11-28 08:26:48.209235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.061 qpair failed and we were unable to recover it.
00:28:06.061 [2024-11-28 08:26:48.209524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.061 [2024-11-28 08:26:48.209558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.061 qpair failed and we were unable to recover it.
00:28:06.061 [2024-11-28 08:26:48.209751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.061 [2024-11-28 08:26:48.209786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.061 qpair failed and we were unable to recover it.
00:28:06.061 [2024-11-28 08:26:48.209979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.061 [2024-11-28 08:26:48.210014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.061 qpair failed and we were unable to recover it.
00:28:06.061 [2024-11-28 08:26:48.210130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.061 [2024-11-28 08:26:48.210163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.061 qpair failed and we were unable to recover it.
00:28:06.061 [2024-11-28 08:26:48.210311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.061 [2024-11-28 08:26:48.210344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.061 qpair failed and we were unable to recover it.
00:28:06.061 [2024-11-28 08:26:48.210470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.061 [2024-11-28 08:26:48.210504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.061 qpair failed and we were unable to recover it.
00:28:06.061 [2024-11-28 08:26:48.210703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.061 [2024-11-28 08:26:48.210736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.061 qpair failed and we were unable to recover it.
00:28:06.061 [2024-11-28 08:26:48.210876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.061 [2024-11-28 08:26:48.210910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.061 qpair failed and we were unable to recover it.
00:28:06.061 [2024-11-28 08:26:48.211063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.061 [2024-11-28 08:26:48.211106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.061 qpair failed and we were unable to recover it.
00:28:06.061 [2024-11-28 08:26:48.211277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.061 [2024-11-28 08:26:48.211289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.061 qpair failed and we were unable to recover it.
00:28:06.061 [2024-11-28 08:26:48.211398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.061 [2024-11-28 08:26:48.211410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.061 qpair failed and we were unable to recover it.
00:28:06.061 [2024-11-28 08:26:48.211595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.061 [2024-11-28 08:26:48.211629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.061 qpair failed and we were unable to recover it.
00:28:06.061 [2024-11-28 08:26:48.211759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.061 [2024-11-28 08:26:48.211793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.061 qpair failed and we were unable to recover it.
00:28:06.061 [2024-11-28 08:26:48.211996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.061 [2024-11-28 08:26:48.212030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.061 qpair failed and we were unable to recover it.
00:28:06.061 [2024-11-28 08:26:48.212173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.061 [2024-11-28 08:26:48.212210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420
00:28:06.061 qpair failed and we were unable to recover it.
00:28:06.061 [2024-11-28 08:26:48.212397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.061 [2024-11-28 08:26:48.212430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420
00:28:06.061 qpair failed and we were unable to recover it.
00:28:06.061 [2024-11-28 08:26:48.212608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.061 [2024-11-28 08:26:48.212641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420
00:28:06.061 qpair failed and we were unable to recover it.
00:28:06.061 [2024-11-28 08:26:48.212820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.061 [2024-11-28 08:26:48.212854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420
00:28:06.061 qpair failed and we were unable to recover it.
00:28:06.061 [2024-11-28 08:26:48.213035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.061 [2024-11-28 08:26:48.213052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420
00:28:06.061 qpair failed and we were unable to recover it.
00:28:06.061 [2024-11-28 08:26:48.213227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.061 [2024-11-28 08:26:48.213272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420
00:28:06.061 qpair failed and we were unable to recover it.
00:28:06.061 [2024-11-28 08:26:48.213400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.061 [2024-11-28 08:26:48.213435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420
00:28:06.061 qpair failed and we were unable to recover it.
00:28:06.061 [2024-11-28 08:26:48.213559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.061 [2024-11-28 08:26:48.213592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420
00:28:06.061 qpair failed and we were unable to recover it.
00:28:06.061 [2024-11-28 08:26:48.213727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.061 [2024-11-28 08:26:48.213761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420
00:28:06.061 qpair failed and we were unable to recover it.
00:28:06.061 [2024-11-28 08:26:48.213966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.061 [2024-11-28 08:26:48.214000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420
00:28:06.061 qpair failed and we were unable to recover it.
00:28:06.061 [2024-11-28 08:26:48.214265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.061 [2024-11-28 08:26:48.214301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420
00:28:06.061 qpair failed and we were unable to recover it.
00:28:06.061 [2024-11-28 08:26:48.214498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.061 [2024-11-28 08:26:48.214531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420
00:28:06.061 qpair failed and we were unable to recover it.
00:28:06.061 [2024-11-28 08:26:48.214725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.061 [2024-11-28 08:26:48.214758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420
00:28:06.061 qpair failed and we were unable to recover it.
00:28:06.061 [2024-11-28 08:26:48.214888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.061 [2024-11-28 08:26:48.214929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420
00:28:06.062 qpair failed and we were unable to recover it.
00:28:06.062 [2024-11-28 08:26:48.215040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.062 [2024-11-28 08:26:48.215056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420
00:28:06.062 qpair failed and we were unable to recover it.
00:28:06.062 [2024-11-28 08:26:48.215229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.062 [2024-11-28 08:26:48.215245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420
00:28:06.062 qpair failed and we were unable to recover it.
00:28:06.062 [2024-11-28 08:26:48.215408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.062 [2024-11-28 08:26:48.215442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420
00:28:06.062 qpair failed and we were unable to recover it.
00:28:06.062 [2024-11-28 08:26:48.215652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.062 [2024-11-28 08:26:48.215684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420
00:28:06.062 qpair failed and we were unable to recover it.
00:28:06.062 [2024-11-28 08:26:48.215927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.062 [2024-11-28 08:26:48.215975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420
00:28:06.062 qpair failed and we were unable to recover it.
00:28:06.062 [2024-11-28 08:26:48.216138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.062 [2024-11-28 08:26:48.216155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420
00:28:06.062 qpair failed and we were unable to recover it.
00:28:06.062 [2024-11-28 08:26:48.216252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.062 [2024-11-28 08:26:48.216268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420
00:28:06.062 qpair failed and we were unable to recover it.
00:28:06.062 [2024-11-28 08:26:48.216375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.062 [2024-11-28 08:26:48.216408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420
00:28:06.062 qpair failed and we were unable to recover it.
00:28:06.062 [2024-11-28 08:26:48.216527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.062 [2024-11-28 08:26:48.216560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420
00:28:06.062 qpair failed and we were unable to recover it.
00:28:06.062 [2024-11-28 08:26:48.216705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.062 [2024-11-28 08:26:48.216738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420
00:28:06.062 qpair failed and we were unable to recover it.
00:28:06.062 [2024-11-28 08:26:48.216940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.062 [2024-11-28 08:26:48.216985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420
00:28:06.062 qpair failed and we were unable to recover it.
00:28:06.062 [2024-11-28 08:26:48.217202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.062 [2024-11-28 08:26:48.217235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420
00:28:06.062 qpair failed and we were unable to recover it.
00:28:06.062 [2024-11-28 08:26:48.217412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.062 [2024-11-28 08:26:48.217445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420
00:28:06.062 qpair failed and we were unable to recover it.
00:28:06.062 [2024-11-28 08:26:48.217665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.062 [2024-11-28 08:26:48.217705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420
00:28:06.062 qpair failed and we were unable to recover it.
00:28:06.062 [2024-11-28 08:26:48.217976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.062 [2024-11-28 08:26:48.218011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420
00:28:06.062 qpair failed and we were unable to recover it.
00:28:06.062 [2024-11-28 08:26:48.218098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.062 [2024-11-28 08:26:48.218113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420
00:28:06.062 qpair failed and we were unable to recover it.
00:28:06.062 [2024-11-28 08:26:48.218206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.062 [2024-11-28 08:26:48.218221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420
00:28:06.062 qpair failed and we were unable to recover it.
00:28:06.062 [2024-11-28 08:26:48.218312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.062 [2024-11-28 08:26:48.218345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420
00:28:06.062 qpair failed and we were unable to recover it.
00:28:06.062 [2024-11-28 08:26:48.218537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.062 [2024-11-28 08:26:48.218571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420
00:28:06.062 qpair failed and we were unable to recover it.
00:28:06.062 [2024-11-28 08:26:48.218773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.062 [2024-11-28 08:26:48.218806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420
00:28:06.062 qpair failed and we were unable to recover it.
00:28:06.062 [2024-11-28 08:26:48.219005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.062 [2024-11-28 08:26:48.219040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420
00:28:06.062 qpair failed and we were unable to recover it.
00:28:06.062 [2024-11-28 08:26:48.219175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.062 [2024-11-28 08:26:48.219192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420
00:28:06.062 qpair failed and we were unable to recover it.
00:28:06.062 [2024-11-28 08:26:48.219404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.062 [2024-11-28 08:26:48.219422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420
00:28:06.062 qpair failed and we were unable to recover it.
00:28:06.062 [2024-11-28 08:26:48.219645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.062 [2024-11-28 08:26:48.219677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420
00:28:06.062 qpair failed and we were unable to recover it.
00:28:06.062 [2024-11-28 08:26:48.219873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.062 [2024-11-28 08:26:48.219907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420
00:28:06.062 qpair failed and we were unable to recover it.
00:28:06.062 [2024-11-28 08:26:48.220053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.062 [2024-11-28 08:26:48.220088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420
00:28:06.062 qpair failed and we were unable to recover it.
00:28:06.062 [2024-11-28 08:26:48.220290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.062 [2024-11-28 08:26:48.220307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420
00:28:06.062 qpair failed and we were unable to recover it.
00:28:06.062 [2024-11-28 08:26:48.220487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.062 [2024-11-28 08:26:48.220505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420
00:28:06.062 qpair failed and we were unable to recover it.
00:28:06.062 [2024-11-28 08:26:48.220594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.062 [2024-11-28 08:26:48.220610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420
00:28:06.062 qpair failed and we were unable to recover it.
00:28:06.063 [2024-11-28 08:26:48.220721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.063 [2024-11-28 08:26:48.220737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420
00:28:06.063 qpair failed and we were unable to recover it.
00:28:06.063 [2024-11-28 08:26:48.220894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.063 [2024-11-28 08:26:48.220927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420
00:28:06.063 qpair failed and we were unable to recover it.
00:28:06.063 [2024-11-28 08:26:48.221143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.063 [2024-11-28 08:26:48.221178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420
00:28:06.063 qpair failed and we were unable to recover it.
00:28:06.063 [2024-11-28 08:26:48.221323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.063 [2024-11-28 08:26:48.221355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420
00:28:06.063 qpair failed and we were unable to recover it.
00:28:06.063 [2024-11-28 08:26:48.221500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.063 [2024-11-28 08:26:48.221533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420
00:28:06.063 qpair failed and we were unable to recover it.
00:28:06.063 [2024-11-28 08:26:48.221742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.063 [2024-11-28 08:26:48.221775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420
00:28:06.063 qpair failed and we were unable to recover it.
00:28:06.063 [2024-11-28 08:26:48.221971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.063 [2024-11-28 08:26:48.222013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420
00:28:06.063 qpair failed and we were unable to recover it.
00:28:06.063 [2024-11-28 08:26:48.222162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.063 [2024-11-28 08:26:48.222178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420
00:28:06.063 qpair failed and we were unable to recover it.
00:28:06.063 [2024-11-28 08:26:48.222329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.063 [2024-11-28 08:26:48.222345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420
00:28:06.063 qpair failed and we were unable to recover it.
00:28:06.063 [2024-11-28 08:26:48.222422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.063 [2024-11-28 08:26:48.222438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420
00:28:06.063 qpair failed and we were unable to recover it.
00:28:06.063 [2024-11-28 08:26:48.222599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.063 [2024-11-28 08:26:48.222632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420
00:28:06.063 qpair failed and we were unable to recover it.
00:28:06.063 [2024-11-28 08:26:48.222817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.063 [2024-11-28 08:26:48.222851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420
00:28:06.063 qpair failed and we were unable to recover it.
00:28:06.063 [2024-11-28 08:26:48.222974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.063 [2024-11-28 08:26:48.223008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420
00:28:06.063 qpair failed and we were unable to recover it.
00:28:06.063 [2024-11-28 08:26:48.223220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.063 [2024-11-28 08:26:48.223254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420
00:28:06.063 qpair failed and we were unable to recover it.
00:28:06.063 [2024-11-28 08:26:48.223370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.063 [2024-11-28 08:26:48.223387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420
00:28:06.063 qpair failed and we were unable to recover it.
00:28:06.063 [2024-11-28 08:26:48.223526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.063 [2024-11-28 08:26:48.223542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420
00:28:06.063 qpair failed and we were unable to recover it.
00:28:06.063 [2024-11-28 08:26:48.223623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.063 [2024-11-28 08:26:48.223639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420
00:28:06.063 qpair failed and we were unable to recover it.
00:28:06.063 [2024-11-28 08:26:48.223777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.063 [2024-11-28 08:26:48.223794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420
00:28:06.063 qpair failed and we were unable to recover it.
00:28:06.063 [2024-11-28 08:26:48.223940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.063 [2024-11-28 08:26:48.223962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420
00:28:06.063 qpair failed and we were unable to recover it.
00:28:06.063 [2024-11-28 08:26:48.224180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.063 [2024-11-28 08:26:48.224197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420
00:28:06.063 qpair failed and we were unable to recover it.
00:28:06.063 [2024-11-28 08:26:48.224275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.063 [2024-11-28 08:26:48.224290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.063 qpair failed and we were unable to recover it. 00:28:06.063 [2024-11-28 08:26:48.224526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.063 [2024-11-28 08:26:48.224542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.063 qpair failed and we were unable to recover it. 00:28:06.063 [2024-11-28 08:26:48.224695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.063 [2024-11-28 08:26:48.224711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.063 qpair failed and we were unable to recover it. 00:28:06.063 [2024-11-28 08:26:48.224835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.063 [2024-11-28 08:26:48.224909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.063 qpair failed and we were unable to recover it. 00:28:06.063 [2024-11-28 08:26:48.225195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.063 [2024-11-28 08:26:48.225267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:06.063 qpair failed and we were unable to recover it. 
00:28:06.063 [2024-11-28 08:26:48.225529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.063 [2024-11-28 08:26:48.225602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420 00:28:06.063 qpair failed and we were unable to recover it. 00:28:06.063 [2024-11-28 08:26:48.225802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.063 [2024-11-28 08:26:48.225839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.063 qpair failed and we were unable to recover it. 00:28:06.063 [2024-11-28 08:26:48.226063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.063 [2024-11-28 08:26:48.226099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.063 qpair failed and we were unable to recover it. 00:28:06.063 [2024-11-28 08:26:48.226221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.063 [2024-11-28 08:26:48.226254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.063 qpair failed and we were unable to recover it. 00:28:06.063 [2024-11-28 08:26:48.226468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.063 [2024-11-28 08:26:48.226483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.063 qpair failed and we were unable to recover it. 
00:28:06.063 [2024-11-28 08:26:48.226643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.064 [2024-11-28 08:26:48.226675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.064 qpair failed and we were unable to recover it. 00:28:06.064 [2024-11-28 08:26:48.226815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.064 [2024-11-28 08:26:48.226848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.064 qpair failed and we were unable to recover it. 00:28:06.064 [2024-11-28 08:26:48.227051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.064 [2024-11-28 08:26:48.227086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.064 qpair failed and we were unable to recover it. 00:28:06.064 [2024-11-28 08:26:48.227400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.064 [2024-11-28 08:26:48.227433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.064 qpair failed and we were unable to recover it. 00:28:06.064 [2024-11-28 08:26:48.227578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.064 [2024-11-28 08:26:48.227610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.064 qpair failed and we were unable to recover it. 
00:28:06.064 [2024-11-28 08:26:48.227739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.064 [2024-11-28 08:26:48.227772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.064 qpair failed and we were unable to recover it. 00:28:06.064 [2024-11-28 08:26:48.227891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.064 [2024-11-28 08:26:48.227924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.064 qpair failed and we were unable to recover it. 00:28:06.064 [2024-11-28 08:26:48.228112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.064 [2024-11-28 08:26:48.228129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.064 qpair failed and we were unable to recover it. 00:28:06.064 [2024-11-28 08:26:48.228370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.064 [2024-11-28 08:26:48.228402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.064 qpair failed and we were unable to recover it. 00:28:06.064 [2024-11-28 08:26:48.228530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.064 [2024-11-28 08:26:48.228564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.064 qpair failed and we were unable to recover it. 
00:28:06.064 [2024-11-28 08:26:48.228811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.064 [2024-11-28 08:26:48.228845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.064 qpair failed and we were unable to recover it. 00:28:06.064 [2024-11-28 08:26:48.229022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.064 [2024-11-28 08:26:48.229040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.064 qpair failed and we were unable to recover it. 00:28:06.064 [2024-11-28 08:26:48.229146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.064 [2024-11-28 08:26:48.229181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.064 qpair failed and we were unable to recover it. 00:28:06.064 [2024-11-28 08:26:48.229365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.064 [2024-11-28 08:26:48.229399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.064 qpair failed and we were unable to recover it. 00:28:06.064 [2024-11-28 08:26:48.229533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.064 [2024-11-28 08:26:48.229564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.064 qpair failed and we were unable to recover it. 
00:28:06.064 [2024-11-28 08:26:48.229742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.064 [2024-11-28 08:26:48.229775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.064 qpair failed and we were unable to recover it. 00:28:06.064 [2024-11-28 08:26:48.229881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.064 [2024-11-28 08:26:48.229914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.064 qpair failed and we were unable to recover it. 00:28:06.064 [2024-11-28 08:26:48.230052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.064 [2024-11-28 08:26:48.230087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.064 qpair failed and we were unable to recover it. 00:28:06.064 [2024-11-28 08:26:48.230229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.064 [2024-11-28 08:26:48.230272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.064 qpair failed and we were unable to recover it. 00:28:06.064 [2024-11-28 08:26:48.230457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.064 [2024-11-28 08:26:48.230475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.064 qpair failed and we were unable to recover it. 
00:28:06.064 [2024-11-28 08:26:48.230696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.064 [2024-11-28 08:26:48.230729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.064 qpair failed and we were unable to recover it. 00:28:06.064 [2024-11-28 08:26:48.230852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.064 [2024-11-28 08:26:48.230886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.064 qpair failed and we were unable to recover it. 00:28:06.064 [2024-11-28 08:26:48.231020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.064 [2024-11-28 08:26:48.231060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.064 qpair failed and we were unable to recover it. 00:28:06.064 [2024-11-28 08:26:48.231174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.064 [2024-11-28 08:26:48.231191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.064 qpair failed and we were unable to recover it. 00:28:06.064 [2024-11-28 08:26:48.231379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.064 [2024-11-28 08:26:48.231411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.064 qpair failed and we were unable to recover it. 
00:28:06.064 [2024-11-28 08:26:48.231552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.064 [2024-11-28 08:26:48.231585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.064 qpair failed and we were unable to recover it. 00:28:06.064 [2024-11-28 08:26:48.231702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.064 [2024-11-28 08:26:48.231735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.064 qpair failed and we were unable to recover it. 00:28:06.064 [2024-11-28 08:26:48.231914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.064 [2024-11-28 08:26:48.231960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.064 qpair failed and we were unable to recover it. 00:28:06.064 [2024-11-28 08:26:48.232139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.065 [2024-11-28 08:26:48.232156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.065 qpair failed and we were unable to recover it. 00:28:06.065 [2024-11-28 08:26:48.232311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.065 [2024-11-28 08:26:48.232345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.065 qpair failed and we were unable to recover it. 
00:28:06.065 [2024-11-28 08:26:48.232615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.065 [2024-11-28 08:26:48.232648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.065 qpair failed and we were unable to recover it. 00:28:06.065 [2024-11-28 08:26:48.232777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.065 [2024-11-28 08:26:48.232810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.065 qpair failed and we were unable to recover it. 00:28:06.065 [2024-11-28 08:26:48.233063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.065 [2024-11-28 08:26:48.233098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.065 qpair failed and we were unable to recover it. 00:28:06.065 [2024-11-28 08:26:48.233291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.065 [2024-11-28 08:26:48.233326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.065 qpair failed and we were unable to recover it. 00:28:06.065 [2024-11-28 08:26:48.233515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.065 [2024-11-28 08:26:48.233547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.065 qpair failed and we were unable to recover it. 
00:28:06.065 [2024-11-28 08:26:48.233666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.065 [2024-11-28 08:26:48.233700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.065 qpair failed and we were unable to recover it. 00:28:06.065 [2024-11-28 08:26:48.233890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.065 [2024-11-28 08:26:48.233924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.065 qpair failed and we were unable to recover it. 00:28:06.065 [2024-11-28 08:26:48.234107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.065 [2024-11-28 08:26:48.234124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.065 qpair failed and we were unable to recover it. 00:28:06.065 [2024-11-28 08:26:48.234283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.065 [2024-11-28 08:26:48.234317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.065 qpair failed and we were unable to recover it. 00:28:06.065 [2024-11-28 08:26:48.234459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.065 [2024-11-28 08:26:48.234493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.065 qpair failed and we were unable to recover it. 
00:28:06.065 [2024-11-28 08:26:48.234679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.065 [2024-11-28 08:26:48.234711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.065 qpair failed and we were unable to recover it. 00:28:06.065 [2024-11-28 08:26:48.234901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.065 [2024-11-28 08:26:48.234934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.065 qpair failed and we were unable to recover it. 00:28:06.065 [2024-11-28 08:26:48.235084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.065 [2024-11-28 08:26:48.235118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.065 qpair failed and we were unable to recover it. 00:28:06.065 [2024-11-28 08:26:48.235308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.065 [2024-11-28 08:26:48.235341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.065 qpair failed and we were unable to recover it. 00:28:06.065 [2024-11-28 08:26:48.235465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.065 [2024-11-28 08:26:48.235497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.065 qpair failed and we were unable to recover it. 
00:28:06.065 [2024-11-28 08:26:48.235762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.065 [2024-11-28 08:26:48.235796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.065 qpair failed and we were unable to recover it. 00:28:06.065 [2024-11-28 08:26:48.235989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.065 [2024-11-28 08:26:48.236006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.065 qpair failed and we were unable to recover it. 00:28:06.065 [2024-11-28 08:26:48.236222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.065 [2024-11-28 08:26:48.236257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.065 qpair failed and we were unable to recover it. 00:28:06.065 [2024-11-28 08:26:48.236376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.065 [2024-11-28 08:26:48.236408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.065 qpair failed and we were unable to recover it. 00:28:06.065 [2024-11-28 08:26:48.236664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.065 [2024-11-28 08:26:48.236697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.065 qpair failed and we were unable to recover it. 
00:28:06.065 [2024-11-28 08:26:48.236908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.065 [2024-11-28 08:26:48.236941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.065 qpair failed and we were unable to recover it. 00:28:06.065 [2024-11-28 08:26:48.237106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.065 [2024-11-28 08:26:48.237140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.065 qpair failed and we were unable to recover it. 00:28:06.065 [2024-11-28 08:26:48.237320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.065 [2024-11-28 08:26:48.237336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.065 qpair failed and we were unable to recover it. 00:28:06.065 [2024-11-28 08:26:48.237426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.065 [2024-11-28 08:26:48.237457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.065 qpair failed and we were unable to recover it. 00:28:06.065 [2024-11-28 08:26:48.237666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.065 [2024-11-28 08:26:48.237699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.065 qpair failed and we were unable to recover it. 
00:28:06.065 [2024-11-28 08:26:48.237846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.065 [2024-11-28 08:26:48.237878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.065 qpair failed and we were unable to recover it. 00:28:06.065 [2024-11-28 08:26:48.237997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.065 [2024-11-28 08:26:48.238042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.065 qpair failed and we were unable to recover it. 00:28:06.065 [2024-11-28 08:26:48.238187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.065 [2024-11-28 08:26:48.238203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.066 qpair failed and we were unable to recover it. 00:28:06.066 [2024-11-28 08:26:48.238433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.066 [2024-11-28 08:26:48.238466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.066 qpair failed and we were unable to recover it. 00:28:06.066 [2024-11-28 08:26:48.238712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.066 [2024-11-28 08:26:48.238744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.066 qpair failed and we were unable to recover it. 
00:28:06.066 [2024-11-28 08:26:48.238991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.066 [2024-11-28 08:26:48.239025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.066 qpair failed and we were unable to recover it. 00:28:06.066 [2024-11-28 08:26:48.239298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.066 [2024-11-28 08:26:48.239331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.066 qpair failed and we were unable to recover it. 00:28:06.066 [2024-11-28 08:26:48.239530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.066 [2024-11-28 08:26:48.239564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.066 qpair failed and we were unable to recover it. 00:28:06.066 [2024-11-28 08:26:48.239750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.066 [2024-11-28 08:26:48.239784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.066 qpair failed and we were unable to recover it. 00:28:06.066 [2024-11-28 08:26:48.239905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.066 [2024-11-28 08:26:48.239938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.066 qpair failed and we were unable to recover it. 
00:28:06.066 [2024-11-28 08:26:48.240144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.066 [2024-11-28 08:26:48.240177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.066 qpair failed and we were unable to recover it. 00:28:06.066 [2024-11-28 08:26:48.240363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.066 [2024-11-28 08:26:48.240378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.066 qpair failed and we were unable to recover it. 00:28:06.066 [2024-11-28 08:26:48.240522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.066 [2024-11-28 08:26:48.240538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.066 qpair failed and we were unable to recover it. 00:28:06.066 [2024-11-28 08:26:48.240683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.066 [2024-11-28 08:26:48.240699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.066 qpair failed and we were unable to recover it. 00:28:06.066 [2024-11-28 08:26:48.240854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.066 [2024-11-28 08:26:48.240870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.066 qpair failed and we were unable to recover it. 
00:28:06.066 [2024-11-28 08:26:48.241048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.066 [2024-11-28 08:26:48.241065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.066 qpair failed and we were unable to recover it. 00:28:06.066 [2024-11-28 08:26:48.241173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.066 [2024-11-28 08:26:48.241189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.066 qpair failed and we were unable to recover it. 00:28:06.066 [2024-11-28 08:26:48.241281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.066 [2024-11-28 08:26:48.241296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.066 qpair failed and we were unable to recover it. 00:28:06.066 [2024-11-28 08:26:48.241404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.066 [2024-11-28 08:26:48.241442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.066 qpair failed and we were unable to recover it. 00:28:06.066 [2024-11-28 08:26:48.241707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.066 [2024-11-28 08:26:48.241742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.066 qpair failed and we were unable to recover it. 
00:28:06.066 [2024-11-28 08:26:48.241866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.066 [2024-11-28 08:26:48.241899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.066 qpair failed and we were unable to recover it. 00:28:06.066 [2024-11-28 08:26:48.242101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.066 [2024-11-28 08:26:48.242136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.066 qpair failed and we were unable to recover it. 00:28:06.066 [2024-11-28 08:26:48.242389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.066 [2024-11-28 08:26:48.242421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.066 qpair failed and we were unable to recover it. 00:28:06.066 [2024-11-28 08:26:48.242605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.066 [2024-11-28 08:26:48.242637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.066 qpair failed and we were unable to recover it. 00:28:06.066 [2024-11-28 08:26:48.242820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.066 [2024-11-28 08:26:48.242853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.066 qpair failed and we were unable to recover it. 
00:28:06.066 [2024-11-28 08:26:48.243104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.066 [2024-11-28 08:26:48.243149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.066 qpair failed and we were unable to recover it. 00:28:06.066 [2024-11-28 08:26:48.243237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.066 [2024-11-28 08:26:48.243252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.066 qpair failed and we were unable to recover it. 00:28:06.066 [2024-11-28 08:26:48.243400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.066 [2024-11-28 08:26:48.243418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.066 qpair failed and we were unable to recover it. 00:28:06.067 [2024-11-28 08:26:48.243626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.067 [2024-11-28 08:26:48.243642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.067 qpair failed and we were unable to recover it. 00:28:06.067 [2024-11-28 08:26:48.243736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.067 [2024-11-28 08:26:48.243751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.067 qpair failed and we were unable to recover it. 
00:28:06.067 [2024-11-28 08:26:48.243903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.067 [2024-11-28 08:26:48.243920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.067 qpair failed and we were unable to recover it. 00:28:06.067 [2024-11-28 08:26:48.244028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.067 [2024-11-28 08:26:48.244044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.067 qpair failed and we were unable to recover it. 00:28:06.067 [2024-11-28 08:26:48.244202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.067 [2024-11-28 08:26:48.244219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.067 qpair failed and we were unable to recover it. 00:28:06.067 [2024-11-28 08:26:48.244299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.067 [2024-11-28 08:26:48.244314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.067 qpair failed and we were unable to recover it. 00:28:06.067 [2024-11-28 08:26:48.244390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.067 [2024-11-28 08:26:48.244406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.067 qpair failed and we were unable to recover it. 
00:28:06.067 [2024-11-28 08:26:48.244541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.067 [2024-11-28 08:26:48.244579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.067 qpair failed and we were unable to recover it. 00:28:06.067 [2024-11-28 08:26:48.244702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.067 [2024-11-28 08:26:48.244736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.067 qpair failed and we were unable to recover it. 00:28:06.067 [2024-11-28 08:26:48.244858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.067 [2024-11-28 08:26:48.244887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.067 qpair failed and we were unable to recover it. 00:28:06.067 [2024-11-28 08:26:48.245090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.067 [2024-11-28 08:26:48.245124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.067 qpair failed and we were unable to recover it. 00:28:06.067 [2024-11-28 08:26:48.245338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.067 [2024-11-28 08:26:48.245354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.067 qpair failed and we were unable to recover it. 
00:28:06.067 [2024-11-28 08:26:48.245442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.067 [2024-11-28 08:26:48.245457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.067 qpair failed and we were unable to recover it. 00:28:06.067 [2024-11-28 08:26:48.245534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.067 [2024-11-28 08:26:48.245550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.067 qpair failed and we were unable to recover it. 00:28:06.067 [2024-11-28 08:26:48.245631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.067 [2024-11-28 08:26:48.245646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.067 qpair failed and we were unable to recover it. 00:28:06.067 [2024-11-28 08:26:48.245783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.067 [2024-11-28 08:26:48.245799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.067 qpair failed and we were unable to recover it. 00:28:06.067 [2024-11-28 08:26:48.245998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.067 [2024-11-28 08:26:48.246015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.067 qpair failed and we were unable to recover it. 
00:28:06.067 [2024-11-28 08:26:48.246103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.067 [2024-11-28 08:26:48.246119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.067 qpair failed and we were unable to recover it. 00:28:06.067 [2024-11-28 08:26:48.246208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.067 [2024-11-28 08:26:48.246224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.067 qpair failed and we were unable to recover it. 00:28:06.067 [2024-11-28 08:26:48.246406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.067 [2024-11-28 08:26:48.246439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.067 qpair failed and we were unable to recover it. 00:28:06.067 [2024-11-28 08:26:48.246649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.067 [2024-11-28 08:26:48.246682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.067 qpair failed and we were unable to recover it. 00:28:06.067 [2024-11-28 08:26:48.246811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.067 [2024-11-28 08:26:48.246844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.067 qpair failed and we were unable to recover it. 
00:28:06.067 [2024-11-28 08:26:48.247025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.067 [2024-11-28 08:26:48.247043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.067 qpair failed and we were unable to recover it. 00:28:06.067 [2024-11-28 08:26:48.247272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.067 [2024-11-28 08:26:48.247288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.067 qpair failed and we were unable to recover it. 00:28:06.067 [2024-11-28 08:26:48.247388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.067 [2024-11-28 08:26:48.247404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.067 qpair failed and we were unable to recover it. 00:28:06.067 [2024-11-28 08:26:48.247548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.067 [2024-11-28 08:26:48.247565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.067 qpair failed and we were unable to recover it. 00:28:06.067 [2024-11-28 08:26:48.247724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.067 [2024-11-28 08:26:48.247740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.067 qpair failed and we were unable to recover it. 
00:28:06.067 [2024-11-28 08:26:48.247825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.067 [2024-11-28 08:26:48.247841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.067 qpair failed and we were unable to recover it. 00:28:06.067 [2024-11-28 08:26:48.247993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.067 [2024-11-28 08:26:48.248010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.067 qpair failed and we were unable to recover it. 00:28:06.067 [2024-11-28 08:26:48.248083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.067 [2024-11-28 08:26:48.248098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.067 qpair failed and we were unable to recover it. 00:28:06.067 [2024-11-28 08:26:48.248194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.068 [2024-11-28 08:26:48.248210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.068 qpair failed and we were unable to recover it. 00:28:06.356 [2024-11-28 08:26:48.248357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.356 [2024-11-28 08:26:48.248373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.356 qpair failed and we were unable to recover it. 
00:28:06.356 [2024-11-28 08:26:48.248462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.356 [2024-11-28 08:26:48.248477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.356 qpair failed and we were unable to recover it. 00:28:06.356 [2024-11-28 08:26:48.248571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.356 [2024-11-28 08:26:48.248586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.356 qpair failed and we were unable to recover it. 00:28:06.356 [2024-11-28 08:26:48.248728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.356 [2024-11-28 08:26:48.248744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.356 qpair failed and we were unable to recover it. 00:28:06.356 [2024-11-28 08:26:48.248846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.356 [2024-11-28 08:26:48.248861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.356 qpair failed and we were unable to recover it. 00:28:06.356 [2024-11-28 08:26:48.248933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.356 [2024-11-28 08:26:48.248961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.356 qpair failed and we were unable to recover it. 
00:28:06.356 [2024-11-28 08:26:48.249104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.356 [2024-11-28 08:26:48.249121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.356 qpair failed and we were unable to recover it. 00:28:06.356 [2024-11-28 08:26:48.249196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.356 [2024-11-28 08:26:48.249211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.356 qpair failed and we were unable to recover it. 00:28:06.356 [2024-11-28 08:26:48.249382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.356 [2024-11-28 08:26:48.249398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.356 qpair failed and we were unable to recover it. 00:28:06.356 [2024-11-28 08:26:48.249483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.356 [2024-11-28 08:26:48.249499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.356 qpair failed and we were unable to recover it. 00:28:06.356 [2024-11-28 08:26:48.249601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.356 [2024-11-28 08:26:48.249617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.356 qpair failed and we were unable to recover it. 
00:28:06.356 [2024-11-28 08:26:48.249769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.356 [2024-11-28 08:26:48.249787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.356 qpair failed and we were unable to recover it. 00:28:06.356 [2024-11-28 08:26:48.249951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.356 [2024-11-28 08:26:48.249968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.356 qpair failed and we were unable to recover it. 00:28:06.356 [2024-11-28 08:26:48.250128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.356 [2024-11-28 08:26:48.250144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.356 qpair failed and we were unable to recover it. 00:28:06.356 [2024-11-28 08:26:48.250220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.356 [2024-11-28 08:26:48.250236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.356 qpair failed and we were unable to recover it. 00:28:06.356 [2024-11-28 08:26:48.250321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.356 [2024-11-28 08:26:48.250336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.356 qpair failed and we were unable to recover it. 
00:28:06.356 [2024-11-28 08:26:48.250487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.356 [2024-11-28 08:26:48.250503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.356 qpair failed and we were unable to recover it. 00:28:06.356 [2024-11-28 08:26:48.250690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.356 [2024-11-28 08:26:48.250709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.356 qpair failed and we were unable to recover it. 00:28:06.356 [2024-11-28 08:26:48.250782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.357 [2024-11-28 08:26:48.250797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.357 qpair failed and we were unable to recover it. 00:28:06.357 [2024-11-28 08:26:48.250971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.357 [2024-11-28 08:26:48.250988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.357 qpair failed and we were unable to recover it. 00:28:06.357 [2024-11-28 08:26:48.251220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.357 [2024-11-28 08:26:48.251236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.357 qpair failed and we were unable to recover it. 
00:28:06.357 [2024-11-28 08:26:48.251400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.357 [2024-11-28 08:26:48.251416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.357 qpair failed and we were unable to recover it. 00:28:06.357 [2024-11-28 08:26:48.251496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.357 [2024-11-28 08:26:48.251511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.357 qpair failed and we were unable to recover it. 00:28:06.357 [2024-11-28 08:26:48.251652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.357 [2024-11-28 08:26:48.251667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.357 qpair failed and we were unable to recover it. 00:28:06.357 [2024-11-28 08:26:48.251876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.357 [2024-11-28 08:26:48.251892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.357 qpair failed and we were unable to recover it. 00:28:06.357 [2024-11-28 08:26:48.251993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.357 [2024-11-28 08:26:48.252009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.357 qpair failed and we were unable to recover it. 
00:28:06.357 [2024-11-28 08:26:48.252112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.357 [2024-11-28 08:26:48.252128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.357 qpair failed and we were unable to recover it. 00:28:06.357 [2024-11-28 08:26:48.252230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.357 [2024-11-28 08:26:48.252247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.357 qpair failed and we were unable to recover it. 00:28:06.357 [2024-11-28 08:26:48.252408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.357 [2024-11-28 08:26:48.252425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.357 qpair failed and we were unable to recover it. 00:28:06.357 [2024-11-28 08:26:48.252525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.357 [2024-11-28 08:26:48.252543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.357 qpair failed and we were unable to recover it. 00:28:06.357 [2024-11-28 08:26:48.252688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.357 [2024-11-28 08:26:48.252704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.357 qpair failed and we were unable to recover it. 
00:28:06.357 [2024-11-28 08:26:48.252808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.357 [2024-11-28 08:26:48.252824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.357 qpair failed and we were unable to recover it. 00:28:06.357 [2024-11-28 08:26:48.252977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.357 [2024-11-28 08:26:48.252994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.357 qpair failed and we were unable to recover it. 00:28:06.357 [2024-11-28 08:26:48.253139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.357 [2024-11-28 08:26:48.253155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.357 qpair failed and we were unable to recover it. 00:28:06.357 [2024-11-28 08:26:48.253243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.357 [2024-11-28 08:26:48.253258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.357 qpair failed and we were unable to recover it. 00:28:06.357 [2024-11-28 08:26:48.253432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.357 [2024-11-28 08:26:48.253447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.357 qpair failed and we were unable to recover it. 
00:28:06.357 [2024-11-28 08:26:48.253523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.357 [2024-11-28 08:26:48.253539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.357 qpair failed and we were unable to recover it. 00:28:06.357 [2024-11-28 08:26:48.253676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.357 [2024-11-28 08:26:48.253691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.357 qpair failed and we were unable to recover it. 00:28:06.357 [2024-11-28 08:26:48.253846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.357 [2024-11-28 08:26:48.253863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.357 qpair failed and we were unable to recover it. 00:28:06.357 [2024-11-28 08:26:48.253964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.357 [2024-11-28 08:26:48.253980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.357 qpair failed and we were unable to recover it. 00:28:06.357 [2024-11-28 08:26:48.254064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.357 [2024-11-28 08:26:48.254080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.357 qpair failed and we were unable to recover it. 
00:28:06.357 [2024-11-28 08:26:48.254170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.357 [2024-11-28 08:26:48.254186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.357 qpair failed and we were unable to recover it. 00:28:06.357 [2024-11-28 08:26:48.254346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.357 [2024-11-28 08:26:48.254362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.357 qpair failed and we were unable to recover it. 00:28:06.357 [2024-11-28 08:26:48.254508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.357 [2024-11-28 08:26:48.254523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.357 qpair failed and we were unable to recover it. 00:28:06.357 [2024-11-28 08:26:48.254686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.357 [2024-11-28 08:26:48.254704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.357 qpair failed and we were unable to recover it. 00:28:06.357 [2024-11-28 08:26:48.254780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.357 [2024-11-28 08:26:48.254829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.357 qpair failed and we were unable to recover it. 
00:28:06.357 [2024-11-28 08:26:48.255026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.357 [2024-11-28 08:26:48.255060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.357 qpair failed and we were unable to recover it.
[the three-message pattern above repeats continuously from 08:26:48.255186 through 08:26:48.277918, always connect() failed with errno = 111 against addr=10.0.0.2, port=4420, cycling across tqpair=0x991be0, tqpair=0x7f6c3c000b90, and tqpair=0x7f6c34000b90; every attempt ends with "qpair failed and we were unable to recover it."]
00:28:06.361 [2024-11-28 08:26:48.278001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.361 [2024-11-28 08:26:48.278013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.361 qpair failed and we were unable to recover it. 00:28:06.361 [2024-11-28 08:26:48.278077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.361 [2024-11-28 08:26:48.278089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.361 qpair failed and we were unable to recover it. 00:28:06.361 [2024-11-28 08:26:48.278299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.361 [2024-11-28 08:26:48.278333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.361 qpair failed and we were unable to recover it. 00:28:06.361 [2024-11-28 08:26:48.278444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.361 [2024-11-28 08:26:48.278483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.361 qpair failed and we were unable to recover it. 00:28:06.362 [2024-11-28 08:26:48.278706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.362 [2024-11-28 08:26:48.278751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:06.362 qpair failed and we were unable to recover it. 
00:28:06.362 [2024-11-28 08:26:48.278957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.362 [2024-11-28 08:26:48.278991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:06.362 qpair failed and we were unable to recover it. 00:28:06.362 [2024-11-28 08:26:48.279112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.362 [2024-11-28 08:26:48.279128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:06.362 qpair failed and we were unable to recover it. 00:28:06.362 [2024-11-28 08:26:48.279286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.362 [2024-11-28 08:26:48.279302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:06.362 qpair failed and we were unable to recover it. 00:28:06.362 [2024-11-28 08:26:48.279446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.362 [2024-11-28 08:26:48.279461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:06.362 qpair failed and we were unable to recover it. 00:28:06.362 [2024-11-28 08:26:48.279663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.362 [2024-11-28 08:26:48.279697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:06.362 qpair failed and we were unable to recover it. 
00:28:06.362 [2024-11-28 08:26:48.279931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.362 [2024-11-28 08:26:48.279971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:06.362 qpair failed and we were unable to recover it. 00:28:06.362 [2024-11-28 08:26:48.280106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.362 [2024-11-28 08:26:48.280139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:06.362 qpair failed and we were unable to recover it. 00:28:06.362 [2024-11-28 08:26:48.280445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.362 [2024-11-28 08:26:48.280480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:06.362 qpair failed and we were unable to recover it. 00:28:06.362 [2024-11-28 08:26:48.280688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.362 [2024-11-28 08:26:48.280722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:06.362 qpair failed and we were unable to recover it. 00:28:06.362 [2024-11-28 08:26:48.280863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.362 [2024-11-28 08:26:48.280896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:06.362 qpair failed and we were unable to recover it. 
00:28:06.362 [2024-11-28 08:26:48.281031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.362 [2024-11-28 08:26:48.281065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:06.362 qpair failed and we were unable to recover it. 00:28:06.362 [2024-11-28 08:26:48.281258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.362 [2024-11-28 08:26:48.281290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:06.362 qpair failed and we were unable to recover it. 00:28:06.362 [2024-11-28 08:26:48.281506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.362 [2024-11-28 08:26:48.281539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:06.362 qpair failed and we were unable to recover it. 00:28:06.362 [2024-11-28 08:26:48.281672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.362 [2024-11-28 08:26:48.281705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:06.362 qpair failed and we were unable to recover it. 00:28:06.362 [2024-11-28 08:26:48.281888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.362 [2024-11-28 08:26:48.281923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:06.362 qpair failed and we were unable to recover it. 
00:28:06.362 [2024-11-28 08:26:48.282146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.362 [2024-11-28 08:26:48.282181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:06.362 qpair failed and we were unable to recover it. 00:28:06.362 [2024-11-28 08:26:48.282380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.362 [2024-11-28 08:26:48.282413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:06.362 qpair failed and we were unable to recover it. 00:28:06.362 [2024-11-28 08:26:48.282542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.362 [2024-11-28 08:26:48.282575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:06.362 qpair failed and we were unable to recover it. 00:28:06.362 [2024-11-28 08:26:48.282703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.362 [2024-11-28 08:26:48.282737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:06.362 qpair failed and we were unable to recover it. 00:28:06.362 [2024-11-28 08:26:48.282914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.362 [2024-11-28 08:26:48.282960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:06.362 qpair failed and we were unable to recover it. 
00:28:06.362 [2024-11-28 08:26:48.283141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.362 [2024-11-28 08:26:48.283177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:06.362 qpair failed and we were unable to recover it. 00:28:06.362 [2024-11-28 08:26:48.283311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.363 [2024-11-28 08:26:48.283345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:06.363 qpair failed and we were unable to recover it. 00:28:06.363 [2024-11-28 08:26:48.283588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.363 [2024-11-28 08:26:48.283623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:06.363 qpair failed and we were unable to recover it. 00:28:06.363 [2024-11-28 08:26:48.283745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.363 [2024-11-28 08:26:48.283777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:06.363 qpair failed and we were unable to recover it. 00:28:06.363 [2024-11-28 08:26:48.284026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.363 [2024-11-28 08:26:48.284062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:06.363 qpair failed and we were unable to recover it. 
00:28:06.363 [2024-11-28 08:26:48.284334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.363 [2024-11-28 08:26:48.284368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:06.363 qpair failed and we were unable to recover it. 00:28:06.363 [2024-11-28 08:26:48.284561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.363 [2024-11-28 08:26:48.284596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:06.363 qpair failed and we were unable to recover it. 00:28:06.363 [2024-11-28 08:26:48.284843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.363 [2024-11-28 08:26:48.284876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:06.363 qpair failed and we were unable to recover it. 00:28:06.363 [2024-11-28 08:26:48.285060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.363 [2024-11-28 08:26:48.285094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:06.363 qpair failed and we were unable to recover it. 00:28:06.363 [2024-11-28 08:26:48.285276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.363 [2024-11-28 08:26:48.285322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:06.363 qpair failed and we were unable to recover it. 
00:28:06.363 [2024-11-28 08:26:48.285475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.363 [2024-11-28 08:26:48.285491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:06.363 qpair failed and we were unable to recover it. 00:28:06.363 [2024-11-28 08:26:48.285571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.363 [2024-11-28 08:26:48.285586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:06.363 qpair failed and we were unable to recover it. 00:28:06.363 [2024-11-28 08:26:48.285753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.363 [2024-11-28 08:26:48.285769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:06.363 qpair failed and we were unable to recover it. 00:28:06.363 [2024-11-28 08:26:48.285875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.363 [2024-11-28 08:26:48.285915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:06.363 qpair failed and we were unable to recover it. 00:28:06.363 [2024-11-28 08:26:48.286189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.363 [2024-11-28 08:26:48.286262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420 00:28:06.363 qpair failed and we were unable to recover it. 
00:28:06.363 [2024-11-28 08:26:48.286554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.363 [2024-11-28 08:26:48.286598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.363 qpair failed and we were unable to recover it. 00:28:06.363 [2024-11-28 08:26:48.286803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.363 [2024-11-28 08:26:48.286837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.363 qpair failed and we were unable to recover it. 00:28:06.363 [2024-11-28 08:26:48.287114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.363 [2024-11-28 08:26:48.287165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.363 qpair failed and we were unable to recover it. 00:28:06.363 [2024-11-28 08:26:48.287305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.363 [2024-11-28 08:26:48.287317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.363 qpair failed and we were unable to recover it. 00:28:06.363 [2024-11-28 08:26:48.287484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.363 [2024-11-28 08:26:48.287527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.363 qpair failed and we were unable to recover it. 
00:28:06.363 [2024-11-28 08:26:48.287744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.363 [2024-11-28 08:26:48.287779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.363 qpair failed and we were unable to recover it. 00:28:06.363 [2024-11-28 08:26:48.288053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.363 [2024-11-28 08:26:48.288096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.363 qpair failed and we were unable to recover it. 00:28:06.363 [2024-11-28 08:26:48.288179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.363 [2024-11-28 08:26:48.288191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.363 qpair failed and we were unable to recover it. 00:28:06.363 [2024-11-28 08:26:48.288350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.363 [2024-11-28 08:26:48.288363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.363 qpair failed and we were unable to recover it. 00:28:06.363 [2024-11-28 08:26:48.288447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.363 [2024-11-28 08:26:48.288459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.363 qpair failed and we were unable to recover it. 
00:28:06.363 [2024-11-28 08:26:48.288555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.363 [2024-11-28 08:26:48.288591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.363 qpair failed and we were unable to recover it. 00:28:06.363 [2024-11-28 08:26:48.288789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.363 [2024-11-28 08:26:48.288823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.363 qpair failed and we were unable to recover it. 00:28:06.363 [2024-11-28 08:26:48.289022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.363 [2024-11-28 08:26:48.289063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.363 qpair failed and we were unable to recover it. 00:28:06.363 [2024-11-28 08:26:48.289259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.363 [2024-11-28 08:26:48.289271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.363 qpair failed and we were unable to recover it. 00:28:06.363 [2024-11-28 08:26:48.289435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.363 [2024-11-28 08:26:48.289469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.363 qpair failed and we were unable to recover it. 
00:28:06.363 [2024-11-28 08:26:48.289668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.363 [2024-11-28 08:26:48.289702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.363 qpair failed and we were unable to recover it. 00:28:06.364 [2024-11-28 08:26:48.289931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.364 [2024-11-28 08:26:48.290025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.364 qpair failed and we were unable to recover it. 00:28:06.364 [2024-11-28 08:26:48.290267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.364 [2024-11-28 08:26:48.290304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:06.364 qpair failed and we were unable to recover it. 00:28:06.364 [2024-11-28 08:26:48.290497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.364 [2024-11-28 08:26:48.290513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:06.364 qpair failed and we were unable to recover it. 00:28:06.364 [2024-11-28 08:26:48.290774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.364 [2024-11-28 08:26:48.290808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:06.364 qpair failed and we were unable to recover it. 
00:28:06.364 [2024-11-28 08:26:48.291048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.364 [2024-11-28 08:26:48.291082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:06.364 qpair failed and we were unable to recover it. 00:28:06.364 [2024-11-28 08:26:48.291259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.364 [2024-11-28 08:26:48.291292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:06.364 qpair failed and we were unable to recover it. 00:28:06.364 [2024-11-28 08:26:48.291490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.364 [2024-11-28 08:26:48.291525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:06.364 qpair failed and we were unable to recover it. 00:28:06.364 [2024-11-28 08:26:48.291715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.364 [2024-11-28 08:26:48.291748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:06.364 qpair failed and we were unable to recover it. 00:28:06.364 [2024-11-28 08:26:48.291856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.364 [2024-11-28 08:26:48.291889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:06.364 qpair failed and we were unable to recover it. 
00:28:06.364 [2024-11-28 08:26:48.292088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.364 [2024-11-28 08:26:48.292122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:06.364 qpair failed and we were unable to recover it. 00:28:06.364 [2024-11-28 08:26:48.292323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.364 [2024-11-28 08:26:48.292355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:06.364 qpair failed and we were unable to recover it. 00:28:06.364 [2024-11-28 08:26:48.292494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.364 [2024-11-28 08:26:48.292525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:06.364 qpair failed and we were unable to recover it. 00:28:06.364 [2024-11-28 08:26:48.292741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.364 [2024-11-28 08:26:48.292773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:06.364 qpair failed and we were unable to recover it. 00:28:06.364 [2024-11-28 08:26:48.292969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.364 [2024-11-28 08:26:48.293004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:06.364 qpair failed and we were unable to recover it. 
00:28:06.364 [2024-11-28 08:26:48.293195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.364 [2024-11-28 08:26:48.293228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:06.364 qpair failed and we were unable to recover it. 00:28:06.364 [2024-11-28 08:26:48.293379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.364 [2024-11-28 08:26:48.293413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:06.364 qpair failed and we were unable to recover it. 00:28:06.364 [2024-11-28 08:26:48.293613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.364 [2024-11-28 08:26:48.293646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:06.364 qpair failed and we were unable to recover it. 00:28:06.364 [2024-11-28 08:26:48.293875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.364 [2024-11-28 08:26:48.293907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:06.364 qpair failed and we were unable to recover it. 00:28:06.364 [2024-11-28 08:26:48.294201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.364 [2024-11-28 08:26:48.294237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:06.364 qpair failed and we were unable to recover it. 
00:28:06.364 [2024-11-28 08:26:48.294484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.364 [2024-11-28 08:26:48.294522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:06.364 qpair failed and we were unable to recover it. 00:28:06.364 [2024-11-28 08:26:48.294728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.364 [2024-11-28 08:26:48.294761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:06.364 qpair failed and we were unable to recover it. 00:28:06.364 [2024-11-28 08:26:48.294964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.364 [2024-11-28 08:26:48.294998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:06.364 qpair failed and we were unable to recover it. 00:28:06.364 [2024-11-28 08:26:48.295122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.364 [2024-11-28 08:26:48.295156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:06.364 qpair failed and we were unable to recover it. 00:28:06.364 [2024-11-28 08:26:48.295297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.364 [2024-11-28 08:26:48.295331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:06.364 qpair failed and we were unable to recover it. 
00:28:06.364 [2024-11-28 08:26:48.295534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.364 [2024-11-28 08:26:48.295552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:06.364 qpair failed and we were unable to recover it. 00:28:06.364 [2024-11-28 08:26:48.295712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.364 [2024-11-28 08:26:48.295745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:06.364 qpair failed and we were unable to recover it. 00:28:06.364 [2024-11-28 08:26:48.295945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.364 [2024-11-28 08:26:48.295995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:06.364 qpair failed and we were unable to recover it. 00:28:06.364 [2024-11-28 08:26:48.296201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.364 [2024-11-28 08:26:48.296245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:06.364 qpair failed and we were unable to recover it. 00:28:06.364 [2024-11-28 08:26:48.296336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.364 [2024-11-28 08:26:48.296355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:06.364 qpair failed and we were unable to recover it. 
00:28:06.364 [2024-11-28 08:26:48.296530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.365 [2024-11-28 08:26:48.296562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:06.365 qpair failed and we were unable to recover it. 00:28:06.365 [2024-11-28 08:26:48.296760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.365 [2024-11-28 08:26:48.296794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:06.365 qpair failed and we were unable to recover it. 00:28:06.365 [2024-11-28 08:26:48.296912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.365 [2024-11-28 08:26:48.296945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:06.365 qpair failed and we were unable to recover it. 00:28:06.365 [2024-11-28 08:26:48.297205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.365 [2024-11-28 08:26:48.297237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:06.365 qpair failed and we were unable to recover it. 00:28:06.365 [2024-11-28 08:26:48.297366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.365 [2024-11-28 08:26:48.297399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:06.365 qpair failed and we were unable to recover it. 
00:28:06.365 [2024-11-28 08:26:48.297586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.365 [2024-11-28 08:26:48.297619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:06.365 qpair failed and we were unable to recover it. 00:28:06.365 [2024-11-28 08:26:48.297742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.365 [2024-11-28 08:26:48.297775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:06.365 qpair failed and we were unable to recover it. 00:28:06.365 [2024-11-28 08:26:48.297991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.365 [2024-11-28 08:26:48.298025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:06.365 qpair failed and we were unable to recover it. 00:28:06.365 [2024-11-28 08:26:48.298208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.365 [2024-11-28 08:26:48.298242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:06.365 qpair failed and we were unable to recover it. 00:28:06.365 [2024-11-28 08:26:48.298360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.365 [2024-11-28 08:26:48.298393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:06.365 qpair failed and we were unable to recover it. 
00:28:06.365 [2024-11-28 08:26:48.298567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.365 [2024-11-28 08:26:48.298599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:06.365 qpair failed and we were unable to recover it. 00:28:06.365 [2024-11-28 08:26:48.298727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.365 [2024-11-28 08:26:48.298760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:06.365 qpair failed and we were unable to recover it. 00:28:06.365 [2024-11-28 08:26:48.298966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.365 [2024-11-28 08:26:48.299000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:06.365 qpair failed and we were unable to recover it. 00:28:06.365 [2024-11-28 08:26:48.299189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.365 [2024-11-28 08:26:48.299222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:06.365 qpair failed and we were unable to recover it. 00:28:06.365 [2024-11-28 08:26:48.299404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.365 [2024-11-28 08:26:48.299420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:06.365 qpair failed and we were unable to recover it. 
00:28:06.365 [2024-11-28 08:26:48.299587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.365 [2024-11-28 08:26:48.299604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:06.365 qpair failed and we were unable to recover it. 00:28:06.365 [2024-11-28 08:26:48.299765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.365 [2024-11-28 08:26:48.299797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:06.365 qpair failed and we were unable to recover it. 00:28:06.365 [2024-11-28 08:26:48.299910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.365 [2024-11-28 08:26:48.299943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:06.365 qpair failed and we were unable to recover it. 00:28:06.365 [2024-11-28 08:26:48.300148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.365 [2024-11-28 08:26:48.300181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:06.365 qpair failed and we were unable to recover it. 00:28:06.365 [2024-11-28 08:26:48.300357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.365 [2024-11-28 08:26:48.300389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:06.365 qpair failed and we were unable to recover it. 
00:28:06.365 [2024-11-28 08:26:48.300575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.365 [2024-11-28 08:26:48.300591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:06.365 qpair failed and we were unable to recover it. 00:28:06.365 [2024-11-28 08:26:48.300684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.365 [2024-11-28 08:26:48.300728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:06.365 qpair failed and we were unable to recover it. 00:28:06.365 [2024-11-28 08:26:48.300871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.365 [2024-11-28 08:26:48.300903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:06.365 qpair failed and we were unable to recover it. 00:28:06.365 [2024-11-28 08:26:48.301104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.365 [2024-11-28 08:26:48.301138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:06.365 qpair failed and we were unable to recover it. 00:28:06.365 [2024-11-28 08:26:48.301331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.365 [2024-11-28 08:26:48.301348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:06.365 qpair failed and we were unable to recover it. 
00:28:06.365 [2024-11-28 08:26:48.301522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.365 [2024-11-28 08:26:48.301538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:06.365 qpair failed and we were unable to recover it. 00:28:06.365 [2024-11-28 08:26:48.301697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.365 [2024-11-28 08:26:48.301714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:06.365 qpair failed and we were unable to recover it. 00:28:06.365 [2024-11-28 08:26:48.301874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.365 [2024-11-28 08:26:48.301890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:06.365 qpair failed and we were unable to recover it. 00:28:06.365 [2024-11-28 08:26:48.301976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.366 [2024-11-28 08:26:48.301992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:06.366 qpair failed and we were unable to recover it. 00:28:06.366 [2024-11-28 08:26:48.302083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.366 [2024-11-28 08:26:48.302098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:06.366 qpair failed and we were unable to recover it. 
00:28:06.366 [2024-11-28 08:26:48.302263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.366 [2024-11-28 08:26:48.302296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:06.366 qpair failed and we were unable to recover it. 00:28:06.366 [2024-11-28 08:26:48.302495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.366 [2024-11-28 08:26:48.302527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:06.366 qpair failed and we were unable to recover it. 00:28:06.366 [2024-11-28 08:26:48.302796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.366 [2024-11-28 08:26:48.302829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:06.366 qpair failed and we were unable to recover it. 00:28:06.366 [2024-11-28 08:26:48.302965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.366 [2024-11-28 08:26:48.303000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:06.366 qpair failed and we were unable to recover it. 00:28:06.366 [2024-11-28 08:26:48.303126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.366 [2024-11-28 08:26:48.303158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:06.366 qpair failed and we were unable to recover it. 
00:28:06.366 [2024-11-28 08:26:48.303347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.366 [2024-11-28 08:26:48.303379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:06.366 qpair failed and we were unable to recover it. 00:28:06.366 [2024-11-28 08:26:48.303500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.366 [2024-11-28 08:26:48.303537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:06.366 qpair failed and we were unable to recover it. 00:28:06.366 [2024-11-28 08:26:48.303662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.366 [2024-11-28 08:26:48.303677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:06.366 qpair failed and we were unable to recover it. 00:28:06.366 [2024-11-28 08:26:48.303894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.366 [2024-11-28 08:26:48.303927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:06.366 qpair failed and we were unable to recover it. 00:28:06.366 [2024-11-28 08:26:48.304134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.366 [2024-11-28 08:26:48.304171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:06.366 qpair failed and we were unable to recover it. 
00:28:06.366 [2024-11-28 08:26:48.304395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.366 [2024-11-28 08:26:48.304428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:06.366 qpair failed and we were unable to recover it. 00:28:06.366 [2024-11-28 08:26:48.304619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.366 [2024-11-28 08:26:48.304651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:06.366 qpair failed and we were unable to recover it. 00:28:06.366 [2024-11-28 08:26:48.304778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.366 [2024-11-28 08:26:48.304812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:06.366 qpair failed and we were unable to recover it. 00:28:06.366 [2024-11-28 08:26:48.305018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.366 [2024-11-28 08:26:48.305052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:06.366 qpair failed and we were unable to recover it. 00:28:06.366 [2024-11-28 08:26:48.305176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.366 [2024-11-28 08:26:48.305208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:06.366 qpair failed and we were unable to recover it. 
00:28:06.366 [2024-11-28 08:26:48.305419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.366 [2024-11-28 08:26:48.305452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:06.366 qpair failed and we were unable to recover it. 00:28:06.366 [2024-11-28 08:26:48.305647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.366 [2024-11-28 08:26:48.305680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:06.366 qpair failed and we were unable to recover it. 00:28:06.366 [2024-11-28 08:26:48.305987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.366 [2024-11-28 08:26:48.306023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:06.366 qpair failed and we were unable to recover it. 00:28:06.366 [2024-11-28 08:26:48.306144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.366 [2024-11-28 08:26:48.306177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:06.366 qpair failed and we were unable to recover it. 00:28:06.366 [2024-11-28 08:26:48.306301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.366 [2024-11-28 08:26:48.306335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:06.366 qpair failed and we were unable to recover it. 
00:28:06.366 [2024-11-28 08:26:48.306547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.366 [2024-11-28 08:26:48.306564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:06.366 qpair failed and we were unable to recover it. 00:28:06.366 [2024-11-28 08:26:48.306665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.366 [2024-11-28 08:26:48.306698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:06.366 qpair failed and we were unable to recover it. 00:28:06.366 [2024-11-28 08:26:48.306874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.366 [2024-11-28 08:26:48.306907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:06.366 qpair failed and we were unable to recover it. 00:28:06.366 [2024-11-28 08:26:48.307115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.366 [2024-11-28 08:26:48.307150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:06.366 qpair failed and we were unable to recover it. 00:28:06.366 [2024-11-28 08:26:48.307297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.366 [2024-11-28 08:26:48.307313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:06.366 qpair failed and we were unable to recover it. 
00:28:06.366 [2024-11-28 08:26:48.307404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.366 [2024-11-28 08:26:48.307419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:06.366 qpair failed and we were unable to recover it. 00:28:06.366 [2024-11-28 08:26:48.307576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.366 [2024-11-28 08:26:48.307592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:06.366 qpair failed and we were unable to recover it. 00:28:06.366 [2024-11-28 08:26:48.307703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.366 [2024-11-28 08:26:48.307735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:06.366 qpair failed and we were unable to recover it. 00:28:06.367 [2024-11-28 08:26:48.307868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.367 [2024-11-28 08:26:48.307901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:06.367 qpair failed and we were unable to recover it. 00:28:06.367 [2024-11-28 08:26:48.308098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.367 [2024-11-28 08:26:48.308133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:06.367 qpair failed and we were unable to recover it. 
00:28:06.367 [2024-11-28 08:26:48.308317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.367 [2024-11-28 08:26:48.308333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:06.367 qpair failed and we were unable to recover it. 00:28:06.367 [2024-11-28 08:26:48.308489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.367 [2024-11-28 08:26:48.308521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:06.367 qpair failed and we were unable to recover it. 00:28:06.367 [2024-11-28 08:26:48.308649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.367 [2024-11-28 08:26:48.308681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:06.367 qpair failed and we were unable to recover it. 00:28:06.367 [2024-11-28 08:26:48.308791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.367 [2024-11-28 08:26:48.308824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:06.367 qpair failed and we were unable to recover it. 00:28:06.367 [2024-11-28 08:26:48.309029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.367 [2024-11-28 08:26:48.309063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:06.367 qpair failed and we were unable to recover it. 
00:28:06.367 [2024-11-28 08:26:48.309190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.367 [2024-11-28 08:26:48.309223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:06.367 qpair failed and we were unable to recover it. 00:28:06.367 [2024-11-28 08:26:48.309401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.367 [2024-11-28 08:26:48.309474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420 00:28:06.367 qpair failed and we were unable to recover it. 00:28:06.367 [2024-11-28 08:26:48.309649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.367 [2024-11-28 08:26:48.309679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.367 qpair failed and we were unable to recover it. 00:28:06.367 [2024-11-28 08:26:48.309841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.367 [2024-11-28 08:26:48.309879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.367 qpair failed and we were unable to recover it. 00:28:06.367 [2024-11-28 08:26:48.310069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.367 [2024-11-28 08:26:48.310105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.367 qpair failed and we were unable to recover it. 
00:28:06.367 [2024-11-28 08:26:48.310297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.367 [2024-11-28 08:26:48.310331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.367 qpair failed and we were unable to recover it. 00:28:06.367 [2024-11-28 08:26:48.310591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.367 [2024-11-28 08:26:48.310628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.367 qpair failed and we were unable to recover it. 00:28:06.367 [2024-11-28 08:26:48.310871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.367 [2024-11-28 08:26:48.310889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:06.367 qpair failed and we were unable to recover it. 00:28:06.367 [2024-11-28 08:26:48.311121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.367 [2024-11-28 08:26:48.311138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:06.367 qpair failed and we were unable to recover it. 00:28:06.367 [2024-11-28 08:26:48.311310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.367 [2024-11-28 08:26:48.311325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:06.367 qpair failed and we were unable to recover it. 
00:28:06.367 [2024-11-28 08:26:48.311425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.367 [2024-11-28 08:26:48.311441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:06.367 qpair failed and we were unable to recover it. 00:28:06.367 [2024-11-28 08:26:48.311517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.367 [2024-11-28 08:26:48.311532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:06.367 qpair failed and we were unable to recover it. 00:28:06.367 [2024-11-28 08:26:48.311623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.367 [2024-11-28 08:26:48.311635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.367 qpair failed and we were unable to recover it. 00:28:06.367 [2024-11-28 08:26:48.311789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.367 [2024-11-28 08:26:48.311824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.367 qpair failed and we were unable to recover it. 00:28:06.367 [2024-11-28 08:26:48.311967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.367 [2024-11-28 08:26:48.312012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.367 qpair failed and we were unable to recover it. 
00:28:06.367 [2024-11-28 08:26:48.312206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.367 [2024-11-28 08:26:48.312218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.367 qpair failed and we were unable to recover it. 00:28:06.367 [2024-11-28 08:26:48.312424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.367 [2024-11-28 08:26:48.312456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.367 qpair failed and we were unable to recover it. 00:28:06.367 [2024-11-28 08:26:48.312572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.367 [2024-11-28 08:26:48.312604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.367 qpair failed and we were unable to recover it. 00:28:06.367 [2024-11-28 08:26:48.312845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.367 [2024-11-28 08:26:48.312879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.367 qpair failed and we were unable to recover it. 00:28:06.367 [2024-11-28 08:26:48.313064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.367 [2024-11-28 08:26:48.313102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.367 qpair failed and we were unable to recover it. 
00:28:06.367 [2024-11-28 08:26:48.313308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.367 [2024-11-28 08:26:48.313341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.367 qpair failed and we were unable to recover it. 00:28:06.367 [2024-11-28 08:26:48.313520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.367 [2024-11-28 08:26:48.313553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.367 qpair failed and we were unable to recover it. 00:28:06.367 [2024-11-28 08:26:48.313748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.368 [2024-11-28 08:26:48.313781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.368 qpair failed and we were unable to recover it. 00:28:06.368 [2024-11-28 08:26:48.314009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.368 [2024-11-28 08:26:48.314043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.368 qpair failed and we were unable to recover it. 00:28:06.368 [2024-11-28 08:26:48.314160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.368 [2024-11-28 08:26:48.314172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.368 qpair failed and we were unable to recover it. 
00:28:06.368 [2024-11-28 08:26:48.314318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.368 [2024-11-28 08:26:48.314331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.368 qpair failed and we were unable to recover it. 00:28:06.368 [2024-11-28 08:26:48.314465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.368 [2024-11-28 08:26:48.314478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.368 qpair failed and we were unable to recover it. 00:28:06.368 [2024-11-28 08:26:48.314644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.368 [2024-11-28 08:26:48.314657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.368 qpair failed and we were unable to recover it. 00:28:06.368 [2024-11-28 08:26:48.314797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.368 [2024-11-28 08:26:48.314809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.368 qpair failed and we were unable to recover it. 00:28:06.368 [2024-11-28 08:26:48.314959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.368 [2024-11-28 08:26:48.314973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.368 qpair failed and we were unable to recover it. 
00:28:06.368 [2024-11-28 08:26:48.315092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.368 [2024-11-28 08:26:48.315126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.368 qpair failed and we were unable to recover it. 00:28:06.368 [2024-11-28 08:26:48.315259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.368 [2024-11-28 08:26:48.315292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.368 qpair failed and we were unable to recover it. 00:28:06.368 [2024-11-28 08:26:48.315491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.368 [2024-11-28 08:26:48.315524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.368 qpair failed and we were unable to recover it. 00:28:06.368 [2024-11-28 08:26:48.315660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.368 [2024-11-28 08:26:48.315695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.368 qpair failed and we were unable to recover it. 00:28:06.368 [2024-11-28 08:26:48.315938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.368 [2024-11-28 08:26:48.315985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.368 qpair failed and we were unable to recover it. 
00:28:06.368 [2024-11-28 08:26:48.316176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.368 [2024-11-28 08:26:48.316210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.368 qpair failed and we were unable to recover it. 00:28:06.368 [2024-11-28 08:26:48.316477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.368 [2024-11-28 08:26:48.316516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.368 qpair failed and we were unable to recover it. 00:28:06.368 [2024-11-28 08:26:48.316670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.368 [2024-11-28 08:26:48.316682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.368 qpair failed and we were unable to recover it. 00:28:06.368 [2024-11-28 08:26:48.316904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.368 [2024-11-28 08:26:48.316937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.368 qpair failed and we were unable to recover it. 00:28:06.368 [2024-11-28 08:26:48.317063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.368 [2024-11-28 08:26:48.317096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.368 qpair failed and we were unable to recover it. 
00:28:06.368 [2024-11-28 08:26:48.317310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.368 [2024-11-28 08:26:48.317344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.368 qpair failed and we were unable to recover it. 00:28:06.368 [2024-11-28 08:26:48.317599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.368 [2024-11-28 08:26:48.317637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420 00:28:06.368 qpair failed and we were unable to recover it. 00:28:06.368 [2024-11-28 08:26:48.317825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.368 [2024-11-28 08:26:48.317843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:06.368 qpair failed and we were unable to recover it. 00:28:06.368 [2024-11-28 08:26:48.317990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.368 [2024-11-28 08:26:48.318023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:06.368 qpair failed and we were unable to recover it. 00:28:06.368 [2024-11-28 08:26:48.318151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.368 [2024-11-28 08:26:48.318184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:06.368 qpair failed and we were unable to recover it. 
00:28:06.368 [2024-11-28 08:26:48.318328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.368 [2024-11-28 08:26:48.318362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:06.369 qpair failed and we were unable to recover it. 00:28:06.369 [2024-11-28 08:26:48.318482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.369 [2024-11-28 08:26:48.318498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:06.369 qpair failed and we were unable to recover it. 00:28:06.369 [2024-11-28 08:26:48.318639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.369 [2024-11-28 08:26:48.318655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:06.369 qpair failed and we were unable to recover it. 00:28:06.369 [2024-11-28 08:26:48.318758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.369 [2024-11-28 08:26:48.318776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:06.369 qpair failed and we were unable to recover it. 00:28:06.369 [2024-11-28 08:26:48.318939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.369 [2024-11-28 08:26:48.318960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:06.369 qpair failed and we were unable to recover it. 
00:28:06.369 [2024-11-28 08:26:48.319120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.369 [2024-11-28 08:26:48.319136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:06.369 qpair failed and we were unable to recover it. 00:28:06.369 [2024-11-28 08:26:48.319234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.369 [2024-11-28 08:26:48.319251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:06.369 qpair failed and we were unable to recover it. 00:28:06.369 [2024-11-28 08:26:48.319415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.369 [2024-11-28 08:26:48.319447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:06.369 qpair failed and we were unable to recover it. 00:28:06.369 [2024-11-28 08:26:48.319562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.369 [2024-11-28 08:26:48.319595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:06.369 qpair failed and we were unable to recover it. 00:28:06.369 [2024-11-28 08:26:48.319777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.369 [2024-11-28 08:26:48.319815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:06.369 qpair failed and we were unable to recover it. 
00:28:06.369 [2024-11-28 08:26:48.320002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.369 [2024-11-28 08:26:48.320036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:06.369 qpair failed and we were unable to recover it. 00:28:06.369 [2024-11-28 08:26:48.320232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.369 [2024-11-28 08:26:48.320265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:06.369 qpair failed and we were unable to recover it. 00:28:06.369 [2024-11-28 08:26:48.320395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.369 [2024-11-28 08:26:48.320427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:06.369 qpair failed and we were unable to recover it. 00:28:06.369 [2024-11-28 08:26:48.320546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.369 [2024-11-28 08:26:48.320579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:06.369 qpair failed and we were unable to recover it. 00:28:06.369 [2024-11-28 08:26:48.320707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.369 [2024-11-28 08:26:48.320742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:06.369 qpair failed and we were unable to recover it. 
00:28:06.369 [2024-11-28 08:26:48.320964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.369 [2024-11-28 08:26:48.320999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:06.369 qpair failed and we were unable to recover it. 00:28:06.369 [2024-11-28 08:26:48.321183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.369 [2024-11-28 08:26:48.321216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:06.369 qpair failed and we were unable to recover it. 00:28:06.369 [2024-11-28 08:26:48.321342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.369 [2024-11-28 08:26:48.321374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:06.369 qpair failed and we were unable to recover it. 00:28:06.369 [2024-11-28 08:26:48.321517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.369 [2024-11-28 08:26:48.321551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:06.369 qpair failed and we were unable to recover it. 00:28:06.369 [2024-11-28 08:26:48.321683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.369 [2024-11-28 08:26:48.321715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:06.369 qpair failed and we were unable to recover it. 
00:28:06.369 [2024-11-28 08:26:48.321828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.369 [2024-11-28 08:26:48.321862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:06.369 qpair failed and we were unable to recover it. 00:28:06.369 [2024-11-28 08:26:48.322002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.369 [2024-11-28 08:26:48.322035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:06.369 qpair failed and we were unable to recover it. 00:28:06.369 [2024-11-28 08:26:48.322305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.369 [2024-11-28 08:26:48.322339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:06.369 qpair failed and we were unable to recover it. 00:28:06.369 [2024-11-28 08:26:48.322598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.369 [2024-11-28 08:26:48.322614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:06.369 qpair failed and we were unable to recover it. 00:28:06.369 [2024-11-28 08:26:48.322792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.369 [2024-11-28 08:26:48.322808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:06.369 qpair failed and we were unable to recover it. 
00:28:06.369 [2024-11-28 08:26:48.322923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.369 [2024-11-28 08:26:48.322965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:06.369 qpair failed and we were unable to recover it. 00:28:06.369 [2024-11-28 08:26:48.323102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.369 [2024-11-28 08:26:48.323136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:06.369 qpair failed and we were unable to recover it. 00:28:06.369 [2024-11-28 08:26:48.323266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.369 [2024-11-28 08:26:48.323299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:06.369 qpair failed and we were unable to recover it. 00:28:06.369 [2024-11-28 08:26:48.323412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.369 [2024-11-28 08:26:48.323452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:06.369 qpair failed and we were unable to recover it. 00:28:06.369 [2024-11-28 08:26:48.323543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.369 [2024-11-28 08:26:48.323558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:06.369 qpair failed and we were unable to recover it. 
00:28:06.369 [2024-11-28 08:26:48.323699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.369 [2024-11-28 08:26:48.323733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:06.370 qpair failed and we were unable to recover it. 00:28:06.370 [2024-11-28 08:26:48.323980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.370 [2024-11-28 08:26:48.324014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:06.370 qpair failed and we were unable to recover it. 00:28:06.370 [2024-11-28 08:26:48.324198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.370 [2024-11-28 08:26:48.324230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:06.370 qpair failed and we were unable to recover it. 00:28:06.370 [2024-11-28 08:26:48.324482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.370 [2024-11-28 08:26:48.324498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:06.370 qpair failed and we were unable to recover it. 00:28:06.370 [2024-11-28 08:26:48.324604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.370 [2024-11-28 08:26:48.324620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:06.370 qpair failed and we were unable to recover it. 
00:28:06.370 [2024-11-28 08:26:48.324768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.370 [2024-11-28 08:26:48.324784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:06.370 qpair failed and we were unable to recover it. 00:28:06.370 [2024-11-28 08:26:48.325059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.370 [2024-11-28 08:26:48.325098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420 00:28:06.370 qpair failed and we were unable to recover it. 00:28:06.370 [2024-11-28 08:26:48.325257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.370 [2024-11-28 08:26:48.325275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420 00:28:06.370 qpair failed and we were unable to recover it. 00:28:06.370 [2024-11-28 08:26:48.325470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.370 [2024-11-28 08:26:48.325508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.370 qpair failed and we were unable to recover it. 00:28:06.370 [2024-11-28 08:26:48.325698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.370 [2024-11-28 08:26:48.325732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.370 qpair failed and we were unable to recover it. 
00:28:06.370 [2024-11-28 08:26:48.325913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.370 [2024-11-28 08:26:48.325946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.370 qpair failed and we were unable to recover it. 00:28:06.370 [2024-11-28 08:26:48.326152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.370 [2024-11-28 08:26:48.326186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.370 qpair failed and we were unable to recover it. 00:28:06.370 [2024-11-28 08:26:48.326384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.370 [2024-11-28 08:26:48.326416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.370 qpair failed and we were unable to recover it. 00:28:06.370 [2024-11-28 08:26:48.326560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.370 [2024-11-28 08:26:48.326592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.370 qpair failed and we were unable to recover it. 00:28:06.370 [2024-11-28 08:26:48.326719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.370 [2024-11-28 08:26:48.326751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.370 qpair failed and we were unable to recover it. 
00:28:06.370 [2024-11-28 08:26:48.326976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.370 [2024-11-28 08:26:48.327012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.370 qpair failed and we were unable to recover it. 00:28:06.370 [2024-11-28 08:26:48.327139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.370 [2024-11-28 08:26:48.327151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.370 qpair failed and we were unable to recover it. 00:28:06.370 [2024-11-28 08:26:48.327306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.370 [2024-11-28 08:26:48.327318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.370 qpair failed and we were unable to recover it. 00:28:06.370 [2024-11-28 08:26:48.327515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.370 [2024-11-28 08:26:48.327527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.370 qpair failed and we were unable to recover it. 00:28:06.370 [2024-11-28 08:26:48.327666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.370 [2024-11-28 08:26:48.327681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.370 qpair failed and we were unable to recover it. 
00:28:06.370 [2024-11-28 08:26:48.327761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.370 [2024-11-28 08:26:48.327773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.370 qpair failed and we were unable to recover it. 00:28:06.370 [2024-11-28 08:26:48.327853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.370 [2024-11-28 08:26:48.327865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.370 qpair failed and we were unable to recover it. 00:28:06.370 [2024-11-28 08:26:48.328003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.370 [2024-11-28 08:26:48.328016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.370 qpair failed and we were unable to recover it. 00:28:06.370 [2024-11-28 08:26:48.328277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.370 [2024-11-28 08:26:48.328310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.370 qpair failed and we were unable to recover it. 00:28:06.370 [2024-11-28 08:26:48.328577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.370 [2024-11-28 08:26:48.328610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.370 qpair failed and we were unable to recover it. 
00:28:06.370 [2024-11-28 08:26:48.328757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.370 [2024-11-28 08:26:48.328791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.370 qpair failed and we were unable to recover it. 00:28:06.370 [2024-11-28 08:26:48.328930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.370 [2024-11-28 08:26:48.328972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.370 qpair failed and we were unable to recover it. 00:28:06.370 [2024-11-28 08:26:48.329155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.370 [2024-11-28 08:26:48.329187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.370 qpair failed and we were unable to recover it. 00:28:06.370 [2024-11-28 08:26:48.329375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.370 [2024-11-28 08:26:48.329387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.370 qpair failed and we were unable to recover it. 00:28:06.370 [2024-11-28 08:26:48.329542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.371 [2024-11-28 08:26:48.329575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.371 qpair failed and we were unable to recover it. 
00:28:06.371 [2024-11-28 08:26:48.329765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.371 [2024-11-28 08:26:48.329799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.371 qpair failed and we were unable to recover it. 00:28:06.371 [2024-11-28 08:26:48.329994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.371 [2024-11-28 08:26:48.330029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.371 qpair failed and we were unable to recover it. 00:28:06.371 [2024-11-28 08:26:48.330212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.371 [2024-11-28 08:26:48.330248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.371 qpair failed and we were unable to recover it. 00:28:06.371 [2024-11-28 08:26:48.330378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.371 [2024-11-28 08:26:48.330411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.371 qpair failed and we were unable to recover it. 00:28:06.371 [2024-11-28 08:26:48.330619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.371 [2024-11-28 08:26:48.330653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.371 qpair failed and we were unable to recover it. 
00:28:06.371 [2024-11-28 08:26:48.330764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.371 [2024-11-28 08:26:48.330797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.371 qpair failed and we were unable to recover it. 00:28:06.371 [2024-11-28 08:26:48.331072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.371 [2024-11-28 08:26:48.331108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.371 qpair failed and we were unable to recover it. 00:28:06.371 [2024-11-28 08:26:48.331362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.371 [2024-11-28 08:26:48.331396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.371 qpair failed and we were unable to recover it. 00:28:06.371 [2024-11-28 08:26:48.331541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.371 [2024-11-28 08:26:48.331575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.371 qpair failed and we were unable to recover it. 00:28:06.371 [2024-11-28 08:26:48.331703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.371 [2024-11-28 08:26:48.331736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.371 qpair failed and we were unable to recover it. 
00:28:06.371 [2024-11-28 08:26:48.331920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.371 [2024-11-28 08:26:48.331965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.371 qpair failed and we were unable to recover it. 00:28:06.371 [2024-11-28 08:26:48.332155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.371 [2024-11-28 08:26:48.332188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.371 qpair failed and we were unable to recover it. 00:28:06.371 [2024-11-28 08:26:48.332367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.371 [2024-11-28 08:26:48.332401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.371 qpair failed and we were unable to recover it. 00:28:06.371 [2024-11-28 08:26:48.332518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.371 [2024-11-28 08:26:48.332550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.371 qpair failed and we were unable to recover it. 00:28:06.371 [2024-11-28 08:26:48.332734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.371 [2024-11-28 08:26:48.332766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.371 qpair failed and we were unable to recover it. 
00:28:06.371 [2024-11-28 08:26:48.332969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.371 [2024-11-28 08:26:48.333004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.371 qpair failed and we were unable to recover it. 00:28:06.371 [2024-11-28 08:26:48.333210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.371 [2024-11-28 08:26:48.333244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.371 qpair failed and we were unable to recover it. 00:28:06.371 [2024-11-28 08:26:48.333416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.371 [2024-11-28 08:26:48.333429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.371 qpair failed and we were unable to recover it. 00:28:06.371 [2024-11-28 08:26:48.333596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.371 [2024-11-28 08:26:48.333629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.371 qpair failed and we were unable to recover it. 00:28:06.371 [2024-11-28 08:26:48.333758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.371 [2024-11-28 08:26:48.333790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.371 qpair failed and we were unable to recover it. 
00:28:06.371 [2024-11-28 08:26:48.333982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.371 [2024-11-28 08:26:48.334017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.371 qpair failed and we were unable to recover it. 00:28:06.371 [2024-11-28 08:26:48.334274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.371 [2024-11-28 08:26:48.334307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.371 qpair failed and we were unable to recover it. 00:28:06.371 [2024-11-28 08:26:48.334408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.371 [2024-11-28 08:26:48.334422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.371 qpair failed and we were unable to recover it. 00:28:06.371 [2024-11-28 08:26:48.334557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.371 [2024-11-28 08:26:48.334570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.371 qpair failed and we were unable to recover it. 00:28:06.371 [2024-11-28 08:26:48.334649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.371 [2024-11-28 08:26:48.334661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.371 qpair failed and we were unable to recover it. 
00:28:06.371 [2024-11-28 08:26:48.334736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.371 [2024-11-28 08:26:48.334747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.371 qpair failed and we were unable to recover it. 00:28:06.371 [2024-11-28 08:26:48.334904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.371 [2024-11-28 08:26:48.334939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.371 qpair failed and we were unable to recover it. 00:28:06.371 [2024-11-28 08:26:48.335091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.371 [2024-11-28 08:26:48.335124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.371 qpair failed and we were unable to recover it. 00:28:06.371 [2024-11-28 08:26:48.335262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.371 [2024-11-28 08:26:48.335294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.371 qpair failed and we were unable to recover it. 00:28:06.372 [2024-11-28 08:26:48.335514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.372 [2024-11-28 08:26:48.335530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.372 qpair failed and we were unable to recover it. 
00:28:06.372 [2024-11-28 08:26:48.335751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.372 [2024-11-28 08:26:48.335765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.372 qpair failed and we were unable to recover it. 00:28:06.372 [2024-11-28 08:26:48.335831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.372 [2024-11-28 08:26:48.335843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.372 qpair failed and we were unable to recover it. 00:28:06.372 [2024-11-28 08:26:48.335913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.372 [2024-11-28 08:26:48.335925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.372 qpair failed and we were unable to recover it. 00:28:06.372 [2024-11-28 08:26:48.336019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.372 [2024-11-28 08:26:48.336031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.372 qpair failed and we were unable to recover it. 00:28:06.372 [2024-11-28 08:26:48.336115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.372 [2024-11-28 08:26:48.336126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.372 qpair failed and we were unable to recover it. 
00:28:06.372 [2024-11-28 08:26:48.336203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.372 [2024-11-28 08:26:48.336215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.372 qpair failed and we were unable to recover it. 00:28:06.372 [2024-11-28 08:26:48.336293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.372 [2024-11-28 08:26:48.336305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.372 qpair failed and we were unable to recover it. 00:28:06.372 [2024-11-28 08:26:48.336445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.372 [2024-11-28 08:26:48.336456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.372 qpair failed and we were unable to recover it. 00:28:06.372 [2024-11-28 08:26:48.336670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.372 [2024-11-28 08:26:48.336683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.372 qpair failed and we were unable to recover it. 00:28:06.372 [2024-11-28 08:26:48.336748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.372 [2024-11-28 08:26:48.336759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.372 qpair failed and we were unable to recover it. 
00:28:06.372 [2024-11-28 08:26:48.336842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.372 [2024-11-28 08:26:48.336853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.372 qpair failed and we were unable to recover it. 00:28:06.372 [2024-11-28 08:26:48.336985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.372 [2024-11-28 08:26:48.336997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.372 qpair failed and we were unable to recover it. 00:28:06.372 [2024-11-28 08:26:48.337093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.372 [2024-11-28 08:26:48.337104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.372 qpair failed and we were unable to recover it. 00:28:06.372 [2024-11-28 08:26:48.337190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.372 [2024-11-28 08:26:48.337202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.372 qpair failed and we were unable to recover it. 00:28:06.372 [2024-11-28 08:26:48.337293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.372 [2024-11-28 08:26:48.337333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.372 qpair failed and we were unable to recover it. 
00:28:06.372 [2024-11-28 08:26:48.337535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.372 [2024-11-28 08:26:48.337569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.372 qpair failed and we were unable to recover it. 00:28:06.372 [2024-11-28 08:26:48.337766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.372 [2024-11-28 08:26:48.337802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.372 qpair failed and we were unable to recover it. 00:28:06.372 [2024-11-28 08:26:48.337928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.372 [2024-11-28 08:26:48.337975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.372 qpair failed and we were unable to recover it. 00:28:06.372 [2024-11-28 08:26:48.338159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.372 [2024-11-28 08:26:48.338193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.372 qpair failed and we were unable to recover it. 00:28:06.372 [2024-11-28 08:26:48.338390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.372 [2024-11-28 08:26:48.338423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.372 qpair failed and we were unable to recover it. 
00:28:06.372 [2024-11-28 08:26:48.338544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.372 [2024-11-28 08:26:48.338585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.372 qpair failed and we were unable to recover it. 00:28:06.372 [2024-11-28 08:26:48.338715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.372 [2024-11-28 08:26:48.338750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.372 qpair failed and we were unable to recover it. 00:28:06.372 [2024-11-28 08:26:48.338923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.372 [2024-11-28 08:26:48.338978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.372 qpair failed and we were unable to recover it. 00:28:06.372 [2024-11-28 08:26:48.339234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.372 [2024-11-28 08:26:48.339268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.372 qpair failed and we were unable to recover it. 00:28:06.372 [2024-11-28 08:26:48.339407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.372 [2024-11-28 08:26:48.339440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.372 qpair failed and we were unable to recover it. 
00:28:06.372 [2024-11-28 08:26:48.339653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.372 [2024-11-28 08:26:48.339688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.372 qpair failed and we were unable to recover it. 00:28:06.372 [2024-11-28 08:26:48.339895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.372 [2024-11-28 08:26:48.339932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.372 qpair failed and we were unable to recover it. 00:28:06.372 [2024-11-28 08:26:48.340170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.372 [2024-11-28 08:26:48.340204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.373 qpair failed and we were unable to recover it. 00:28:06.373 [2024-11-28 08:26:48.340412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.373 [2024-11-28 08:26:48.340448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.373 qpair failed and we were unable to recover it. 00:28:06.373 [2024-11-28 08:26:48.340571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.373 [2024-11-28 08:26:48.340605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.373 qpair failed and we were unable to recover it. 
00:28:06.373 [2024-11-28 08:26:48.340781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.373 [2024-11-28 08:26:48.340793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.373 qpair failed and we were unable to recover it. 00:28:06.373 [2024-11-28 08:26:48.340862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.373 [2024-11-28 08:26:48.340873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.373 qpair failed and we were unable to recover it. 00:28:06.373 [2024-11-28 08:26:48.340958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.373 [2024-11-28 08:26:48.340969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.373 qpair failed and we were unable to recover it. 00:28:06.373 [2024-11-28 08:26:48.341065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.373 [2024-11-28 08:26:48.341078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.373 qpair failed and we were unable to recover it. 00:28:06.373 [2024-11-28 08:26:48.341247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.373 [2024-11-28 08:26:48.341259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.373 qpair failed and we were unable to recover it. 
00:28:06.373 [2024-11-28 08:26:48.341401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.373 [2024-11-28 08:26:48.341439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.373 qpair failed and we were unable to recover it. 00:28:06.373 [2024-11-28 08:26:48.341568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.373 [2024-11-28 08:26:48.341602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.373 qpair failed and we were unable to recover it. 00:28:06.373 [2024-11-28 08:26:48.341838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.373 [2024-11-28 08:26:48.341875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.373 qpair failed and we were unable to recover it. 00:28:06.373 [2024-11-28 08:26:48.342061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.373 [2024-11-28 08:26:48.342095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.373 qpair failed and we were unable to recover it. 00:28:06.373 [2024-11-28 08:26:48.342312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.373 [2024-11-28 08:26:48.342351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.373 qpair failed and we were unable to recover it. 
00:28:06.373 [2024-11-28 08:26:48.342533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.373 [2024-11-28 08:26:48.342545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.373 qpair failed and we were unable to recover it. 00:28:06.373 [2024-11-28 08:26:48.342720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.373 [2024-11-28 08:26:48.342733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.373 qpair failed and we were unable to recover it. 00:28:06.373 [2024-11-28 08:26:48.342821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.373 [2024-11-28 08:26:48.342851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.373 qpair failed and we were unable to recover it. 00:28:06.373 [2024-11-28 08:26:48.343035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.373 [2024-11-28 08:26:48.343076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.373 qpair failed and we were unable to recover it. 00:28:06.373 [2024-11-28 08:26:48.343276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.373 [2024-11-28 08:26:48.343310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.373 qpair failed and we were unable to recover it. 
00:28:06.373 [2024-11-28 08:26:48.343476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.373 [2024-11-28 08:26:48.343488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.373 qpair failed and we were unable to recover it. 00:28:06.373 [2024-11-28 08:26:48.343733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.373 [2024-11-28 08:26:48.343767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.373 qpair failed and we were unable to recover it. 00:28:06.373 [2024-11-28 08:26:48.343907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.373 [2024-11-28 08:26:48.343944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.373 qpair failed and we were unable to recover it. 00:28:06.373 [2024-11-28 08:26:48.344209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.373 [2024-11-28 08:26:48.344245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.373 qpair failed and we were unable to recover it. 00:28:06.373 [2024-11-28 08:26:48.344371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.373 [2024-11-28 08:26:48.344404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.373 qpair failed and we were unable to recover it. 
00:28:06.373 [2024-11-28 08:26:48.344593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.373 [2024-11-28 08:26:48.344634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.373 qpair failed and we were unable to recover it. 00:28:06.373 [2024-11-28 08:26:48.344787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.373 [2024-11-28 08:26:48.344799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.373 qpair failed and we were unable to recover it. 00:28:06.373 [2024-11-28 08:26:48.344975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.373 [2024-11-28 08:26:48.345011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.373 qpair failed and we were unable to recover it. 00:28:06.373 [2024-11-28 08:26:48.345149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.373 [2024-11-28 08:26:48.345185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.373 qpair failed and we were unable to recover it. 00:28:06.373 [2024-11-28 08:26:48.345367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.373 [2024-11-28 08:26:48.345401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.373 qpair failed and we were unable to recover it. 
00:28:06.373 [2024-11-28 08:26:48.345595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.373 [2024-11-28 08:26:48.345608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.373 qpair failed and we were unable to recover it. 00:28:06.373 [2024-11-28 08:26:48.345683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.373 [2024-11-28 08:26:48.345694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.373 qpair failed and we were unable to recover it. 00:28:06.374 [2024-11-28 08:26:48.345979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.374 [2024-11-28 08:26:48.346014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.374 qpair failed and we were unable to recover it. 00:28:06.374 [2024-11-28 08:26:48.346142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.374 [2024-11-28 08:26:48.346174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.374 qpair failed and we were unable to recover it. 00:28:06.374 [2024-11-28 08:26:48.346380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.374 [2024-11-28 08:26:48.346414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.374 qpair failed and we were unable to recover it. 
00:28:06.374 [2024-11-28 08:26:48.346610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.374 [2024-11-28 08:26:48.346656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.374 qpair failed and we were unable to recover it. 00:28:06.374 [2024-11-28 08:26:48.346887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.374 [2024-11-28 08:26:48.346920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.374 qpair failed and we were unable to recover it. 00:28:06.374 [2024-11-28 08:26:48.347094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.374 [2024-11-28 08:26:48.347131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.374 qpair failed and we were unable to recover it. 00:28:06.374 [2024-11-28 08:26:48.347279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.374 [2024-11-28 08:26:48.347313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.374 qpair failed and we were unable to recover it. 00:28:06.374 [2024-11-28 08:26:48.347507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.374 [2024-11-28 08:26:48.347539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.374 qpair failed and we were unable to recover it. 
00:28:06.374 [2024-11-28 08:26:48.347722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.374 [2024-11-28 08:26:48.347734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.374 qpair failed and we were unable to recover it. 00:28:06.374 [2024-11-28 08:26:48.347871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.374 [2024-11-28 08:26:48.347884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.374 qpair failed and we were unable to recover it. 00:28:06.374 [2024-11-28 08:26:48.348042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.374 [2024-11-28 08:26:48.348055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.374 qpair failed and we were unable to recover it. 00:28:06.374 [2024-11-28 08:26:48.348160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.374 [2024-11-28 08:26:48.348196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.374 qpair failed and we were unable to recover it. 00:28:06.374 [2024-11-28 08:26:48.348341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.374 [2024-11-28 08:26:48.348377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.374 qpair failed and we were unable to recover it. 
00:28:06.374 [2024-11-28 08:26:48.348505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.374 [2024-11-28 08:26:48.348546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.374 qpair failed and we were unable to recover it. 00:28:06.374 [2024-11-28 08:26:48.348739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.374 [2024-11-28 08:26:48.348751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.374 qpair failed and we were unable to recover it. 00:28:06.374 [2024-11-28 08:26:48.348883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.374 [2024-11-28 08:26:48.348896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.374 qpair failed and we were unable to recover it. 00:28:06.374 [2024-11-28 08:26:48.349031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.374 [2024-11-28 08:26:48.349044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.374 qpair failed and we were unable to recover it. 00:28:06.374 [2024-11-28 08:26:48.349231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.374 [2024-11-28 08:26:48.349266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.374 qpair failed and we were unable to recover it. 
00:28:06.374 [2024-11-28 08:26:48.349567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.374 [2024-11-28 08:26:48.349604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.374 qpair failed and we were unable to recover it. 00:28:06.374 [2024-11-28 08:26:48.349728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.374 [2024-11-28 08:26:48.349766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.374 qpair failed and we were unable to recover it. 00:28:06.374 [2024-11-28 08:26:48.349977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.374 [2024-11-28 08:26:48.350013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.374 qpair failed and we were unable to recover it. 00:28:06.374 [2024-11-28 08:26:48.350211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.374 [2024-11-28 08:26:48.350244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.374 qpair failed and we were unable to recover it. 00:28:06.374 [2024-11-28 08:26:48.350374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.374 [2024-11-28 08:26:48.350408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.374 qpair failed and we were unable to recover it. 
00:28:06.374 [2024-11-28 08:26:48.350609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.374 [2024-11-28 08:26:48.350645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.374 qpair failed and we were unable to recover it. 00:28:06.374 [2024-11-28 08:26:48.350859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.375 [2024-11-28 08:26:48.350895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.375 qpair failed and we were unable to recover it. 00:28:06.375 [2024-11-28 08:26:48.351147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.375 [2024-11-28 08:26:48.351186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.375 qpair failed and we were unable to recover it. 00:28:06.375 [2024-11-28 08:26:48.351373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.375 [2024-11-28 08:26:48.351385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.375 qpair failed and we were unable to recover it. 00:28:06.375 [2024-11-28 08:26:48.351475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.375 [2024-11-28 08:26:48.351486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.375 qpair failed and we were unable to recover it. 
00:28:06.375 [2024-11-28 08:26:48.351618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.375 [2024-11-28 08:26:48.351650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.375 qpair failed and we were unable to recover it. 00:28:06.375 [2024-11-28 08:26:48.351774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.375 [2024-11-28 08:26:48.351807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.375 qpair failed and we were unable to recover it. 00:28:06.375 [2024-11-28 08:26:48.352003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.375 [2024-11-28 08:26:48.352040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.375 qpair failed and we were unable to recover it. 00:28:06.375 [2024-11-28 08:26:48.352164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.375 [2024-11-28 08:26:48.352204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.375 qpair failed and we were unable to recover it. 00:28:06.375 [2024-11-28 08:26:48.352388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.375 [2024-11-28 08:26:48.352420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.375 qpair failed and we were unable to recover it. 
00:28:06.375 [2024-11-28 08:26:48.352539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.375 [2024-11-28 08:26:48.352573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.375 qpair failed and we were unable to recover it. 00:28:06.375 [2024-11-28 08:26:48.352710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.375 [2024-11-28 08:26:48.352742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.375 qpair failed and we were unable to recover it. 00:28:06.375 [2024-11-28 08:26:48.352937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.375 [2024-11-28 08:26:48.352981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.375 qpair failed and we were unable to recover it. 00:28:06.375 [2024-11-28 08:26:48.353123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.375 [2024-11-28 08:26:48.353156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.375 qpair failed and we were unable to recover it. 00:28:06.375 [2024-11-28 08:26:48.353401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.375 [2024-11-28 08:26:48.353436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.375 qpair failed and we were unable to recover it. 
00:28:06.375 [2024-11-28 08:26:48.353691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.375 [2024-11-28 08:26:48.353703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.375 qpair failed and we were unable to recover it. 00:28:06.375 [2024-11-28 08:26:48.353784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.375 [2024-11-28 08:26:48.353795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.375 qpair failed and we were unable to recover it. 00:28:06.375 [2024-11-28 08:26:48.353936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.375 [2024-11-28 08:26:48.353953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.375 qpair failed and we were unable to recover it. 00:28:06.375 [2024-11-28 08:26:48.354021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.375 [2024-11-28 08:26:48.354032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.375 qpair failed and we were unable to recover it. 00:28:06.375 [2024-11-28 08:26:48.354180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.375 [2024-11-28 08:26:48.354193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.375 qpair failed and we were unable to recover it. 
00:28:06.375 [2024-11-28 08:26:48.354345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.375 [2024-11-28 08:26:48.354379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.375 qpair failed and we were unable to recover it. 00:28:06.375 [2024-11-28 08:26:48.354506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.375 [2024-11-28 08:26:48.354538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.375 qpair failed and we were unable to recover it. 00:28:06.375 [2024-11-28 08:26:48.354730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.375 [2024-11-28 08:26:48.354762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.375 qpair failed and we were unable to recover it. 00:28:06.375 [2024-11-28 08:26:48.354884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.375 [2024-11-28 08:26:48.354918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.375 qpair failed and we were unable to recover it. 00:28:06.375 [2024-11-28 08:26:48.355053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.375 [2024-11-28 08:26:48.355087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.375 qpair failed and we were unable to recover it. 
00:28:06.375 [2024-11-28 08:26:48.355373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.375 [2024-11-28 08:26:48.355407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.375 qpair failed and we were unable to recover it. 00:28:06.375 [2024-11-28 08:26:48.355631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.375 [2024-11-28 08:26:48.355670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.375 qpair failed and we were unable to recover it. 00:28:06.375 [2024-11-28 08:26:48.355870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.375 [2024-11-28 08:26:48.355903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.375 qpair failed and we were unable to recover it. 00:28:06.375 [2024-11-28 08:26:48.356039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.375 [2024-11-28 08:26:48.356074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.375 qpair failed and we were unable to recover it. 00:28:06.375 [2024-11-28 08:26:48.356267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.375 [2024-11-28 08:26:48.356299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.375 qpair failed and we were unable to recover it. 
00:28:06.375 [2024-11-28 08:26:48.356425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.375 [2024-11-28 08:26:48.356459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.375 qpair failed and we were unable to recover it. 00:28:06.375 [2024-11-28 08:26:48.356637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.376 [2024-11-28 08:26:48.356671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.376 qpair failed and we were unable to recover it. 00:28:06.376 [2024-11-28 08:26:48.356848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.376 [2024-11-28 08:26:48.356881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.376 qpair failed and we were unable to recover it. 00:28:06.376 [2024-11-28 08:26:48.357127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.376 [2024-11-28 08:26:48.357161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.376 qpair failed and we were unable to recover it. 00:28:06.376 [2024-11-28 08:26:48.357288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.376 [2024-11-28 08:26:48.357299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.376 qpair failed and we were unable to recover it. 
00:28:06.376 [2024-11-28 08:26:48.357469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.376 [2024-11-28 08:26:48.357481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.376 qpair failed and we were unable to recover it. 00:28:06.376 [2024-11-28 08:26:48.357557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.376 [2024-11-28 08:26:48.357568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.376 qpair failed and we were unable to recover it. 00:28:06.376 [2024-11-28 08:26:48.357644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.376 [2024-11-28 08:26:48.357655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.376 qpair failed and we were unable to recover it. 00:28:06.376 [2024-11-28 08:26:48.357785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.376 [2024-11-28 08:26:48.357795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.376 qpair failed and we were unable to recover it. 00:28:06.376 [2024-11-28 08:26:48.357939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.376 [2024-11-28 08:26:48.357983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.376 qpair failed and we were unable to recover it. 
00:28:06.376 [2024-11-28 08:26:48.358110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.376 [2024-11-28 08:26:48.358143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.376 qpair failed and we were unable to recover it. 00:28:06.376 [2024-11-28 08:26:48.358274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.376 [2024-11-28 08:26:48.358308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.376 qpair failed and we were unable to recover it. 00:28:06.376 [2024-11-28 08:26:48.358443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.376 [2024-11-28 08:26:48.358476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.376 qpair failed and we were unable to recover it. 00:28:06.376 [2024-11-28 08:26:48.358590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.376 [2024-11-28 08:26:48.358623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.376 qpair failed and we were unable to recover it. 00:28:06.376 [2024-11-28 08:26:48.358755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.376 [2024-11-28 08:26:48.358789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.376 qpair failed and we were unable to recover it. 
00:28:06.376 [2024-11-28 08:26:48.358911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.376 [2024-11-28 08:26:48.358943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.376 qpair failed and we were unable to recover it. 00:28:06.376 [2024-11-28 08:26:48.359092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.376 [2024-11-28 08:26:48.359104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.376 qpair failed and we were unable to recover it. 00:28:06.376 [2024-11-28 08:26:48.359277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.376 [2024-11-28 08:26:48.359308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.376 qpair failed and we were unable to recover it. 00:28:06.376 [2024-11-28 08:26:48.359488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.376 [2024-11-28 08:26:48.359520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.376 qpair failed and we were unable to recover it. 00:28:06.376 [2024-11-28 08:26:48.359722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.376 [2024-11-28 08:26:48.359755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.376 qpair failed and we were unable to recover it. 
00:28:06.376 [2024-11-28 08:26:48.359983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.376 [2024-11-28 08:26:48.360019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.376 qpair failed and we were unable to recover it. 00:28:06.376 [2024-11-28 08:26:48.360269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.376 [2024-11-28 08:26:48.360302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.376 qpair failed and we were unable to recover it. 00:28:06.376 [2024-11-28 08:26:48.360520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.376 [2024-11-28 08:26:48.360554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.376 qpair failed and we were unable to recover it. 00:28:06.376 [2024-11-28 08:26:48.360743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.376 [2024-11-28 08:26:48.360756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.376 qpair failed and we were unable to recover it. 00:28:06.376 [2024-11-28 08:26:48.360936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.376 [2024-11-28 08:26:48.360982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.376 qpair failed and we were unable to recover it. 
00:28:06.376 [2024-11-28 08:26:48.361101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.376 [2024-11-28 08:26:48.361134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.376 qpair failed and we were unable to recover it. 00:28:06.376 [2024-11-28 08:26:48.361327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.376 [2024-11-28 08:26:48.361361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.376 qpair failed and we were unable to recover it. 00:28:06.376 [2024-11-28 08:26:48.361552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.376 [2024-11-28 08:26:48.361586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.376 qpair failed and we were unable to recover it. 00:28:06.376 [2024-11-28 08:26:48.361728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.376 [2024-11-28 08:26:48.361762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.376 qpair failed and we were unable to recover it. 00:28:06.376 [2024-11-28 08:26:48.361903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.377 [2024-11-28 08:26:48.361937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.377 qpair failed and we were unable to recover it. 
00:28:06.377 [2024-11-28 08:26:48.362069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.377 [2024-11-28 08:26:48.362102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.377 qpair failed and we were unable to recover it. 00:28:06.377 [2024-11-28 08:26:48.362294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.377 [2024-11-28 08:26:48.362327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.377 qpair failed and we were unable to recover it. 00:28:06.377 [2024-11-28 08:26:48.362451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.377 [2024-11-28 08:26:48.362485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.377 qpair failed and we were unable to recover it. 00:28:06.377 [2024-11-28 08:26:48.362658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.377 [2024-11-28 08:26:48.362670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.377 qpair failed and we were unable to recover it. 00:28:06.377 [2024-11-28 08:26:48.362754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.377 [2024-11-28 08:26:48.362765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.377 qpair failed and we were unable to recover it. 
00:28:06.377 [2024-11-28 08:26:48.362846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.377 [2024-11-28 08:26:48.362856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.377 qpair failed and we were unable to recover it. 00:28:06.377 [2024-11-28 08:26:48.362987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.377 [2024-11-28 08:26:48.363002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.377 qpair failed and we were unable to recover it. 00:28:06.377 [2024-11-28 08:26:48.363186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.377 [2024-11-28 08:26:48.363219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.377 qpair failed and we were unable to recover it. 00:28:06.377 [2024-11-28 08:26:48.363331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.377 [2024-11-28 08:26:48.363364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.377 qpair failed and we were unable to recover it. 00:28:06.377 [2024-11-28 08:26:48.363588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.377 [2024-11-28 08:26:48.363620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.377 qpair failed and we were unable to recover it. 
00:28:06.377 [2024-11-28 08:26:48.363743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.377 [2024-11-28 08:26:48.363755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.377 qpair failed and we were unable to recover it. 00:28:06.377 [2024-11-28 08:26:48.363828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.377 [2024-11-28 08:26:48.363839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.377 qpair failed and we were unable to recover it. 00:28:06.377 [2024-11-28 08:26:48.363921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.377 [2024-11-28 08:26:48.363932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.377 qpair failed and we were unable to recover it. 00:28:06.377 [2024-11-28 08:26:48.364092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.377 [2024-11-28 08:26:48.364104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.377 qpair failed and we were unable to recover it. 00:28:06.377 [2024-11-28 08:26:48.364220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.377 [2024-11-28 08:26:48.364254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.377 qpair failed and we were unable to recover it. 
00:28:06.377 [2024-11-28 08:26:48.364502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.377 [2024-11-28 08:26:48.364536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.377 qpair failed and we were unable to recover it. 00:28:06.377 [2024-11-28 08:26:48.364814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.377 [2024-11-28 08:26:48.364848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.377 qpair failed and we were unable to recover it. 00:28:06.377 [2024-11-28 08:26:48.364994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.377 [2024-11-28 08:26:48.365028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.377 qpair failed and we were unable to recover it. 00:28:06.377 [2024-11-28 08:26:48.365221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.377 [2024-11-28 08:26:48.365255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.377 qpair failed and we were unable to recover it. 00:28:06.377 [2024-11-28 08:26:48.365512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.377 [2024-11-28 08:26:48.365525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.377 qpair failed and we were unable to recover it. 
00:28:06.377 [2024-11-28 08:26:48.365610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.377 [2024-11-28 08:26:48.365621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.377 qpair failed and we were unable to recover it. 00:28:06.377 [2024-11-28 08:26:48.365757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.377 [2024-11-28 08:26:48.365770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.377 qpair failed and we were unable to recover it. 00:28:06.377 [2024-11-28 08:26:48.365860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.377 [2024-11-28 08:26:48.365871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.377 qpair failed and we were unable to recover it. 00:28:06.377 [2024-11-28 08:26:48.366077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.377 [2024-11-28 08:26:48.366090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.377 qpair failed and we were unable to recover it. 00:28:06.377 [2024-11-28 08:26:48.366177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.377 [2024-11-28 08:26:48.366188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.377 qpair failed and we were unable to recover it. 
00:28:06.377 [2024-11-28 08:26:48.366391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.377 [2024-11-28 08:26:48.366404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.377 qpair failed and we were unable to recover it. 00:28:06.377 [2024-11-28 08:26:48.366493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.377 [2024-11-28 08:26:48.366504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.377 qpair failed and we were unable to recover it. 00:28:06.377 [2024-11-28 08:26:48.366613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.377 [2024-11-28 08:26:48.366645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.377 qpair failed and we were unable to recover it. 00:28:06.377 [2024-11-28 08:26:48.366762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.378 [2024-11-28 08:26:48.366795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.378 qpair failed and we were unable to recover it. 00:28:06.378 [2024-11-28 08:26:48.366917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.378 [2024-11-28 08:26:48.366981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.378 qpair failed and we were unable to recover it. 
00:28:06.378 [2024-11-28 08:26:48.367258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.378 [2024-11-28 08:26:48.367270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.378 qpair failed and we were unable to recover it.
00:28:06.382 [2024-11-28 08:26:48.390150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.382 [2024-11-28 08:26:48.390182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.382 qpair failed and we were unable to recover it. 00:28:06.382 [2024-11-28 08:26:48.390312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.382 [2024-11-28 08:26:48.390345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.382 qpair failed and we were unable to recover it. 00:28:06.382 [2024-11-28 08:26:48.390534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.382 [2024-11-28 08:26:48.390547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.382 qpair failed and we were unable to recover it. 00:28:06.382 [2024-11-28 08:26:48.390628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.382 [2024-11-28 08:26:48.390653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.382 qpair failed and we were unable to recover it. 00:28:06.382 [2024-11-28 08:26:48.390859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.382 [2024-11-28 08:26:48.390893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.382 qpair failed and we were unable to recover it. 
00:28:06.382 [2024-11-28 08:26:48.391093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.382 [2024-11-28 08:26:48.391128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.382 qpair failed and we were unable to recover it. 00:28:06.382 [2024-11-28 08:26:48.391268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.382 [2024-11-28 08:26:48.391301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.382 qpair failed and we were unable to recover it. 00:28:06.382 [2024-11-28 08:26:48.391494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.382 [2024-11-28 08:26:48.391528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.382 qpair failed and we were unable to recover it. 00:28:06.382 [2024-11-28 08:26:48.391643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.382 [2024-11-28 08:26:48.391655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.382 qpair failed and we were unable to recover it. 00:28:06.382 [2024-11-28 08:26:48.391806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.382 [2024-11-28 08:26:48.391818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.382 qpair failed and we were unable to recover it. 
00:28:06.382 [2024-11-28 08:26:48.391899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.382 [2024-11-28 08:26:48.391932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.382 qpair failed and we were unable to recover it. 00:28:06.382 [2024-11-28 08:26:48.392065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.382 [2024-11-28 08:26:48.392099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.382 qpair failed and we were unable to recover it. 00:28:06.382 [2024-11-28 08:26:48.392343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.382 [2024-11-28 08:26:48.392377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.382 qpair failed and we were unable to recover it. 00:28:06.382 [2024-11-28 08:26:48.392641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.382 [2024-11-28 08:26:48.392674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.382 qpair failed and we were unable to recover it. 00:28:06.382 [2024-11-28 08:26:48.392834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.382 [2024-11-28 08:26:48.392868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.382 qpair failed and we were unable to recover it. 
00:28:06.382 [2024-11-28 08:26:48.392991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.382 [2024-11-28 08:26:48.393026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.382 qpair failed and we were unable to recover it. 00:28:06.382 [2024-11-28 08:26:48.393295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.382 [2024-11-28 08:26:48.393328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.382 qpair failed and we were unable to recover it. 00:28:06.382 [2024-11-28 08:26:48.393512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.382 [2024-11-28 08:26:48.393525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.382 qpair failed and we were unable to recover it. 00:28:06.382 [2024-11-28 08:26:48.393601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.382 [2024-11-28 08:26:48.393613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.382 qpair failed and we were unable to recover it. 00:28:06.382 [2024-11-28 08:26:48.393759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.382 [2024-11-28 08:26:48.393771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.382 qpair failed and we were unable to recover it. 
00:28:06.382 [2024-11-28 08:26:48.393854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.382 [2024-11-28 08:26:48.393866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.382 qpair failed and we were unable to recover it. 00:28:06.382 [2024-11-28 08:26:48.394020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.382 [2024-11-28 08:26:48.394035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.382 qpair failed and we were unable to recover it. 00:28:06.382 [2024-11-28 08:26:48.394235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.382 [2024-11-28 08:26:48.394247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.382 qpair failed and we were unable to recover it. 00:28:06.382 [2024-11-28 08:26:48.394324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.382 [2024-11-28 08:26:48.394336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.382 qpair failed and we were unable to recover it. 00:28:06.382 [2024-11-28 08:26:48.394426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.382 [2024-11-28 08:26:48.394437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.382 qpair failed and we were unable to recover it. 
00:28:06.382 [2024-11-28 08:26:48.394500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.382 [2024-11-28 08:26:48.394511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.382 qpair failed and we were unable to recover it. 00:28:06.383 [2024-11-28 08:26:48.394653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.383 [2024-11-28 08:26:48.394687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.383 qpair failed and we were unable to recover it. 00:28:06.383 [2024-11-28 08:26:48.394862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.383 [2024-11-28 08:26:48.394895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.383 qpair failed and we were unable to recover it. 00:28:06.383 [2024-11-28 08:26:48.395030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.383 [2024-11-28 08:26:48.395066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.383 qpair failed and we were unable to recover it. 00:28:06.383 [2024-11-28 08:26:48.395257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.383 [2024-11-28 08:26:48.395290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.383 qpair failed and we were unable to recover it. 
00:28:06.383 [2024-11-28 08:26:48.395543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.383 [2024-11-28 08:26:48.395577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.383 qpair failed and we were unable to recover it. 00:28:06.383 [2024-11-28 08:26:48.395791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.383 [2024-11-28 08:26:48.395824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.383 qpair failed and we were unable to recover it. 00:28:06.383 [2024-11-28 08:26:48.395957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.383 [2024-11-28 08:26:48.395992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.383 qpair failed and we were unable to recover it. 00:28:06.383 [2024-11-28 08:26:48.396172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.383 [2024-11-28 08:26:48.396205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.383 qpair failed and we were unable to recover it. 00:28:06.383 [2024-11-28 08:26:48.396387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.383 [2024-11-28 08:26:48.396420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.383 qpair failed and we were unable to recover it. 
00:28:06.383 [2024-11-28 08:26:48.396562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.383 [2024-11-28 08:26:48.396574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.383 qpair failed and we were unable to recover it. 00:28:06.383 [2024-11-28 08:26:48.396716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.383 [2024-11-28 08:26:48.396728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.383 qpair failed and we were unable to recover it. 00:28:06.383 [2024-11-28 08:26:48.396935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.383 [2024-11-28 08:26:48.396980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.383 qpair failed and we were unable to recover it. 00:28:06.383 [2024-11-28 08:26:48.397204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.383 [2024-11-28 08:26:48.397237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.383 qpair failed and we were unable to recover it. 00:28:06.383 [2024-11-28 08:26:48.397440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.383 [2024-11-28 08:26:48.397473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.383 qpair failed and we were unable to recover it. 
00:28:06.383 [2024-11-28 08:26:48.397608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.383 [2024-11-28 08:26:48.397620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.383 qpair failed and we were unable to recover it. 00:28:06.383 [2024-11-28 08:26:48.397760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.383 [2024-11-28 08:26:48.397772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.383 qpair failed and we were unable to recover it. 00:28:06.383 [2024-11-28 08:26:48.397975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.383 [2024-11-28 08:26:48.397988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.383 qpair failed and we were unable to recover it. 00:28:06.383 [2024-11-28 08:26:48.398123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.383 [2024-11-28 08:26:48.398135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.383 qpair failed and we were unable to recover it. 00:28:06.383 [2024-11-28 08:26:48.398217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.383 [2024-11-28 08:26:48.398229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.383 qpair failed and we were unable to recover it. 
00:28:06.383 [2024-11-28 08:26:48.398314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.383 [2024-11-28 08:26:48.398326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.383 qpair failed and we were unable to recover it. 00:28:06.383 [2024-11-28 08:26:48.398421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.383 [2024-11-28 08:26:48.398433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.383 qpair failed and we were unable to recover it. 00:28:06.383 [2024-11-28 08:26:48.398563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.383 [2024-11-28 08:26:48.398576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.383 qpair failed and we were unable to recover it. 00:28:06.383 [2024-11-28 08:26:48.398740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.383 [2024-11-28 08:26:48.398779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.383 qpair failed and we were unable to recover it. 00:28:06.383 [2024-11-28 08:26:48.398974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.383 [2024-11-28 08:26:48.399009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.383 qpair failed and we were unable to recover it. 
00:28:06.383 [2024-11-28 08:26:48.399130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.383 [2024-11-28 08:26:48.399165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.383 qpair failed and we were unable to recover it. 00:28:06.383 [2024-11-28 08:26:48.399357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.383 [2024-11-28 08:26:48.399391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.383 qpair failed and we were unable to recover it. 00:28:06.383 [2024-11-28 08:26:48.399568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.383 [2024-11-28 08:26:48.399601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.383 qpair failed and we were unable to recover it. 00:28:06.383 [2024-11-28 08:26:48.399713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.383 [2024-11-28 08:26:48.399746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.383 qpair failed and we were unable to recover it. 00:28:06.384 [2024-11-28 08:26:48.399986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.384 [2024-11-28 08:26:48.399998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.384 qpair failed and we were unable to recover it. 
00:28:06.384 [2024-11-28 08:26:48.400149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.384 [2024-11-28 08:26:48.400163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.384 qpair failed and we were unable to recover it. 00:28:06.384 [2024-11-28 08:26:48.400390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.384 [2024-11-28 08:26:48.400402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.384 qpair failed and we were unable to recover it. 00:28:06.384 [2024-11-28 08:26:48.400586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.384 [2024-11-28 08:26:48.400620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.384 qpair failed and we were unable to recover it. 00:28:06.384 [2024-11-28 08:26:48.400749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.384 [2024-11-28 08:26:48.400784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.384 qpair failed and we were unable to recover it. 00:28:06.384 [2024-11-28 08:26:48.400919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.384 [2024-11-28 08:26:48.400964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.384 qpair failed and we were unable to recover it. 
00:28:06.384 [2024-11-28 08:26:48.401098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.384 [2024-11-28 08:26:48.401132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.384 qpair failed and we were unable to recover it. 00:28:06.384 [2024-11-28 08:26:48.401264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.384 [2024-11-28 08:26:48.401315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.384 qpair failed and we were unable to recover it. 00:28:06.384 [2024-11-28 08:26:48.401501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.384 [2024-11-28 08:26:48.401534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.384 qpair failed and we were unable to recover it. 00:28:06.384 [2024-11-28 08:26:48.401770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.384 [2024-11-28 08:26:48.401782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.384 qpair failed and we were unable to recover it. 00:28:06.384 [2024-11-28 08:26:48.401930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.384 [2024-11-28 08:26:48.401942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.384 qpair failed and we were unable to recover it. 
00:28:06.384 [2024-11-28 08:26:48.402098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.384 [2024-11-28 08:26:48.402112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.384 qpair failed and we were unable to recover it. 00:28:06.384 [2024-11-28 08:26:48.402183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.384 [2024-11-28 08:26:48.402195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.384 qpair failed and we were unable to recover it. 00:28:06.384 [2024-11-28 08:26:48.402364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.384 [2024-11-28 08:26:48.402378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.384 qpair failed and we were unable to recover it. 00:28:06.384 [2024-11-28 08:26:48.402536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.384 [2024-11-28 08:26:48.402548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.384 qpair failed and we were unable to recover it. 00:28:06.384 [2024-11-28 08:26:48.402729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.384 [2024-11-28 08:26:48.402761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.384 qpair failed and we were unable to recover it. 
00:28:06.384 [2024-11-28 08:26:48.402960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.384 [2024-11-28 08:26:48.402996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.384 qpair failed and we were unable to recover it. 00:28:06.384 [2024-11-28 08:26:48.403212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.384 [2024-11-28 08:26:48.403245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.384 qpair failed and we were unable to recover it. 00:28:06.384 [2024-11-28 08:26:48.403452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.384 [2024-11-28 08:26:48.403465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.384 qpair failed and we were unable to recover it. 00:28:06.384 [2024-11-28 08:26:48.403632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.384 [2024-11-28 08:26:48.403656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.384 qpair failed and we were unable to recover it. 00:28:06.384 [2024-11-28 08:26:48.403843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.384 [2024-11-28 08:26:48.403876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.384 qpair failed and we were unable to recover it. 
00:28:06.384 [2024-11-28 08:26:48.404011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.384 [2024-11-28 08:26:48.404048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.384 qpair failed and we were unable to recover it. 00:28:06.384 [2024-11-28 08:26:48.404194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.384 [2024-11-28 08:26:48.404229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.384 qpair failed and we were unable to recover it. 00:28:06.384 [2024-11-28 08:26:48.404404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.384 [2024-11-28 08:26:48.404438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.384 qpair failed and we were unable to recover it. 00:28:06.384 [2024-11-28 08:26:48.404684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.384 [2024-11-28 08:26:48.404696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.384 qpair failed and we were unable to recover it. 00:28:06.384 [2024-11-28 08:26:48.404781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.384 [2024-11-28 08:26:48.404792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.384 qpair failed and we were unable to recover it. 
00:28:06.388 [2024-11-28 08:26:48.425010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.388 [2024-11-28 08:26:48.425049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.388 qpair failed and we were unable to recover it. 00:28:06.388 [2024-11-28 08:26:48.425255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.388 [2024-11-28 08:26:48.425287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.388 qpair failed and we were unable to recover it. 00:28:06.388 [2024-11-28 08:26:48.425483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.388 [2024-11-28 08:26:48.425516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.388 qpair failed and we were unable to recover it. 00:28:06.388 [2024-11-28 08:26:48.425708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.388 [2024-11-28 08:26:48.425741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.388 qpair failed and we were unable to recover it. 00:28:06.388 [2024-11-28 08:26:48.425896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.388 [2024-11-28 08:26:48.425908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.388 qpair failed and we were unable to recover it. 
00:28:06.388 [2024-11-28 08:26:48.425994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.388 [2024-11-28 08:26:48.426006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.388 qpair failed and we were unable to recover it. 00:28:06.388 [2024-11-28 08:26:48.426151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.388 [2024-11-28 08:26:48.426163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.388 qpair failed and we were unable to recover it. 00:28:06.388 [2024-11-28 08:26:48.426305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.388 [2024-11-28 08:26:48.426338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.388 qpair failed and we were unable to recover it. 00:28:06.388 [2024-11-28 08:26:48.426523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.389 [2024-11-28 08:26:48.426558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.389 qpair failed and we were unable to recover it. 00:28:06.389 [2024-11-28 08:26:48.426764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.389 [2024-11-28 08:26:48.426797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.389 qpair failed and we were unable to recover it. 
00:28:06.389 [2024-11-28 08:26:48.426972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.389 [2024-11-28 08:26:48.426985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.389 qpair failed and we were unable to recover it. 00:28:06.389 [2024-11-28 08:26:48.427223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.389 [2024-11-28 08:26:48.427236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.389 qpair failed and we were unable to recover it. 00:28:06.389 [2024-11-28 08:26:48.427305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.389 [2024-11-28 08:26:48.427326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.389 qpair failed and we were unable to recover it. 00:28:06.389 [2024-11-28 08:26:48.427390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.389 [2024-11-28 08:26:48.427402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.389 qpair failed and we were unable to recover it. 00:28:06.389 [2024-11-28 08:26:48.427554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.389 [2024-11-28 08:26:48.427566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.389 qpair failed and we were unable to recover it. 
00:28:06.389 [2024-11-28 08:26:48.427649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.389 [2024-11-28 08:26:48.427661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.389 qpair failed and we were unable to recover it. 00:28:06.389 [2024-11-28 08:26:48.427761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.389 [2024-11-28 08:26:48.427773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.389 qpair failed and we were unable to recover it. 00:28:06.389 [2024-11-28 08:26:48.427919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.389 [2024-11-28 08:26:48.427931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.389 qpair failed and we were unable to recover it. 00:28:06.389 [2024-11-28 08:26:48.428065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.389 [2024-11-28 08:26:48.428139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.389 qpair failed and we were unable to recover it. 00:28:06.389 [2024-11-28 08:26:48.428281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.389 [2024-11-28 08:26:48.428318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.389 qpair failed and we were unable to recover it. 
00:28:06.389 [2024-11-28 08:26:48.428438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.389 [2024-11-28 08:26:48.428472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.389 qpair failed and we were unable to recover it. 00:28:06.389 [2024-11-28 08:26:48.428605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.389 [2024-11-28 08:26:48.428621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.389 qpair failed and we were unable to recover it. 00:28:06.389 [2024-11-28 08:26:48.428698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.389 [2024-11-28 08:26:48.428714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.389 qpair failed and we were unable to recover it. 00:28:06.389 [2024-11-28 08:26:48.428878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.389 [2024-11-28 08:26:48.428910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.389 qpair failed and we were unable to recover it. 00:28:06.389 [2024-11-28 08:26:48.429112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.389 [2024-11-28 08:26:48.429146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.389 qpair failed and we were unable to recover it. 
00:28:06.389 [2024-11-28 08:26:48.429345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.389 [2024-11-28 08:26:48.429377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.389 qpair failed and we were unable to recover it. 00:28:06.389 [2024-11-28 08:26:48.429511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.389 [2024-11-28 08:26:48.429527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.389 qpair failed and we were unable to recover it. 00:28:06.389 [2024-11-28 08:26:48.429604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.389 [2024-11-28 08:26:48.429620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.389 qpair failed and we were unable to recover it. 00:28:06.389 [2024-11-28 08:26:48.429731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.389 [2024-11-28 08:26:48.429763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.389 qpair failed and we were unable to recover it. 00:28:06.389 [2024-11-28 08:26:48.429971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.389 [2024-11-28 08:26:48.430006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.389 qpair failed and we were unable to recover it. 
00:28:06.389 [2024-11-28 08:26:48.430255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.389 [2024-11-28 08:26:48.430288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.389 qpair failed and we were unable to recover it. 00:28:06.389 [2024-11-28 08:26:48.430536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.389 [2024-11-28 08:26:48.430569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.389 qpair failed and we were unable to recover it. 00:28:06.389 [2024-11-28 08:26:48.430697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.389 [2024-11-28 08:26:48.430730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.389 qpair failed and we were unable to recover it. 00:28:06.389 [2024-11-28 08:26:48.430906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.389 [2024-11-28 08:26:48.430921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.389 qpair failed and we were unable to recover it. 00:28:06.389 [2024-11-28 08:26:48.431093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.389 [2024-11-28 08:26:48.431127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.389 qpair failed and we were unable to recover it. 
00:28:06.389 [2024-11-28 08:26:48.431305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.389 [2024-11-28 08:26:48.431338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.389 qpair failed and we were unable to recover it. 00:28:06.389 [2024-11-28 08:26:48.431613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.389 [2024-11-28 08:26:48.431645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.389 qpair failed and we were unable to recover it. 00:28:06.389 [2024-11-28 08:26:48.431768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.389 [2024-11-28 08:26:48.431801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.389 qpair failed and we were unable to recover it. 00:28:06.390 [2024-11-28 08:26:48.431908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.390 [2024-11-28 08:26:48.431940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.390 qpair failed and we were unable to recover it. 00:28:06.390 [2024-11-28 08:26:48.432189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.390 [2024-11-28 08:26:48.432222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.390 qpair failed and we were unable to recover it. 
00:28:06.390 [2024-11-28 08:26:48.432335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.390 [2024-11-28 08:26:48.432368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.390 qpair failed and we were unable to recover it. 00:28:06.390 [2024-11-28 08:26:48.432548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.390 [2024-11-28 08:26:48.432581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.390 qpair failed and we were unable to recover it. 00:28:06.390 [2024-11-28 08:26:48.432763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.390 [2024-11-28 08:26:48.432796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.390 qpair failed and we were unable to recover it. 00:28:06.390 [2024-11-28 08:26:48.432990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.390 [2024-11-28 08:26:48.433025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.390 qpair failed and we were unable to recover it. 00:28:06.390 [2024-11-28 08:26:48.433140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.390 [2024-11-28 08:26:48.433172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.390 qpair failed and we were unable to recover it. 
00:28:06.390 [2024-11-28 08:26:48.433367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.390 [2024-11-28 08:26:48.433406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.390 qpair failed and we were unable to recover it. 00:28:06.390 [2024-11-28 08:26:48.433615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.390 [2024-11-28 08:26:48.433647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.390 qpair failed and we were unable to recover it. 00:28:06.390 [2024-11-28 08:26:48.433850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.390 [2024-11-28 08:26:48.433882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.390 qpair failed and we were unable to recover it. 00:28:06.390 [2024-11-28 08:26:48.434127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.390 [2024-11-28 08:26:48.434160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.390 qpair failed and we were unable to recover it. 00:28:06.390 [2024-11-28 08:26:48.434293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.390 [2024-11-28 08:26:48.434326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.390 qpair failed and we were unable to recover it. 
00:28:06.390 [2024-11-28 08:26:48.434597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.390 [2024-11-28 08:26:48.434613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.390 qpair failed and we were unable to recover it. 00:28:06.390 [2024-11-28 08:26:48.434827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.390 [2024-11-28 08:26:48.434843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.390 qpair failed and we were unable to recover it. 00:28:06.390 [2024-11-28 08:26:48.434942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.390 [2024-11-28 08:26:48.434963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.390 qpair failed and we were unable to recover it. 00:28:06.390 [2024-11-28 08:26:48.435107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.390 [2024-11-28 08:26:48.435123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.390 qpair failed and we were unable to recover it. 00:28:06.390 [2024-11-28 08:26:48.435202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.390 [2024-11-28 08:26:48.435217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.390 qpair failed and we were unable to recover it. 
00:28:06.390 [2024-11-28 08:26:48.435312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.390 [2024-11-28 08:26:48.435328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.390 qpair failed and we were unable to recover it. 00:28:06.390 [2024-11-28 08:26:48.435423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.390 [2024-11-28 08:26:48.435439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.390 qpair failed and we were unable to recover it. 00:28:06.390 [2024-11-28 08:26:48.435582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.390 [2024-11-28 08:26:48.435623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.390 qpair failed and we were unable to recover it. 00:28:06.390 [2024-11-28 08:26:48.435834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.390 [2024-11-28 08:26:48.435867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.390 qpair failed and we were unable to recover it. 00:28:06.390 [2024-11-28 08:26:48.436119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.390 [2024-11-28 08:26:48.436153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.390 qpair failed and we were unable to recover it. 
00:28:06.390 [2024-11-28 08:26:48.436352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.390 [2024-11-28 08:26:48.436384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.390 qpair failed and we were unable to recover it. 00:28:06.390 [2024-11-28 08:26:48.436571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.390 [2024-11-28 08:26:48.436604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.390 qpair failed and we were unable to recover it. 00:28:06.390 [2024-11-28 08:26:48.436737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.390 [2024-11-28 08:26:48.436770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.390 qpair failed and we were unable to recover it. 00:28:06.390 [2024-11-28 08:26:48.436959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.390 [2024-11-28 08:26:48.436992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.390 qpair failed and we were unable to recover it. 00:28:06.390 [2024-11-28 08:26:48.437260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.390 [2024-11-28 08:26:48.437293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.390 qpair failed and we were unable to recover it. 
00:28:06.390 [2024-11-28 08:26:48.437425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.390 [2024-11-28 08:26:48.437458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.390 qpair failed and we were unable to recover it. 00:28:06.390 [2024-11-28 08:26:48.437578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.390 [2024-11-28 08:26:48.437610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.390 qpair failed and we were unable to recover it. 00:28:06.390 [2024-11-28 08:26:48.437801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.390 [2024-11-28 08:26:48.437834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.391 qpair failed and we were unable to recover it. 00:28:06.391 [2024-11-28 08:26:48.438009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.391 [2024-11-28 08:26:48.438026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.391 qpair failed and we were unable to recover it. 00:28:06.391 [2024-11-28 08:26:48.438197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.391 [2024-11-28 08:26:48.438230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.391 qpair failed and we were unable to recover it. 
00:28:06.391 [2024-11-28 08:26:48.438366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.391 [2024-11-28 08:26:48.438399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.391 qpair failed and we were unable to recover it. 00:28:06.391 [2024-11-28 08:26:48.438528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.391 [2024-11-28 08:26:48.438559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.391 qpair failed and we were unable to recover it. 00:28:06.391 [2024-11-28 08:26:48.438756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.391 [2024-11-28 08:26:48.438771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.391 qpair failed and we were unable to recover it. 00:28:06.391 [2024-11-28 08:26:48.438922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.391 [2024-11-28 08:26:48.438938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.391 qpair failed and we were unable to recover it. 00:28:06.391 [2024-11-28 08:26:48.439034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.391 [2024-11-28 08:26:48.439051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.391 qpair failed and we were unable to recover it. 
00:28:06.391 [2024-11-28 08:26:48.439147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.391 [2024-11-28 08:26:48.439163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.391 qpair failed and we were unable to recover it. 00:28:06.391 [2024-11-28 08:26:48.439421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.391 [2024-11-28 08:26:48.439436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.391 qpair failed and we were unable to recover it. 00:28:06.391 [2024-11-28 08:26:48.439578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.391 [2024-11-28 08:26:48.439593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.391 qpair failed and we were unable to recover it. 00:28:06.391 [2024-11-28 08:26:48.439689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.391 [2024-11-28 08:26:48.439728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.391 qpair failed and we were unable to recover it. 00:28:06.391 [2024-11-28 08:26:48.439931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.391 [2024-11-28 08:26:48.439973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.391 qpair failed and we were unable to recover it. 
00:28:06.395 [2024-11-28 08:26:48.461220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.395 [2024-11-28 08:26:48.461254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.395 qpair failed and we were unable to recover it. 00:28:06.395 [2024-11-28 08:26:48.461374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.395 [2024-11-28 08:26:48.461406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.395 qpair failed and we were unable to recover it. 00:28:06.395 [2024-11-28 08:26:48.461646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.395 [2024-11-28 08:26:48.461663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.395 qpair failed and we were unable to recover it. 00:28:06.395 [2024-11-28 08:26:48.461752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.395 [2024-11-28 08:26:48.461767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.395 qpair failed and we were unable to recover it. 00:28:06.395 [2024-11-28 08:26:48.462027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.395 [2024-11-28 08:26:48.462062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.395 qpair failed and we were unable to recover it. 
00:28:06.395 [2024-11-28 08:26:48.462181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.395 [2024-11-28 08:26:48.462213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.395 qpair failed and we were unable to recover it. 00:28:06.395 [2024-11-28 08:26:48.462349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.395 [2024-11-28 08:26:48.462381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.395 qpair failed and we were unable to recover it. 00:28:06.395 [2024-11-28 08:26:48.462558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.395 [2024-11-28 08:26:48.462591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.395 qpair failed and we were unable to recover it. 00:28:06.395 [2024-11-28 08:26:48.462849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.395 [2024-11-28 08:26:48.462881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.395 qpair failed and we were unable to recover it. 00:28:06.395 [2024-11-28 08:26:48.462973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.395 [2024-11-28 08:26:48.462988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.395 qpair failed and we were unable to recover it. 
00:28:06.395 [2024-11-28 08:26:48.463082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.395 [2024-11-28 08:26:48.463098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.395 qpair failed and we were unable to recover it. 00:28:06.395 [2024-11-28 08:26:48.463176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.395 [2024-11-28 08:26:48.463191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.395 qpair failed and we were unable to recover it. 00:28:06.395 [2024-11-28 08:26:48.463403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.395 [2024-11-28 08:26:48.463436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.395 qpair failed and we were unable to recover it. 00:28:06.395 [2024-11-28 08:26:48.463611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.395 [2024-11-28 08:26:48.463644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.395 qpair failed and we were unable to recover it. 00:28:06.395 [2024-11-28 08:26:48.463829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.395 [2024-11-28 08:26:48.463846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.395 qpair failed and we were unable to recover it. 
00:28:06.395 [2024-11-28 08:26:48.464010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.395 [2024-11-28 08:26:48.464026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.395 qpair failed and we were unable to recover it. 00:28:06.395 [2024-11-28 08:26:48.464137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.395 [2024-11-28 08:26:48.464152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.395 qpair failed and we were unable to recover it. 00:28:06.395 [2024-11-28 08:26:48.464327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.395 [2024-11-28 08:26:48.464359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.395 qpair failed and we were unable to recover it. 00:28:06.395 [2024-11-28 08:26:48.464553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.395 [2024-11-28 08:26:48.464584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.395 qpair failed and we were unable to recover it. 00:28:06.395 [2024-11-28 08:26:48.464777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.395 [2024-11-28 08:26:48.464810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.395 qpair failed and we were unable to recover it. 
00:28:06.395 [2024-11-28 08:26:48.465005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.396 [2024-11-28 08:26:48.465040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.396 qpair failed and we were unable to recover it. 00:28:06.396 [2024-11-28 08:26:48.465354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.396 [2024-11-28 08:26:48.465386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.396 qpair failed and we were unable to recover it. 00:28:06.396 [2024-11-28 08:26:48.465525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.396 [2024-11-28 08:26:48.465558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.396 qpair failed and we were unable to recover it. 00:28:06.396 [2024-11-28 08:26:48.465780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.396 [2024-11-28 08:26:48.465812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.396 qpair failed and we were unable to recover it. 00:28:06.396 [2024-11-28 08:26:48.465928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.396 [2024-11-28 08:26:48.465944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.396 qpair failed and we were unable to recover it. 
00:28:06.396 [2024-11-28 08:26:48.466045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.396 [2024-11-28 08:26:48.466059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.396 qpair failed and we were unable to recover it. 00:28:06.396 [2024-11-28 08:26:48.466266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.396 [2024-11-28 08:26:48.466283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.396 qpair failed and we were unable to recover it. 00:28:06.396 [2024-11-28 08:26:48.466491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.396 [2024-11-28 08:26:48.466506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.396 qpair failed and we were unable to recover it. 00:28:06.396 [2024-11-28 08:26:48.466582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.396 [2024-11-28 08:26:48.466597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.396 qpair failed and we were unable to recover it. 00:28:06.396 [2024-11-28 08:26:48.466798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.396 [2024-11-28 08:26:48.466830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.396 qpair failed and we were unable to recover it. 
00:28:06.396 [2024-11-28 08:26:48.467036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.396 [2024-11-28 08:26:48.467070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.396 qpair failed and we were unable to recover it. 00:28:06.396 [2024-11-28 08:26:48.467270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.396 [2024-11-28 08:26:48.467302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.396 qpair failed and we were unable to recover it. 00:28:06.396 [2024-11-28 08:26:48.467424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.396 [2024-11-28 08:26:48.467462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.396 qpair failed and we were unable to recover it. 00:28:06.396 [2024-11-28 08:26:48.467654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.396 [2024-11-28 08:26:48.467686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.396 qpair failed and we were unable to recover it. 00:28:06.396 [2024-11-28 08:26:48.467842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.396 [2024-11-28 08:26:48.467875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.396 qpair failed and we were unable to recover it. 
00:28:06.396 [2024-11-28 08:26:48.468061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.396 [2024-11-28 08:26:48.468077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.396 qpair failed and we were unable to recover it. 00:28:06.396 [2024-11-28 08:26:48.468165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.396 [2024-11-28 08:26:48.468179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.396 qpair failed and we were unable to recover it. 00:28:06.396 [2024-11-28 08:26:48.468317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.396 [2024-11-28 08:26:48.468334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.396 qpair failed and we were unable to recover it. 00:28:06.396 [2024-11-28 08:26:48.468574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.396 [2024-11-28 08:26:48.468606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.396 qpair failed and we were unable to recover it. 00:28:06.396 [2024-11-28 08:26:48.468739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.396 [2024-11-28 08:26:48.468773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.396 qpair failed and we were unable to recover it. 
00:28:06.396 [2024-11-28 08:26:48.468899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.396 [2024-11-28 08:26:48.468931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.396 qpair failed and we were unable to recover it. 00:28:06.396 [2024-11-28 08:26:48.469070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.396 [2024-11-28 08:26:48.469104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.396 qpair failed and we were unable to recover it. 00:28:06.396 [2024-11-28 08:26:48.469285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.396 [2024-11-28 08:26:48.469318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.396 qpair failed and we were unable to recover it. 00:28:06.396 [2024-11-28 08:26:48.469503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.396 [2024-11-28 08:26:48.469536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.396 qpair failed and we were unable to recover it. 00:28:06.396 [2024-11-28 08:26:48.469672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.396 [2024-11-28 08:26:48.469688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.396 qpair failed and we were unable to recover it. 
00:28:06.396 [2024-11-28 08:26:48.469771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.396 [2024-11-28 08:26:48.469785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.396 qpair failed and we were unable to recover it. 00:28:06.396 [2024-11-28 08:26:48.469887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.396 [2024-11-28 08:26:48.469903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.396 qpair failed and we were unable to recover it. 00:28:06.396 [2024-11-28 08:26:48.470105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.396 [2024-11-28 08:26:48.470180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.396 qpair failed and we were unable to recover it. 00:28:06.396 [2024-11-28 08:26:48.470390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.396 [2024-11-28 08:26:48.470426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.397 qpair failed and we were unable to recover it. 00:28:06.397 [2024-11-28 08:26:48.470636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.397 [2024-11-28 08:26:48.470670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.397 qpair failed and we were unable to recover it. 
00:28:06.397 [2024-11-28 08:26:48.470849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.397 [2024-11-28 08:26:48.470882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.397 qpair failed and we were unable to recover it. 00:28:06.397 [2024-11-28 08:26:48.471162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.397 [2024-11-28 08:26:48.471197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.397 qpair failed and we were unable to recover it. 00:28:06.397 [2024-11-28 08:26:48.471394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.397 [2024-11-28 08:26:48.471427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.397 qpair failed and we were unable to recover it. 00:28:06.397 [2024-11-28 08:26:48.471571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.397 [2024-11-28 08:26:48.471604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.397 qpair failed and we were unable to recover it. 00:28:06.397 [2024-11-28 08:26:48.471734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.397 [2024-11-28 08:26:48.471773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.397 qpair failed and we were unable to recover it. 
00:28:06.397 [2024-11-28 08:26:48.471912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.397 [2024-11-28 08:26:48.471924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.397 qpair failed and we were unable to recover it. 00:28:06.397 [2024-11-28 08:26:48.472184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.397 [2024-11-28 08:26:48.472218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.397 qpair failed and we were unable to recover it. 00:28:06.397 [2024-11-28 08:26:48.472345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.397 [2024-11-28 08:26:48.472380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.397 qpair failed and we were unable to recover it. 00:28:06.397 [2024-11-28 08:26:48.472570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.397 [2024-11-28 08:26:48.472603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.397 qpair failed and we were unable to recover it. 00:28:06.397 [2024-11-28 08:26:48.472789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.397 [2024-11-28 08:26:48.472832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.397 qpair failed and we were unable to recover it. 
00:28:06.397 [2024-11-28 08:26:48.472967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.397 [2024-11-28 08:26:48.473005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.397 qpair failed and we were unable to recover it. 00:28:06.397 [2024-11-28 08:26:48.473150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.397 [2024-11-28 08:26:48.473182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.397 qpair failed and we were unable to recover it. 00:28:06.397 [2024-11-28 08:26:48.473370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.397 [2024-11-28 08:26:48.473403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.397 qpair failed and we were unable to recover it. 00:28:06.397 [2024-11-28 08:26:48.473547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.397 [2024-11-28 08:26:48.473579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.397 qpair failed and we were unable to recover it. 00:28:06.397 [2024-11-28 08:26:48.473692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.397 [2024-11-28 08:26:48.473731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.397 qpair failed and we were unable to recover it. 
00:28:06.397 [2024-11-28 08:26:48.473963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.397 [2024-11-28 08:26:48.473998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.397 qpair failed and we were unable to recover it. 00:28:06.397 [2024-11-28 08:26:48.474118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.397 [2024-11-28 08:26:48.474151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.397 qpair failed and we were unable to recover it. 00:28:06.397 [2024-11-28 08:26:48.474269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.397 [2024-11-28 08:26:48.474302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.397 qpair failed and we were unable to recover it. 00:28:06.397 [2024-11-28 08:26:48.474433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.397 [2024-11-28 08:26:48.474467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.397 qpair failed and we were unable to recover it. 00:28:06.397 [2024-11-28 08:26:48.474584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.397 [2024-11-28 08:26:48.474618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.397 qpair failed and we were unable to recover it. 
00:28:06.397 [2024-11-28 08:26:48.474817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.397 [2024-11-28 08:26:48.474850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.397 qpair failed and we were unable to recover it. 00:28:06.397 [2024-11-28 08:26:48.475028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.397 [2024-11-28 08:26:48.475064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.397 qpair failed and we were unable to recover it. 00:28:06.397 [2024-11-28 08:26:48.475266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.397 [2024-11-28 08:26:48.475300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.397 qpair failed and we were unable to recover it. 00:28:06.397 [2024-11-28 08:26:48.475526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.397 [2024-11-28 08:26:48.475560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.397 qpair failed and we were unable to recover it. 00:28:06.397 [2024-11-28 08:26:48.475699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.397 [2024-11-28 08:26:48.475710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.397 qpair failed and we were unable to recover it. 
00:28:06.398 [2024-11-28 08:26:48.475803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.398 [2024-11-28 08:26:48.475814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.398 qpair failed and we were unable to recover it. 00:28:06.398 [2024-11-28 08:26:48.475995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.398 [2024-11-28 08:26:48.476029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.398 qpair failed and we were unable to recover it. 00:28:06.398 [2024-11-28 08:26:48.476170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.398 [2024-11-28 08:26:48.476205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.398 qpair failed and we were unable to recover it. 00:28:06.398 [2024-11-28 08:26:48.476355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.398 [2024-11-28 08:26:48.476390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.398 qpair failed and we were unable to recover it. 00:28:06.398 [2024-11-28 08:26:48.476540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.398 [2024-11-28 08:26:48.476574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.398 qpair failed and we were unable to recover it. 
00:28:06.398 [2024-11-28 08:26:48.476698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.398 [2024-11-28 08:26:48.476731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.398 qpair failed and we were unable to recover it. 00:28:06.398 [2024-11-28 08:26:48.476934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.398 [2024-11-28 08:26:48.476951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.398 qpair failed and we were unable to recover it. 00:28:06.398 [2024-11-28 08:26:48.477159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.398 [2024-11-28 08:26:48.477172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.398 qpair failed and we were unable to recover it. 00:28:06.398 [2024-11-28 08:26:48.477251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.398 [2024-11-28 08:26:48.477262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.398 qpair failed and we were unable to recover it. 00:28:06.398 [2024-11-28 08:26:48.477346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.398 [2024-11-28 08:26:48.477357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.398 qpair failed and we were unable to recover it. 
00:28:06.398 [2024-11-28 08:26:48.477530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.398 [2024-11-28 08:26:48.477563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.398 qpair failed and we were unable to recover it. 00:28:06.398 [2024-11-28 08:26:48.477766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.398 [2024-11-28 08:26:48.477802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.398 qpair failed and we were unable to recover it. 00:28:06.398 [2024-11-28 08:26:48.477936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.398 [2024-11-28 08:26:48.477978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.398 qpair failed and we were unable to recover it. 00:28:06.398 [2024-11-28 08:26:48.478230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.398 [2024-11-28 08:26:48.478263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.398 qpair failed and we were unable to recover it. 00:28:06.398 [2024-11-28 08:26:48.478402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.398 [2024-11-28 08:26:48.478435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.398 qpair failed and we were unable to recover it. 
00:28:06.398 [2024-11-28 08:26:48.478623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.398 [2024-11-28 08:26:48.478640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.398 qpair failed and we were unable to recover it. 00:28:06.398 [2024-11-28 08:26:48.478730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.398 [2024-11-28 08:26:48.478745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.398 qpair failed and we were unable to recover it. 00:28:06.398 [2024-11-28 08:26:48.478916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.398 [2024-11-28 08:26:48.478933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.398 qpair failed and we were unable to recover it. 00:28:06.398 [2024-11-28 08:26:48.479018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.398 [2024-11-28 08:26:48.479032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.398 qpair failed and we were unable to recover it. 00:28:06.398 [2024-11-28 08:26:48.479109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.398 [2024-11-28 08:26:48.479124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.398 qpair failed and we were unable to recover it. 
00:28:06.398 [2024-11-28 08:26:48.479290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.398 [2024-11-28 08:26:48.479322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.398 qpair failed and we were unable to recover it. 00:28:06.398 [2024-11-28 08:26:48.479513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.398 [2024-11-28 08:26:48.479545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.398 qpair failed and we were unable to recover it. 00:28:06.398 [2024-11-28 08:26:48.479681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.398 [2024-11-28 08:26:48.479713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.398 qpair failed and we were unable to recover it. 00:28:06.398 [2024-11-28 08:26:48.479936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.398 [2024-11-28 08:26:48.479979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.398 qpair failed and we were unable to recover it. 00:28:06.398 [2024-11-28 08:26:48.480138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.398 [2024-11-28 08:26:48.480176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.398 qpair failed and we were unable to recover it. 
00:28:06.398 [2024-11-28 08:26:48.480393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.398 [2024-11-28 08:26:48.480426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.398 qpair failed and we were unable to recover it. 00:28:06.398 [2024-11-28 08:26:48.480604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.398 [2024-11-28 08:26:48.480638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.398 qpair failed and we were unable to recover it. 00:28:06.398 [2024-11-28 08:26:48.480770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.398 [2024-11-28 08:26:48.480802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.398 qpair failed and we were unable to recover it. 00:28:06.398 [2024-11-28 08:26:48.480930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.398 [2024-11-28 08:26:48.480945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.398 qpair failed and we were unable to recover it. 00:28:06.399 [2024-11-28 08:26:48.481175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.399 [2024-11-28 08:26:48.481190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.399 qpair failed and we were unable to recover it. 
00:28:06.399 [2024-11-28 08:26:48.481295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.399 [2024-11-28 08:26:48.481309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.399 qpair failed and we were unable to recover it. 00:28:06.399 [2024-11-28 08:26:48.481409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.399 [2024-11-28 08:26:48.481423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.399 qpair failed and we were unable to recover it. 00:28:06.399 [2024-11-28 08:26:48.481599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.399 [2024-11-28 08:26:48.481613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.399 qpair failed and we were unable to recover it. 00:28:06.399 [2024-11-28 08:26:48.481781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.399 [2024-11-28 08:26:48.481815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.399 qpair failed and we were unable to recover it. 00:28:06.399 [2024-11-28 08:26:48.482013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.399 [2024-11-28 08:26:48.482049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.399 qpair failed and we were unable to recover it. 
00:28:06.399 [2024-11-28 08:26:48.482240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.399 [2024-11-28 08:26:48.482272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.399 qpair failed and we were unable to recover it. 00:28:06.399 [2024-11-28 08:26:48.482402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.399 [2024-11-28 08:26:48.482435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.399 qpair failed and we were unable to recover it. 00:28:06.399 [2024-11-28 08:26:48.482640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.399 [2024-11-28 08:26:48.482672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.399 qpair failed and we were unable to recover it. 00:28:06.399 [2024-11-28 08:26:48.482898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.399 [2024-11-28 08:26:48.482932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.399 qpair failed and we were unable to recover it. 00:28:06.399 [2024-11-28 08:26:48.483093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.399 [2024-11-28 08:26:48.483129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.399 qpair failed and we were unable to recover it. 
00:28:06.399 [2024-11-28 08:26:48.483376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.399 [2024-11-28 08:26:48.483408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.399 qpair failed and we were unable to recover it. 00:28:06.399 [2024-11-28 08:26:48.483650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.399 [2024-11-28 08:26:48.483683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.399 qpair failed and we were unable to recover it. 00:28:06.399 [2024-11-28 08:26:48.483813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.399 [2024-11-28 08:26:48.483846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.399 qpair failed and we were unable to recover it. 00:28:06.399 [2024-11-28 08:26:48.484026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.399 [2024-11-28 08:26:48.484040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.399 qpair failed and we were unable to recover it. 00:28:06.399 [2024-11-28 08:26:48.484259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.399 [2024-11-28 08:26:48.484272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.399 qpair failed and we were unable to recover it. 
00:28:06.399 [2024-11-28 08:26:48.484484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.399 [2024-11-28 08:26:48.484518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.399 qpair failed and we were unable to recover it. 00:28:06.399 [2024-11-28 08:26:48.484701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.399 [2024-11-28 08:26:48.484734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.399 qpair failed and we were unable to recover it. 00:28:06.399 [2024-11-28 08:26:48.484863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.399 [2024-11-28 08:26:48.484897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.399 qpair failed and we were unable to recover it. 00:28:06.399 [2024-11-28 08:26:48.485073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.399 [2024-11-28 08:26:48.485086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.399 qpair failed and we were unable to recover it. 00:28:06.399 [2024-11-28 08:26:48.485244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.399 [2024-11-28 08:26:48.485257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.399 qpair failed and we were unable to recover it. 
00:28:06.399 [2024-11-28 08:26:48.485435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.399 [2024-11-28 08:26:48.485448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.399 qpair failed and we were unable to recover it. 00:28:06.399 [2024-11-28 08:26:48.485581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.399 [2024-11-28 08:26:48.485595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.399 qpair failed and we were unable to recover it. 00:28:06.399 [2024-11-28 08:26:48.485744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.399 [2024-11-28 08:26:48.485756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.399 qpair failed and we were unable to recover it. 00:28:06.399 [2024-11-28 08:26:48.485852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.399 [2024-11-28 08:26:48.485884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.399 qpair failed and we were unable to recover it. 00:28:06.399 [2024-11-28 08:26:48.486079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.399 [2024-11-28 08:26:48.486115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.399 qpair failed and we were unable to recover it. 
00:28:06.399 [2024-11-28 08:26:48.486323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.399 [2024-11-28 08:26:48.486357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.399 qpair failed and we were unable to recover it. 00:28:06.399 [2024-11-28 08:26:48.486568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.399 [2024-11-28 08:26:48.486601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.399 qpair failed and we were unable to recover it. 00:28:06.399 [2024-11-28 08:26:48.486730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.399 [2024-11-28 08:26:48.486742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.399 qpair failed and we were unable to recover it. 00:28:06.399 [2024-11-28 08:26:48.486811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.399 [2024-11-28 08:26:48.486822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.399 qpair failed and we were unable to recover it. 00:28:06.399 [2024-11-28 08:26:48.486892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.400 [2024-11-28 08:26:48.486904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.400 qpair failed and we were unable to recover it. 
00:28:06.400 [2024-11-28 08:26:48.486993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.400 [2024-11-28 08:26:48.487005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.400 qpair failed and we were unable to recover it. 00:28:06.400 [2024-11-28 08:26:48.487211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.400 [2024-11-28 08:26:48.487223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.400 qpair failed and we were unable to recover it. 00:28:06.400 [2024-11-28 08:26:48.487456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.400 [2024-11-28 08:26:48.487468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.400 qpair failed and we were unable to recover it. 00:28:06.400 [2024-11-28 08:26:48.487649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.400 [2024-11-28 08:26:48.487684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.400 qpair failed and we were unable to recover it. 00:28:06.400 [2024-11-28 08:26:48.487824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.400 [2024-11-28 08:26:48.487858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.400 qpair failed and we were unable to recover it. 
00:28:06.400 [2024-11-28 08:26:48.488053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.400 [2024-11-28 08:26:48.488090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.400 qpair failed and we were unable to recover it. 00:28:06.400 [2024-11-28 08:26:48.488363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.400 [2024-11-28 08:26:48.488397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.400 qpair failed and we were unable to recover it. 00:28:06.400 [2024-11-28 08:26:48.488615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.400 [2024-11-28 08:26:48.488649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.400 qpair failed and we were unable to recover it. 00:28:06.400 [2024-11-28 08:26:48.488850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.400 [2024-11-28 08:26:48.488883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.400 qpair failed and we were unable to recover it. 00:28:06.400 [2024-11-28 08:26:48.489076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.400 [2024-11-28 08:26:48.489111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.400 qpair failed and we were unable to recover it. 
00:28:06.400 [2024-11-28 08:26:48.489320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.400 [2024-11-28 08:26:48.489360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.400 qpair failed and we were unable to recover it. 00:28:06.400 [2024-11-28 08:26:48.489499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.400 [2024-11-28 08:26:48.489531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.400 qpair failed and we were unable to recover it. 00:28:06.400 [2024-11-28 08:26:48.489771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.400 [2024-11-28 08:26:48.489805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.400 qpair failed and we were unable to recover it. 00:28:06.400 [2024-11-28 08:26:48.489933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.400 [2024-11-28 08:26:48.489945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.400 qpair failed and we were unable to recover it. 00:28:06.400 [2024-11-28 08:26:48.490043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.400 [2024-11-28 08:26:48.490054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.400 qpair failed and we were unable to recover it. 
00:28:06.400 [2024-11-28 08:26:48.490215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.400 [2024-11-28 08:26:48.490227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.400 qpair failed and we were unable to recover it. 00:28:06.400 [2024-11-28 08:26:48.490455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.400 [2024-11-28 08:26:48.490488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.400 qpair failed and we were unable to recover it. 00:28:06.400 [2024-11-28 08:26:48.490613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.400 [2024-11-28 08:26:48.490645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.400 qpair failed and we were unable to recover it. 00:28:06.400 [2024-11-28 08:26:48.490832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.400 [2024-11-28 08:26:48.490867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.400 qpair failed and we were unable to recover it. 00:28:06.400 [2024-11-28 08:26:48.490973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.400 [2024-11-28 08:26:48.490985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.400 qpair failed and we were unable to recover it. 
00:28:06.400 [2024-11-28 08:26:48.491146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.400 [2024-11-28 08:26:48.491182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.400 qpair failed and we were unable to recover it. 00:28:06.400 [2024-11-28 08:26:48.491313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.400 [2024-11-28 08:26:48.491346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.400 qpair failed and we were unable to recover it. 00:28:06.400 [2024-11-28 08:26:48.491467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.400 [2024-11-28 08:26:48.491499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.400 qpair failed and we were unable to recover it. 00:28:06.400 [2024-11-28 08:26:48.491679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.400 [2024-11-28 08:26:48.491714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.400 qpair failed and we were unable to recover it. 00:28:06.400 [2024-11-28 08:26:48.491827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.400 [2024-11-28 08:26:48.491838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.400 qpair failed and we were unable to recover it. 
00:28:06.400 [2024-11-28 08:26:48.491989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.400 [2024-11-28 08:26:48.492002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.400 qpair failed and we were unable to recover it. 00:28:06.400 [2024-11-28 08:26:48.492134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.400 [2024-11-28 08:26:48.492146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.400 qpair failed and we were unable to recover it. 00:28:06.400 [2024-11-28 08:26:48.492240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.400 [2024-11-28 08:26:48.492252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.400 qpair failed and we were unable to recover it. 00:28:06.400 [2024-11-28 08:26:48.492475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.400 [2024-11-28 08:26:48.492487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.400 qpair failed and we were unable to recover it. 00:28:06.400 [2024-11-28 08:26:48.492630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.400 [2024-11-28 08:26:48.492642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.400 qpair failed and we were unable to recover it. 
00:28:06.400 [2024-11-28 08:26:48.492776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.400 [2024-11-28 08:26:48.492789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.400 qpair failed and we were unable to recover it. 00:28:06.400 [2024-11-28 08:26:48.492922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.401 [2024-11-28 08:26:48.492979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.401 qpair failed and we were unable to recover it. 00:28:06.401 [2024-11-28 08:26:48.493178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.401 [2024-11-28 08:26:48.493211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.401 qpair failed and we were unable to recover it. 00:28:06.401 [2024-11-28 08:26:48.493322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.401 [2024-11-28 08:26:48.493354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.401 qpair failed and we were unable to recover it. 00:28:06.401 [2024-11-28 08:26:48.493550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.401 [2024-11-28 08:26:48.493584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.401 qpair failed and we were unable to recover it. 
00:28:06.401 [2024-11-28 08:26:48.493764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.401 [2024-11-28 08:26:48.493797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.401 qpair failed and we were unable to recover it. 00:28:06.401 [2024-11-28 08:26:48.494067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.401 [2024-11-28 08:26:48.494081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.401 qpair failed and we were unable to recover it. 00:28:06.401 [2024-11-28 08:26:48.494171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.401 [2024-11-28 08:26:48.494182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.401 qpair failed and we were unable to recover it. 00:28:06.401 [2024-11-28 08:26:48.494329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.401 [2024-11-28 08:26:48.494373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.401 qpair failed and we were unable to recover it. 00:28:06.401 [2024-11-28 08:26:48.494557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.401 [2024-11-28 08:26:48.494590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.401 qpair failed and we were unable to recover it. 
00:28:06.401 [2024-11-28 08:26:48.498135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.401 [2024-11-28 08:26:48.498208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420
00:28:06.401 qpair failed and we were unable to recover it.
00:28:06.403 [2024-11-28 08:26:48.505651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.403 [2024-11-28 08:26:48.505722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420
00:28:06.403 qpair failed and we were unable to recover it.
00:28:06.403 [2024-11-28 08:26:48.505982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.403 [2024-11-28 08:26:48.506054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420
00:28:06.403 qpair failed and we were unable to recover it.
00:28:06.404 [2024-11-28 08:26:48.517090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.404 [2024-11-28 08:26:48.517123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.404 qpair failed and we were unable to recover it. 00:28:06.404 [2024-11-28 08:26:48.517248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.404 [2024-11-28 08:26:48.517280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.404 qpair failed and we were unable to recover it. 00:28:06.404 [2024-11-28 08:26:48.517484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.404 [2024-11-28 08:26:48.517517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.404 qpair failed and we were unable to recover it. 00:28:06.404 [2024-11-28 08:26:48.517653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.404 [2024-11-28 08:26:48.517685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.404 qpair failed and we were unable to recover it. 00:28:06.404 [2024-11-28 08:26:48.517799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.405 [2024-11-28 08:26:48.517832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.405 qpair failed and we were unable to recover it. 
00:28:06.405 [2024-11-28 08:26:48.518036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.405 [2024-11-28 08:26:48.518070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.405 qpair failed and we were unable to recover it. 00:28:06.405 [2024-11-28 08:26:48.518338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.405 [2024-11-28 08:26:48.518372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.405 qpair failed and we were unable to recover it. 00:28:06.405 [2024-11-28 08:26:48.518564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.405 [2024-11-28 08:26:48.518595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.405 qpair failed and we were unable to recover it. 00:28:06.405 [2024-11-28 08:26:48.518731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.405 [2024-11-28 08:26:48.518764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.405 qpair failed and we were unable to recover it. 00:28:06.405 [2024-11-28 08:26:48.518961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.405 [2024-11-28 08:26:48.518995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.405 qpair failed and we were unable to recover it. 
00:28:06.405 [2024-11-28 08:26:48.519117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.405 [2024-11-28 08:26:48.519132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.405 qpair failed and we were unable to recover it. 00:28:06.405 [2024-11-28 08:26:48.519230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.405 [2024-11-28 08:26:48.519245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.405 qpair failed and we were unable to recover it. 00:28:06.405 [2024-11-28 08:26:48.519393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.405 [2024-11-28 08:26:48.519408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.405 qpair failed and we were unable to recover it. 00:28:06.405 [2024-11-28 08:26:48.519553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.405 [2024-11-28 08:26:48.519586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.405 qpair failed and we were unable to recover it. 00:28:06.405 [2024-11-28 08:26:48.519699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.405 [2024-11-28 08:26:48.519732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.405 qpair failed and we were unable to recover it. 
00:28:06.405 [2024-11-28 08:26:48.519871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.405 [2024-11-28 08:26:48.519910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.405 qpair failed and we were unable to recover it. 00:28:06.405 [2024-11-28 08:26:48.520111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.405 [2024-11-28 08:26:48.520127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.405 qpair failed and we were unable to recover it. 00:28:06.405 [2024-11-28 08:26:48.520228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.405 [2024-11-28 08:26:48.520244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.405 qpair failed and we were unable to recover it. 00:28:06.405 [2024-11-28 08:26:48.520387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.405 [2024-11-28 08:26:48.520403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.405 qpair failed and we were unable to recover it. 00:28:06.405 [2024-11-28 08:26:48.520565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.405 [2024-11-28 08:26:48.520582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.405 qpair failed and we were unable to recover it. 
00:28:06.405 [2024-11-28 08:26:48.520670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.405 [2024-11-28 08:26:48.520685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.405 qpair failed and we were unable to recover it. 00:28:06.405 [2024-11-28 08:26:48.520904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.405 [2024-11-28 08:26:48.520938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.405 qpair failed and we were unable to recover it. 00:28:06.405 [2024-11-28 08:26:48.521192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.405 [2024-11-28 08:26:48.521225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.405 qpair failed and we were unable to recover it. 00:28:06.405 [2024-11-28 08:26:48.521369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.405 [2024-11-28 08:26:48.521402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.405 qpair failed and we were unable to recover it. 00:28:06.405 [2024-11-28 08:26:48.521598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.405 [2024-11-28 08:26:48.521631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.405 qpair failed and we were unable to recover it. 
00:28:06.405 [2024-11-28 08:26:48.521761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.405 [2024-11-28 08:26:48.521793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.405 qpair failed and we were unable to recover it. 00:28:06.405 [2024-11-28 08:26:48.521928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.405 [2024-11-28 08:26:48.521973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.405 qpair failed and we were unable to recover it. 00:28:06.405 [2024-11-28 08:26:48.522102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.405 [2024-11-28 08:26:48.522134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.405 qpair failed and we were unable to recover it. 00:28:06.405 [2024-11-28 08:26:48.522347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.405 [2024-11-28 08:26:48.522379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.405 qpair failed and we were unable to recover it. 00:28:06.405 [2024-11-28 08:26:48.522656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.405 [2024-11-28 08:26:48.522689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.405 qpair failed and we were unable to recover it. 
00:28:06.405 [2024-11-28 08:26:48.522891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.405 [2024-11-28 08:26:48.522923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.405 qpair failed and we were unable to recover it. 00:28:06.405 [2024-11-28 08:26:48.523072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.405 [2024-11-28 08:26:48.523088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.405 qpair failed and we were unable to recover it. 00:28:06.405 [2024-11-28 08:26:48.523230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.405 [2024-11-28 08:26:48.523246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.405 qpair failed and we were unable to recover it. 00:28:06.405 [2024-11-28 08:26:48.523335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.405 [2024-11-28 08:26:48.523350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.405 qpair failed and we were unable to recover it. 00:28:06.405 [2024-11-28 08:26:48.523439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.405 [2024-11-28 08:26:48.523454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.405 qpair failed and we were unable to recover it. 
00:28:06.405 [2024-11-28 08:26:48.523634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.405 [2024-11-28 08:26:48.523681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.405 qpair failed and we were unable to recover it. 00:28:06.405 [2024-11-28 08:26:48.523764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.405 [2024-11-28 08:26:48.523776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.405 qpair failed and we were unable to recover it. 00:28:06.405 [2024-11-28 08:26:48.524000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.405 [2024-11-28 08:26:48.524015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.405 qpair failed and we were unable to recover it. 00:28:06.405 [2024-11-28 08:26:48.524162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.405 [2024-11-28 08:26:48.524174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.405 qpair failed and we were unable to recover it. 00:28:06.405 [2024-11-28 08:26:48.524255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.405 [2024-11-28 08:26:48.524266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.405 qpair failed and we were unable to recover it. 
00:28:06.405 [2024-11-28 08:26:48.524410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.406 [2024-11-28 08:26:48.524422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.406 qpair failed and we were unable to recover it. 00:28:06.406 [2024-11-28 08:26:48.524481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.406 [2024-11-28 08:26:48.524492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.406 qpair failed and we were unable to recover it. 00:28:06.406 [2024-11-28 08:26:48.524579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.406 [2024-11-28 08:26:48.524595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.406 qpair failed and we were unable to recover it. 00:28:06.406 [2024-11-28 08:26:48.524693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.406 [2024-11-28 08:26:48.524705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.406 qpair failed and we were unable to recover it. 00:28:06.406 [2024-11-28 08:26:48.524843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.406 [2024-11-28 08:26:48.524857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.406 qpair failed and we were unable to recover it. 
00:28:06.406 [2024-11-28 08:26:48.524954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.406 [2024-11-28 08:26:48.524965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.406 qpair failed and we were unable to recover it. 00:28:06.406 [2024-11-28 08:26:48.525141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.406 [2024-11-28 08:26:48.525153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.406 qpair failed and we were unable to recover it. 00:28:06.406 [2024-11-28 08:26:48.525303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.406 [2024-11-28 08:26:48.525336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.406 qpair failed and we were unable to recover it. 00:28:06.406 [2024-11-28 08:26:48.525515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.406 [2024-11-28 08:26:48.525549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.406 qpair failed and we were unable to recover it. 00:28:06.406 [2024-11-28 08:26:48.525738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.406 [2024-11-28 08:26:48.525771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.406 qpair failed and we were unable to recover it. 
00:28:06.406 [2024-11-28 08:26:48.525892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.406 [2024-11-28 08:26:48.525905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.406 qpair failed and we were unable to recover it. 00:28:06.406 [2024-11-28 08:26:48.525981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.406 [2024-11-28 08:26:48.525992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.406 qpair failed and we were unable to recover it. 00:28:06.406 [2024-11-28 08:26:48.526070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.406 [2024-11-28 08:26:48.526106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.406 qpair failed and we were unable to recover it. 00:28:06.406 [2024-11-28 08:26:48.526327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.406 [2024-11-28 08:26:48.526361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.406 qpair failed and we were unable to recover it. 00:28:06.406 [2024-11-28 08:26:48.526573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.406 [2024-11-28 08:26:48.526605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.406 qpair failed and we were unable to recover it. 
00:28:06.406 [2024-11-28 08:26:48.526791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.406 [2024-11-28 08:26:48.526803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.406 qpair failed and we were unable to recover it. 00:28:06.406 [2024-11-28 08:26:48.526884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.406 [2024-11-28 08:26:48.526897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.406 qpair failed and we were unable to recover it. 00:28:06.406 [2024-11-28 08:26:48.526962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.406 [2024-11-28 08:26:48.526974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.406 qpair failed and we were unable to recover it. 00:28:06.406 [2024-11-28 08:26:48.527115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.406 [2024-11-28 08:26:48.527147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.406 qpair failed and we were unable to recover it. 00:28:06.406 [2024-11-28 08:26:48.527273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.406 [2024-11-28 08:26:48.527306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.406 qpair failed and we were unable to recover it. 
00:28:06.406 [2024-11-28 08:26:48.527443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.406 [2024-11-28 08:26:48.527476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.406 qpair failed and we were unable to recover it. 00:28:06.406 [2024-11-28 08:26:48.527588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.406 [2024-11-28 08:26:48.527621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.406 qpair failed and we were unable to recover it. 00:28:06.406 [2024-11-28 08:26:48.527802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.406 [2024-11-28 08:26:48.527834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.406 qpair failed and we were unable to recover it. 00:28:06.406 [2024-11-28 08:26:48.528065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.406 [2024-11-28 08:26:48.528078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.406 qpair failed and we were unable to recover it. 00:28:06.406 [2024-11-28 08:26:48.528286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.406 [2024-11-28 08:26:48.528299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.406 qpair failed and we were unable to recover it. 
00:28:06.406 [2024-11-28 08:26:48.528436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.406 [2024-11-28 08:26:48.528448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.406 qpair failed and we were unable to recover it. 00:28:06.406 [2024-11-28 08:26:48.528590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.406 [2024-11-28 08:26:48.528602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.406 qpair failed and we were unable to recover it. 00:28:06.406 [2024-11-28 08:26:48.528747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.406 [2024-11-28 08:26:48.528759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.406 qpair failed and we were unable to recover it. 00:28:06.406 [2024-11-28 08:26:48.528836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.406 [2024-11-28 08:26:48.528847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.406 qpair failed and we were unable to recover it. 00:28:06.406 [2024-11-28 08:26:48.528924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.406 [2024-11-28 08:26:48.528935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.406 qpair failed and we were unable to recover it. 
00:28:06.406 [2024-11-28 08:26:48.529079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.406 [2024-11-28 08:26:48.529091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.406 qpair failed and we were unable to recover it. 00:28:06.406 [2024-11-28 08:26:48.529172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.406 [2024-11-28 08:26:48.529184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.406 qpair failed and we were unable to recover it. 00:28:06.406 [2024-11-28 08:26:48.529259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.406 [2024-11-28 08:26:48.529270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.406 qpair failed and we were unable to recover it. 00:28:06.406 [2024-11-28 08:26:48.529425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.406 [2024-11-28 08:26:48.529458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.406 qpair failed and we were unable to recover it. 00:28:06.406 [2024-11-28 08:26:48.529580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.406 [2024-11-28 08:26:48.529613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.406 qpair failed and we were unable to recover it. 
00:28:06.406 [2024-11-28 08:26:48.529788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.406 [2024-11-28 08:26:48.529821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.406 qpair failed and we were unable to recover it. 00:28:06.406 [2024-11-28 08:26:48.530018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.406 [2024-11-28 08:26:48.530053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.406 qpair failed and we were unable to recover it. 00:28:06.406 [2024-11-28 08:26:48.530176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.407 [2024-11-28 08:26:48.530208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.407 qpair failed and we were unable to recover it. 00:28:06.407 [2024-11-28 08:26:48.530410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.407 [2024-11-28 08:26:48.530443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.407 qpair failed and we were unable to recover it. 00:28:06.407 [2024-11-28 08:26:48.530649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.407 [2024-11-28 08:26:48.530682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.407 qpair failed and we were unable to recover it. 
00:28:06.410 [2024-11-28 08:26:48.549889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.410 [2024-11-28 08:26:48.549901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.410 qpair failed and we were unable to recover it. 00:28:06.410 [2024-11-28 08:26:48.549992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.410 [2024-11-28 08:26:48.550004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.410 qpair failed and we were unable to recover it. 00:28:06.410 [2024-11-28 08:26:48.550115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.410 [2024-11-28 08:26:48.550127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.410 qpair failed and we were unable to recover it. 00:28:06.410 [2024-11-28 08:26:48.550207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.410 [2024-11-28 08:26:48.550254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.410 qpair failed and we were unable to recover it. 00:28:06.410 [2024-11-28 08:26:48.550376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.410 [2024-11-28 08:26:48.550407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.410 qpair failed and we were unable to recover it. 
00:28:06.410 [2024-11-28 08:26:48.550526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.410 [2024-11-28 08:26:48.550550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.410 qpair failed and we were unable to recover it. 00:28:06.410 [2024-11-28 08:26:48.550725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.410 [2024-11-28 08:26:48.550771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.410 qpair failed and we were unable to recover it. 00:28:06.410 [2024-11-28 08:26:48.550913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.410 [2024-11-28 08:26:48.550943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.410 qpair failed and we were unable to recover it. 00:28:06.410 [2024-11-28 08:26:48.551063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.410 [2024-11-28 08:26:48.551118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.410 qpair failed and we were unable to recover it. 00:28:06.410 [2024-11-28 08:26:48.551345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.410 [2024-11-28 08:26:48.551374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.410 qpair failed and we were unable to recover it. 
00:28:06.410 [2024-11-28 08:26:48.551538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.410 [2024-11-28 08:26:48.551580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.410 qpair failed and we were unable to recover it. 00:28:06.410 [2024-11-28 08:26:48.551714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.410 [2024-11-28 08:26:48.551763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.410 qpair failed and we were unable to recover it. 00:28:06.410 [2024-11-28 08:26:48.552070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.410 [2024-11-28 08:26:48.552109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420 00:28:06.410 qpair failed and we were unable to recover it. 00:28:06.410 [2024-11-28 08:26:48.552288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.410 [2024-11-28 08:26:48.552310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420 00:28:06.410 qpair failed and we were unable to recover it. 00:28:06.410 [2024-11-28 08:26:48.552401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.410 [2024-11-28 08:26:48.552418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420 00:28:06.410 qpair failed and we were unable to recover it. 
00:28:06.410 [2024-11-28 08:26:48.552563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.410 [2024-11-28 08:26:48.552578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420 00:28:06.410 qpair failed and we were unable to recover it. 00:28:06.410 [2024-11-28 08:26:48.552670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.410 [2024-11-28 08:26:48.552687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420 00:28:06.410 qpair failed and we were unable to recover it. 00:28:06.410 [2024-11-28 08:26:48.552788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.410 [2024-11-28 08:26:48.552804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420 00:28:06.410 qpair failed and we were unable to recover it. 00:28:06.410 [2024-11-28 08:26:48.552893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.410 [2024-11-28 08:26:48.552908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420 00:28:06.410 qpair failed and we were unable to recover it. 00:28:06.410 [2024-11-28 08:26:48.553003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.410 [2024-11-28 08:26:48.553020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420 00:28:06.410 qpair failed and we were unable to recover it. 
00:28:06.410 [2024-11-28 08:26:48.553110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.410 [2024-11-28 08:26:48.553125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420 00:28:06.410 qpair failed and we were unable to recover it. 00:28:06.410 [2024-11-28 08:26:48.553217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.410 [2024-11-28 08:26:48.553233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420 00:28:06.410 qpair failed and we were unable to recover it. 00:28:06.410 [2024-11-28 08:26:48.553328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.410 [2024-11-28 08:26:48.553344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420 00:28:06.410 qpair failed and we were unable to recover it. 00:28:06.410 [2024-11-28 08:26:48.553448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.410 [2024-11-28 08:26:48.553464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420 00:28:06.410 qpair failed and we were unable to recover it. 00:28:06.410 [2024-11-28 08:26:48.553553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.410 [2024-11-28 08:26:48.553568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420 00:28:06.410 qpair failed and we were unable to recover it. 
00:28:06.410 [2024-11-28 08:26:48.553659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.410 [2024-11-28 08:26:48.553675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420 00:28:06.410 qpair failed and we were unable to recover it. 00:28:06.410 [2024-11-28 08:26:48.553822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.410 [2024-11-28 08:26:48.553840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420 00:28:06.410 qpair failed and we were unable to recover it. 00:28:06.410 [2024-11-28 08:26:48.553939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.410 [2024-11-28 08:26:48.553960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420 00:28:06.410 qpair failed and we were unable to recover it. 00:28:06.410 [2024-11-28 08:26:48.554048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.410 [2024-11-28 08:26:48.554062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420 00:28:06.410 qpair failed and we were unable to recover it. 00:28:06.410 [2024-11-28 08:26:48.554233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.410 [2024-11-28 08:26:48.554250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420 00:28:06.410 qpair failed and we were unable to recover it. 
00:28:06.410 [2024-11-28 08:26:48.554350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.410 [2024-11-28 08:26:48.554366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420 00:28:06.410 qpair failed and we were unable to recover it. 00:28:06.410 [2024-11-28 08:26:48.554447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.410 [2024-11-28 08:26:48.554462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420 00:28:06.410 qpair failed and we were unable to recover it. 00:28:06.411 [2024-11-28 08:26:48.554672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.411 [2024-11-28 08:26:48.554689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420 00:28:06.411 qpair failed and we were unable to recover it. 00:28:06.411 [2024-11-28 08:26:48.554866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.411 [2024-11-28 08:26:48.554882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420 00:28:06.411 qpair failed and we were unable to recover it. 00:28:06.411 [2024-11-28 08:26:48.554989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.411 [2024-11-28 08:26:48.555007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420 00:28:06.411 qpair failed and we were unable to recover it. 
00:28:06.411 [2024-11-28 08:26:48.555113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.411 [2024-11-28 08:26:48.555129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420 00:28:06.411 qpair failed and we were unable to recover it. 00:28:06.411 [2024-11-28 08:26:48.555202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.411 [2024-11-28 08:26:48.555218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.411 qpair failed and we were unable to recover it. 00:28:06.411 [2024-11-28 08:26:48.555372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.411 [2024-11-28 08:26:48.555383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.411 qpair failed and we were unable to recover it. 00:28:06.411 [2024-11-28 08:26:48.555577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.411 [2024-11-28 08:26:48.555589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.411 qpair failed and we were unable to recover it. 00:28:06.411 [2024-11-28 08:26:48.555666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.411 [2024-11-28 08:26:48.555680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.411 qpair failed and we were unable to recover it. 
00:28:06.411 [2024-11-28 08:26:48.555757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.411 [2024-11-28 08:26:48.555770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.411 qpair failed and we were unable to recover it. 00:28:06.411 [2024-11-28 08:26:48.555857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.411 [2024-11-28 08:26:48.555869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.411 qpair failed and we were unable to recover it. 00:28:06.411 [2024-11-28 08:26:48.556095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.411 [2024-11-28 08:26:48.556129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.411 qpair failed and we were unable to recover it. 00:28:06.411 [2024-11-28 08:26:48.556251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.411 [2024-11-28 08:26:48.556284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.411 qpair failed and we were unable to recover it. 00:28:06.411 [2024-11-28 08:26:48.556396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.411 [2024-11-28 08:26:48.556430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.411 qpair failed and we were unable to recover it. 
00:28:06.411 [2024-11-28 08:26:48.556612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.411 [2024-11-28 08:26:48.556646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.411 qpair failed and we were unable to recover it. 00:28:06.411 [2024-11-28 08:26:48.556847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.411 [2024-11-28 08:26:48.556880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.411 qpair failed and we were unable to recover it. 00:28:06.411 [2024-11-28 08:26:48.557067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.411 [2024-11-28 08:26:48.557079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.411 qpair failed and we were unable to recover it. 00:28:06.411 [2024-11-28 08:26:48.557216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.411 [2024-11-28 08:26:48.557228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.411 qpair failed and we were unable to recover it. 00:28:06.411 [2024-11-28 08:26:48.557377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.411 [2024-11-28 08:26:48.557389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.411 qpair failed and we were unable to recover it. 
00:28:06.411 [2024-11-28 08:26:48.557460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.411 [2024-11-28 08:26:48.557471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.411 qpair failed and we were unable to recover it. 00:28:06.411 [2024-11-28 08:26:48.557555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.411 [2024-11-28 08:26:48.557596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.411 qpair failed and we were unable to recover it. 00:28:06.411 [2024-11-28 08:26:48.557729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.411 [2024-11-28 08:26:48.557762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.411 qpair failed and we were unable to recover it. 00:28:06.411 [2024-11-28 08:26:48.557981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.411 [2024-11-28 08:26:48.558017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.411 qpair failed and we were unable to recover it. 00:28:06.411 [2024-11-28 08:26:48.558144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.411 [2024-11-28 08:26:48.558172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.411 qpair failed and we were unable to recover it. 
00:28:06.411 [2024-11-28 08:26:48.558260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.411 [2024-11-28 08:26:48.558272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.411 qpair failed and we were unable to recover it. 00:28:06.411 [2024-11-28 08:26:48.558415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.411 [2024-11-28 08:26:48.558428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.411 qpair failed and we were unable to recover it. 00:28:06.411 [2024-11-28 08:26:48.558629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.411 [2024-11-28 08:26:48.558642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.411 qpair failed and we were unable to recover it. 00:28:06.411 [2024-11-28 08:26:48.558822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.411 [2024-11-28 08:26:48.558855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.411 qpair failed and we were unable to recover it. 00:28:06.411 [2024-11-28 08:26:48.558968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.411 [2024-11-28 08:26:48.559004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.411 qpair failed and we were unable to recover it. 
00:28:06.411 [2024-11-28 08:26:48.559136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.411 [2024-11-28 08:26:48.559168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.411 qpair failed and we were unable to recover it. 00:28:06.411 [2024-11-28 08:26:48.559280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.411 [2024-11-28 08:26:48.559313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.411 qpair failed and we were unable to recover it. 00:28:06.411 [2024-11-28 08:26:48.559440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.411 [2024-11-28 08:26:48.559473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.411 qpair failed and we were unable to recover it. 00:28:06.411 [2024-11-28 08:26:48.559583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.411 [2024-11-28 08:26:48.559615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.411 qpair failed and we were unable to recover it. 00:28:06.411 [2024-11-28 08:26:48.559731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.411 [2024-11-28 08:26:48.559765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.411 qpair failed and we were unable to recover it. 
00:28:06.411 [2024-11-28 08:26:48.559900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.411 [2024-11-28 08:26:48.559933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.411 qpair failed and we were unable to recover it. 00:28:06.411 [2024-11-28 08:26:48.560114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.411 [2024-11-28 08:26:48.560127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.411 qpair failed and we were unable to recover it. 00:28:06.411 [2024-11-28 08:26:48.560293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.411 [2024-11-28 08:26:48.560327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.411 qpair failed and we were unable to recover it. 00:28:06.411 [2024-11-28 08:26:48.560524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.411 [2024-11-28 08:26:48.560557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.411 qpair failed and we were unable to recover it. 00:28:06.412 [2024-11-28 08:26:48.560742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.412 [2024-11-28 08:26:48.560782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.412 qpair failed and we were unable to recover it. 
00:28:06.412 [2024-11-28 08:26:48.560946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.412 [2024-11-28 08:26:48.560963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.412 qpair failed and we were unable to recover it. 00:28:06.412 [2024-11-28 08:26:48.561056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.412 [2024-11-28 08:26:48.561068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.412 qpair failed and we were unable to recover it. 00:28:06.412 [2024-11-28 08:26:48.561160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.412 [2024-11-28 08:26:48.561173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.412 qpair failed and we were unable to recover it. 00:28:06.412 [2024-11-28 08:26:48.561253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.412 [2024-11-28 08:26:48.561264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.412 qpair failed and we were unable to recover it. 00:28:06.412 [2024-11-28 08:26:48.561471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.412 [2024-11-28 08:26:48.561483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.412 qpair failed and we were unable to recover it. 
00:28:06.412 [2024-11-28 08:26:48.561554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.412 [2024-11-28 08:26:48.561565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.412 qpair failed and we were unable to recover it.
00:28:06.412 [2024-11-28 08:26:48.561724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.412 [2024-11-28 08:26:48.561736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.412 qpair failed and we were unable to recover it.
00:28:06.412 [2024-11-28 08:26:48.561883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.412 [2024-11-28 08:26:48.561895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.412 qpair failed and we were unable to recover it.
00:28:06.412 [2024-11-28 08:26:48.562096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.412 [2024-11-28 08:26:48.562108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.412 qpair failed and we were unable to recover it.
00:28:06.412 [2024-11-28 08:26:48.562319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.412 [2024-11-28 08:26:48.562358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.412 qpair failed and we were unable to recover it.
00:28:06.412 [2024-11-28 08:26:48.562492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.412 [2024-11-28 08:26:48.562525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.412 qpair failed and we were unable to recover it.
00:28:06.412 [2024-11-28 08:26:48.562728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.412 [2024-11-28 08:26:48.562761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.412 qpair failed and we were unable to recover it.
00:28:06.412 [2024-11-28 08:26:48.562888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.412 [2024-11-28 08:26:48.562921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.412 qpair failed and we were unable to recover it.
00:28:06.412 [2024-11-28 08:26:48.563061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.412 [2024-11-28 08:26:48.563094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.412 qpair failed and we were unable to recover it.
00:28:06.412 [2024-11-28 08:26:48.563283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.412 [2024-11-28 08:26:48.563328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.412 qpair failed and we were unable to recover it.
00:28:06.412 [2024-11-28 08:26:48.563387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.412 [2024-11-28 08:26:48.563398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.412 qpair failed and we were unable to recover it.
00:28:06.412 [2024-11-28 08:26:48.563533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.412 [2024-11-28 08:26:48.563545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.412 qpair failed and we were unable to recover it.
00:28:06.412 [2024-11-28 08:26:48.563620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.412 [2024-11-28 08:26:48.563633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.412 qpair failed and we were unable to recover it.
00:28:06.412 [2024-11-28 08:26:48.563735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.412 [2024-11-28 08:26:48.563748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.412 qpair failed and we were unable to recover it.
00:28:06.412 [2024-11-28 08:26:48.563894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.412 [2024-11-28 08:26:48.563905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.412 qpair failed and we were unable to recover it.
00:28:06.412 [2024-11-28 08:26:48.564053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.412 [2024-11-28 08:26:48.564066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.412 qpair failed and we were unable to recover it.
00:28:06.412 [2024-11-28 08:26:48.564159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.412 [2024-11-28 08:26:48.564171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.412 qpair failed and we were unable to recover it.
00:28:06.412 [2024-11-28 08:26:48.564309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.412 [2024-11-28 08:26:48.564321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.412 qpair failed and we were unable to recover it.
00:28:06.412 [2024-11-28 08:26:48.564399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.412 [2024-11-28 08:26:48.564409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.412 qpair failed and we were unable to recover it.
00:28:06.412 [2024-11-28 08:26:48.564617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.412 [2024-11-28 08:26:48.564651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.412 qpair failed and we were unable to recover it.
00:28:06.412 [2024-11-28 08:26:48.564781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.412 [2024-11-28 08:26:48.564814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.412 qpair failed and we were unable to recover it.
00:28:06.412 [2024-11-28 08:26:48.565010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.412 [2024-11-28 08:26:48.565045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.412 qpair failed and we were unable to recover it.
00:28:06.412 [2024-11-28 08:26:48.565155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.412 [2024-11-28 08:26:48.565167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.412 qpair failed and we were unable to recover it.
00:28:06.412 [2024-11-28 08:26:48.565244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.412 [2024-11-28 08:26:48.565256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.412 qpair failed and we were unable to recover it.
00:28:06.412 [2024-11-28 08:26:48.565424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.412 [2024-11-28 08:26:48.565457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.412 qpair failed and we were unable to recover it.
00:28:06.412 [2024-11-28 08:26:48.565630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.412 [2024-11-28 08:26:48.565664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.412 qpair failed and we were unable to recover it.
00:28:06.412 [2024-11-28 08:26:48.565859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.412 [2024-11-28 08:26:48.565896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.412 qpair failed and we were unable to recover it.
00:28:06.412 [2024-11-28 08:26:48.566040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.412 [2024-11-28 08:26:48.566053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.412 qpair failed and we were unable to recover it.
00:28:06.412 [2024-11-28 08:26:48.566155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.412 [2024-11-28 08:26:48.566166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.412 qpair failed and we were unable to recover it.
00:28:06.412 [2024-11-28 08:26:48.566323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.412 [2024-11-28 08:26:48.566335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.412 qpair failed and we were unable to recover it.
00:28:06.412 [2024-11-28 08:26:48.566474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.412 [2024-11-28 08:26:48.566487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.412 qpair failed and we were unable to recover it.
00:28:06.412 [2024-11-28 08:26:48.566645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.412 [2024-11-28 08:26:48.566658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.412 qpair failed and we were unable to recover it.
00:28:06.412 [2024-11-28 08:26:48.566806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.413 [2024-11-28 08:26:48.566818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.413 qpair failed and we were unable to recover it.
00:28:06.413 [2024-11-28 08:26:48.566909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.413 [2024-11-28 08:26:48.566921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.413 qpair failed and we were unable to recover it.
00:28:06.413 [2024-11-28 08:26:48.567066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.413 [2024-11-28 08:26:48.567078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.413 qpair failed and we were unable to recover it.
00:28:06.413 [2024-11-28 08:26:48.567219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.413 [2024-11-28 08:26:48.567231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.413 qpair failed and we were unable to recover it.
00:28:06.413 [2024-11-28 08:26:48.567374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.413 [2024-11-28 08:26:48.567387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.413 qpair failed and we were unable to recover it.
00:28:06.413 [2024-11-28 08:26:48.567515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.413 [2024-11-28 08:26:48.567549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.413 qpair failed and we were unable to recover it.
00:28:06.413 [2024-11-28 08:26:48.567731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.413 [2024-11-28 08:26:48.567764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.413 qpair failed and we were unable to recover it.
00:28:06.413 [2024-11-28 08:26:48.567888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.413 [2024-11-28 08:26:48.567920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.413 qpair failed and we were unable to recover it.
00:28:06.413 [2024-11-28 08:26:48.568057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.413 [2024-11-28 08:26:48.568093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.413 qpair failed and we were unable to recover it.
00:28:06.413 [2024-11-28 08:26:48.568275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.413 [2024-11-28 08:26:48.568287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.413 qpair failed and we were unable to recover it.
00:28:06.413 [2024-11-28 08:26:48.568358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.413 [2024-11-28 08:26:48.568371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.413 qpair failed and we were unable to recover it.
00:28:06.413 [2024-11-28 08:26:48.568454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.413 [2024-11-28 08:26:48.568466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.413 qpair failed and we were unable to recover it.
00:28:06.413 [2024-11-28 08:26:48.568529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.413 [2024-11-28 08:26:48.568543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.413 qpair failed and we were unable to recover it.
00:28:06.413 [2024-11-28 08:26:48.568708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.413 [2024-11-28 08:26:48.568743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.413 qpair failed and we were unable to recover it.
00:28:06.413 [2024-11-28 08:26:48.568924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.413 [2024-11-28 08:26:48.568969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.413 qpair failed and we were unable to recover it.
00:28:06.413 [2024-11-28 08:26:48.569092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.413 [2024-11-28 08:26:48.569126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.413 qpair failed and we were unable to recover it.
00:28:06.413 [2024-11-28 08:26:48.569373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.413 [2024-11-28 08:26:48.569385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.413 qpair failed and we were unable to recover it.
00:28:06.413 [2024-11-28 08:26:48.569487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.413 [2024-11-28 08:26:48.569499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.413 qpair failed and we were unable to recover it.
00:28:06.413 [2024-11-28 08:26:48.569567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.413 [2024-11-28 08:26:48.569579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.413 qpair failed and we were unable to recover it.
00:28:06.413 [2024-11-28 08:26:48.569656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.413 [2024-11-28 08:26:48.569667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.413 qpair failed and we were unable to recover it.
00:28:06.413 [2024-11-28 08:26:48.569743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.413 [2024-11-28 08:26:48.569755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.413 qpair failed and we were unable to recover it.
00:28:06.413 [2024-11-28 08:26:48.569840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.413 [2024-11-28 08:26:48.569853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.413 qpair failed and we were unable to recover it.
00:28:06.413 [2024-11-28 08:26:48.569986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.413 [2024-11-28 08:26:48.569998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.413 qpair failed and we were unable to recover it.
00:28:06.413 [2024-11-28 08:26:48.570131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.413 [2024-11-28 08:26:48.570143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.413 qpair failed and we were unable to recover it.
00:28:06.413 [2024-11-28 08:26:48.570225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.413 [2024-11-28 08:26:48.570236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.413 qpair failed and we were unable to recover it.
00:28:06.413 [2024-11-28 08:26:48.570388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.413 [2024-11-28 08:26:48.570401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.413 qpair failed and we were unable to recover it.
00:28:06.413 [2024-11-28 08:26:48.570472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.413 [2024-11-28 08:26:48.570483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.413 qpair failed and we were unable to recover it.
00:28:06.413 [2024-11-28 08:26:48.570634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.413 [2024-11-28 08:26:48.570646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.414 qpair failed and we were unable to recover it.
00:28:06.414 [2024-11-28 08:26:48.570739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.414 [2024-11-28 08:26:48.570751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.414 qpair failed and we were unable to recover it.
00:28:06.414 [2024-11-28 08:26:48.570875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.414 [2024-11-28 08:26:48.570908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.414 qpair failed and we were unable to recover it.
00:28:06.414 [2024-11-28 08:26:48.571103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.414 [2024-11-28 08:26:48.571136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.414 qpair failed and we were unable to recover it.
00:28:06.414 [2024-11-28 08:26:48.571312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.414 [2024-11-28 08:26:48.571346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.414 qpair failed and we were unable to recover it.
00:28:06.414 [2024-11-28 08:26:48.571467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.414 [2024-11-28 08:26:48.571498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.414 qpair failed and we were unable to recover it.
00:28:06.414 [2024-11-28 08:26:48.571690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.414 [2024-11-28 08:26:48.571724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.414 qpair failed and we were unable to recover it.
00:28:06.414 [2024-11-28 08:26:48.571841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.414 [2024-11-28 08:26:48.571873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.414 qpair failed and we were unable to recover it.
00:28:06.414 [2024-11-28 08:26:48.572058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.414 [2024-11-28 08:26:48.572091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.414 qpair failed and we were unable to recover it.
00:28:06.414 [2024-11-28 08:26:48.572262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.414 [2024-11-28 08:26:48.572294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.414 qpair failed and we were unable to recover it.
00:28:06.414 [2024-11-28 08:26:48.572418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.414 [2024-11-28 08:26:48.572452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.414 qpair failed and we were unable to recover it.
00:28:06.414 [2024-11-28 08:26:48.572630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.414 [2024-11-28 08:26:48.572665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.414 qpair failed and we were unable to recover it.
00:28:06.414 [2024-11-28 08:26:48.572914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.414 [2024-11-28 08:26:48.572956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.414 qpair failed and we were unable to recover it.
00:28:06.414 [2024-11-28 08:26:48.573083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.414 [2024-11-28 08:26:48.573115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.414 qpair failed and we were unable to recover it.
00:28:06.414 [2024-11-28 08:26:48.573237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.414 [2024-11-28 08:26:48.573249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.414 qpair failed and we were unable to recover it.
00:28:06.414 [2024-11-28 08:26:48.573397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.414 [2024-11-28 08:26:48.573409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.414 qpair failed and we were unable to recover it.
00:28:06.414 [2024-11-28 08:26:48.573497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.414 [2024-11-28 08:26:48.573508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.414 qpair failed and we were unable to recover it.
00:28:06.414 [2024-11-28 08:26:48.573604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.414 [2024-11-28 08:26:48.573616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.414 qpair failed and we were unable to recover it.
00:28:06.414 [2024-11-28 08:26:48.573691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.414 [2024-11-28 08:26:48.573702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.414 qpair failed and we were unable to recover it.
00:28:06.414 [2024-11-28 08:26:48.573765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.414 [2024-11-28 08:26:48.573775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.414 qpair failed and we were unable to recover it.
00:28:06.414 [2024-11-28 08:26:48.573856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.414 [2024-11-28 08:26:48.573867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.414 qpair failed and we were unable to recover it.
00:28:06.414 [2024-11-28 08:26:48.573958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.414 [2024-11-28 08:26:48.573970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.414 qpair failed and we were unable to recover it.
00:28:06.414 [2024-11-28 08:26:48.574048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.414 [2024-11-28 08:26:48.574059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.414 qpair failed and we were unable to recover it.
00:28:06.414 [2024-11-28 08:26:48.574303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.414 [2024-11-28 08:26:48.574337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.414 qpair failed and we were unable to recover it.
00:28:06.414 [2024-11-28 08:26:48.574527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.414 [2024-11-28 08:26:48.574561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.414 qpair failed and we were unable to recover it.
00:28:06.414 [2024-11-28 08:26:48.574683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.414 [2024-11-28 08:26:48.574728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.414 qpair failed and we were unable to recover it.
00:28:06.414 [2024-11-28 08:26:48.574856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.414 [2024-11-28 08:26:48.574868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.414 qpair failed and we were unable to recover it.
00:28:06.414 [2024-11-28 08:26:48.574935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.414 [2024-11-28 08:26:48.574945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.414 qpair failed and we were unable to recover it.
00:28:06.414 [2024-11-28 08:26:48.575079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.414 [2024-11-28 08:26:48.575090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.414 qpair failed and we were unable to recover it.
00:28:06.414 [2024-11-28 08:26:48.575247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.414 [2024-11-28 08:26:48.575259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.414 qpair failed and we were unable to recover it.
00:28:06.414 [2024-11-28 08:26:48.575328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.414 [2024-11-28 08:26:48.575339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.414 qpair failed and we were unable to recover it.
00:28:06.414 [2024-11-28 08:26:48.575481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.414 [2024-11-28 08:26:48.575493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.414 qpair failed and we were unable to recover it.
00:28:06.414 [2024-11-28 08:26:48.575655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.414 [2024-11-28 08:26:48.575687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.414 qpair failed and we were unable to recover it.
00:28:06.414 [2024-11-28 08:26:48.575866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.414 [2024-11-28 08:26:48.575897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.414 qpair failed and we were unable to recover it.
00:28:06.414 [2024-11-28 08:26:48.576032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.414 [2024-11-28 08:26:48.576067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.414 qpair failed and we were unable to recover it.
00:28:06.414 [2024-11-28 08:26:48.576276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.414 [2024-11-28 08:26:48.576288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.414 qpair failed and we were unable to recover it.
00:28:06.414 [2024-11-28 08:26:48.576427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.414 [2024-11-28 08:26:48.576439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.414 qpair failed and we were unable to recover it.
00:28:06.414 [2024-11-28 08:26:48.576592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.414 [2024-11-28 08:26:48.576625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.414 qpair failed and we were unable to recover it.
00:28:06.414 [2024-11-28 08:26:48.576732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.415 [2024-11-28 08:26:48.576765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.415 qpair failed and we were unable to recover it.
00:28:06.415 [2024-11-28 08:26:48.576965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.415 [2024-11-28 08:26:48.577000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.415 qpair failed and we were unable to recover it.
00:28:06.415 [2024-11-28 08:26:48.577130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.415 [2024-11-28 08:26:48.577161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.415 qpair failed and we were unable to recover it.
00:28:06.415 [2024-11-28 08:26:48.577279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.415 [2024-11-28 08:26:48.577291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.415 qpair failed and we were unable to recover it.
00:28:06.415 [2024-11-28 08:26:48.577444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.415 [2024-11-28 08:26:48.577457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.415 qpair failed and we were unable to recover it.
00:28:06.415 [2024-11-28 08:26:48.577522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.415 [2024-11-28 08:26:48.577534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.415 qpair failed and we were unable to recover it.
00:28:06.415 [2024-11-28 08:26:48.577606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.415 [2024-11-28 08:26:48.577616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.415 qpair failed and we were unable to recover it.
00:28:06.415 [2024-11-28 08:26:48.577686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.415 [2024-11-28 08:26:48.577696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.415 qpair failed and we were unable to recover it.
00:28:06.415 [2024-11-28 08:26:48.577894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.415 [2024-11-28 08:26:48.577906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.415 qpair failed and we were unable to recover it.
00:28:06.415 [2024-11-28 08:26:48.578036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.415 [2024-11-28 08:26:48.578050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.415 qpair failed and we were unable to recover it.
00:28:06.415 [2024-11-28 08:26:48.578118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.415 [2024-11-28 08:26:48.578129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.415 qpair failed and we were unable to recover it.
00:28:06.415 [2024-11-28 08:26:48.578282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.415 [2024-11-28 08:26:48.578315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.415 qpair failed and we were unable to recover it.
00:28:06.415 [2024-11-28 08:26:48.578520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.415 [2024-11-28 08:26:48.578552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.415 qpair failed and we were unable to recover it.
00:28:06.415 [2024-11-28 08:26:48.578745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.415 [2024-11-28 08:26:48.578776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.415 qpair failed and we were unable to recover it.
00:28:06.415 [2024-11-28 08:26:48.579025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.415 [2024-11-28 08:26:48.579038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.415 qpair failed and we were unable to recover it.
00:28:06.415 [2024-11-28 08:26:48.579176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.415 [2024-11-28 08:26:48.579189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.415 qpair failed and we were unable to recover it.
00:28:06.415 [2024-11-28 08:26:48.579366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.415 [2024-11-28 08:26:48.579378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.415 qpair failed and we were unable to recover it.
00:28:06.415 [2024-11-28 08:26:48.579517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.415 [2024-11-28 08:26:48.579528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.415 qpair failed and we were unable to recover it.
00:28:06.415 [2024-11-28 08:26:48.579610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.415 [2024-11-28 08:26:48.579622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.415 qpair failed and we were unable to recover it. 00:28:06.415 [2024-11-28 08:26:48.579777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.415 [2024-11-28 08:26:48.579788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.415 qpair failed and we were unable to recover it. 00:28:06.415 [2024-11-28 08:26:48.579973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.415 [2024-11-28 08:26:48.580007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.415 qpair failed and we were unable to recover it. 00:28:06.415 [2024-11-28 08:26:48.580186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.415 [2024-11-28 08:26:48.580218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.415 qpair failed and we were unable to recover it. 00:28:06.415 [2024-11-28 08:26:48.580357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.415 [2024-11-28 08:26:48.580390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.415 qpair failed and we were unable to recover it. 
00:28:06.415 [2024-11-28 08:26:48.580514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.415 [2024-11-28 08:26:48.580546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.415 qpair failed and we were unable to recover it. 00:28:06.415 [2024-11-28 08:26:48.580694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.415 [2024-11-28 08:26:48.580727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.415 qpair failed and we were unable to recover it. 00:28:06.415 [2024-11-28 08:26:48.580917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.415 [2024-11-28 08:26:48.580930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.415 qpair failed and we were unable to recover it. 00:28:06.415 [2024-11-28 08:26:48.581084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.415 [2024-11-28 08:26:48.581096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.415 qpair failed and we were unable to recover it. 00:28:06.415 [2024-11-28 08:26:48.581301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.415 [2024-11-28 08:26:48.581315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.415 qpair failed and we were unable to recover it. 
00:28:06.415 [2024-11-28 08:26:48.581536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.415 [2024-11-28 08:26:48.581548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.415 qpair failed and we were unable to recover it. 00:28:06.415 [2024-11-28 08:26:48.581621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.415 [2024-11-28 08:26:48.581632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.415 qpair failed and we were unable to recover it. 00:28:06.415 [2024-11-28 08:26:48.581818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.415 [2024-11-28 08:26:48.581830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.415 qpair failed and we were unable to recover it. 00:28:06.415 [2024-11-28 08:26:48.581989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.415 [2024-11-28 08:26:48.582025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.415 qpair failed and we were unable to recover it. 00:28:06.415 [2024-11-28 08:26:48.582241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.415 [2024-11-28 08:26:48.582276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.415 qpair failed and we were unable to recover it. 
00:28:06.415 [2024-11-28 08:26:48.582489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.415 [2024-11-28 08:26:48.582521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.415 qpair failed and we were unable to recover it. 00:28:06.415 [2024-11-28 08:26:48.582715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.415 [2024-11-28 08:26:48.582749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.415 qpair failed and we were unable to recover it. 00:28:06.415 [2024-11-28 08:26:48.582969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.415 [2024-11-28 08:26:48.583003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.415 qpair failed and we were unable to recover it. 00:28:06.415 [2024-11-28 08:26:48.583132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.415 [2024-11-28 08:26:48.583165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.415 qpair failed and we were unable to recover it. 00:28:06.415 [2024-11-28 08:26:48.583314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.415 [2024-11-28 08:26:48.583325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.416 qpair failed and we were unable to recover it. 
00:28:06.416 [2024-11-28 08:26:48.583506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.416 [2024-11-28 08:26:48.583518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.416 qpair failed and we were unable to recover it. 00:28:06.416 [2024-11-28 08:26:48.583710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.416 [2024-11-28 08:26:48.583745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.416 qpair failed and we were unable to recover it. 00:28:06.416 [2024-11-28 08:26:48.583928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.416 [2024-11-28 08:26:48.583941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.416 qpair failed and we were unable to recover it. 00:28:06.416 [2024-11-28 08:26:48.584137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.416 [2024-11-28 08:26:48.584173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.416 qpair failed and we were unable to recover it. 00:28:06.416 [2024-11-28 08:26:48.584368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.416 [2024-11-28 08:26:48.584400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.416 qpair failed and we were unable to recover it. 
00:28:06.416 [2024-11-28 08:26:48.584587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.416 [2024-11-28 08:26:48.584620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.416 qpair failed and we were unable to recover it. 00:28:06.416 [2024-11-28 08:26:48.584865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.416 [2024-11-28 08:26:48.584897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.416 qpair failed and we were unable to recover it. 00:28:06.416 [2024-11-28 08:26:48.585074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.416 [2024-11-28 08:26:48.585106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.416 qpair failed and we were unable to recover it. 00:28:06.416 [2024-11-28 08:26:48.585393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.416 [2024-11-28 08:26:48.585405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.416 qpair failed and we were unable to recover it. 00:28:06.416 [2024-11-28 08:26:48.585485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.416 [2024-11-28 08:26:48.585496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.416 qpair failed and we were unable to recover it. 
00:28:06.416 [2024-11-28 08:26:48.585664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.416 [2024-11-28 08:26:48.585698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.416 qpair failed and we were unable to recover it. 00:28:06.416 [2024-11-28 08:26:48.585840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.416 [2024-11-28 08:26:48.585872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.416 qpair failed and we were unable to recover it. 00:28:06.416 [2024-11-28 08:26:48.586018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.416 [2024-11-28 08:26:48.586053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.416 qpair failed and we were unable to recover it. 00:28:06.416 [2024-11-28 08:26:48.586201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.416 [2024-11-28 08:26:48.586235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.416 qpair failed and we were unable to recover it. 00:28:06.416 [2024-11-28 08:26:48.586355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.416 [2024-11-28 08:26:48.586368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.416 qpair failed and we were unable to recover it. 
00:28:06.416 [2024-11-28 08:26:48.586451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.416 [2024-11-28 08:26:48.586461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.416 qpair failed and we were unable to recover it. 00:28:06.416 [2024-11-28 08:26:48.586603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.416 [2024-11-28 08:26:48.586616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.416 qpair failed and we were unable to recover it. 00:28:06.416 [2024-11-28 08:26:48.586756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.416 [2024-11-28 08:26:48.586768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.416 qpair failed and we were unable to recover it. 00:28:06.416 [2024-11-28 08:26:48.586862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.416 [2024-11-28 08:26:48.586874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.416 qpair failed and we were unable to recover it. 00:28:06.416 [2024-11-28 08:26:48.586971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.416 [2024-11-28 08:26:48.586984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.416 qpair failed and we were unable to recover it. 
00:28:06.416 [2024-11-28 08:26:48.587124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.416 [2024-11-28 08:26:48.587136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.416 qpair failed and we were unable to recover it. 00:28:06.416 [2024-11-28 08:26:48.587214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.416 [2024-11-28 08:26:48.587225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.416 qpair failed and we were unable to recover it. 00:28:06.416 [2024-11-28 08:26:48.587370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.416 [2024-11-28 08:26:48.587382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.416 qpair failed and we were unable to recover it. 00:28:06.416 [2024-11-28 08:26:48.587542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.416 [2024-11-28 08:26:48.587575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.416 qpair failed and we were unable to recover it. 00:28:06.416 [2024-11-28 08:26:48.587715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.416 [2024-11-28 08:26:48.587748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.416 qpair failed and we were unable to recover it. 
00:28:06.416 [2024-11-28 08:26:48.588033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.416 [2024-11-28 08:26:48.588045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.416 qpair failed and we were unable to recover it. 00:28:06.416 [2024-11-28 08:26:48.588190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.416 [2024-11-28 08:26:48.588202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.416 qpair failed and we were unable to recover it. 00:28:06.416 [2024-11-28 08:26:48.588283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.416 [2024-11-28 08:26:48.588294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.416 qpair failed and we were unable to recover it. 00:28:06.416 [2024-11-28 08:26:48.588457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.416 [2024-11-28 08:26:48.588469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.416 qpair failed and we were unable to recover it. 00:28:06.416 [2024-11-28 08:26:48.588617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.416 [2024-11-28 08:26:48.588632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.416 qpair failed and we were unable to recover it. 
00:28:06.416 [2024-11-28 08:26:48.588781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.416 [2024-11-28 08:26:48.588793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.416 qpair failed and we were unable to recover it. 00:28:06.416 [2024-11-28 08:26:48.588882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.416 [2024-11-28 08:26:48.588894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.416 qpair failed and we were unable to recover it. 00:28:06.416 [2024-11-28 08:26:48.589044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.416 [2024-11-28 08:26:48.589057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.416 qpair failed and we were unable to recover it. 00:28:06.416 [2024-11-28 08:26:48.589266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.416 [2024-11-28 08:26:48.589279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.416 qpair failed and we were unable to recover it. 00:28:06.416 [2024-11-28 08:26:48.589527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.416 [2024-11-28 08:26:48.589539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.416 qpair failed and we were unable to recover it. 
00:28:06.416 [2024-11-28 08:26:48.589682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.416 [2024-11-28 08:26:48.589694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.416 qpair failed and we were unable to recover it. 00:28:06.416 [2024-11-28 08:26:48.589856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.416 [2024-11-28 08:26:48.589869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.416 qpair failed and we were unable to recover it. 00:28:06.416 [2024-11-28 08:26:48.589964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.416 [2024-11-28 08:26:48.589975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.416 qpair failed and we were unable to recover it. 00:28:06.417 [2024-11-28 08:26:48.590123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.417 [2024-11-28 08:26:48.590136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.417 qpair failed and we were unable to recover it. 00:28:06.417 [2024-11-28 08:26:48.590344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.417 [2024-11-28 08:26:48.590356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.417 qpair failed and we were unable to recover it. 
00:28:06.417 [2024-11-28 08:26:48.590448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.417 [2024-11-28 08:26:48.590488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.417 qpair failed and we were unable to recover it. 00:28:06.417 [2024-11-28 08:26:48.590618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.417 [2024-11-28 08:26:48.590650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.417 qpair failed and we were unable to recover it. 00:28:06.417 [2024-11-28 08:26:48.590776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.417 [2024-11-28 08:26:48.590807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.417 qpair failed and we were unable to recover it. 00:28:06.417 [2024-11-28 08:26:48.590994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.417 [2024-11-28 08:26:48.591028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.417 qpair failed and we were unable to recover it. 00:28:06.417 [2024-11-28 08:26:48.591325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.417 [2024-11-28 08:26:48.591356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.417 qpair failed and we were unable to recover it. 
00:28:06.417 [2024-11-28 08:26:48.591554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.417 [2024-11-28 08:26:48.591566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.417 qpair failed and we were unable to recover it. 00:28:06.417 [2024-11-28 08:26:48.591711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.417 [2024-11-28 08:26:48.591723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.417 qpair failed and we were unable to recover it. 00:28:06.417 [2024-11-28 08:26:48.591871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.417 [2024-11-28 08:26:48.591883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.417 qpair failed and we were unable to recover it. 00:28:06.417 [2024-11-28 08:26:48.592026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.417 [2024-11-28 08:26:48.592039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.417 qpair failed and we were unable to recover it. 00:28:06.417 [2024-11-28 08:26:48.592113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.417 [2024-11-28 08:26:48.592146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.417 qpair failed and we were unable to recover it. 
00:28:06.417 [2024-11-28 08:26:48.592447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.417 [2024-11-28 08:26:48.592478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.417 qpair failed and we were unable to recover it. 00:28:06.417 [2024-11-28 08:26:48.592621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.417 [2024-11-28 08:26:48.592654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.417 qpair failed and we were unable to recover it. 00:28:06.417 [2024-11-28 08:26:48.592796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.417 [2024-11-28 08:26:48.592828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.417 qpair failed and we were unable to recover it. 00:28:06.417 [2024-11-28 08:26:48.592938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.417 [2024-11-28 08:26:48.592981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.417 qpair failed and we were unable to recover it. 00:28:06.417 [2024-11-28 08:26:48.593181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.417 [2024-11-28 08:26:48.593214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.417 qpair failed and we were unable to recover it. 
00:28:06.417 [2024-11-28 08:26:48.593403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.417 [2024-11-28 08:26:48.593436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.417 qpair failed and we were unable to recover it.
00:28:06.417 [... the identical connect() failed (errno = 111) / sock connection error / "qpair failed and we were unable to recover it." triplet repeats for every subsequent retry from 08:26:48.593633 through 08:26:48.604339, all on tqpair=0x7f6c34000b90, addr=10.0.0.2, port=4420 ...]
00:28:06.696 [2024-11-28 08:26:48.604527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.696 [2024-11-28 08:26:48.604560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.696 qpair failed and we were unable to recover it.
00:28:06.696 [2024-11-28 08:26:48.604805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.696 [2024-11-28 08:26:48.604878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420
00:28:06.696 qpair failed and we were unable to recover it.
00:28:06.696 [2024-11-28 08:26:48.605025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.696 [2024-11-28 08:26:48.605062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420
00:28:06.696 qpair failed and we were unable to recover it.
00:28:06.696 [2024-11-28 08:26:48.605193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.696 [2024-11-28 08:26:48.605231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420
00:28:06.696 qpair failed and we were unable to recover it.
00:28:06.696 [2024-11-28 08:26:48.605408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.696 [2024-11-28 08:26:48.605422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.696 qpair failed and we were unable to recover it.
00:28:06.696 [2024-11-28 08:26:48.605501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.696 [2024-11-28 08:26:48.605512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.696 qpair failed and we were unable to recover it.
00:28:06.697 [... the identical connect() failed (errno = 111) / sock connection error / "qpair failed and we were unable to recover it." triplet repeats for every subsequent retry from 08:26:48.605712 through 08:26:48.610788, all on tqpair=0x7f6c34000b90, addr=10.0.0.2, port=4420 ...]
00:28:06.697 [2024-11-28 08:26:48.610900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.697 [2024-11-28 08:26:48.610944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.697 qpair failed and we were unable to recover it. 00:28:06.697 [2024-11-28 08:26:48.611030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.697 [2024-11-28 08:26:48.611042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.697 qpair failed and we were unable to recover it. 00:28:06.697 [2024-11-28 08:26:48.611193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.697 [2024-11-28 08:26:48.611204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.697 qpair failed and we were unable to recover it. 00:28:06.697 [2024-11-28 08:26:48.611352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.697 [2024-11-28 08:26:48.611385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.697 qpair failed and we were unable to recover it. 00:28:06.697 [2024-11-28 08:26:48.611569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.697 [2024-11-28 08:26:48.611602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.697 qpair failed and we were unable to recover it. 
00:28:06.697 [2024-11-28 08:26:48.611788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.697 [2024-11-28 08:26:48.611820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.697 qpair failed and we were unable to recover it. 00:28:06.697 [2024-11-28 08:26:48.612020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.697 [2024-11-28 08:26:48.612042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.697 qpair failed and we were unable to recover it. 00:28:06.697 [2024-11-28 08:26:48.612159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.697 [2024-11-28 08:26:48.612192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.697 qpair failed and we were unable to recover it. 00:28:06.697 [2024-11-28 08:26:48.612306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.697 [2024-11-28 08:26:48.612339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.697 qpair failed and we were unable to recover it. 00:28:06.697 [2024-11-28 08:26:48.612480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.697 [2024-11-28 08:26:48.612514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.697 qpair failed and we were unable to recover it. 
00:28:06.697 [2024-11-28 08:26:48.612624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.697 [2024-11-28 08:26:48.612657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.697 qpair failed and we were unable to recover it. 00:28:06.697 [2024-11-28 08:26:48.612847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.697 [2024-11-28 08:26:48.612879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.697 qpair failed and we were unable to recover it. 00:28:06.697 [2024-11-28 08:26:48.613089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.697 [2024-11-28 08:26:48.613123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.697 qpair failed and we were unable to recover it. 00:28:06.697 [2024-11-28 08:26:48.613306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.697 [2024-11-28 08:26:48.613340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.697 qpair failed and we were unable to recover it. 00:28:06.697 [2024-11-28 08:26:48.613468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.697 [2024-11-28 08:26:48.613501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.697 qpair failed and we were unable to recover it. 
00:28:06.697 [2024-11-28 08:26:48.613638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.697 [2024-11-28 08:26:48.613670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.697 qpair failed and we were unable to recover it. 00:28:06.697 [2024-11-28 08:26:48.613804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.697 [2024-11-28 08:26:48.613836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.697 qpair failed and we were unable to recover it. 00:28:06.697 [2024-11-28 08:26:48.614109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.697 [2024-11-28 08:26:48.614128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.697 qpair failed and we were unable to recover it. 00:28:06.697 [2024-11-28 08:26:48.614270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.697 [2024-11-28 08:26:48.614286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.697 qpair failed and we were unable to recover it. 00:28:06.697 [2024-11-28 08:26:48.614404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.697 [2024-11-28 08:26:48.614435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.697 qpair failed and we were unable to recover it. 
00:28:06.697 [2024-11-28 08:26:48.614692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.697 [2024-11-28 08:26:48.614725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.697 qpair failed and we were unable to recover it. 00:28:06.697 [2024-11-28 08:26:48.614907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.697 [2024-11-28 08:26:48.614939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.697 qpair failed and we were unable to recover it. 00:28:06.697 [2024-11-28 08:26:48.615059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.697 [2024-11-28 08:26:48.615075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.697 qpair failed and we were unable to recover it. 00:28:06.697 [2024-11-28 08:26:48.615220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.697 [2024-11-28 08:26:48.615237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.697 qpair failed and we were unable to recover it. 00:28:06.697 [2024-11-28 08:26:48.615397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.697 [2024-11-28 08:26:48.615413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.697 qpair failed and we were unable to recover it. 
00:28:06.697 [2024-11-28 08:26:48.615501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.697 [2024-11-28 08:26:48.615518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.697 qpair failed and we were unable to recover it. 00:28:06.697 [2024-11-28 08:26:48.615695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.697 [2024-11-28 08:26:48.615713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.697 qpair failed and we were unable to recover it. 00:28:06.697 [2024-11-28 08:26:48.615869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.697 [2024-11-28 08:26:48.615902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.697 qpair failed and we were unable to recover it. 00:28:06.697 [2024-11-28 08:26:48.616022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.697 [2024-11-28 08:26:48.616057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.697 qpair failed and we were unable to recover it. 00:28:06.698 [2024-11-28 08:26:48.616277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.698 [2024-11-28 08:26:48.616310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.698 qpair failed and we were unable to recover it. 
00:28:06.698 [2024-11-28 08:26:48.616516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.698 [2024-11-28 08:26:48.616531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.698 qpair failed and we were unable to recover it. 00:28:06.698 [2024-11-28 08:26:48.616614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.698 [2024-11-28 08:26:48.616631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.698 qpair failed and we were unable to recover it. 00:28:06.698 [2024-11-28 08:26:48.616708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.698 [2024-11-28 08:26:48.616725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.698 qpair failed and we were unable to recover it. 00:28:06.698 [2024-11-28 08:26:48.616914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.698 [2024-11-28 08:26:48.616965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.698 qpair failed and we were unable to recover it. 00:28:06.698 [2024-11-28 08:26:48.617096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.698 [2024-11-28 08:26:48.617130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.698 qpair failed and we were unable to recover it. 
00:28:06.698 [2024-11-28 08:26:48.617319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.698 [2024-11-28 08:26:48.617352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.698 qpair failed and we were unable to recover it. 00:28:06.698 [2024-11-28 08:26:48.617487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.698 [2024-11-28 08:26:48.617520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.698 qpair failed and we were unable to recover it. 00:28:06.698 [2024-11-28 08:26:48.617824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.698 [2024-11-28 08:26:48.617868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.698 qpair failed and we were unable to recover it. 00:28:06.698 [2024-11-28 08:26:48.618038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.698 [2024-11-28 08:26:48.618054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.698 qpair failed and we were unable to recover it. 00:28:06.698 [2024-11-28 08:26:48.618207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.698 [2024-11-28 08:26:48.618241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.698 qpair failed and we were unable to recover it. 
00:28:06.698 [2024-11-28 08:26:48.618426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.698 [2024-11-28 08:26:48.618459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.698 qpair failed and we were unable to recover it. 00:28:06.698 [2024-11-28 08:26:48.618714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.698 [2024-11-28 08:26:48.618746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.698 qpair failed and we were unable to recover it. 00:28:06.698 [2024-11-28 08:26:48.618879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.698 [2024-11-28 08:26:48.618912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.698 qpair failed and we were unable to recover it. 00:28:06.698 [2024-11-28 08:26:48.619217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.698 [2024-11-28 08:26:48.619251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.698 qpair failed and we were unable to recover it. 00:28:06.698 [2024-11-28 08:26:48.619367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.698 [2024-11-28 08:26:48.619383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.698 qpair failed and we were unable to recover it. 
00:28:06.698 [2024-11-28 08:26:48.619557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.698 [2024-11-28 08:26:48.619590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.698 qpair failed and we were unable to recover it. 00:28:06.698 [2024-11-28 08:26:48.619795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.698 [2024-11-28 08:26:48.619830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.698 qpair failed and we were unable to recover it. 00:28:06.698 [2024-11-28 08:26:48.619968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.698 [2024-11-28 08:26:48.620001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.698 qpair failed and we were unable to recover it. 00:28:06.698 [2024-11-28 08:26:48.620140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.698 [2024-11-28 08:26:48.620173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.698 qpair failed and we were unable to recover it. 00:28:06.698 [2024-11-28 08:26:48.620352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.698 [2024-11-28 08:26:48.620386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.698 qpair failed and we were unable to recover it. 
00:28:06.698 [2024-11-28 08:26:48.620567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.698 [2024-11-28 08:26:48.620599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.698 qpair failed and we were unable to recover it. 00:28:06.698 [2024-11-28 08:26:48.620844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.698 [2024-11-28 08:26:48.620877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.698 qpair failed and we were unable to recover it. 00:28:06.698 [2024-11-28 08:26:48.621004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.698 [2024-11-28 08:26:48.621020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.698 qpair failed and we were unable to recover it. 00:28:06.698 [2024-11-28 08:26:48.621208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.698 [2024-11-28 08:26:48.621241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.698 qpair failed and we were unable to recover it. 00:28:06.698 [2024-11-28 08:26:48.621368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.698 [2024-11-28 08:26:48.621401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.698 qpair failed and we were unable to recover it. 
00:28:06.698 [2024-11-28 08:26:48.621582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.698 [2024-11-28 08:26:48.621614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.698 qpair failed and we were unable to recover it. 00:28:06.698 [2024-11-28 08:26:48.621735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.698 [2024-11-28 08:26:48.621768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.698 qpair failed and we were unable to recover it. 00:28:06.698 [2024-11-28 08:26:48.621894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.698 [2024-11-28 08:26:48.621929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.698 qpair failed and we were unable to recover it. 00:28:06.698 [2024-11-28 08:26:48.622161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.698 [2024-11-28 08:26:48.622196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.698 qpair failed and we were unable to recover it. 00:28:06.698 [2024-11-28 08:26:48.622332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.698 [2024-11-28 08:26:48.622348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.698 qpair failed and we were unable to recover it. 
00:28:06.698 [2024-11-28 08:26:48.622561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.698 [2024-11-28 08:26:48.622594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.698 qpair failed and we were unable to recover it. 00:28:06.698 [2024-11-28 08:26:48.622730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.699 [2024-11-28 08:26:48.622763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.699 qpair failed and we were unable to recover it. 00:28:06.699 [2024-11-28 08:26:48.622968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.699 [2024-11-28 08:26:48.623002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.699 qpair failed and we were unable to recover it. 00:28:06.699 [2024-11-28 08:26:48.623120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.699 [2024-11-28 08:26:48.623137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.699 qpair failed and we were unable to recover it. 00:28:06.699 [2024-11-28 08:26:48.623225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.699 [2024-11-28 08:26:48.623242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.699 qpair failed and we were unable to recover it. 
00:28:06.699 [2024-11-28 08:26:48.623324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.699 [2024-11-28 08:26:48.623340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.699 qpair failed and we were unable to recover it. 00:28:06.699 [2024-11-28 08:26:48.623408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.699 [2024-11-28 08:26:48.623423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.699 qpair failed and we were unable to recover it. 00:28:06.699 [2024-11-28 08:26:48.623604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.699 [2024-11-28 08:26:48.623621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.699 qpair failed and we were unable to recover it. 00:28:06.699 [2024-11-28 08:26:48.623704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.699 [2024-11-28 08:26:48.623719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.699 qpair failed and we were unable to recover it. 00:28:06.699 [2024-11-28 08:26:48.623870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.699 [2024-11-28 08:26:48.623886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.699 qpair failed and we were unable to recover it. 
00:28:06.699 [2024-11-28 08:26:48.624097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.699 [2024-11-28 08:26:48.624114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.699 qpair failed and we were unable to recover it. 00:28:06.699 [2024-11-28 08:26:48.624227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.699 [2024-11-28 08:26:48.624259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.699 qpair failed and we were unable to recover it. 00:28:06.699 [2024-11-28 08:26:48.624510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.699 [2024-11-28 08:26:48.624543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.699 qpair failed and we were unable to recover it. 00:28:06.699 [2024-11-28 08:26:48.624681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.699 [2024-11-28 08:26:48.624713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.699 qpair failed and we were unable to recover it. 00:28:06.699 [2024-11-28 08:26:48.624895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.699 [2024-11-28 08:26:48.624931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420 00:28:06.699 qpair failed and we were unable to recover it. 
00:28:06.699 [2024-11-28 08:26:48.625131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.699 [2024-11-28 08:26:48.625168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.699 qpair failed and we were unable to recover it.
00:28:06.702 [... the same posix_sock_create/nvme_tcp_qpair_connect_sock error triple repeats continuously from 08:26:48.625362 through 08:26:48.646648, almost always for tqpair=0x7f6c34000b90, with brief runs for tqpair=0x7f6c30000b90 and tqpair=0x991be0 (same addr=10.0.0.2, port=4420, errno = 111); every attempt ends with "qpair failed and we were unable to recover it." ...]
00:28:06.702 [2024-11-28 08:26:48.646770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.702 [2024-11-28 08:26:48.646804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.702 qpair failed and we were unable to recover it. 00:28:06.702 [2024-11-28 08:26:48.647017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.702 [2024-11-28 08:26:48.647051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.702 qpair failed and we were unable to recover it. 00:28:06.702 [2024-11-28 08:26:48.648385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.702 [2024-11-28 08:26:48.648409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.702 qpair failed and we were unable to recover it. 00:28:06.702 [2024-11-28 08:26:48.648504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.702 [2024-11-28 08:26:48.648516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.702 qpair failed and we were unable to recover it. 00:28:06.702 [2024-11-28 08:26:48.648678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.702 [2024-11-28 08:26:48.648693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.702 qpair failed and we were unable to recover it. 
00:28:06.702 [2024-11-28 08:26:48.648795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.702 [2024-11-28 08:26:48.648805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.702 qpair failed and we were unable to recover it. 00:28:06.702 [2024-11-28 08:26:48.648885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.702 [2024-11-28 08:26:48.648896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.702 qpair failed and we were unable to recover it. 00:28:06.702 [2024-11-28 08:26:48.649117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.702 [2024-11-28 08:26:48.649130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.702 qpair failed and we were unable to recover it. 00:28:06.702 [2024-11-28 08:26:48.649321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.702 [2024-11-28 08:26:48.649352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.702 qpair failed and we were unable to recover it. 00:28:06.702 [2024-11-28 08:26:48.649544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.702 [2024-11-28 08:26:48.649576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.702 qpair failed and we were unable to recover it. 
00:28:06.702 [2024-11-28 08:26:48.649728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.702 [2024-11-28 08:26:48.649761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.702 qpair failed and we were unable to recover it. 00:28:06.702 [2024-11-28 08:26:48.650035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.702 [2024-11-28 08:26:48.650069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.702 qpair failed and we were unable to recover it. 00:28:06.702 [2024-11-28 08:26:48.650202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.702 [2024-11-28 08:26:48.650235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.702 qpair failed and we were unable to recover it. 00:28:06.702 [2024-11-28 08:26:48.650367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.702 [2024-11-28 08:26:48.650377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.702 qpair failed and we were unable to recover it. 00:28:06.702 [2024-11-28 08:26:48.650471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.702 [2024-11-28 08:26:48.650482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.702 qpair failed and we were unable to recover it. 
00:28:06.702 [2024-11-28 08:26:48.650562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.702 [2024-11-28 08:26:48.650594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.702 qpair failed and we were unable to recover it. 00:28:06.702 [2024-11-28 08:26:48.650841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.702 [2024-11-28 08:26:48.650872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.702 qpair failed and we were unable to recover it. 00:28:06.702 [2024-11-28 08:26:48.651010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.702 [2024-11-28 08:26:48.651046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.702 qpair failed and we were unable to recover it. 00:28:06.702 [2024-11-28 08:26:48.651268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.702 [2024-11-28 08:26:48.651281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.702 qpair failed and we were unable to recover it. 00:28:06.702 [2024-11-28 08:26:48.651511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.702 [2024-11-28 08:26:48.651523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.702 qpair failed and we were unable to recover it. 
00:28:06.702 [2024-11-28 08:26:48.651592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.702 [2024-11-28 08:26:48.651604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.702 qpair failed and we were unable to recover it. 00:28:06.702 [2024-11-28 08:26:48.651673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.702 [2024-11-28 08:26:48.651684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.702 qpair failed and we were unable to recover it. 00:28:06.702 [2024-11-28 08:26:48.651846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.702 [2024-11-28 08:26:48.651876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.702 qpair failed and we were unable to recover it. 00:28:06.702 [2024-11-28 08:26:48.652031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.703 [2024-11-28 08:26:48.652066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.703 qpair failed and we were unable to recover it. 00:28:06.703 [2024-11-28 08:26:48.652331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.703 [2024-11-28 08:26:48.652364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.703 qpair failed and we were unable to recover it. 
00:28:06.703 [2024-11-28 08:26:48.652623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.703 [2024-11-28 08:26:48.652635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.703 qpair failed and we were unable to recover it. 00:28:06.703 [2024-11-28 08:26:48.652714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.703 [2024-11-28 08:26:48.652725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.703 qpair failed and we were unable to recover it. 00:28:06.703 [2024-11-28 08:26:48.652797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.703 [2024-11-28 08:26:48.652808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.703 qpair failed and we were unable to recover it. 00:28:06.703 [2024-11-28 08:26:48.652987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.703 [2024-11-28 08:26:48.653021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.703 qpair failed and we were unable to recover it. 00:28:06.703 [2024-11-28 08:26:48.653153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.703 [2024-11-28 08:26:48.653186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.703 qpair failed and we were unable to recover it. 
00:28:06.703 [2024-11-28 08:26:48.653409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.703 [2024-11-28 08:26:48.653442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.703 qpair failed and we were unable to recover it. 00:28:06.703 [2024-11-28 08:26:48.653643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.703 [2024-11-28 08:26:48.653677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.703 qpair failed and we were unable to recover it. 00:28:06.703 [2024-11-28 08:26:48.653792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.703 [2024-11-28 08:26:48.653825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.703 qpair failed and we were unable to recover it. 00:28:06.703 [2024-11-28 08:26:48.653958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.703 [2024-11-28 08:26:48.653992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.703 qpair failed and we were unable to recover it. 00:28:06.703 [2024-11-28 08:26:48.654185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.703 [2024-11-28 08:26:48.654220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.703 qpair failed and we were unable to recover it. 
00:28:06.703 [2024-11-28 08:26:48.654421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.703 [2024-11-28 08:26:48.654433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.703 qpair failed and we were unable to recover it. 00:28:06.703 [2024-11-28 08:26:48.654572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.703 [2024-11-28 08:26:48.654584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.703 qpair failed and we were unable to recover it. 00:28:06.703 [2024-11-28 08:26:48.654804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.703 [2024-11-28 08:26:48.654815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.703 qpair failed and we were unable to recover it. 00:28:06.703 [2024-11-28 08:26:48.655016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.703 [2024-11-28 08:26:48.655029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.703 qpair failed and we were unable to recover it. 00:28:06.703 [2024-11-28 08:26:48.655108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.703 [2024-11-28 08:26:48.655119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.703 qpair failed and we were unable to recover it. 
00:28:06.703 [2024-11-28 08:26:48.655202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.703 [2024-11-28 08:26:48.655213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.703 qpair failed and we were unable to recover it. 00:28:06.703 [2024-11-28 08:26:48.655293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.703 [2024-11-28 08:26:48.655305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.703 qpair failed and we were unable to recover it. 00:28:06.703 [2024-11-28 08:26:48.656494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.703 [2024-11-28 08:26:48.656516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.703 qpair failed and we were unable to recover it. 00:28:06.703 [2024-11-28 08:26:48.656776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.703 [2024-11-28 08:26:48.656812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.703 qpair failed and we were unable to recover it. 00:28:06.703 [2024-11-28 08:26:48.657027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.703 [2024-11-28 08:26:48.657070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.703 qpair failed and we were unable to recover it. 
00:28:06.703 [2024-11-28 08:26:48.657220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.703 [2024-11-28 08:26:48.657252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.703 qpair failed and we were unable to recover it. 00:28:06.703 [2024-11-28 08:26:48.657438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.703 [2024-11-28 08:26:48.657450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.703 qpair failed and we were unable to recover it. 00:28:06.703 [2024-11-28 08:26:48.657582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.703 [2024-11-28 08:26:48.657594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.703 qpair failed and we were unable to recover it. 00:28:06.703 [2024-11-28 08:26:48.657768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.703 [2024-11-28 08:26:48.657780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.703 qpair failed and we were unable to recover it. 00:28:06.703 [2024-11-28 08:26:48.657967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.703 [2024-11-28 08:26:48.658002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.703 qpair failed and we were unable to recover it. 
00:28:06.703 [2024-11-28 08:26:48.658137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.703 [2024-11-28 08:26:48.658172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.703 qpair failed and we were unable to recover it. 00:28:06.703 [2024-11-28 08:26:48.658299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.703 [2024-11-28 08:26:48.658332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.703 qpair failed and we were unable to recover it. 00:28:06.703 [2024-11-28 08:26:48.658471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.703 [2024-11-28 08:26:48.658483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.703 qpair failed and we were unable to recover it. 00:28:06.703 [2024-11-28 08:26:48.658636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.703 [2024-11-28 08:26:48.658649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.703 qpair failed and we were unable to recover it. 00:28:06.703 [2024-11-28 08:26:48.658806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.703 [2024-11-28 08:26:48.658819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.703 qpair failed and we were unable to recover it. 
00:28:06.703 [2024-11-28 08:26:48.658906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.703 [2024-11-28 08:26:48.658916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.703 qpair failed and we were unable to recover it. 00:28:06.703 [2024-11-28 08:26:48.659222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.704 [2024-11-28 08:26:48.659256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.704 qpair failed and we were unable to recover it. 00:28:06.704 [2024-11-28 08:26:48.659384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.704 [2024-11-28 08:26:48.659417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.704 qpair failed and we were unable to recover it. 00:28:06.704 [2024-11-28 08:26:48.659618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.704 [2024-11-28 08:26:48.659651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.704 qpair failed and we were unable to recover it. 00:28:06.704 [2024-11-28 08:26:48.659854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.704 [2024-11-28 08:26:48.659888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.704 qpair failed and we were unable to recover it. 
00:28:06.704 [2024-11-28 08:26:48.660038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.704 [2024-11-28 08:26:48.660072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.704 qpair failed and we were unable to recover it. 00:28:06.704 [2024-11-28 08:26:48.660261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.704 [2024-11-28 08:26:48.660295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.704 qpair failed and we were unable to recover it. 00:28:06.704 [2024-11-28 08:26:48.660388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.704 [2024-11-28 08:26:48.660398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.704 qpair failed and we were unable to recover it. 00:28:06.704 [2024-11-28 08:26:48.660486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.704 [2024-11-28 08:26:48.660497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.704 qpair failed and we were unable to recover it. 00:28:06.704 [2024-11-28 08:26:48.660554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.704 [2024-11-28 08:26:48.660566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.704 qpair failed and we were unable to recover it. 
00:28:06.704 [2024-11-28 08:26:48.660649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.704 [2024-11-28 08:26:48.660662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.704 qpair failed and we were unable to recover it. 00:28:06.704 [2024-11-28 08:26:48.660811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.704 [2024-11-28 08:26:48.660844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.704 qpair failed and we were unable to recover it. 00:28:06.704 [2024-11-28 08:26:48.660973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.704 [2024-11-28 08:26:48.661013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.704 qpair failed and we were unable to recover it. 00:28:06.704 [2024-11-28 08:26:48.661103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.704 [2024-11-28 08:26:48.661115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.704 qpair failed and we were unable to recover it. 00:28:06.704 [2024-11-28 08:26:48.661196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.704 [2024-11-28 08:26:48.661207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.704 qpair failed and we were unable to recover it. 
00:28:06.704 [2024-11-28 08:26:48.661349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.704 [2024-11-28 08:26:48.661360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.704 qpair failed and we were unable to recover it. 00:28:06.704 [2024-11-28 08:26:48.661444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.704 [2024-11-28 08:26:48.661456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.704 qpair failed and we were unable to recover it. 00:28:06.704 [2024-11-28 08:26:48.661537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.704 [2024-11-28 08:26:48.661548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.704 qpair failed and we were unable to recover it. 00:28:06.704 [2024-11-28 08:26:48.661632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.704 [2024-11-28 08:26:48.661643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.704 qpair failed and we were unable to recover it. 00:28:06.704 [2024-11-28 08:26:48.661835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.704 [2024-11-28 08:26:48.661868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.704 qpair failed and we were unable to recover it. 
00:28:06.704 [2024-11-28 08:26:48.662055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.704 [2024-11-28 08:26:48.662090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.704 qpair failed and we were unable to recover it.
[... the same three-line record (connect() failed, errno = 111 / sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it) repeats continuously through [2024-11-28 08:26:48.676506]; repeats elided ...]
00:28:06.707 [2024-11-28 08:26:48.676601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.707 [2024-11-28 08:26:48.676612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.707 qpair failed and we were unable to recover it. 00:28:06.707 [2024-11-28 08:26:48.676712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.707 [2024-11-28 08:26:48.676723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.707 qpair failed and we were unable to recover it. 00:28:06.707 [2024-11-28 08:26:48.676791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.707 [2024-11-28 08:26:48.676802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.707 qpair failed and we were unable to recover it. 00:28:06.707 [2024-11-28 08:26:48.676870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.707 [2024-11-28 08:26:48.676887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.707 qpair failed and we were unable to recover it. 00:28:06.707 [2024-11-28 08:26:48.677019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.707 [2024-11-28 08:26:48.677032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.707 qpair failed and we were unable to recover it. 
00:28:06.707 [2024-11-28 08:26:48.677163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.707 [2024-11-28 08:26:48.677174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.707 qpair failed and we were unable to recover it. 00:28:06.707 [2024-11-28 08:26:48.677236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.707 [2024-11-28 08:26:48.677246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.707 qpair failed and we were unable to recover it. 00:28:06.707 [2024-11-28 08:26:48.677377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.707 [2024-11-28 08:26:48.677389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.707 qpair failed and we were unable to recover it. 00:28:06.707 [2024-11-28 08:26:48.677471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.707 [2024-11-28 08:26:48.677484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.707 qpair failed and we were unable to recover it. 00:28:06.707 [2024-11-28 08:26:48.677550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.707 [2024-11-28 08:26:48.677561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.707 qpair failed and we were unable to recover it. 
00:28:06.707 [2024-11-28 08:26:48.677660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.707 [2024-11-28 08:26:48.677673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.707 qpair failed and we were unable to recover it. 00:28:06.707 [2024-11-28 08:26:48.677815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.707 [2024-11-28 08:26:48.677827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.707 qpair failed and we were unable to recover it. 00:28:06.707 [2024-11-28 08:26:48.677907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.707 [2024-11-28 08:26:48.677918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.707 qpair failed and we were unable to recover it. 00:28:06.707 [2024-11-28 08:26:48.677998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.707 [2024-11-28 08:26:48.678010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.707 qpair failed and we were unable to recover it. 00:28:06.707 [2024-11-28 08:26:48.678080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.707 [2024-11-28 08:26:48.678093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.707 qpair failed and we were unable to recover it. 
00:28:06.707 [2024-11-28 08:26:48.678231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.708 [2024-11-28 08:26:48.678243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.708 qpair failed and we were unable to recover it. 00:28:06.708 [2024-11-28 08:26:48.678314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.708 [2024-11-28 08:26:48.678325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.708 qpair failed and we were unable to recover it. 00:28:06.708 [2024-11-28 08:26:48.678398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.708 [2024-11-28 08:26:48.678410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.708 qpair failed and we were unable to recover it. 00:28:06.708 [2024-11-28 08:26:48.678608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.708 [2024-11-28 08:26:48.678619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.708 qpair failed and we were unable to recover it. 00:28:06.708 [2024-11-28 08:26:48.678685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.708 [2024-11-28 08:26:48.678697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.708 qpair failed and we were unable to recover it. 
00:28:06.708 [2024-11-28 08:26:48.678766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.708 [2024-11-28 08:26:48.678778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.708 qpair failed and we were unable to recover it. 00:28:06.708 [2024-11-28 08:26:48.678867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.708 [2024-11-28 08:26:48.678879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.708 qpair failed and we were unable to recover it. 00:28:06.708 [2024-11-28 08:26:48.679027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.708 [2024-11-28 08:26:48.679040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.708 qpair failed and we were unable to recover it. 00:28:06.708 [2024-11-28 08:26:48.679245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.708 [2024-11-28 08:26:48.679257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.708 qpair failed and we were unable to recover it. 00:28:06.708 [2024-11-28 08:26:48.679401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.708 [2024-11-28 08:26:48.679413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.708 qpair failed and we were unable to recover it. 
00:28:06.708 [2024-11-28 08:26:48.679568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.708 [2024-11-28 08:26:48.679580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.708 qpair failed and we were unable to recover it. 00:28:06.708 [2024-11-28 08:26:48.679725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.708 [2024-11-28 08:26:48.679738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.708 qpair failed and we were unable to recover it. 00:28:06.708 [2024-11-28 08:26:48.679823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.708 [2024-11-28 08:26:48.679834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.708 qpair failed and we were unable to recover it. 00:28:06.708 [2024-11-28 08:26:48.679984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.708 [2024-11-28 08:26:48.679997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.708 qpair failed and we were unable to recover it. 00:28:06.708 [2024-11-28 08:26:48.680066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.708 [2024-11-28 08:26:48.680077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.708 qpair failed and we were unable to recover it. 
00:28:06.708 [2024-11-28 08:26:48.680162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.708 [2024-11-28 08:26:48.680174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.708 qpair failed and we were unable to recover it. 00:28:06.708 [2024-11-28 08:26:48.680243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.708 [2024-11-28 08:26:48.680256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.708 qpair failed and we were unable to recover it. 00:28:06.708 [2024-11-28 08:26:48.680397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.708 [2024-11-28 08:26:48.680409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.708 qpair failed and we were unable to recover it. 00:28:06.708 [2024-11-28 08:26:48.680555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.708 [2024-11-28 08:26:48.680568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.708 qpair failed and we were unable to recover it. 00:28:06.708 [2024-11-28 08:26:48.680646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.708 [2024-11-28 08:26:48.680658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.708 qpair failed and we were unable to recover it. 
00:28:06.708 [2024-11-28 08:26:48.680753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.708 [2024-11-28 08:26:48.680766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.708 qpair failed and we were unable to recover it. 00:28:06.708 [2024-11-28 08:26:48.680934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.708 [2024-11-28 08:26:48.680952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.708 qpair failed and we were unable to recover it. 00:28:06.708 [2024-11-28 08:26:48.681027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.708 [2024-11-28 08:26:48.681040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.708 qpair failed and we were unable to recover it. 00:28:06.708 [2024-11-28 08:26:48.681121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.708 [2024-11-28 08:26:48.681132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.708 qpair failed and we were unable to recover it. 00:28:06.708 [2024-11-28 08:26:48.681209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.708 [2024-11-28 08:26:48.681220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.708 qpair failed and we were unable to recover it. 
00:28:06.708 [2024-11-28 08:26:48.681368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.708 [2024-11-28 08:26:48.681381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.708 qpair failed and we were unable to recover it. 00:28:06.708 [2024-11-28 08:26:48.681622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.708 [2024-11-28 08:26:48.681657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:06.708 qpair failed and we were unable to recover it. 00:28:06.708 [2024-11-28 08:26:48.681942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.708 [2024-11-28 08:26:48.681986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420 00:28:06.708 qpair failed and we were unable to recover it. 00:28:06.708 [2024-11-28 08:26:48.682164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.708 [2024-11-28 08:26:48.682199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.708 qpair failed and we were unable to recover it. 00:28:06.708 [2024-11-28 08:26:48.682280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.708 [2024-11-28 08:26:48.682294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.708 qpair failed and we were unable to recover it. 
00:28:06.708 [2024-11-28 08:26:48.682615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.708 [2024-11-28 08:26:48.682627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.708 qpair failed and we were unable to recover it. 00:28:06.708 [2024-11-28 08:26:48.682773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.708 [2024-11-28 08:26:48.682785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.708 qpair failed and we were unable to recover it. 00:28:06.708 [2024-11-28 08:26:48.682916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.708 [2024-11-28 08:26:48.682927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.708 qpair failed and we were unable to recover it. 00:28:06.708 [2024-11-28 08:26:48.682997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.708 [2024-11-28 08:26:48.683009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.708 qpair failed and we were unable to recover it. 00:28:06.708 [2024-11-28 08:26:48.683171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.708 [2024-11-28 08:26:48.683183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.708 qpair failed and we were unable to recover it. 
00:28:06.708 [2024-11-28 08:26:48.683334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.708 [2024-11-28 08:26:48.683345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.708 qpair failed and we were unable to recover it. 00:28:06.708 [2024-11-28 08:26:48.683411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.708 [2024-11-28 08:26:48.683422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.708 qpair failed and we were unable to recover it. 00:28:06.708 [2024-11-28 08:26:48.683490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.708 [2024-11-28 08:26:48.683502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.708 qpair failed and we were unable to recover it. 00:28:06.709 [2024-11-28 08:26:48.683578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.709 [2024-11-28 08:26:48.683590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.709 qpair failed and we were unable to recover it. 00:28:06.709 [2024-11-28 08:26:48.683662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.709 [2024-11-28 08:26:48.683675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.709 qpair failed and we were unable to recover it. 
00:28:06.709 [2024-11-28 08:26:48.683748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.709 [2024-11-28 08:26:48.683760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.709 qpair failed and we were unable to recover it. 00:28:06.709 [2024-11-28 08:26:48.683835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.709 [2024-11-28 08:26:48.683847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.709 qpair failed and we were unable to recover it. 00:28:06.709 [2024-11-28 08:26:48.684011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.709 [2024-11-28 08:26:48.684024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.709 qpair failed and we were unable to recover it. 00:28:06.709 [2024-11-28 08:26:48.684232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.709 [2024-11-28 08:26:48.684243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.709 qpair failed and we were unable to recover it. 00:28:06.709 [2024-11-28 08:26:48.684385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.709 [2024-11-28 08:26:48.684398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.709 qpair failed and we were unable to recover it. 
00:28:06.709 [2024-11-28 08:26:48.684597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.709 [2024-11-28 08:26:48.684610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.709 qpair failed and we were unable to recover it. 00:28:06.709 [2024-11-28 08:26:48.684753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.709 [2024-11-28 08:26:48.684765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.709 qpair failed and we were unable to recover it. 00:28:06.709 [2024-11-28 08:26:48.684899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.709 [2024-11-28 08:26:48.684911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.709 qpair failed and we were unable to recover it. 00:28:06.709 [2024-11-28 08:26:48.684973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.709 [2024-11-28 08:26:48.684985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.709 qpair failed and we were unable to recover it. 00:28:06.709 [2024-11-28 08:26:48.685080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.709 [2024-11-28 08:26:48.685092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.709 qpair failed and we were unable to recover it. 
00:28:06.709 [2024-11-28 08:26:48.685273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.709 [2024-11-28 08:26:48.685285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.709 qpair failed and we were unable to recover it. 00:28:06.709 [2024-11-28 08:26:48.685461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.709 [2024-11-28 08:26:48.685473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.709 qpair failed and we were unable to recover it. 00:28:06.709 [2024-11-28 08:26:48.685638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.709 [2024-11-28 08:26:48.685650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.709 qpair failed and we were unable to recover it. 00:28:06.709 [2024-11-28 08:26:48.685718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.709 [2024-11-28 08:26:48.685729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.709 qpair failed and we were unable to recover it. 00:28:06.709 [2024-11-28 08:26:48.685799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.709 [2024-11-28 08:26:48.685810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.709 qpair failed and we were unable to recover it. 
00:28:06.709 [2024-11-28 08:26:48.685897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.709 [2024-11-28 08:26:48.685910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.709 qpair failed and we were unable to recover it. 00:28:06.709 [2024-11-28 08:26:48.686138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.709 [2024-11-28 08:26:48.686150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.709 qpair failed and we were unable to recover it. 00:28:06.709 [2024-11-28 08:26:48.686235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.709 [2024-11-28 08:26:48.686248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.709 qpair failed and we were unable to recover it. 00:28:06.709 [2024-11-28 08:26:48.686316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.709 [2024-11-28 08:26:48.686329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.709 qpair failed and we were unable to recover it. 00:28:06.709 [2024-11-28 08:26:48.686461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.709 [2024-11-28 08:26:48.686473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.709 qpair failed and we were unable to recover it. 
00:28:06.709 [2024-11-28 08:26:48.686558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.709 [2024-11-28 08:26:48.686570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.709 qpair failed and we were unable to recover it.
00:28:06.709 [2024-11-28 08:26:48.686651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.709 [2024-11-28 08:26:48.686663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.709 qpair failed and we were unable to recover it.
00:28:06.709 [2024-11-28 08:26:48.686738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.709 [2024-11-28 08:26:48.686751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.709 qpair failed and we were unable to recover it.
00:28:06.709 [2024-11-28 08:26:48.686832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.709 [2024-11-28 08:26:48.686845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.709 qpair failed and we were unable to recover it.
00:28:06.709 [2024-11-28 08:26:48.686915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.709 [2024-11-28 08:26:48.686926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.709 qpair failed and we were unable to recover it.
00:28:06.709 [2024-11-28 08:26:48.687011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.709 [2024-11-28 08:26:48.687024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.709 qpair failed and we were unable to recover it.
00:28:06.709 [2024-11-28 08:26:48.687129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.709 [2024-11-28 08:26:48.687148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420
00:28:06.709 qpair failed and we were unable to recover it.
00:28:06.709 [2024-11-28 08:26:48.687297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.709 [2024-11-28 08:26:48.687314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420
00:28:06.709 qpair failed and we were unable to recover it.
00:28:06.709 [2024-11-28 08:26:48.687390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.709 [2024-11-28 08:26:48.687406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420
00:28:06.709 qpair failed and we were unable to recover it.
00:28:06.709 [2024-11-28 08:26:48.687486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.709 [2024-11-28 08:26:48.687502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420
00:28:06.709 qpair failed and we were unable to recover it.
00:28:06.709 [2024-11-28 08:26:48.687599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.709 [2024-11-28 08:26:48.687615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420
00:28:06.709 qpair failed and we were unable to recover it.
00:28:06.709 [2024-11-28 08:26:48.687705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.709 [2024-11-28 08:26:48.687721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420
00:28:06.709 qpair failed and we were unable to recover it.
00:28:06.709 [2024-11-28 08:26:48.687797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.709 [2024-11-28 08:26:48.687812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420
00:28:06.709 qpair failed and we were unable to recover it.
00:28:06.709 [2024-11-28 08:26:48.687969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.709 [2024-11-28 08:26:48.687986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420
00:28:06.709 qpair failed and we were unable to recover it.
00:28:06.709 [2024-11-28 08:26:48.688062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.709 [2024-11-28 08:26:48.688078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420
00:28:06.709 qpair failed and we were unable to recover it.
00:28:06.709 [2024-11-28 08:26:48.688152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.709 [2024-11-28 08:26:48.688167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420
00:28:06.710 qpair failed and we were unable to recover it.
00:28:06.710 [2024-11-28 08:26:48.688240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.710 [2024-11-28 08:26:48.688256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420
00:28:06.710 qpair failed and we were unable to recover it.
00:28:06.710 [2024-11-28 08:26:48.688353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.710 [2024-11-28 08:26:48.688369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420
00:28:06.710 qpair failed and we were unable to recover it.
00:28:06.710 [2024-11-28 08:26:48.688576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.710 [2024-11-28 08:26:48.688590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.710 qpair failed and we were unable to recover it.
00:28:06.710 [2024-11-28 08:26:48.688684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.710 [2024-11-28 08:26:48.688698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.710 qpair failed and we were unable to recover it.
00:28:06.710 [2024-11-28 08:26:48.688787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.710 [2024-11-28 08:26:48.688800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.710 qpair failed and we were unable to recover it.
00:28:06.710 [2024-11-28 08:26:48.688870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.710 [2024-11-28 08:26:48.688880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.710 qpair failed and we were unable to recover it.
00:28:06.710 [2024-11-28 08:26:48.689009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.710 [2024-11-28 08:26:48.689022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.710 qpair failed and we were unable to recover it.
00:28:06.710 [2024-11-28 08:26:48.689104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.710 [2024-11-28 08:26:48.689117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.710 qpair failed and we were unable to recover it.
00:28:06.710 [2024-11-28 08:26:48.689195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.710 [2024-11-28 08:26:48.689207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.710 qpair failed and we were unable to recover it.
00:28:06.710 [2024-11-28 08:26:48.689299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.710 [2024-11-28 08:26:48.689311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.710 qpair failed and we were unable to recover it.
00:28:06.710 [2024-11-28 08:26:48.689379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.710 [2024-11-28 08:26:48.689392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.710 qpair failed and we were unable to recover it.
00:28:06.710 [2024-11-28 08:26:48.689461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.710 [2024-11-28 08:26:48.689472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.710 qpair failed and we were unable to recover it.
00:28:06.710 [2024-11-28 08:26:48.689635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.710 [2024-11-28 08:26:48.689647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.710 qpair failed and we were unable to recover it.
00:28:06.710 [2024-11-28 08:26:48.689782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.710 [2024-11-28 08:26:48.689794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.710 qpair failed and we were unable to recover it.
00:28:06.710 [2024-11-28 08:26:48.689896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.710 [2024-11-28 08:26:48.689909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.710 qpair failed and we were unable to recover it.
00:28:06.710 [2024-11-28 08:26:48.690050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.710 [2024-11-28 08:26:48.690064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.710 qpair failed and we were unable to recover it.
00:28:06.710 [2024-11-28 08:26:48.690130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.710 [2024-11-28 08:26:48.690143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.710 qpair failed and we were unable to recover it.
00:28:06.710 [2024-11-28 08:26:48.690304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.710 [2024-11-28 08:26:48.690316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.710 qpair failed and we were unable to recover it.
00:28:06.710 [2024-11-28 08:26:48.690402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.710 [2024-11-28 08:26:48.690414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.710 qpair failed and we were unable to recover it.
00:28:06.710 [2024-11-28 08:26:48.690556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.710 [2024-11-28 08:26:48.690568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.710 qpair failed and we were unable to recover it.
00:28:06.710 [2024-11-28 08:26:48.690698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.710 [2024-11-28 08:26:48.690710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.710 qpair failed and we were unable to recover it.
00:28:06.710 [2024-11-28 08:26:48.690884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.710 [2024-11-28 08:26:48.690895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.710 qpair failed and we were unable to recover it.
00:28:06.710 [2024-11-28 08:26:48.690992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.710 [2024-11-28 08:26:48.691005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.710 qpair failed and we were unable to recover it.
00:28:06.710 [2024-11-28 08:26:48.691083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.710 [2024-11-28 08:26:48.691097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.710 qpair failed and we were unable to recover it.
00:28:06.710 [2024-11-28 08:26:48.691174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.710 [2024-11-28 08:26:48.691186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.710 qpair failed and we were unable to recover it.
00:28:06.710 [2024-11-28 08:26:48.691298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.710 [2024-11-28 08:26:48.691311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.710 qpair failed and we were unable to recover it.
00:28:06.710 [2024-11-28 08:26:48.691384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.710 [2024-11-28 08:26:48.691395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.710 qpair failed and we were unable to recover it.
00:28:06.710 [2024-11-28 08:26:48.691484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.710 [2024-11-28 08:26:48.691497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.710 qpair failed and we were unable to recover it.
00:28:06.710 [2024-11-28 08:26:48.691575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.710 [2024-11-28 08:26:48.691587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.710 qpair failed and we were unable to recover it.
00:28:06.710 [2024-11-28 08:26:48.691674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.710 [2024-11-28 08:26:48.691686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.710 qpair failed and we were unable to recover it.
00:28:06.710 [2024-11-28 08:26:48.691835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.710 [2024-11-28 08:26:48.691847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.710 qpair failed and we were unable to recover it.
00:28:06.710 [2024-11-28 08:26:48.691916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.710 [2024-11-28 08:26:48.691927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.710 qpair failed and we were unable to recover it.
00:28:06.710 [2024-11-28 08:26:48.692094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.710 [2024-11-28 08:26:48.692107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.710 qpair failed and we were unable to recover it.
00:28:06.710 [2024-11-28 08:26:48.692199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.710 [2024-11-28 08:26:48.692212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.710 qpair failed and we were unable to recover it.
00:28:06.710 [2024-11-28 08:26:48.692346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.710 [2024-11-28 08:26:48.692359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.710 qpair failed and we were unable to recover it.
00:28:06.710 [2024-11-28 08:26:48.692430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.710 [2024-11-28 08:26:48.692442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.710 qpair failed and we were unable to recover it.
00:28:06.710 [2024-11-28 08:26:48.692516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.710 [2024-11-28 08:26:48.692529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.710 qpair failed and we were unable to recover it.
00:28:06.711 [2024-11-28 08:26:48.692667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.711 [2024-11-28 08:26:48.692679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.711 qpair failed and we were unable to recover it.
00:28:06.711 [2024-11-28 08:26:48.692819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.711 [2024-11-28 08:26:48.692832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.711 qpair failed and we were unable to recover it.
00:28:06.711 [2024-11-28 08:26:48.692904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.711 [2024-11-28 08:26:48.692917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.711 qpair failed and we were unable to recover it.
00:28:06.711 [2024-11-28 08:26:48.693053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.711 [2024-11-28 08:26:48.693065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.711 qpair failed and we were unable to recover it.
00:28:06.711 [2024-11-28 08:26:48.693200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.711 [2024-11-28 08:26:48.693212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.711 qpair failed and we were unable to recover it.
00:28:06.711 [2024-11-28 08:26:48.693280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.711 [2024-11-28 08:26:48.693292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.711 qpair failed and we were unable to recover it.
00:28:06.711 [2024-11-28 08:26:48.693450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.711 [2024-11-28 08:26:48.693464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.711 qpair failed and we were unable to recover it.
00:28:06.711 [2024-11-28 08:26:48.693580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.711 [2024-11-28 08:26:48.693592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.711 qpair failed and we were unable to recover it.
00:28:06.711 [2024-11-28 08:26:48.693675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.711 [2024-11-28 08:26:48.693687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.711 qpair failed and we were unable to recover it.
00:28:06.711 [2024-11-28 08:26:48.693822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.711 [2024-11-28 08:26:48.693833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.711 qpair failed and we were unable to recover it.
00:28:06.711 [2024-11-28 08:26:48.694036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.711 [2024-11-28 08:26:48.694048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.711 qpair failed and we were unable to recover it.
00:28:06.711 [2024-11-28 08:26:48.694295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.711 [2024-11-28 08:26:48.694308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.711 qpair failed and we were unable to recover it.
00:28:06.711 [2024-11-28 08:26:48.694461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.711 [2024-11-28 08:26:48.694472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.711 qpair failed and we were unable to recover it.
00:28:06.711 [2024-11-28 08:26:48.694543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.711 [2024-11-28 08:26:48.694554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.711 qpair failed and we were unable to recover it.
00:28:06.711 [2024-11-28 08:26:48.694627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.711 [2024-11-28 08:26:48.694638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.711 qpair failed and we were unable to recover it.
00:28:06.711 [2024-11-28 08:26:48.694750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.711 [2024-11-28 08:26:48.694762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.711 qpair failed and we were unable to recover it.
00:28:06.711 [2024-11-28 08:26:48.694836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.711 [2024-11-28 08:26:48.694848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.711 qpair failed and we were unable to recover it.
00:28:06.711 [2024-11-28 08:26:48.695057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.711 [2024-11-28 08:26:48.695070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.711 qpair failed and we were unable to recover it.
00:28:06.711 [2024-11-28 08:26:48.695295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.711 [2024-11-28 08:26:48.695307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.711 qpair failed and we were unable to recover it.
00:28:06.711 [2024-11-28 08:26:48.695463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.711 [2024-11-28 08:26:48.695474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.711 qpair failed and we were unable to recover it.
00:28:06.711 [2024-11-28 08:26:48.695557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.711 [2024-11-28 08:26:48.695570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.711 qpair failed and we were unable to recover it.
00:28:06.711 [2024-11-28 08:26:48.695664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.711 [2024-11-28 08:26:48.695676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.711 qpair failed and we were unable to recover it.
00:28:06.711 [2024-11-28 08:26:48.695891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.711 [2024-11-28 08:26:48.695903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.711 qpair failed and we were unable to recover it.
00:28:06.711 [2024-11-28 08:26:48.696042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.711 [2024-11-28 08:26:48.696056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.711 qpair failed and we were unable to recover it.
00:28:06.711 [2024-11-28 08:26:48.696127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.711 [2024-11-28 08:26:48.696139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.711 qpair failed and we were unable to recover it.
00:28:06.711 [2024-11-28 08:26:48.696216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.711 [2024-11-28 08:26:48.696228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.711 qpair failed and we were unable to recover it.
00:28:06.711 [2024-11-28 08:26:48.696290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.711 [2024-11-28 08:26:48.696302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.711 qpair failed and we were unable to recover it.
00:28:06.711 [2024-11-28 08:26:48.696454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.711 [2024-11-28 08:26:48.696466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.711 qpair failed and we were unable to recover it.
00:28:06.711 [2024-11-28 08:26:48.696551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.711 [2024-11-28 08:26:48.696564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.711 qpair failed and we were unable to recover it.
00:28:06.711 [2024-11-28 08:26:48.696719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.711 [2024-11-28 08:26:48.696731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.711 qpair failed and we were unable to recover it.
00:28:06.711 [2024-11-28 08:26:48.696946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.711 [2024-11-28 08:26:48.696962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.711 qpair failed and we were unable to recover it.
00:28:06.711 [2024-11-28 08:26:48.697167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.711 [2024-11-28 08:26:48.697179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.711 qpair failed and we were unable to recover it.
00:28:06.711 [2024-11-28 08:26:48.697326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.711 [2024-11-28 08:26:48.697338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.711 qpair failed and we were unable to recover it.
00:28:06.711 [2024-11-28 08:26:48.697432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.711 [2024-11-28 08:26:48.697444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.711 qpair failed and we were unable to recover it.
00:28:06.711 [2024-11-28 08:26:48.697529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.711 [2024-11-28 08:26:48.697542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.711 qpair failed and we were unable to recover it.
00:28:06.711 [2024-11-28 08:26:48.697700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.711 [2024-11-28 08:26:48.697713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.711 qpair failed and we were unable to recover it.
00:28:06.711 [2024-11-28 08:26:48.697865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.711 [2024-11-28 08:26:48.697878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.711 qpair failed and we were unable to recover it.
00:28:06.711 [2024-11-28 08:26:48.698083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.711 [2024-11-28 08:26:48.698095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.711 qpair failed and we were unable to recover it.
00:28:06.712 [2024-11-28 08:26:48.698187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.712 [2024-11-28 08:26:48.698199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.712 qpair failed and we were unable to recover it.
00:28:06.712 [2024-11-28 08:26:48.698336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.712 [2024-11-28 08:26:48.698348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.712 qpair failed and we were unable to recover it.
00:28:06.712 [2024-11-28 08:26:48.698444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.712 [2024-11-28 08:26:48.698456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.712 qpair failed and we were unable to recover it.
00:28:06.712 [2024-11-28 08:26:48.698536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.712 [2024-11-28 08:26:48.698548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.712 qpair failed and we were unable to recover it.
00:28:06.712 [2024-11-28 08:26:48.698683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.712 [2024-11-28 08:26:48.698695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.712 qpair failed and we were unable to recover it.
00:28:06.712 [2024-11-28 08:26:48.698780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.712 [2024-11-28 08:26:48.698792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.712 qpair failed and we were unable to recover it.
00:28:06.712 [2024-11-28 08:26:48.698876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.712 [2024-11-28 08:26:48.698888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.712 qpair failed and we were unable to recover it.
00:28:06.712 [2024-11-28 08:26:48.699025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.712 [2024-11-28 08:26:48.699037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.712 qpair failed and we were unable to recover it.
00:28:06.712 [2024-11-28 08:26:48.699109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.712 [2024-11-28 08:26:48.699123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.712 qpair failed and we were unable to recover it.
00:28:06.712 [2024-11-28 08:26:48.699201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.712 [2024-11-28 08:26:48.699213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.712 qpair failed and we were unable to recover it.
00:28:06.712 [2024-11-28 08:26:48.699300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.712 [2024-11-28 08:26:48.699312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.712 qpair failed and we were unable to recover it.
00:28:06.712 [2024-11-28 08:26:48.699444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.712 [2024-11-28 08:26:48.699456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.712 qpair failed and we were unable to recover it.
00:28:06.712 [2024-11-28 08:26:48.699520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.712 [2024-11-28 08:26:48.699533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.712 qpair failed and we were unable to recover it.
00:28:06.712 [2024-11-28 08:26:48.699608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.712 [2024-11-28 08:26:48.699620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.712 qpair failed and we were unable to recover it.
00:28:06.712 [2024-11-28 08:26:48.699750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.712 [2024-11-28 08:26:48.699762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.712 qpair failed and we were unable to recover it.
00:28:06.712 [2024-11-28 08:26:48.699896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.712 [2024-11-28 08:26:48.699908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.712 qpair failed and we were unable to recover it.
00:28:06.712 [2024-11-28 08:26:48.700067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.712 [2024-11-28 08:26:48.700081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.712 qpair failed and we were unable to recover it.
00:28:06.712 [2024-11-28 08:26:48.700147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.712 [2024-11-28 08:26:48.700159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.712 qpair failed and we were unable to recover it.
00:28:06.712 [2024-11-28 08:26:48.700232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.712 [2024-11-28 08:26:48.700244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.712 qpair failed and we were unable to recover it.
00:28:06.712 [2024-11-28 08:26:48.700314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.712 [2024-11-28 08:26:48.700326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.712 qpair failed and we were unable to recover it.
00:28:06.712 [2024-11-28 08:26:48.700481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.712 [2024-11-28 08:26:48.700493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.712 qpair failed and we were unable to recover it.
00:28:06.712 [2024-11-28 08:26:48.700560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.712 [2024-11-28 08:26:48.700573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.712 qpair failed and we were unable to recover it.
00:28:06.712 [2024-11-28 08:26:48.700643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.712 [2024-11-28 08:26:48.700656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.712 qpair failed and we were unable to recover it.
00:28:06.712 [2024-11-28 08:26:48.700738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.712 [2024-11-28 08:26:48.700750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.712 qpair failed and we were unable to recover it.
00:28:06.712 [2024-11-28 08:26:48.700841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.712 [2024-11-28 08:26:48.700853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.712 qpair failed and we were unable to recover it. 00:28:06.712 [2024-11-28 08:26:48.700929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.712 [2024-11-28 08:26:48.700941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.712 qpair failed and we were unable to recover it. 00:28:06.712 [2024-11-28 08:26:48.701077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.712 [2024-11-28 08:26:48.701089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.712 qpair failed and we were unable to recover it. 00:28:06.712 [2024-11-28 08:26:48.701230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.712 [2024-11-28 08:26:48.701241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.712 qpair failed and we were unable to recover it. 00:28:06.712 [2024-11-28 08:26:48.701321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.712 [2024-11-28 08:26:48.701333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.712 qpair failed and we were unable to recover it. 
00:28:06.712 [2024-11-28 08:26:48.701501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.712 [2024-11-28 08:26:48.701513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.712 qpair failed and we were unable to recover it. 00:28:06.712 [2024-11-28 08:26:48.701608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.712 [2024-11-28 08:26:48.701620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.712 qpair failed and we were unable to recover it. 00:28:06.712 [2024-11-28 08:26:48.701692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.712 [2024-11-28 08:26:48.701704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.712 qpair failed and we were unable to recover it. 00:28:06.712 [2024-11-28 08:26:48.701857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.712 [2024-11-28 08:26:48.701869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.712 qpair failed and we were unable to recover it. 00:28:06.712 [2024-11-28 08:26:48.701937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.712 [2024-11-28 08:26:48.701954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.713 qpair failed and we were unable to recover it. 
00:28:06.713 [2024-11-28 08:26:48.702046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.713 [2024-11-28 08:26:48.702059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.713 qpair failed and we were unable to recover it. 00:28:06.713 [2024-11-28 08:26:48.702206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.713 [2024-11-28 08:26:48.702221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.713 qpair failed and we were unable to recover it. 00:28:06.713 [2024-11-28 08:26:48.702297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.713 [2024-11-28 08:26:48.702308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.713 qpair failed and we were unable to recover it. 00:28:06.713 [2024-11-28 08:26:48.702388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.713 [2024-11-28 08:26:48.702400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.713 qpair failed and we were unable to recover it. 00:28:06.713 [2024-11-28 08:26:48.702481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.713 [2024-11-28 08:26:48.702494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.713 qpair failed and we were unable to recover it. 
00:28:06.713 [2024-11-28 08:26:48.702569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.713 [2024-11-28 08:26:48.702581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.713 qpair failed and we were unable to recover it. 00:28:06.713 [2024-11-28 08:26:48.702724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.713 [2024-11-28 08:26:48.702737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.713 qpair failed and we were unable to recover it. 00:28:06.713 [2024-11-28 08:26:48.702875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.713 [2024-11-28 08:26:48.702887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.713 qpair failed and we were unable to recover it. 00:28:06.713 [2024-11-28 08:26:48.703032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.713 [2024-11-28 08:26:48.703044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.713 qpair failed and we were unable to recover it. 00:28:06.713 [2024-11-28 08:26:48.703135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.713 [2024-11-28 08:26:48.703148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.713 qpair failed and we were unable to recover it. 
00:28:06.713 [2024-11-28 08:26:48.703232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.713 [2024-11-28 08:26:48.703244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.713 qpair failed and we were unable to recover it. 00:28:06.713 [2024-11-28 08:26:48.703314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.713 [2024-11-28 08:26:48.703326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.713 qpair failed and we were unable to recover it. 00:28:06.713 [2024-11-28 08:26:48.703478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.713 [2024-11-28 08:26:48.703490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.713 qpair failed and we were unable to recover it. 00:28:06.713 [2024-11-28 08:26:48.703636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.713 [2024-11-28 08:26:48.703648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.713 qpair failed and we were unable to recover it. 00:28:06.713 [2024-11-28 08:26:48.703712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.713 [2024-11-28 08:26:48.703724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.713 qpair failed and we were unable to recover it. 
00:28:06.713 [2024-11-28 08:26:48.703804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.713 [2024-11-28 08:26:48.703817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.713 qpair failed and we were unable to recover it. 00:28:06.713 [2024-11-28 08:26:48.703894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.713 [2024-11-28 08:26:48.703906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.713 qpair failed and we were unable to recover it. 00:28:06.713 [2024-11-28 08:26:48.703970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.713 [2024-11-28 08:26:48.703982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.713 qpair failed and we were unable to recover it. 00:28:06.713 [2024-11-28 08:26:48.704069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.713 [2024-11-28 08:26:48.704082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.713 qpair failed and we were unable to recover it. 00:28:06.713 [2024-11-28 08:26:48.704160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.713 [2024-11-28 08:26:48.704172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.713 qpair failed and we were unable to recover it. 
00:28:06.713 [2024-11-28 08:26:48.704239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.713 [2024-11-28 08:26:48.704250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.713 qpair failed and we were unable to recover it. 00:28:06.713 [2024-11-28 08:26:48.704332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.713 [2024-11-28 08:26:48.704345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.713 qpair failed and we were unable to recover it. 00:28:06.713 [2024-11-28 08:26:48.704425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.713 [2024-11-28 08:26:48.704437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.713 qpair failed and we were unable to recover it. 00:28:06.713 [2024-11-28 08:26:48.704521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.713 [2024-11-28 08:26:48.704533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.713 qpair failed and we were unable to recover it. 00:28:06.713 [2024-11-28 08:26:48.704665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.713 [2024-11-28 08:26:48.704676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.713 qpair failed and we were unable to recover it. 
00:28:06.713 [2024-11-28 08:26:48.704762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.713 [2024-11-28 08:26:48.704774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.713 qpair failed and we were unable to recover it. 00:28:06.713 [2024-11-28 08:26:48.704842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.713 [2024-11-28 08:26:48.704855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.713 qpair failed and we were unable to recover it. 00:28:06.713 [2024-11-28 08:26:48.704918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.713 [2024-11-28 08:26:48.704930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.713 qpair failed and we were unable to recover it. 00:28:06.713 [2024-11-28 08:26:48.705001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.713 [2024-11-28 08:26:48.705013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.713 qpair failed and we were unable to recover it. 00:28:06.713 [2024-11-28 08:26:48.705079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.713 [2024-11-28 08:26:48.705091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.713 qpair failed and we were unable to recover it. 
00:28:06.713 [2024-11-28 08:26:48.705227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.713 [2024-11-28 08:26:48.705239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.713 qpair failed and we were unable to recover it. 00:28:06.713 [2024-11-28 08:26:48.705325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.713 [2024-11-28 08:26:48.705337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.713 qpair failed and we were unable to recover it. 00:28:06.713 [2024-11-28 08:26:48.705433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.713 [2024-11-28 08:26:48.705446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.713 qpair failed and we were unable to recover it. 00:28:06.713 [2024-11-28 08:26:48.705519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.713 [2024-11-28 08:26:48.705531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.713 qpair failed and we were unable to recover it. 00:28:06.713 [2024-11-28 08:26:48.705609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.713 [2024-11-28 08:26:48.705621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.713 qpair failed and we were unable to recover it. 
00:28:06.713 [2024-11-28 08:26:48.705701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.713 [2024-11-28 08:26:48.705713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.713 qpair failed and we were unable to recover it. 00:28:06.713 [2024-11-28 08:26:48.705869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.713 [2024-11-28 08:26:48.705881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.713 qpair failed and we were unable to recover it. 00:28:06.713 [2024-11-28 08:26:48.706025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.713 [2024-11-28 08:26:48.706038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.713 qpair failed and we were unable to recover it. 00:28:06.714 [2024-11-28 08:26:48.706134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.714 [2024-11-28 08:26:48.706146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.714 qpair failed and we were unable to recover it. 00:28:06.714 [2024-11-28 08:26:48.706244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.714 [2024-11-28 08:26:48.706256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.714 qpair failed and we were unable to recover it. 
00:28:06.714 [2024-11-28 08:26:48.706410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.714 [2024-11-28 08:26:48.706422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.714 qpair failed and we were unable to recover it. 00:28:06.714 [2024-11-28 08:26:48.706597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.714 [2024-11-28 08:26:48.706614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.714 qpair failed and we were unable to recover it. 00:28:06.714 [2024-11-28 08:26:48.706764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.714 [2024-11-28 08:26:48.706777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.714 qpair failed and we were unable to recover it. 00:28:06.714 [2024-11-28 08:26:48.706847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.714 [2024-11-28 08:26:48.706859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.714 qpair failed and we were unable to recover it. 00:28:06.714 [2024-11-28 08:26:48.706953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.714 [2024-11-28 08:26:48.706965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.714 qpair failed and we were unable to recover it. 
00:28:06.714 [2024-11-28 08:26:48.707050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.714 [2024-11-28 08:26:48.707062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.714 qpair failed and we were unable to recover it. 00:28:06.714 [2024-11-28 08:26:48.707130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.714 [2024-11-28 08:26:48.707142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.714 qpair failed and we were unable to recover it. 00:28:06.714 [2024-11-28 08:26:48.707228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.714 [2024-11-28 08:26:48.707240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.714 qpair failed and we were unable to recover it. 00:28:06.714 [2024-11-28 08:26:48.707381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.714 [2024-11-28 08:26:48.707398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.714 qpair failed and we were unable to recover it. 00:28:06.714 [2024-11-28 08:26:48.707512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.714 [2024-11-28 08:26:48.707524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.714 qpair failed and we were unable to recover it. 
00:28:06.714 [2024-11-28 08:26:48.707592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.714 [2024-11-28 08:26:48.707605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.714 qpair failed and we were unable to recover it. 00:28:06.714 [2024-11-28 08:26:48.707701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.714 [2024-11-28 08:26:48.707713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.714 qpair failed and we were unable to recover it. 00:28:06.714 [2024-11-28 08:26:48.707842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.714 [2024-11-28 08:26:48.707855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.714 qpair failed and we were unable to recover it. 00:28:06.714 [2024-11-28 08:26:48.707997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.714 [2024-11-28 08:26:48.708011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.714 qpair failed and we were unable to recover it. 00:28:06.714 [2024-11-28 08:26:48.708075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.714 [2024-11-28 08:26:48.708087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.714 qpair failed and we were unable to recover it. 
00:28:06.714 [2024-11-28 08:26:48.708312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.714 [2024-11-28 08:26:48.708324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.714 qpair failed and we were unable to recover it. 00:28:06.714 [2024-11-28 08:26:48.708467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.714 [2024-11-28 08:26:48.708479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.714 qpair failed and we were unable to recover it. 00:28:06.714 [2024-11-28 08:26:48.708618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.714 [2024-11-28 08:26:48.708630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.714 qpair failed and we were unable to recover it. 00:28:06.714 [2024-11-28 08:26:48.708713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.714 [2024-11-28 08:26:48.708725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.714 qpair failed and we were unable to recover it. 00:28:06.714 [2024-11-28 08:26:48.708871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.714 [2024-11-28 08:26:48.708884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.714 qpair failed and we were unable to recover it. 
00:28:06.714 [2024-11-28 08:26:48.708977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.714 [2024-11-28 08:26:48.708990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.714 qpair failed and we were unable to recover it. 00:28:06.714 [2024-11-28 08:26:48.709063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.714 [2024-11-28 08:26:48.709075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.714 qpair failed and we were unable to recover it. 00:28:06.714 [2024-11-28 08:26:48.709214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.714 [2024-11-28 08:26:48.709227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.714 qpair failed and we were unable to recover it. 00:28:06.714 [2024-11-28 08:26:48.709384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.714 [2024-11-28 08:26:48.709396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.714 qpair failed and we were unable to recover it. 00:28:06.714 [2024-11-28 08:26:48.709460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.714 [2024-11-28 08:26:48.709473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.714 qpair failed and we were unable to recover it. 
00:28:06.714 [2024-11-28 08:26:48.709562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.714 [2024-11-28 08:26:48.709575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.714 qpair failed and we were unable to recover it. 00:28:06.714 [2024-11-28 08:26:48.709716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.714 [2024-11-28 08:26:48.709729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.714 qpair failed and we were unable to recover it. 00:28:06.714 [2024-11-28 08:26:48.709812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.714 [2024-11-28 08:26:48.709825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.714 qpair failed and we were unable to recover it. 00:28:06.714 [2024-11-28 08:26:48.709976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.714 [2024-11-28 08:26:48.709989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.714 qpair failed and we were unable to recover it. 00:28:06.714 [2024-11-28 08:26:48.710053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.714 [2024-11-28 08:26:48.710065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.714 qpair failed and we were unable to recover it. 
00:28:06.717 [2024-11-28 08:26:48.724178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.717 [2024-11-28 08:26:48.724190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.717 qpair failed and we were unable to recover it. 00:28:06.717 [2024-11-28 08:26:48.724260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.717 [2024-11-28 08:26:48.724273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.717 qpair failed and we were unable to recover it. 00:28:06.717 [2024-11-28 08:26:48.724366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.717 [2024-11-28 08:26:48.724380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.717 qpair failed and we were unable to recover it. 00:28:06.717 [2024-11-28 08:26:48.724523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.717 [2024-11-28 08:26:48.724535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.717 qpair failed and we were unable to recover it. 00:28:06.717 [2024-11-28 08:26:48.724620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.717 [2024-11-28 08:26:48.724633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.717 qpair failed and we were unable to recover it. 
00:28:06.717 [2024-11-28 08:26:48.724702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.717 [2024-11-28 08:26:48.724714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.718 qpair failed and we were unable to recover it. 00:28:06.718 [2024-11-28 08:26:48.724806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.718 [2024-11-28 08:26:48.724818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.718 qpair failed and we were unable to recover it. 00:28:06.718 [2024-11-28 08:26:48.724966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.718 [2024-11-28 08:26:48.724979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.718 qpair failed and we were unable to recover it. 00:28:06.718 [2024-11-28 08:26:48.725061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.718 [2024-11-28 08:26:48.725073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.718 qpair failed and we were unable to recover it. 00:28:06.718 [2024-11-28 08:26:48.725150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.718 [2024-11-28 08:26:48.725162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.718 qpair failed and we were unable to recover it. 
00:28:06.718 [2024-11-28 08:26:48.725233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.718 [2024-11-28 08:26:48.725245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.718 qpair failed and we were unable to recover it. 00:28:06.718 [2024-11-28 08:26:48.725326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.718 [2024-11-28 08:26:48.725338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.718 qpair failed and we were unable to recover it. 00:28:06.718 [2024-11-28 08:26:48.725470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.718 [2024-11-28 08:26:48.725482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.718 qpair failed and we were unable to recover it. 00:28:06.718 [2024-11-28 08:26:48.725570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.718 [2024-11-28 08:26:48.725582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.718 qpair failed and we were unable to recover it. 00:28:06.718 [2024-11-28 08:26:48.725651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.718 [2024-11-28 08:26:48.725663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.718 qpair failed and we were unable to recover it. 
00:28:06.718 [2024-11-28 08:26:48.725750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.718 [2024-11-28 08:26:48.725763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.718 qpair failed and we were unable to recover it. 00:28:06.718 [2024-11-28 08:26:48.725966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.718 [2024-11-28 08:26:48.725978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.718 qpair failed and we were unable to recover it. 00:28:06.718 [2024-11-28 08:26:48.726114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.718 [2024-11-28 08:26:48.726126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.718 qpair failed and we were unable to recover it. 00:28:06.718 [2024-11-28 08:26:48.726202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.718 [2024-11-28 08:26:48.726215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.718 qpair failed and we were unable to recover it. 00:28:06.718 [2024-11-28 08:26:48.726294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.718 [2024-11-28 08:26:48.726306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.718 qpair failed and we were unable to recover it. 
00:28:06.718 [2024-11-28 08:26:48.726441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.718 [2024-11-28 08:26:48.726453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.718 qpair failed and we were unable to recover it. 00:28:06.718 [2024-11-28 08:26:48.726531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.718 [2024-11-28 08:26:48.726545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.718 qpair failed and we were unable to recover it. 00:28:06.718 [2024-11-28 08:26:48.726694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.718 [2024-11-28 08:26:48.726707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.718 qpair failed and we were unable to recover it. 00:28:06.718 [2024-11-28 08:26:48.726799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.718 [2024-11-28 08:26:48.726811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.718 qpair failed and we were unable to recover it. 00:28:06.718 [2024-11-28 08:26:48.726876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.718 [2024-11-28 08:26:48.726888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.718 qpair failed and we were unable to recover it. 
00:28:06.718 [2024-11-28 08:26:48.726985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.718 [2024-11-28 08:26:48.726998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.718 qpair failed and we were unable to recover it. 00:28:06.718 [2024-11-28 08:26:48.727134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.718 [2024-11-28 08:26:48.727147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.718 qpair failed and we were unable to recover it. 00:28:06.718 [2024-11-28 08:26:48.727223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.718 [2024-11-28 08:26:48.727235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.718 qpair failed and we were unable to recover it. 00:28:06.718 [2024-11-28 08:26:48.727377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.718 [2024-11-28 08:26:48.727390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.718 qpair failed and we were unable to recover it. 00:28:06.718 [2024-11-28 08:26:48.727547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.718 [2024-11-28 08:26:48.727559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.718 qpair failed and we were unable to recover it. 
00:28:06.718 [2024-11-28 08:26:48.727778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.718 [2024-11-28 08:26:48.727790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.718 qpair failed and we were unable to recover it. 00:28:06.718 [2024-11-28 08:26:48.727941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.718 [2024-11-28 08:26:48.727959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.718 qpair failed and we were unable to recover it. 00:28:06.718 [2024-11-28 08:26:48.728061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.718 [2024-11-28 08:26:48.728073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.718 qpair failed and we were unable to recover it. 00:28:06.718 [2024-11-28 08:26:48.728215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.718 [2024-11-28 08:26:48.728228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.718 qpair failed and we were unable to recover it. 00:28:06.718 [2024-11-28 08:26:48.728407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.718 [2024-11-28 08:26:48.728420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.718 qpair failed and we were unable to recover it. 
00:28:06.718 [2024-11-28 08:26:48.728512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.718 [2024-11-28 08:26:48.728525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.718 qpair failed and we were unable to recover it. 00:28:06.718 [2024-11-28 08:26:48.728686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.718 [2024-11-28 08:26:48.728700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.718 qpair failed and we were unable to recover it. 00:28:06.718 [2024-11-28 08:26:48.728876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.718 [2024-11-28 08:26:48.728889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.718 qpair failed and we were unable to recover it. 00:28:06.718 [2024-11-28 08:26:48.728964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.718 [2024-11-28 08:26:48.728976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.718 qpair failed and we were unable to recover it. 00:28:06.718 [2024-11-28 08:26:48.729046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.718 [2024-11-28 08:26:48.729058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.718 qpair failed and we were unable to recover it. 
00:28:06.718 [2024-11-28 08:26:48.729222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.718 [2024-11-28 08:26:48.729235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.718 qpair failed and we were unable to recover it. 00:28:06.718 [2024-11-28 08:26:48.729378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.718 [2024-11-28 08:26:48.729391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.718 qpair failed and we were unable to recover it. 00:28:06.718 [2024-11-28 08:26:48.729608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.718 [2024-11-28 08:26:48.729620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.718 qpair failed and we were unable to recover it. 00:28:06.718 [2024-11-28 08:26:48.729792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.718 [2024-11-28 08:26:48.729804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.719 qpair failed and we were unable to recover it. 00:28:06.719 [2024-11-28 08:26:48.729974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.719 [2024-11-28 08:26:48.729986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.719 qpair failed and we were unable to recover it. 
00:28:06.719 [2024-11-28 08:26:48.730060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.719 [2024-11-28 08:26:48.730072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.719 qpair failed and we were unable to recover it. 00:28:06.719 [2024-11-28 08:26:48.730178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.719 [2024-11-28 08:26:48.730191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.719 qpair failed and we were unable to recover it. 00:28:06.719 [2024-11-28 08:26:48.730322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.719 [2024-11-28 08:26:48.730335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.719 qpair failed and we were unable to recover it. 00:28:06.719 [2024-11-28 08:26:48.730494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.719 [2024-11-28 08:26:48.730506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.719 qpair failed and we were unable to recover it. 00:28:06.719 [2024-11-28 08:26:48.730639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.719 [2024-11-28 08:26:48.730652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.719 qpair failed and we were unable to recover it. 
00:28:06.719 [2024-11-28 08:26:48.730749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.719 [2024-11-28 08:26:48.730761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.719 qpair failed and we were unable to recover it. 00:28:06.719 [2024-11-28 08:26:48.730911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.719 [2024-11-28 08:26:48.730924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.719 qpair failed and we were unable to recover it. 00:28:06.719 [2024-11-28 08:26:48.731005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.719 [2024-11-28 08:26:48.731017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.719 qpair failed and we were unable to recover it. 00:28:06.719 [2024-11-28 08:26:48.731169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.719 [2024-11-28 08:26:48.731182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.719 qpair failed and we were unable to recover it. 00:28:06.719 [2024-11-28 08:26:48.731370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.719 [2024-11-28 08:26:48.731382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.719 qpair failed and we were unable to recover it. 
00:28:06.719 [2024-11-28 08:26:48.731464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.719 [2024-11-28 08:26:48.731476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.719 qpair failed and we were unable to recover it. 00:28:06.719 [2024-11-28 08:26:48.731559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.719 [2024-11-28 08:26:48.731571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.719 qpair failed and we were unable to recover it. 00:28:06.719 [2024-11-28 08:26:48.731659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.719 [2024-11-28 08:26:48.731671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.719 qpair failed and we were unable to recover it. 00:28:06.719 [2024-11-28 08:26:48.731736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.719 [2024-11-28 08:26:48.731747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.719 qpair failed and we were unable to recover it. 00:28:06.719 [2024-11-28 08:26:48.731841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.719 [2024-11-28 08:26:48.731854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.719 qpair failed and we were unable to recover it. 
00:28:06.719 [2024-11-28 08:26:48.731919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.719 [2024-11-28 08:26:48.731932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.719 qpair failed and we were unable to recover it. 00:28:06.719 [2024-11-28 08:26:48.732004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.719 [2024-11-28 08:26:48.732019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.719 qpair failed and we were unable to recover it. 00:28:06.719 [2024-11-28 08:26:48.732095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.719 [2024-11-28 08:26:48.732108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.719 qpair failed and we were unable to recover it. 00:28:06.719 [2024-11-28 08:26:48.732284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.719 [2024-11-28 08:26:48.732295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.719 qpair failed and we were unable to recover it. 00:28:06.719 [2024-11-28 08:26:48.732432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.719 [2024-11-28 08:26:48.732444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.719 qpair failed and we were unable to recover it. 
00:28:06.719 [2024-11-28 08:26:48.732585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.719 [2024-11-28 08:26:48.732598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.719 qpair failed and we were unable to recover it. 00:28:06.719 [2024-11-28 08:26:48.732738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.719 [2024-11-28 08:26:48.732751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.719 qpair failed and we were unable to recover it. 00:28:06.719 [2024-11-28 08:26:48.732838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.719 [2024-11-28 08:26:48.732851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.719 qpair failed and we were unable to recover it. 00:28:06.719 [2024-11-28 08:26:48.732923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.719 [2024-11-28 08:26:48.732935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.719 qpair failed and we were unable to recover it. 00:28:06.719 [2024-11-28 08:26:48.733026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.719 [2024-11-28 08:26:48.733038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.719 qpair failed and we were unable to recover it. 
00:28:06.719 [2024-11-28 08:26:48.733211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.719 [2024-11-28 08:26:48.733223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.719 qpair failed and we were unable to recover it. 00:28:06.719 [2024-11-28 08:26:48.733306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.719 [2024-11-28 08:26:48.733318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.719 qpair failed and we were unable to recover it. 00:28:06.719 [2024-11-28 08:26:48.733389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.719 [2024-11-28 08:26:48.733402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.719 qpair failed and we were unable to recover it. 00:28:06.719 [2024-11-28 08:26:48.733490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.719 [2024-11-28 08:26:48.733502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.719 qpair failed and we were unable to recover it. 00:28:06.719 [2024-11-28 08:26:48.733584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.719 [2024-11-28 08:26:48.733596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.719 qpair failed and we were unable to recover it. 
00:28:06.719 [2024-11-28 08:26:48.733745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.719 [2024-11-28 08:26:48.733758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.719 qpair failed and we were unable to recover it. 00:28:06.719 [2024-11-28 08:26:48.733909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.719 [2024-11-28 08:26:48.733922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.719 qpair failed and we were unable to recover it. 00:28:06.719 [2024-11-28 08:26:48.734028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.719 [2024-11-28 08:26:48.734040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.719 qpair failed and we were unable to recover it. 00:28:06.719 [2024-11-28 08:26:48.734119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.719 [2024-11-28 08:26:48.734131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.719 qpair failed and we were unable to recover it. 00:28:06.719 [2024-11-28 08:26:48.734208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.719 [2024-11-28 08:26:48.734220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.719 qpair failed and we were unable to recover it. 
[... the same two-line error pair (posix.c:1054:posix_sock_create: connect() failed, errno = 111 (ECONNREFUSED); nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: sock connection error with addr=10.0.0.2, port=4420, followed by "qpair failed and we were unable to recover it.") repeats continuously from 08:26:48.734 through 08:26:48.748, alternating between tqpair=0x7f6c34000b90 and tqpair=0x991be0 ...]
00:28:06.723 [2024-11-28 08:26:48.748514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.723 [2024-11-28 08:26:48.748526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.723 qpair failed and we were unable to recover it. 00:28:06.723 [2024-11-28 08:26:48.748671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.723 [2024-11-28 08:26:48.748683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.723 qpair failed and we were unable to recover it. 00:28:06.723 [2024-11-28 08:26:48.748815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.723 [2024-11-28 08:26:48.748827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.723 qpair failed and we were unable to recover it. 00:28:06.723 [2024-11-28 08:26:48.748901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.723 [2024-11-28 08:26:48.748914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.723 qpair failed and we were unable to recover it. 00:28:06.723 [2024-11-28 08:26:48.748997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.723 [2024-11-28 08:26:48.749010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.723 qpair failed and we were unable to recover it. 
00:28:06.723 [2024-11-28 08:26:48.749155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.723 [2024-11-28 08:26:48.749169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.723 qpair failed and we were unable to recover it. 00:28:06.723 [2024-11-28 08:26:48.749246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.723 [2024-11-28 08:26:48.749259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.723 qpair failed and we were unable to recover it. 00:28:06.723 [2024-11-28 08:26:48.749340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.723 [2024-11-28 08:26:48.749352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.723 qpair failed and we were unable to recover it. 00:28:06.723 [2024-11-28 08:26:48.749429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.723 [2024-11-28 08:26:48.749457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.723 qpair failed and we were unable to recover it. 00:28:06.723 [2024-11-28 08:26:48.749529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.723 [2024-11-28 08:26:48.749541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.723 qpair failed and we were unable to recover it. 
00:28:06.723 [2024-11-28 08:26:48.749679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.723 [2024-11-28 08:26:48.749691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.723 qpair failed and we were unable to recover it. 00:28:06.723 [2024-11-28 08:26:48.749755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.723 [2024-11-28 08:26:48.749767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.723 qpair failed and we were unable to recover it. 00:28:06.723 [2024-11-28 08:26:48.749900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.723 [2024-11-28 08:26:48.749912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.723 qpair failed and we were unable to recover it. 00:28:06.723 [2024-11-28 08:26:48.749985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.723 [2024-11-28 08:26:48.749998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.723 qpair failed and we were unable to recover it. 00:28:06.723 [2024-11-28 08:26:48.750092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.723 [2024-11-28 08:26:48.750105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.723 qpair failed and we were unable to recover it. 
00:28:06.723 [2024-11-28 08:26:48.750175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.723 [2024-11-28 08:26:48.750188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.723 qpair failed and we were unable to recover it. 00:28:06.723 [2024-11-28 08:26:48.750401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.723 [2024-11-28 08:26:48.750414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.723 qpair failed and we were unable to recover it. 00:28:06.723 [2024-11-28 08:26:48.750487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.723 [2024-11-28 08:26:48.750501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.723 qpair failed and we were unable to recover it. 00:28:06.723 [2024-11-28 08:26:48.750582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.723 [2024-11-28 08:26:48.750595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.723 qpair failed and we were unable to recover it. 00:28:06.723 [2024-11-28 08:26:48.750681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.723 [2024-11-28 08:26:48.750693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.723 qpair failed and we were unable to recover it. 
00:28:06.723 [2024-11-28 08:26:48.750871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.723 [2024-11-28 08:26:48.750884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.723 qpair failed and we were unable to recover it. 00:28:06.723 [2024-11-28 08:26:48.751056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.723 [2024-11-28 08:26:48.751069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.723 qpair failed and we were unable to recover it. 00:28:06.723 [2024-11-28 08:26:48.751156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.723 [2024-11-28 08:26:48.751169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.723 qpair failed and we were unable to recover it. 00:28:06.723 [2024-11-28 08:26:48.751307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.723 [2024-11-28 08:26:48.751319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.723 qpair failed and we were unable to recover it. 00:28:06.723 [2024-11-28 08:26:48.751491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.723 [2024-11-28 08:26:48.751503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.723 qpair failed and we were unable to recover it. 
00:28:06.723 [2024-11-28 08:26:48.751680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.723 [2024-11-28 08:26:48.751693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.723 qpair failed and we were unable to recover it. 00:28:06.723 [2024-11-28 08:26:48.751843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.723 [2024-11-28 08:26:48.751854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.723 qpair failed and we were unable to recover it. 00:28:06.723 [2024-11-28 08:26:48.751922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.723 [2024-11-28 08:26:48.751934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.723 qpair failed and we were unable to recover it. 00:28:06.723 [2024-11-28 08:26:48.752026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.723 [2024-11-28 08:26:48.752040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.723 qpair failed and we were unable to recover it. 00:28:06.723 [2024-11-28 08:26:48.752174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.723 [2024-11-28 08:26:48.752185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.723 qpair failed and we were unable to recover it. 
00:28:06.723 [2024-11-28 08:26:48.752259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.723 [2024-11-28 08:26:48.752271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.723 qpair failed and we were unable to recover it. 00:28:06.723 [2024-11-28 08:26:48.752472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.723 [2024-11-28 08:26:48.752484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.723 qpair failed and we were unable to recover it. 00:28:06.723 [2024-11-28 08:26:48.752577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.723 [2024-11-28 08:26:48.752590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.723 qpair failed and we were unable to recover it. 00:28:06.723 [2024-11-28 08:26:48.752738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.723 [2024-11-28 08:26:48.752750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.723 qpair failed and we were unable to recover it. 00:28:06.723 [2024-11-28 08:26:48.752847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.723 [2024-11-28 08:26:48.752860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.723 qpair failed and we were unable to recover it. 
00:28:06.723 [2024-11-28 08:26:48.753024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.723 [2024-11-28 08:26:48.753037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.723 qpair failed and we were unable to recover it. 00:28:06.723 [2024-11-28 08:26:48.753260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.723 [2024-11-28 08:26:48.753272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.723 qpair failed and we were unable to recover it. 00:28:06.723 [2024-11-28 08:26:48.753340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.724 [2024-11-28 08:26:48.753352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.724 qpair failed and we were unable to recover it. 00:28:06.724 [2024-11-28 08:26:48.753428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.724 [2024-11-28 08:26:48.753442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.724 qpair failed and we were unable to recover it. 00:28:06.724 [2024-11-28 08:26:48.753515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.724 [2024-11-28 08:26:48.753526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.724 qpair failed and we were unable to recover it. 
00:28:06.724 [2024-11-28 08:26:48.753608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.724 [2024-11-28 08:26:48.753619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.724 qpair failed and we were unable to recover it. 00:28:06.724 [2024-11-28 08:26:48.753707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.724 [2024-11-28 08:26:48.753720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.724 qpair failed and we were unable to recover it. 00:28:06.724 [2024-11-28 08:26:48.753932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.724 [2024-11-28 08:26:48.753945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.724 qpair failed and we were unable to recover it. 00:28:06.724 [2024-11-28 08:26:48.754019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.724 [2024-11-28 08:26:48.754031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.724 qpair failed and we were unable to recover it. 00:28:06.724 [2024-11-28 08:26:48.754187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.724 [2024-11-28 08:26:48.754200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.724 qpair failed and we were unable to recover it. 
00:28:06.724 [2024-11-28 08:26:48.754336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.724 [2024-11-28 08:26:48.754349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.724 qpair failed and we were unable to recover it. 00:28:06.724 [2024-11-28 08:26:48.754421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.724 [2024-11-28 08:26:48.754433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.724 qpair failed and we were unable to recover it. 00:28:06.724 [2024-11-28 08:26:48.754569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.724 [2024-11-28 08:26:48.754582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.724 qpair failed and we were unable to recover it. 00:28:06.724 [2024-11-28 08:26:48.754720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.724 [2024-11-28 08:26:48.754732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.724 qpair failed and we were unable to recover it. 00:28:06.724 [2024-11-28 08:26:48.754799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.724 [2024-11-28 08:26:48.754811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.724 qpair failed and we were unable to recover it. 
00:28:06.724 [2024-11-28 08:26:48.754880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.724 [2024-11-28 08:26:48.754891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.724 qpair failed and we were unable to recover it. 00:28:06.724 [2024-11-28 08:26:48.754987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.724 [2024-11-28 08:26:48.754999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.724 qpair failed and we were unable to recover it. 00:28:06.724 [2024-11-28 08:26:48.755152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.724 [2024-11-28 08:26:48.755164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.724 qpair failed and we were unable to recover it. 00:28:06.724 [2024-11-28 08:26:48.755248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.724 [2024-11-28 08:26:48.755261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.724 qpair failed and we were unable to recover it. 00:28:06.724 [2024-11-28 08:26:48.755341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.724 [2024-11-28 08:26:48.755352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.724 qpair failed and we were unable to recover it. 
00:28:06.724 [2024-11-28 08:26:48.755480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.724 [2024-11-28 08:26:48.755492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.724 qpair failed and we were unable to recover it. 00:28:06.724 [2024-11-28 08:26:48.755575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.724 [2024-11-28 08:26:48.755587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.724 qpair failed and we were unable to recover it. 00:28:06.724 [2024-11-28 08:26:48.755721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.724 [2024-11-28 08:26:48.755734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.724 qpair failed and we were unable to recover it. 00:28:06.724 [2024-11-28 08:26:48.755911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.724 [2024-11-28 08:26:48.755923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.724 qpair failed and we were unable to recover it. 00:28:06.724 [2024-11-28 08:26:48.756009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.724 [2024-11-28 08:26:48.756021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.724 qpair failed and we were unable to recover it. 
00:28:06.724 [2024-11-28 08:26:48.756108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.724 [2024-11-28 08:26:48.756120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.724 qpair failed and we were unable to recover it. 00:28:06.724 [2024-11-28 08:26:48.756214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.724 [2024-11-28 08:26:48.756226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.724 qpair failed and we were unable to recover it. 00:28:06.724 [2024-11-28 08:26:48.756293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.724 [2024-11-28 08:26:48.756305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.724 qpair failed and we were unable to recover it. 00:28:06.724 [2024-11-28 08:26:48.756374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.724 [2024-11-28 08:26:48.756387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.724 qpair failed and we were unable to recover it. 00:28:06.724 [2024-11-28 08:26:48.756453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.724 [2024-11-28 08:26:48.756465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.724 qpair failed and we were unable to recover it. 
00:28:06.724 [2024-11-28 08:26:48.756543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.724 [2024-11-28 08:26:48.756556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.724 qpair failed and we were unable to recover it. 00:28:06.724 [2024-11-28 08:26:48.756634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.724 [2024-11-28 08:26:48.756646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.724 qpair failed and we were unable to recover it. 00:28:06.724 [2024-11-28 08:26:48.756731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.724 [2024-11-28 08:26:48.756742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.724 qpair failed and we were unable to recover it. 00:28:06.724 [2024-11-28 08:26:48.756826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.724 [2024-11-28 08:26:48.756838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.724 qpair failed and we were unable to recover it. 00:28:06.724 [2024-11-28 08:26:48.756979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.724 [2024-11-28 08:26:48.756991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.724 qpair failed and we were unable to recover it. 
00:28:06.724 [2024-11-28 08:26:48.757124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.724 [2024-11-28 08:26:48.757137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.724 qpair failed and we were unable to recover it. 00:28:06.724 [2024-11-28 08:26:48.757212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.724 [2024-11-28 08:26:48.757223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.724 qpair failed and we were unable to recover it. 00:28:06.724 [2024-11-28 08:26:48.757293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.724 [2024-11-28 08:26:48.757305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.724 qpair failed and we were unable to recover it. 00:28:06.724 [2024-11-28 08:26:48.757388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.724 [2024-11-28 08:26:48.757400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.724 qpair failed and we were unable to recover it. 00:28:06.724 [2024-11-28 08:26:48.757545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.724 [2024-11-28 08:26:48.757556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.724 qpair failed and we were unable to recover it. 
00:28:06.725 [2024-11-28 08:26:48.757624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.725 [2024-11-28 08:26:48.757637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.725 qpair failed and we were unable to recover it.
00:28:06.725 [2024-11-28 08:26:48.757728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.725 [2024-11-28 08:26:48.757740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.725 qpair failed and we were unable to recover it.
00:28:06.725 [2024-11-28 08:26:48.757892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.725 [2024-11-28 08:26:48.757904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.725 qpair failed and we were unable to recover it.
00:28:06.725 [2024-11-28 08:26:48.757994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.725 [2024-11-28 08:26:48.758008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.725 qpair failed and we were unable to recover it.
00:28:06.725 [2024-11-28 08:26:48.758119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.725 [2024-11-28 08:26:48.758131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.725 qpair failed and we were unable to recover it.
00:28:06.725 [2024-11-28 08:26:48.758201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.725 [2024-11-28 08:26:48.758213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.725 qpair failed and we were unable to recover it.
00:28:06.725 [2024-11-28 08:26:48.758285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.725 [2024-11-28 08:26:48.758297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.725 qpair failed and we were unable to recover it.
00:28:06.725 [2024-11-28 08:26:48.758447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.725 [2024-11-28 08:26:48.758459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.725 qpair failed and we were unable to recover it.
00:28:06.725 [2024-11-28 08:26:48.758610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.725 [2024-11-28 08:26:48.758622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.725 qpair failed and we were unable to recover it.
00:28:06.725 [2024-11-28 08:26:48.758821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.725 [2024-11-28 08:26:48.758833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.725 qpair failed and we were unable to recover it.
00:28:06.725 [2024-11-28 08:26:48.759036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.725 [2024-11-28 08:26:48.759054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.725 qpair failed and we were unable to recover it.
00:28:06.725 [2024-11-28 08:26:48.759190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.725 [2024-11-28 08:26:48.759202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.725 qpair failed and we were unable to recover it.
00:28:06.725 [2024-11-28 08:26:48.759358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.725 [2024-11-28 08:26:48.759370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.725 qpair failed and we were unable to recover it.
00:28:06.725 [2024-11-28 08:26:48.759583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.725 [2024-11-28 08:26:48.759595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.725 qpair failed and we were unable to recover it.
00:28:06.725 [2024-11-28 08:26:48.759730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.725 [2024-11-28 08:26:48.759742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.725 qpair failed and we were unable to recover it.
00:28:06.725 [2024-11-28 08:26:48.759888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.725 [2024-11-28 08:26:48.759900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.725 qpair failed and we were unable to recover it.
00:28:06.725 [2024-11-28 08:26:48.760046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.725 [2024-11-28 08:26:48.760058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.725 qpair failed and we were unable to recover it.
00:28:06.725 [2024-11-28 08:26:48.760200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.725 [2024-11-28 08:26:48.760212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.725 qpair failed and we were unable to recover it.
00:28:06.725 [2024-11-28 08:26:48.760383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.725 [2024-11-28 08:26:48.760395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.725 qpair failed and we were unable to recover it.
00:28:06.725 [2024-11-28 08:26:48.760543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.725 [2024-11-28 08:26:48.760556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.725 qpair failed and we were unable to recover it.
00:28:06.725 [2024-11-28 08:26:48.760699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.725 [2024-11-28 08:26:48.760711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.725 qpair failed and we were unable to recover it.
00:28:06.725 [2024-11-28 08:26:48.760914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.725 [2024-11-28 08:26:48.760927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.725 qpair failed and we were unable to recover it.
00:28:06.725 [2024-11-28 08:26:48.761088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.725 [2024-11-28 08:26:48.761101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.725 qpair failed and we were unable to recover it.
00:28:06.725 [2024-11-28 08:26:48.761178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.725 [2024-11-28 08:26:48.761189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.725 qpair failed and we were unable to recover it.
00:28:06.725 [2024-11-28 08:26:48.761281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.725 [2024-11-28 08:26:48.761293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.725 qpair failed and we were unable to recover it.
00:28:06.725 [2024-11-28 08:26:48.761366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.725 [2024-11-28 08:26:48.761379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.725 qpair failed and we were unable to recover it.
00:28:06.725 [2024-11-28 08:26:48.761449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.725 [2024-11-28 08:26:48.761462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.725 qpair failed and we were unable to recover it.
00:28:06.725 [2024-11-28 08:26:48.761536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.725 [2024-11-28 08:26:48.761549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.725 qpair failed and we were unable to recover it.
00:28:06.725 [2024-11-28 08:26:48.761792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.725 [2024-11-28 08:26:48.761805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.725 qpair failed and we were unable to recover it.
00:28:06.725 [2024-11-28 08:26:48.762052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.725 [2024-11-28 08:26:48.762065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.725 qpair failed and we were unable to recover it.
00:28:06.725 [2024-11-28 08:26:48.762195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.725 [2024-11-28 08:26:48.762207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.725 qpair failed and we were unable to recover it.
00:28:06.725 [2024-11-28 08:26:48.762297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.725 [2024-11-28 08:26:48.762310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.725 qpair failed and we were unable to recover it.
00:28:06.725 [2024-11-28 08:26:48.762463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.725 [2024-11-28 08:26:48.762475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.725 qpair failed and we were unable to recover it.
00:28:06.725 [2024-11-28 08:26:48.762554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.725 [2024-11-28 08:26:48.762567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.725 qpair failed and we were unable to recover it.
00:28:06.725 [2024-11-28 08:26:48.762660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.725 [2024-11-28 08:26:48.762672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.725 qpair failed and we were unable to recover it.
00:28:06.725 [2024-11-28 08:26:48.762751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.725 [2024-11-28 08:26:48.762763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.725 qpair failed and we were unable to recover it.
00:28:06.725 [2024-11-28 08:26:48.762912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.725 [2024-11-28 08:26:48.762925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.725 qpair failed and we were unable to recover it.
00:28:06.725 [2024-11-28 08:26:48.762997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.725 [2024-11-28 08:26:48.763010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.725 qpair failed and we were unable to recover it.
00:28:06.726 [2024-11-28 08:26:48.763161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.726 [2024-11-28 08:26:48.763174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.726 qpair failed and we were unable to recover it.
00:28:06.726 [2024-11-28 08:26:48.763376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.726 [2024-11-28 08:26:48.763388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.726 qpair failed and we were unable to recover it.
00:28:06.726 [2024-11-28 08:26:48.763476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.726 [2024-11-28 08:26:48.763488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.726 qpair failed and we were unable to recover it.
00:28:06.726 [2024-11-28 08:26:48.763626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.726 [2024-11-28 08:26:48.763638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.726 qpair failed and we were unable to recover it.
00:28:06.726 [2024-11-28 08:26:48.763739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.726 [2024-11-28 08:26:48.763751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.726 qpair failed and we were unable to recover it.
00:28:06.726 [2024-11-28 08:26:48.763886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.726 [2024-11-28 08:26:48.763898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.726 qpair failed and we were unable to recover it.
00:28:06.726 [2024-11-28 08:26:48.763985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.726 [2024-11-28 08:26:48.763997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.726 qpair failed and we were unable to recover it.
00:28:06.726 [2024-11-28 08:26:48.764071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.726 [2024-11-28 08:26:48.764083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.726 qpair failed and we were unable to recover it.
00:28:06.726 [2024-11-28 08:26:48.764183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.726 [2024-11-28 08:26:48.764196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.726 qpair failed and we were unable to recover it.
00:28:06.726 [2024-11-28 08:26:48.764262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.726 [2024-11-28 08:26:48.764274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.726 qpair failed and we were unable to recover it.
00:28:06.726 [2024-11-28 08:26:48.764359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.726 [2024-11-28 08:26:48.764371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.726 qpair failed and we were unable to recover it.
00:28:06.726 [2024-11-28 08:26:48.764503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.726 [2024-11-28 08:26:48.764515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.726 qpair failed and we were unable to recover it.
00:28:06.726 [2024-11-28 08:26:48.764716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.726 [2024-11-28 08:26:48.764730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.726 qpair failed and we were unable to recover it.
00:28:06.726 [2024-11-28 08:26:48.764815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.726 [2024-11-28 08:26:48.764828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.726 qpair failed and we were unable to recover it.
00:28:06.726 [2024-11-28 08:26:48.764899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.726 [2024-11-28 08:26:48.764912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.726 qpair failed and we were unable to recover it.
00:28:06.726 [2024-11-28 08:26:48.765051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.726 [2024-11-28 08:26:48.765064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.726 qpair failed and we were unable to recover it.
00:28:06.726 [2024-11-28 08:26:48.765128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.726 [2024-11-28 08:26:48.765139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.726 qpair failed and we were unable to recover it.
00:28:06.726 [2024-11-28 08:26:48.765297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.726 [2024-11-28 08:26:48.765309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.726 qpair failed and we were unable to recover it.
00:28:06.726 [2024-11-28 08:26:48.765398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.726 [2024-11-28 08:26:48.765410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.726 qpair failed and we were unable to recover it.
00:28:06.726 [2024-11-28 08:26:48.765586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.726 [2024-11-28 08:26:48.765598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.726 qpair failed and we were unable to recover it.
00:28:06.726 [2024-11-28 08:26:48.765665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.726 [2024-11-28 08:26:48.765677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.726 qpair failed and we were unable to recover it.
00:28:06.726 [2024-11-28 08:26:48.765758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.726 [2024-11-28 08:26:48.765770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.726 qpair failed and we were unable to recover it.
00:28:06.726 [2024-11-28 08:26:48.765928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.726 [2024-11-28 08:26:48.765940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.726 qpair failed and we were unable to recover it.
00:28:06.726 [2024-11-28 08:26:48.766025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.726 [2024-11-28 08:26:48.766038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.726 qpair failed and we were unable to recover it.
00:28:06.726 [2024-11-28 08:26:48.766181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.726 [2024-11-28 08:26:48.766193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.726 qpair failed and we were unable to recover it.
00:28:06.726 [2024-11-28 08:26:48.766366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.726 [2024-11-28 08:26:48.766379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.726 qpair failed and we were unable to recover it.
00:28:06.726 [2024-11-28 08:26:48.766539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.726 [2024-11-28 08:26:48.766552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.726 qpair failed and we were unable to recover it.
00:28:06.726 [2024-11-28 08:26:48.766620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.726 [2024-11-28 08:26:48.766632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.726 qpair failed and we were unable to recover it.
00:28:06.726 [2024-11-28 08:26:48.766710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.726 [2024-11-28 08:26:48.766721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.726 qpair failed and we were unable to recover it.
00:28:06.726 [2024-11-28 08:26:48.766805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.726 [2024-11-28 08:26:48.766816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.726 qpair failed and we were unable to recover it.
00:28:06.726 [2024-11-28 08:26:48.766941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.726 [2024-11-28 08:26:48.766965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.726 qpair failed and we were unable to recover it.
00:28:06.726 [2024-11-28 08:26:48.767055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.726 [2024-11-28 08:26:48.767067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.726 qpair failed and we were unable to recover it.
00:28:06.726 [2024-11-28 08:26:48.767155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.726 [2024-11-28 08:26:48.767167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.726 qpair failed and we were unable to recover it.
00:28:06.726 [2024-11-28 08:26:48.767339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.727 [2024-11-28 08:26:48.767350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.727 qpair failed and we were unable to recover it.
00:28:06.727 [2024-11-28 08:26:48.767424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.727 [2024-11-28 08:26:48.767436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.727 qpair failed and we were unable to recover it.
00:28:06.727 [2024-11-28 08:26:48.767583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.727 [2024-11-28 08:26:48.767594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.727 qpair failed and we were unable to recover it.
00:28:06.727 [2024-11-28 08:26:48.767663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.727 [2024-11-28 08:26:48.767675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.727 qpair failed and we were unable to recover it.
00:28:06.727 [2024-11-28 08:26:48.767808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.727 [2024-11-28 08:26:48.767819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.727 qpair failed and we were unable to recover it.
00:28:06.727 [2024-11-28 08:26:48.767954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.727 [2024-11-28 08:26:48.767967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.727 qpair failed and we were unable to recover it.
00:28:06.727 [2024-11-28 08:26:48.768105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.727 [2024-11-28 08:26:48.768117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.727 qpair failed and we were unable to recover it.
00:28:06.727 [2024-11-28 08:26:48.768194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.727 [2024-11-28 08:26:48.768206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.727 qpair failed and we were unable to recover it.
00:28:06.727 [2024-11-28 08:26:48.768345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.727 [2024-11-28 08:26:48.768356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.727 qpair failed and we were unable to recover it.
00:28:06.727 [2024-11-28 08:26:48.768556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.727 [2024-11-28 08:26:48.768568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.727 qpair failed and we were unable to recover it.
00:28:06.727 [2024-11-28 08:26:48.768643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.727 [2024-11-28 08:26:48.768654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.727 qpair failed and we were unable to recover it.
00:28:06.727 [2024-11-28 08:26:48.768814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.727 [2024-11-28 08:26:48.768826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.727 qpair failed and we were unable to recover it.
00:28:06.727 [2024-11-28 08:26:48.769076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.727 [2024-11-28 08:26:48.769088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.727 qpair failed and we were unable to recover it.
00:28:06.727 [2024-11-28 08:26:48.769220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.727 [2024-11-28 08:26:48.769232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.727 qpair failed and we were unable to recover it.
00:28:06.727 [2024-11-28 08:26:48.769317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.727 [2024-11-28 08:26:48.769328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.727 qpair failed and we were unable to recover it.
00:28:06.727 [2024-11-28 08:26:48.769469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.727 [2024-11-28 08:26:48.769480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.727 qpair failed and we were unable to recover it.
00:28:06.727 [2024-11-28 08:26:48.769703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.727 [2024-11-28 08:26:48.769715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.727 qpair failed and we were unable to recover it.
00:28:06.727 [2024-11-28 08:26:48.769853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.727 [2024-11-28 08:26:48.769865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.727 qpair failed and we were unable to recover it.
00:28:06.727 [2024-11-28 08:26:48.770053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.727 [2024-11-28 08:26:48.770066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.727 qpair failed and we were unable to recover it.
00:28:06.727 [2024-11-28 08:26:48.770150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.727 [2024-11-28 08:26:48.770163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.727 qpair failed and we were unable to recover it.
00:28:06.727 [2024-11-28 08:26:48.770244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.727 [2024-11-28 08:26:48.770256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.727 qpair failed and we were unable to recover it.
00:28:06.727 [2024-11-28 08:26:48.770410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.727 [2024-11-28 08:26:48.770422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.727 qpair failed and we were unable to recover it.
00:28:06.727 [2024-11-28 08:26:48.770491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.727 [2024-11-28 08:26:48.770503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.727 qpair failed and we were unable to recover it.
00:28:06.727 [2024-11-28 08:26:48.770598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.727 [2024-11-28 08:26:48.770610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.727 qpair failed and we were unable to recover it.
00:28:06.727 [2024-11-28 08:26:48.770696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.727 [2024-11-28 08:26:48.770708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.727 qpair failed and we were unable to recover it.
00:28:06.727 [2024-11-28 08:26:48.770796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.727 [2024-11-28 08:26:48.770808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.727 qpair failed and we were unable to recover it.
00:28:06.727 [2024-11-28 08:26:48.770952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.727 [2024-11-28 08:26:48.770964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.727 qpair failed and we were unable to recover it.
00:28:06.727 [2024-11-28 08:26:48.771107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.727 [2024-11-28 08:26:48.771118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.727 qpair failed and we were unable to recover it.
00:28:06.727 [2024-11-28 08:26:48.771282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.727 [2024-11-28 08:26:48.771294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.727 qpair failed and we were unable to recover it.
00:28:06.727 [2024-11-28 08:26:48.771368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.727 [2024-11-28 08:26:48.771380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.727 qpair failed and we were unable to recover it.
00:28:06.727 [2024-11-28 08:26:48.771515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.727 [2024-11-28 08:26:48.771527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.727 qpair failed and we were unable to recover it.
00:28:06.727 [2024-11-28 08:26:48.771605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.727 [2024-11-28 08:26:48.771617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.727 qpair failed and we were unable to recover it.
00:28:06.727 [2024-11-28 08:26:48.771759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.727 [2024-11-28 08:26:48.771770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.727 qpair failed and we were unable to recover it.
00:28:06.727 [2024-11-28 08:26:48.771944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.727 [2024-11-28 08:26:48.771960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.727 qpair failed and we were unable to recover it.
00:28:06.727 [2024-11-28 08:26:48.772092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.727 [2024-11-28 08:26:48.772105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.727 qpair failed and we were unable to recover it.
00:28:06.727 [2024-11-28 08:26:48.772189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.727 [2024-11-28 08:26:48.772201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.727 qpair failed and we were unable to recover it.
00:28:06.727 [2024-11-28 08:26:48.772278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.727 [2024-11-28 08:26:48.772290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.727 qpair failed and we were unable to recover it.
00:28:06.727 [2024-11-28 08:26:48.772363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.727 [2024-11-28 08:26:48.772375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.727 qpair failed and we were unable to recover it.
00:28:06.727 [2024-11-28 08:26:48.772515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.728 [2024-11-28 08:26:48.772527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.728 qpair failed and we were unable to recover it.
00:28:06.728 [2024-11-28 08:26:48.772677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.728 [2024-11-28 08:26:48.772689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.728 qpair failed and we were unable to recover it.
00:28:06.728 [2024-11-28 08:26:48.772765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.728 [2024-11-28 08:26:48.772777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.728 qpair failed and we were unable to recover it.
00:28:06.728 [2024-11-28 08:26:48.772848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.728 [2024-11-28 08:26:48.772860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.728 qpair failed and we were unable to recover it.
00:28:06.728 [2024-11-28 08:26:48.772943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.728 [2024-11-28 08:26:48.772958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.728 qpair failed and we were unable to recover it.
00:28:06.728 [2024-11-28 08:26:48.773171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.728 [2024-11-28 08:26:48.773184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.728 qpair failed and we were unable to recover it.
00:28:06.728 [2024-11-28 08:26:48.773263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.728 [2024-11-28 08:26:48.773275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.728 qpair failed and we were unable to recover it. 00:28:06.728 [2024-11-28 08:26:48.773349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.728 [2024-11-28 08:26:48.773361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.728 qpair failed and we were unable to recover it. 00:28:06.728 [2024-11-28 08:26:48.773507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.728 [2024-11-28 08:26:48.773519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.728 qpair failed and we were unable to recover it. 00:28:06.728 [2024-11-28 08:26:48.773628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.728 [2024-11-28 08:26:48.773640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.728 qpair failed and we were unable to recover it. 00:28:06.728 [2024-11-28 08:26:48.773769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.728 [2024-11-28 08:26:48.773781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.728 qpair failed and we were unable to recover it. 
00:28:06.728 [2024-11-28 08:26:48.773848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.728 [2024-11-28 08:26:48.773860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.728 qpair failed and we were unable to recover it. 00:28:06.728 [2024-11-28 08:26:48.774038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.728 [2024-11-28 08:26:48.774052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.728 qpair failed and we were unable to recover it. 00:28:06.728 [2024-11-28 08:26:48.774207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.728 [2024-11-28 08:26:48.774219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.728 qpair failed and we were unable to recover it. 00:28:06.728 [2024-11-28 08:26:48.774352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.728 [2024-11-28 08:26:48.774364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.728 qpair failed and we were unable to recover it. 00:28:06.728 [2024-11-28 08:26:48.774445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.728 [2024-11-28 08:26:48.774458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.728 qpair failed and we were unable to recover it. 
00:28:06.728 [2024-11-28 08:26:48.774530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.728 [2024-11-28 08:26:48.774543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.728 qpair failed and we were unable to recover it. 00:28:06.728 [2024-11-28 08:26:48.774672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.728 [2024-11-28 08:26:48.774684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.728 qpair failed and we were unable to recover it. 00:28:06.728 [2024-11-28 08:26:48.774835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.728 [2024-11-28 08:26:48.774848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.728 qpair failed and we were unable to recover it. 00:28:06.728 [2024-11-28 08:26:48.775003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.728 [2024-11-28 08:26:48.775016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.728 qpair failed and we were unable to recover it. 00:28:06.728 [2024-11-28 08:26:48.775175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.728 [2024-11-28 08:26:48.775187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.728 qpair failed and we were unable to recover it. 
00:28:06.728 [2024-11-28 08:26:48.775322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.728 [2024-11-28 08:26:48.775336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.728 qpair failed and we were unable to recover it. 00:28:06.728 [2024-11-28 08:26:48.775479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.728 [2024-11-28 08:26:48.775491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.728 qpair failed and we were unable to recover it. 00:28:06.728 [2024-11-28 08:26:48.775697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.728 [2024-11-28 08:26:48.775708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.728 qpair failed and we were unable to recover it. 00:28:06.728 [2024-11-28 08:26:48.775862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.728 [2024-11-28 08:26:48.775874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.728 qpair failed and we were unable to recover it. 00:28:06.728 [2024-11-28 08:26:48.776028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.728 [2024-11-28 08:26:48.776041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.728 qpair failed and we were unable to recover it. 
00:28:06.728 [2024-11-28 08:26:48.776177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.728 [2024-11-28 08:26:48.776189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.728 qpair failed and we were unable to recover it. 00:28:06.728 [2024-11-28 08:26:48.776333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.728 [2024-11-28 08:26:48.776346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.728 qpair failed and we were unable to recover it. 00:28:06.728 [2024-11-28 08:26:48.776498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.728 [2024-11-28 08:26:48.776510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.728 qpair failed and we were unable to recover it. 00:28:06.728 [2024-11-28 08:26:48.776656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.728 [2024-11-28 08:26:48.776669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.728 qpair failed and we were unable to recover it. 00:28:06.728 [2024-11-28 08:26:48.776812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.728 [2024-11-28 08:26:48.776824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.728 qpair failed and we were unable to recover it. 
00:28:06.728 [2024-11-28 08:26:48.776906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.728 [2024-11-28 08:26:48.776918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.728 qpair failed and we were unable to recover it. 00:28:06.728 [2024-11-28 08:26:48.777082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.728 [2024-11-28 08:26:48.777095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.728 qpair failed and we were unable to recover it. 00:28:06.728 [2024-11-28 08:26:48.777300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.728 [2024-11-28 08:26:48.777313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.728 qpair failed and we were unable to recover it. 00:28:06.728 [2024-11-28 08:26:48.777389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.728 [2024-11-28 08:26:48.777401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.728 qpair failed and we were unable to recover it. 00:28:06.728 [2024-11-28 08:26:48.777603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.728 [2024-11-28 08:26:48.777615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.728 qpair failed and we were unable to recover it. 
00:28:06.728 [2024-11-28 08:26:48.777695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.728 [2024-11-28 08:26:48.777707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.728 qpair failed and we were unable to recover it. 00:28:06.728 [2024-11-28 08:26:48.777864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.728 [2024-11-28 08:26:48.777877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.728 qpair failed and we were unable to recover it. 00:28:06.728 [2024-11-28 08:26:48.778025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.728 [2024-11-28 08:26:48.778037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.729 qpair failed and we were unable to recover it. 00:28:06.729 [2024-11-28 08:26:48.778120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.729 [2024-11-28 08:26:48.778132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.729 qpair failed and we were unable to recover it. 00:28:06.729 [2024-11-28 08:26:48.778332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.729 [2024-11-28 08:26:48.778344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.729 qpair failed and we were unable to recover it. 
00:28:06.729 [2024-11-28 08:26:48.778436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.729 [2024-11-28 08:26:48.778448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.729 qpair failed and we were unable to recover it. 00:28:06.729 [2024-11-28 08:26:48.778615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.729 [2024-11-28 08:26:48.778627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.729 qpair failed and we were unable to recover it. 00:28:06.729 [2024-11-28 08:26:48.778786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.729 [2024-11-28 08:26:48.778798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.729 qpair failed and we were unable to recover it. 00:28:06.729 [2024-11-28 08:26:48.778932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.729 [2024-11-28 08:26:48.778945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.729 qpair failed and we were unable to recover it. 00:28:06.729 [2024-11-28 08:26:48.779047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.729 [2024-11-28 08:26:48.779059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.729 qpair failed and we were unable to recover it. 
00:28:06.729 [2024-11-28 08:26:48.779204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.729 [2024-11-28 08:26:48.779216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.729 qpair failed and we were unable to recover it. 00:28:06.729 [2024-11-28 08:26:48.779414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.729 [2024-11-28 08:26:48.779426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.729 qpair failed and we were unable to recover it. 00:28:06.729 [2024-11-28 08:26:48.779535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.729 [2024-11-28 08:26:48.779571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:06.729 qpair failed and we were unable to recover it. 00:28:06.729 [2024-11-28 08:26:48.779688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.729 [2024-11-28 08:26:48.779725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420 00:28:06.729 qpair failed and we were unable to recover it. 00:28:06.729 [2024-11-28 08:26:48.779825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.729 [2024-11-28 08:26:48.779843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420 00:28:06.729 qpair failed and we were unable to recover it. 
00:28:06.729 [2024-11-28 08:26:48.779987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.729 [2024-11-28 08:26:48.780001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.729 qpair failed and we were unable to recover it. 00:28:06.729 [2024-11-28 08:26:48.780154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.729 [2024-11-28 08:26:48.780166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.729 qpair failed and we were unable to recover it. 00:28:06.729 [2024-11-28 08:26:48.780233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.729 [2024-11-28 08:26:48.780244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.729 qpair failed and we were unable to recover it. 00:28:06.729 [2024-11-28 08:26:48.780394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.729 [2024-11-28 08:26:48.780406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.729 qpair failed and we were unable to recover it. 00:28:06.729 [2024-11-28 08:26:48.780570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.729 [2024-11-28 08:26:48.780582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.729 qpair failed and we were unable to recover it. 
00:28:06.729 [2024-11-28 08:26:48.780715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.729 [2024-11-28 08:26:48.780727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.729 qpair failed and we were unable to recover it. 00:28:06.729 [2024-11-28 08:26:48.780799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.729 [2024-11-28 08:26:48.780811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.729 qpair failed and we were unable to recover it. 00:28:06.729 [2024-11-28 08:26:48.780972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.729 [2024-11-28 08:26:48.780985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.729 qpair failed and we were unable to recover it. 00:28:06.729 [2024-11-28 08:26:48.781089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.729 [2024-11-28 08:26:48.781101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.729 qpair failed and we were unable to recover it. 00:28:06.729 [2024-11-28 08:26:48.781188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.729 [2024-11-28 08:26:48.781200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.729 qpair failed and we were unable to recover it. 
00:28:06.729 [2024-11-28 08:26:48.781298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.729 [2024-11-28 08:26:48.781312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.729 qpair failed and we were unable to recover it. 00:28:06.729 [2024-11-28 08:26:48.781400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.729 [2024-11-28 08:26:48.781412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.729 qpair failed and we were unable to recover it. 00:28:06.729 [2024-11-28 08:26:48.781472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.729 [2024-11-28 08:26:48.781484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.729 qpair failed and we were unable to recover it. 00:28:06.729 [2024-11-28 08:26:48.781640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.729 [2024-11-28 08:26:48.781652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.729 qpair failed and we were unable to recover it. 00:28:06.729 [2024-11-28 08:26:48.781728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.729 [2024-11-28 08:26:48.781740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.729 qpair failed and we were unable to recover it. 
00:28:06.729 [2024-11-28 08:26:48.781817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.729 [2024-11-28 08:26:48.781828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.729 qpair failed and we were unable to recover it. 00:28:06.729 [2024-11-28 08:26:48.781900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.729 [2024-11-28 08:26:48.781912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.729 qpair failed and we were unable to recover it. 00:28:06.729 [2024-11-28 08:26:48.781991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.729 [2024-11-28 08:26:48.782004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.729 qpair failed and we were unable to recover it. 00:28:06.729 [2024-11-28 08:26:48.782147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.729 [2024-11-28 08:26:48.782160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.729 qpair failed and we were unable to recover it. 00:28:06.729 [2024-11-28 08:26:48.782241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.729 [2024-11-28 08:26:48.782253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.729 qpair failed and we were unable to recover it. 
00:28:06.729 [2024-11-28 08:26:48.782435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.729 [2024-11-28 08:26:48.782447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.729 qpair failed and we were unable to recover it. 00:28:06.729 [2024-11-28 08:26:48.782524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.729 [2024-11-28 08:26:48.782536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.729 qpair failed and we were unable to recover it. 00:28:06.729 [2024-11-28 08:26:48.782601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.729 [2024-11-28 08:26:48.782613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.729 qpair failed and we were unable to recover it. 00:28:06.729 [2024-11-28 08:26:48.782677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.729 [2024-11-28 08:26:48.782690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.729 qpair failed and we were unable to recover it. 00:28:06.729 [2024-11-28 08:26:48.782828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.729 [2024-11-28 08:26:48.782840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.729 qpair failed and we were unable to recover it. 
00:28:06.729 [2024-11-28 08:26:48.782911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.730 [2024-11-28 08:26:48.782923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.730 qpair failed and we were unable to recover it. 00:28:06.730 [2024-11-28 08:26:48.783016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.730 [2024-11-28 08:26:48.783028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.730 qpair failed and we were unable to recover it. 00:28:06.730 [2024-11-28 08:26:48.783098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.730 [2024-11-28 08:26:48.783109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.730 qpair failed and we were unable to recover it. 00:28:06.730 [2024-11-28 08:26:48.783184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.730 [2024-11-28 08:26:48.783196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.730 qpair failed and we were unable to recover it. 00:28:06.730 [2024-11-28 08:26:48.783402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.730 [2024-11-28 08:26:48.783414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.730 qpair failed and we were unable to recover it. 
00:28:06.730 [2024-11-28 08:26:48.783501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.730 [2024-11-28 08:26:48.783513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.730 qpair failed and we were unable to recover it.
[... identical connect() failed (errno = 111) / qpair failed messages for tqpair=0x7f6c34000b90 repeated, omitted ...]
00:28:06.730 [2024-11-28 08:26:48.784328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.730 [2024-11-28 08:26:48.784348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420
00:28:06.730 qpair failed and we were unable to recover it.
[... identical connect() failed (errno = 111) / qpair failed messages for tqpair=0x7f6c3c000b90 repeated, omitted ...]
00:28:06.731 [2024-11-28 08:26:48.790265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.731 [2024-11-28 08:26:48.790289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420
00:28:06.731 qpair failed and we were unable to recover it.
[... identical connect() failed (errno = 111) / qpair failed messages for tqpair=0x991be0 repeated, omitted ...]
00:28:06.733 [2024-11-28 08:26:48.800331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.733 [2024-11-28 08:26:48.800346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.733 qpair failed and we were unable to recover it. 
00:28:06.735 [2024-11-28 08:26:48.812858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.735 [2024-11-28 08:26:48.812871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.735 qpair failed and we were unable to recover it. 00:28:06.735 [2024-11-28 08:26:48.813003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.735 [2024-11-28 08:26:48.813016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.735 qpair failed and we were unable to recover it. 00:28:06.735 [2024-11-28 08:26:48.813160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.735 [2024-11-28 08:26:48.813172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.735 qpair failed and we were unable to recover it. 00:28:06.735 [2024-11-28 08:26:48.813315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.735 [2024-11-28 08:26:48.813328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.735 qpair failed and we were unable to recover it. 00:28:06.735 [2024-11-28 08:26:48.813463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.735 [2024-11-28 08:26:48.813476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.735 qpair failed and we were unable to recover it. 
00:28:06.735 [2024-11-28 08:26:48.813730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.736 [2024-11-28 08:26:48.813742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.736 qpair failed and we were unable to recover it. 00:28:06.736 [2024-11-28 08:26:48.813874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.736 [2024-11-28 08:26:48.813887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.736 qpair failed and we were unable to recover it. 00:28:06.736 [2024-11-28 08:26:48.814021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.736 [2024-11-28 08:26:48.814034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.736 qpair failed and we were unable to recover it. 00:28:06.736 [2024-11-28 08:26:48.814257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.736 [2024-11-28 08:26:48.814270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.736 qpair failed and we were unable to recover it. 00:28:06.736 [2024-11-28 08:26:48.814415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.736 [2024-11-28 08:26:48.814427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.736 qpair failed and we were unable to recover it. 
00:28:06.736 [2024-11-28 08:26:48.814516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.736 [2024-11-28 08:26:48.814529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.736 qpair failed and we were unable to recover it. 00:28:06.736 [2024-11-28 08:26:48.814629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.736 [2024-11-28 08:26:48.814641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.736 qpair failed and we were unable to recover it. 00:28:06.736 [2024-11-28 08:26:48.814787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.736 [2024-11-28 08:26:48.814799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.736 qpair failed and we were unable to recover it. 00:28:06.736 [2024-11-28 08:26:48.815014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.736 [2024-11-28 08:26:48.815026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.736 qpair failed and we were unable to recover it. 00:28:06.736 [2024-11-28 08:26:48.815120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.736 [2024-11-28 08:26:48.815133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.736 qpair failed and we were unable to recover it. 
00:28:06.736 [2024-11-28 08:26:48.815338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.736 [2024-11-28 08:26:48.815350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.736 qpair failed and we were unable to recover it. 00:28:06.736 [2024-11-28 08:26:48.815602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.736 [2024-11-28 08:26:48.815614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.736 qpair failed and we were unable to recover it. 00:28:06.736 [2024-11-28 08:26:48.815852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.736 [2024-11-28 08:26:48.815864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.736 qpair failed and we were unable to recover it. 00:28:06.736 [2024-11-28 08:26:48.815932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.736 [2024-11-28 08:26:48.815944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.736 qpair failed and we were unable to recover it. 00:28:06.736 [2024-11-28 08:26:48.816032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.736 [2024-11-28 08:26:48.816045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.736 qpair failed and we were unable to recover it. 
00:28:06.736 [2024-11-28 08:26:48.816139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.736 [2024-11-28 08:26:48.816162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.736 qpair failed and we were unable to recover it. 00:28:06.736 [2024-11-28 08:26:48.816307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.736 [2024-11-28 08:26:48.816319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.736 qpair failed and we were unable to recover it. 00:28:06.736 [2024-11-28 08:26:48.816388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.736 [2024-11-28 08:26:48.816400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.736 qpair failed and we were unable to recover it. 00:28:06.736 [2024-11-28 08:26:48.816477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.736 [2024-11-28 08:26:48.816489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.736 qpair failed and we were unable to recover it. 00:28:06.736 [2024-11-28 08:26:48.816637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.736 [2024-11-28 08:26:48.816649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.736 qpair failed and we were unable to recover it. 
00:28:06.736 [2024-11-28 08:26:48.816819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.736 [2024-11-28 08:26:48.816833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.736 qpair failed and we were unable to recover it. 00:28:06.736 [2024-11-28 08:26:48.817092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.736 [2024-11-28 08:26:48.817106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.736 qpair failed and we were unable to recover it. 00:28:06.736 [2024-11-28 08:26:48.817237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.736 [2024-11-28 08:26:48.817249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.736 qpair failed and we were unable to recover it. 00:28:06.736 [2024-11-28 08:26:48.817399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.736 [2024-11-28 08:26:48.817411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.736 qpair failed and we were unable to recover it. 00:28:06.736 [2024-11-28 08:26:48.817565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.736 [2024-11-28 08:26:48.817577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.736 qpair failed and we were unable to recover it. 
00:28:06.736 [2024-11-28 08:26:48.817721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.736 [2024-11-28 08:26:48.817732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.736 qpair failed and we were unable to recover it. 00:28:06.736 [2024-11-28 08:26:48.817881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.736 [2024-11-28 08:26:48.817894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.736 qpair failed and we were unable to recover it. 00:28:06.736 [2024-11-28 08:26:48.817990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.736 [2024-11-28 08:26:48.818002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.736 qpair failed and we were unable to recover it. 00:28:06.736 [2024-11-28 08:26:48.818095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.736 [2024-11-28 08:26:48.818107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.736 qpair failed and we were unable to recover it. 00:28:06.736 [2024-11-28 08:26:48.818251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.736 [2024-11-28 08:26:48.818264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.736 qpair failed and we were unable to recover it. 
00:28:06.736 [2024-11-28 08:26:48.818345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.736 [2024-11-28 08:26:48.818357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.736 qpair failed and we were unable to recover it. 00:28:06.736 [2024-11-28 08:26:48.818490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.736 [2024-11-28 08:26:48.818502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.736 qpair failed and we were unable to recover it. 00:28:06.736 [2024-11-28 08:26:48.818598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.736 [2024-11-28 08:26:48.818610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.736 qpair failed and we were unable to recover it. 00:28:06.736 [2024-11-28 08:26:48.818753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.736 [2024-11-28 08:26:48.818765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.736 qpair failed and we were unable to recover it. 00:28:06.736 [2024-11-28 08:26:48.818966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.736 [2024-11-28 08:26:48.818979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.736 qpair failed and we were unable to recover it. 
00:28:06.736 [2024-11-28 08:26:48.819115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.736 [2024-11-28 08:26:48.819128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.736 qpair failed and we were unable to recover it. 00:28:06.736 [2024-11-28 08:26:48.819216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.736 [2024-11-28 08:26:48.819228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.736 qpair failed and we were unable to recover it. 00:28:06.736 [2024-11-28 08:26:48.819323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.736 [2024-11-28 08:26:48.819335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.736 qpair failed and we were unable to recover it. 00:28:06.736 [2024-11-28 08:26:48.819497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.737 [2024-11-28 08:26:48.819509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.737 qpair failed and we were unable to recover it. 00:28:06.737 [2024-11-28 08:26:48.819660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.737 [2024-11-28 08:26:48.819673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.737 qpair failed and we were unable to recover it. 
00:28:06.737 [2024-11-28 08:26:48.819828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.737 [2024-11-28 08:26:48.819840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.737 qpair failed and we were unable to recover it. 00:28:06.737 [2024-11-28 08:26:48.819967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.737 [2024-11-28 08:26:48.819979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.737 qpair failed and we were unable to recover it. 00:28:06.737 [2024-11-28 08:26:48.820064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.737 [2024-11-28 08:26:48.820076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.737 qpair failed and we were unable to recover it. 00:28:06.737 [2024-11-28 08:26:48.820150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.737 [2024-11-28 08:26:48.820161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.737 qpair failed and we were unable to recover it. 00:28:06.737 [2024-11-28 08:26:48.820321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.737 [2024-11-28 08:26:48.820333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.737 qpair failed and we were unable to recover it. 
00:28:06.737 [2024-11-28 08:26:48.820476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.737 [2024-11-28 08:26:48.820488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.737 qpair failed and we were unable to recover it. 00:28:06.737 [2024-11-28 08:26:48.820566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.737 [2024-11-28 08:26:48.820579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.737 qpair failed and we were unable to recover it. 00:28:06.737 [2024-11-28 08:26:48.820668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.737 [2024-11-28 08:26:48.820681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.737 qpair failed and we were unable to recover it. 00:28:06.737 [2024-11-28 08:26:48.820814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.737 [2024-11-28 08:26:48.820825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.737 qpair failed and we were unable to recover it. 00:28:06.737 [2024-11-28 08:26:48.820890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.737 [2024-11-28 08:26:48.820902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.737 qpair failed and we were unable to recover it. 
00:28:06.737 [2024-11-28 08:26:48.821052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.737 [2024-11-28 08:26:48.821064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.737 qpair failed and we were unable to recover it. 00:28:06.737 [2024-11-28 08:26:48.821131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.737 [2024-11-28 08:26:48.821142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.737 qpair failed and we were unable to recover it. 00:28:06.737 [2024-11-28 08:26:48.821299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.737 [2024-11-28 08:26:48.821311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.737 qpair failed and we were unable to recover it. 00:28:06.737 [2024-11-28 08:26:48.821462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.737 [2024-11-28 08:26:48.821474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.737 qpair failed and we were unable to recover it. 00:28:06.737 [2024-11-28 08:26:48.821553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.737 [2024-11-28 08:26:48.821565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.737 qpair failed and we were unable to recover it. 
00:28:06.737 [2024-11-28 08:26:48.821657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.737 [2024-11-28 08:26:48.821669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.737 qpair failed and we were unable to recover it. 00:28:06.737 [2024-11-28 08:26:48.821800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.737 [2024-11-28 08:26:48.821811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.737 qpair failed and we were unable to recover it. 00:28:06.737 [2024-11-28 08:26:48.821942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.737 [2024-11-28 08:26:48.821960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.737 qpair failed and we were unable to recover it. 00:28:06.737 [2024-11-28 08:26:48.822092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.737 [2024-11-28 08:26:48.822104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.737 qpair failed and we were unable to recover it. 00:28:06.737 [2024-11-28 08:26:48.822238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.737 [2024-11-28 08:26:48.822250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.737 qpair failed and we were unable to recover it. 
00:28:06.737 [2024-11-28 08:26:48.822324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.737 [2024-11-28 08:26:48.822338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.737 qpair failed and we were unable to recover it. 00:28:06.737 [2024-11-28 08:26:48.822471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.737 [2024-11-28 08:26:48.822483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.737 qpair failed and we were unable to recover it. 00:28:06.737 [2024-11-28 08:26:48.822648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.737 [2024-11-28 08:26:48.822660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.737 qpair failed and we were unable to recover it. 00:28:06.737 [2024-11-28 08:26:48.822857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.737 [2024-11-28 08:26:48.822869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.737 qpair failed and we were unable to recover it. 00:28:06.737 [2024-11-28 08:26:48.823018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.737 [2024-11-28 08:26:48.823031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.737 qpair failed and we were unable to recover it. 
00:28:06.737 [2024-11-28 08:26:48.823233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.737 [2024-11-28 08:26:48.823245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.737 qpair failed and we were unable to recover it. 00:28:06.737 [2024-11-28 08:26:48.823376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.737 [2024-11-28 08:26:48.823388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.737 qpair failed and we were unable to recover it. 00:28:06.737 [2024-11-28 08:26:48.823571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.737 [2024-11-28 08:26:48.823583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.737 qpair failed and we were unable to recover it. 00:28:06.737 [2024-11-28 08:26:48.823714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.737 [2024-11-28 08:26:48.823726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.737 qpair failed and we were unable to recover it. 00:28:06.737 [2024-11-28 08:26:48.823897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.737 [2024-11-28 08:26:48.823909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.737 qpair failed and we were unable to recover it. 
00:28:06.737 [2024-11-28 08:26:48.824145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.737 [2024-11-28 08:26:48.824157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.737 qpair failed and we were unable to recover it.
[the same three-message sequence — posix.c:1054:posix_sock_create connect() failed, errno = 111; nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock sock connection error; "qpair failed and we were unable to recover it." — repeats roughly 110 more times between 08:26:48.824402 and 08:26:48.840822, cycling through tqpair handles 0x7f6c34000b90, 0x991be0, 0x7f6c30000b90, and 0x7f6c3c000b90, all targeting addr=10.0.0.2, port=4420]
00:28:06.741 [2024-11-28 08:26:48.840897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.741 [2024-11-28 08:26:48.840909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.741 qpair failed and we were unable to recover it. 00:28:06.741 [2024-11-28 08:26:48.840988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.741 [2024-11-28 08:26:48.841000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.741 qpair failed and we were unable to recover it. 00:28:06.741 [2024-11-28 08:26:48.841156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.741 [2024-11-28 08:26:48.841168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.741 qpair failed and we were unable to recover it. 00:28:06.741 [2024-11-28 08:26:48.841300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.741 [2024-11-28 08:26:48.841312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.741 qpair failed and we were unable to recover it. 00:28:06.741 [2024-11-28 08:26:48.841448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.741 [2024-11-28 08:26:48.841461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.741 qpair failed and we were unable to recover it. 
00:28:06.741 [2024-11-28 08:26:48.841604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.741 [2024-11-28 08:26:48.841616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.741 qpair failed and we were unable to recover it. 00:28:06.741 [2024-11-28 08:26:48.841723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.741 [2024-11-28 08:26:48.841735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.741 qpair failed and we were unable to recover it. 00:28:06.741 [2024-11-28 08:26:48.841805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.741 [2024-11-28 08:26:48.841818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.741 qpair failed and we were unable to recover it. 00:28:06.741 [2024-11-28 08:26:48.841968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.741 [2024-11-28 08:26:48.841980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.741 qpair failed and we were unable to recover it. 00:28:06.741 [2024-11-28 08:26:48.842152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.741 [2024-11-28 08:26:48.842164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.741 qpair failed and we were unable to recover it. 
00:28:06.741 [2024-11-28 08:26:48.842295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.741 [2024-11-28 08:26:48.842307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.741 qpair failed and we were unable to recover it. 00:28:06.741 [2024-11-28 08:26:48.842451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.741 [2024-11-28 08:26:48.842463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.741 qpair failed and we were unable to recover it. 00:28:06.741 [2024-11-28 08:26:48.842608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.741 [2024-11-28 08:26:48.842620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.741 qpair failed and we were unable to recover it. 00:28:06.741 [2024-11-28 08:26:48.842702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.741 [2024-11-28 08:26:48.842714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.741 qpair failed and we were unable to recover it. 00:28:06.741 [2024-11-28 08:26:48.842791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.741 [2024-11-28 08:26:48.842803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.741 qpair failed and we were unable to recover it. 
00:28:06.741 [2024-11-28 08:26:48.843056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.741 [2024-11-28 08:26:48.843068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.741 qpair failed and we were unable to recover it. 00:28:06.741 [2024-11-28 08:26:48.843164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.741 [2024-11-28 08:26:48.843176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.741 qpair failed and we were unable to recover it. 00:28:06.741 [2024-11-28 08:26:48.843335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.741 [2024-11-28 08:26:48.843347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.741 qpair failed and we were unable to recover it. 00:28:06.741 [2024-11-28 08:26:48.843507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.741 [2024-11-28 08:26:48.843519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.741 qpair failed and we were unable to recover it. 00:28:06.741 [2024-11-28 08:26:48.843719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.741 [2024-11-28 08:26:48.843731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.741 qpair failed and we were unable to recover it. 
00:28:06.741 [2024-11-28 08:26:48.843875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.741 [2024-11-28 08:26:48.843887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.741 qpair failed and we were unable to recover it. 00:28:06.741 [2024-11-28 08:26:48.844049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.741 [2024-11-28 08:26:48.844062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.741 qpair failed and we were unable to recover it. 00:28:06.741 [2024-11-28 08:26:48.844212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.741 [2024-11-28 08:26:48.844231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.741 qpair failed and we were unable to recover it. 00:28:06.741 [2024-11-28 08:26:48.844312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.741 [2024-11-28 08:26:48.844328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.741 qpair failed and we were unable to recover it. 00:28:06.741 [2024-11-28 08:26:48.844421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.741 [2024-11-28 08:26:48.844437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.741 qpair failed and we were unable to recover it. 
00:28:06.741 [2024-11-28 08:26:48.844574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.741 [2024-11-28 08:26:48.844590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.741 qpair failed and we were unable to recover it. 00:28:06.741 [2024-11-28 08:26:48.844763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.741 [2024-11-28 08:26:48.844779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.741 qpair failed and we were unable to recover it. 00:28:06.741 [2024-11-28 08:26:48.844939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.741 [2024-11-28 08:26:48.844960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.741 qpair failed and we were unable to recover it. 00:28:06.741 [2024-11-28 08:26:48.845054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.741 [2024-11-28 08:26:48.845069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.741 qpair failed and we were unable to recover it. 00:28:06.741 [2024-11-28 08:26:48.845215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.741 [2024-11-28 08:26:48.845231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.741 qpair failed and we were unable to recover it. 
00:28:06.741 [2024-11-28 08:26:48.845343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.741 [2024-11-28 08:26:48.845358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.741 qpair failed and we were unable to recover it. 00:28:06.741 [2024-11-28 08:26:48.845451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.741 [2024-11-28 08:26:48.845466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.741 qpair failed and we were unable to recover it. 00:28:06.741 [2024-11-28 08:26:48.845608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.742 [2024-11-28 08:26:48.845623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.742 qpair failed and we were unable to recover it. 00:28:06.742 [2024-11-28 08:26:48.845833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.742 [2024-11-28 08:26:48.845848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.742 qpair failed and we were unable to recover it. 00:28:06.742 [2024-11-28 08:26:48.845953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.742 [2024-11-28 08:26:48.845969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.742 qpair failed and we were unable to recover it. 
00:28:06.742 [2024-11-28 08:26:48.846108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.742 [2024-11-28 08:26:48.846123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.742 qpair failed and we were unable to recover it. 00:28:06.742 [2024-11-28 08:26:48.846274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.742 [2024-11-28 08:26:48.846290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.742 qpair failed and we were unable to recover it. 00:28:06.742 [2024-11-28 08:26:48.846524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.742 [2024-11-28 08:26:48.846539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.742 qpair failed and we were unable to recover it. 00:28:06.742 [2024-11-28 08:26:48.846693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.742 [2024-11-28 08:26:48.846708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.742 qpair failed and we were unable to recover it. 00:28:06.742 [2024-11-28 08:26:48.846887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.742 [2024-11-28 08:26:48.846902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.742 qpair failed and we were unable to recover it. 
00:28:06.742 [2024-11-28 08:26:48.847107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.742 [2024-11-28 08:26:48.847124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.742 qpair failed and we were unable to recover it. 00:28:06.742 [2024-11-28 08:26:48.847223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.742 [2024-11-28 08:26:48.847239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.742 qpair failed and we were unable to recover it. 00:28:06.742 [2024-11-28 08:26:48.847383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.742 [2024-11-28 08:26:48.847399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.742 qpair failed and we were unable to recover it. 00:28:06.742 [2024-11-28 08:26:48.847475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.742 [2024-11-28 08:26:48.847491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.742 qpair failed and we were unable to recover it. 00:28:06.742 [2024-11-28 08:26:48.847576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.742 [2024-11-28 08:26:48.847591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.742 qpair failed and we were unable to recover it. 
00:28:06.742 [2024-11-28 08:26:48.847683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.742 [2024-11-28 08:26:48.847699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.742 qpair failed and we were unable to recover it. 00:28:06.742 [2024-11-28 08:26:48.847833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.742 [2024-11-28 08:26:48.847849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.742 qpair failed and we were unable to recover it. 00:28:06.742 [2024-11-28 08:26:48.848035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.742 [2024-11-28 08:26:48.848051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.742 qpair failed and we were unable to recover it. 00:28:06.742 [2024-11-28 08:26:48.848196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.742 [2024-11-28 08:26:48.848211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.742 qpair failed and we were unable to recover it. 00:28:06.742 [2024-11-28 08:26:48.848439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.742 [2024-11-28 08:26:48.848458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.742 qpair failed and we were unable to recover it. 
00:28:06.742 [2024-11-28 08:26:48.848555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.742 [2024-11-28 08:26:48.848571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.742 qpair failed and we were unable to recover it. 00:28:06.742 [2024-11-28 08:26:48.848711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.742 [2024-11-28 08:26:48.848726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.742 qpair failed and we were unable to recover it. 00:28:06.742 [2024-11-28 08:26:48.848867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.742 [2024-11-28 08:26:48.848882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.742 qpair failed and we were unable to recover it. 00:28:06.742 [2024-11-28 08:26:48.848992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.742 [2024-11-28 08:26:48.849008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.742 qpair failed and we were unable to recover it. 00:28:06.742 [2024-11-28 08:26:48.849226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.742 [2024-11-28 08:26:48.849242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.742 qpair failed and we were unable to recover it. 
00:28:06.742 [2024-11-28 08:26:48.849404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.742 [2024-11-28 08:26:48.849420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.742 qpair failed and we were unable to recover it. 00:28:06.742 [2024-11-28 08:26:48.849573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.742 [2024-11-28 08:26:48.849588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.742 qpair failed and we were unable to recover it. 00:28:06.742 [2024-11-28 08:26:48.849796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.742 [2024-11-28 08:26:48.849812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.742 qpair failed and we were unable to recover it. 00:28:06.742 [2024-11-28 08:26:48.849895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.742 [2024-11-28 08:26:48.849911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.742 qpair failed and we were unable to recover it. 00:28:06.742 [2024-11-28 08:26:48.850078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.742 [2024-11-28 08:26:48.850095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.742 qpair failed and we were unable to recover it. 
00:28:06.742 [2024-11-28 08:26:48.850185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.742 [2024-11-28 08:26:48.850201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.742 qpair failed and we were unable to recover it. 00:28:06.742 [2024-11-28 08:26:48.850374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.742 [2024-11-28 08:26:48.850389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.742 qpair failed and we were unable to recover it. 00:28:06.742 [2024-11-28 08:26:48.850489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.742 [2024-11-28 08:26:48.850505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.742 qpair failed and we were unable to recover it. 00:28:06.742 [2024-11-28 08:26:48.850593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.742 [2024-11-28 08:26:48.850609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.742 qpair failed and we were unable to recover it. 00:28:06.742 [2024-11-28 08:26:48.850750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.742 [2024-11-28 08:26:48.850766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.742 qpair failed and we were unable to recover it. 
00:28:06.742 [2024-11-28 08:26:48.850916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.742 [2024-11-28 08:26:48.850932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.742 qpair failed and we were unable to recover it. 00:28:06.742 [2024-11-28 08:26:48.851113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.742 [2024-11-28 08:26:48.851131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:06.742 qpair failed and we were unable to recover it. 00:28:06.742 [2024-11-28 08:26:48.851230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.742 [2024-11-28 08:26:48.851244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.742 qpair failed and we were unable to recover it. 00:28:06.742 [2024-11-28 08:26:48.851326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.742 [2024-11-28 08:26:48.851338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.742 qpair failed and we were unable to recover it. 00:28:06.742 [2024-11-28 08:26:48.851469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.742 [2024-11-28 08:26:48.851481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.742 qpair failed and we were unable to recover it. 
00:28:06.742 [2024-11-28 08:26:48.851573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.743 [2024-11-28 08:26:48.851585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.743 qpair failed and we were unable to recover it. 00:28:06.743 [2024-11-28 08:26:48.851734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.743 [2024-11-28 08:26:48.851746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.743 qpair failed and we were unable to recover it. 00:28:06.743 [2024-11-28 08:26:48.851945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.743 [2024-11-28 08:26:48.851964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.743 qpair failed and we were unable to recover it. 00:28:06.743 [2024-11-28 08:26:48.852176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.743 [2024-11-28 08:26:48.852188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.743 qpair failed and we were unable to recover it. 00:28:06.743 [2024-11-28 08:26:48.852244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.743 [2024-11-28 08:26:48.852256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.743 qpair failed and we were unable to recover it. 
00:28:06.743 [2024-11-28 08:26:48.852339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.743 [2024-11-28 08:26:48.852352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.743 qpair failed and we were unable to recover it.
00:28:06.743 [2024-11-28 08:26:48.852434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.743 [2024-11-28 08:26:48.852448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.743 qpair failed and we were unable to recover it.
00:28:06.743 [2024-11-28 08:26:48.852595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.743 [2024-11-28 08:26:48.852608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.743 qpair failed and we were unable to recover it.
00:28:06.743 [2024-11-28 08:26:48.852689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.743 [2024-11-28 08:26:48.852701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.743 qpair failed and we were unable to recover it.
00:28:06.743 [2024-11-28 08:26:48.852774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.743 [2024-11-28 08:26:48.852786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.743 qpair failed and we were unable to recover it.
00:28:06.743 [2024-11-28 08:26:48.852932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.743 [2024-11-28 08:26:48.852944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.743 qpair failed and we were unable to recover it.
00:28:06.743 [2024-11-28 08:26:48.853096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.743 [2024-11-28 08:26:48.853109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.743 qpair failed and we were unable to recover it.
00:28:06.743 [2024-11-28 08:26:48.853193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.743 [2024-11-28 08:26:48.853206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.743 qpair failed and we were unable to recover it.
00:28:06.743 [2024-11-28 08:26:48.853281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.743 [2024-11-28 08:26:48.853293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.743 qpair failed and we were unable to recover it.
00:28:06.743 [2024-11-28 08:26:48.853368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.743 [2024-11-28 08:26:48.853380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.743 qpair failed and we were unable to recover it.
00:28:06.743 [2024-11-28 08:26:48.853523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.743 [2024-11-28 08:26:48.853536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.743 qpair failed and we were unable to recover it.
00:28:06.743 [2024-11-28 08:26:48.853705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.743 [2024-11-28 08:26:48.853717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.743 qpair failed and we were unable to recover it.
00:28:06.743 [2024-11-28 08:26:48.853783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.743 [2024-11-28 08:26:48.853795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.743 qpair failed and we were unable to recover it.
00:28:06.743 [2024-11-28 08:26:48.853860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.743 [2024-11-28 08:26:48.853872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.743 qpair failed and we were unable to recover it.
00:28:06.743 [2024-11-28 08:26:48.853963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.743 [2024-11-28 08:26:48.853992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.743 qpair failed and we were unable to recover it.
00:28:06.743 [2024-11-28 08:26:48.854049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.743 [2024-11-28 08:26:48.854061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.743 qpair failed and we were unable to recover it.
00:28:06.743 [2024-11-28 08:26:48.854152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.743 [2024-11-28 08:26:48.854165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.743 qpair failed and we were unable to recover it.
00:28:06.743 [2024-11-28 08:26:48.854265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.743 [2024-11-28 08:26:48.854277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.743 qpair failed and we were unable to recover it.
00:28:06.743 [2024-11-28 08:26:48.854353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.743 [2024-11-28 08:26:48.854366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.743 qpair failed and we were unable to recover it.
00:28:06.743 [2024-11-28 08:26:48.854453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.743 [2024-11-28 08:26:48.854465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.743 qpair failed and we were unable to recover it.
00:28:06.743 [2024-11-28 08:26:48.854674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.743 [2024-11-28 08:26:48.854687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.743 qpair failed and we were unable to recover it.
00:28:06.743 [2024-11-28 08:26:48.854772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.743 [2024-11-28 08:26:48.854785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.743 qpair failed and we were unable to recover it.
00:28:06.743 [2024-11-28 08:26:48.854971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.743 [2024-11-28 08:26:48.854984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.743 qpair failed and we were unable to recover it.
00:28:06.743 [2024-11-28 08:26:48.855070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.743 [2024-11-28 08:26:48.855083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.743 qpair failed and we were unable to recover it.
00:28:06.743 [2024-11-28 08:26:48.855161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.743 [2024-11-28 08:26:48.855174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.743 qpair failed and we were unable to recover it.
00:28:06.743 [2024-11-28 08:26:48.855338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.743 [2024-11-28 08:26:48.855351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.743 qpair failed and we were unable to recover it.
00:28:06.743 [2024-11-28 08:26:48.855447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.743 [2024-11-28 08:26:48.855460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.743 qpair failed and we were unable to recover it.
00:28:06.743 [2024-11-28 08:26:48.855684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.743 [2024-11-28 08:26:48.855696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.743 qpair failed and we were unable to recover it.
00:28:06.743 [2024-11-28 08:26:48.855869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.743 [2024-11-28 08:26:48.855887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420
00:28:06.743 qpair failed and we were unable to recover it.
00:28:06.743 [2024-11-28 08:26:48.855982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.743 [2024-11-28 08:26:48.855998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420
00:28:06.743 qpair failed and we were unable to recover it.
00:28:06.743 [2024-11-28 08:26:48.856141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.743 [2024-11-28 08:26:48.856157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420
00:28:06.743 qpair failed and we were unable to recover it.
00:28:06.743 [2024-11-28 08:26:48.856267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.743 [2024-11-28 08:26:48.856282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420
00:28:06.743 qpair failed and we were unable to recover it.
00:28:06.743 [2024-11-28 08:26:48.856354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.743 [2024-11-28 08:26:48.856369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420
00:28:06.743 qpair failed and we were unable to recover it.
00:28:06.743 [2024-11-28 08:26:48.856462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.744 [2024-11-28 08:26:48.856478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420
00:28:06.744 qpair failed and we were unable to recover it.
00:28:06.744 [2024-11-28 08:26:48.856629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.744 [2024-11-28 08:26:48.856644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420
00:28:06.744 qpair failed and we were unable to recover it.
00:28:06.744 [2024-11-28 08:26:48.856806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.744 [2024-11-28 08:26:48.856822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420
00:28:06.744 qpair failed and we were unable to recover it.
00:28:06.744 [2024-11-28 08:26:48.857038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.744 [2024-11-28 08:26:48.857054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420
00:28:06.744 qpair failed and we were unable to recover it.
00:28:06.744 [2024-11-28 08:26:48.857132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.744 [2024-11-28 08:26:48.857148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420
00:28:06.744 qpair failed and we were unable to recover it.
00:28:06.744 [2024-11-28 08:26:48.857379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.744 [2024-11-28 08:26:48.857395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420
00:28:06.744 qpair failed and we were unable to recover it.
00:28:06.744 [2024-11-28 08:26:48.857545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.744 [2024-11-28 08:26:48.857561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420
00:28:06.744 qpair failed and we were unable to recover it.
00:28:06.744 [2024-11-28 08:26:48.857737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.744 [2024-11-28 08:26:48.857753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420
00:28:06.744 qpair failed and we were unable to recover it.
00:28:06.744 [2024-11-28 08:26:48.857916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.744 [2024-11-28 08:26:48.857935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420
00:28:06.744 qpair failed and we were unable to recover it.
00:28:06.744 [2024-11-28 08:26:48.858042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.744 [2024-11-28 08:26:48.858058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420
00:28:06.744 qpair failed and we were unable to recover it.
00:28:06.744 [2024-11-28 08:26:48.858150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.744 [2024-11-28 08:26:48.858166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420
00:28:06.744 qpair failed and we were unable to recover it.
00:28:06.744 [2024-11-28 08:26:48.858359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.744 [2024-11-28 08:26:48.858374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420
00:28:06.744 qpair failed and we were unable to recover it.
00:28:06.744 [2024-11-28 08:26:48.858618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.744 [2024-11-28 08:26:48.858633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420
00:28:06.744 qpair failed and we were unable to recover it.
00:28:06.744 [2024-11-28 08:26:48.858718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.744 [2024-11-28 08:26:48.858733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420
00:28:06.744 qpair failed and we were unable to recover it.
00:28:06.744 [2024-11-28 08:26:48.858841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.744 [2024-11-28 08:26:48.858857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420
00:28:06.744 qpair failed and we were unable to recover it.
00:28:06.744 [2024-11-28 08:26:48.859065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.744 [2024-11-28 08:26:48.859081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420
00:28:06.744 qpair failed and we were unable to recover it.
00:28:06.744 [2024-11-28 08:26:48.859230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.744 [2024-11-28 08:26:48.859247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420
00:28:06.744 qpair failed and we were unable to recover it.
00:28:06.744 [2024-11-28 08:26:48.859346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.744 [2024-11-28 08:26:48.859362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420
00:28:06.744 qpair failed and we were unable to recover it.
00:28:06.744 [2024-11-28 08:26:48.859521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.744 [2024-11-28 08:26:48.859537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420
00:28:06.744 qpair failed and we were unable to recover it.
00:28:06.744 [2024-11-28 08:26:48.859634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.744 [2024-11-28 08:26:48.859649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420
00:28:06.744 qpair failed and we were unable to recover it.
00:28:06.744 [2024-11-28 08:26:48.859808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.744 [2024-11-28 08:26:48.859823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420
00:28:06.744 qpair failed and we were unable to recover it.
00:28:06.744 [2024-11-28 08:26:48.859980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.744 [2024-11-28 08:26:48.859996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420
00:28:06.744 qpair failed and we were unable to recover it.
00:28:06.744 [2024-11-28 08:26:48.860090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.744 [2024-11-28 08:26:48.860106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420
00:28:06.744 qpair failed and we were unable to recover it.
00:28:06.744 [2024-11-28 08:26:48.860336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.744 [2024-11-28 08:26:48.860351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420
00:28:06.744 qpair failed and we were unable to recover it.
00:28:06.744 [2024-11-28 08:26:48.860445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.744 [2024-11-28 08:26:48.860461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420
00:28:06.744 qpair failed and we were unable to recover it.
00:28:06.744 [2024-11-28 08:26:48.860566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.744 [2024-11-28 08:26:48.860582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420
00:28:06.744 qpair failed and we were unable to recover it.
00:28:06.744 [2024-11-28 08:26:48.860740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.744 [2024-11-28 08:26:48.860756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420
00:28:06.744 qpair failed and we were unable to recover it.
00:28:06.744 [2024-11-28 08:26:48.860852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.744 [2024-11-28 08:26:48.860867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420
00:28:06.744 qpair failed and we were unable to recover it.
00:28:06.744 [2024-11-28 08:26:48.861015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.744 [2024-11-28 08:26:48.861031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420
00:28:06.744 qpair failed and we were unable to recover it.
00:28:06.744 [2024-11-28 08:26:48.861119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.744 [2024-11-28 08:26:48.861135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420
00:28:06.744 qpair failed and we were unable to recover it.
00:28:06.744 [2024-11-28 08:26:48.861291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.745 [2024-11-28 08:26:48.861307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420
00:28:06.745 qpair failed and we were unable to recover it.
00:28:06.745 [2024-11-28 08:26:48.861460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.745 [2024-11-28 08:26:48.861475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420
00:28:06.745 qpair failed and we were unable to recover it.
00:28:06.745 [2024-11-28 08:26:48.861547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.745 [2024-11-28 08:26:48.861563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420
00:28:06.745 qpair failed and we were unable to recover it.
00:28:06.745 [2024-11-28 08:26:48.861760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.745 [2024-11-28 08:26:48.861776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420
00:28:06.745 qpair failed and we were unable to recover it.
00:28:06.745 [2024-11-28 08:26:48.861867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.745 [2024-11-28 08:26:48.861883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420
00:28:06.745 qpair failed and we were unable to recover it.
00:28:06.745 [2024-11-28 08:26:48.861990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.745 [2024-11-28 08:26:48.862009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420
00:28:06.745 qpair failed and we were unable to recover it.
00:28:06.745 [2024-11-28 08:26:48.862097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.745 [2024-11-28 08:26:48.862113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420
00:28:06.745 qpair failed and we were unable to recover it.
00:28:06.745 [2024-11-28 08:26:48.862288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.745 [2024-11-28 08:26:48.862304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420
00:28:06.745 qpair failed and we were unable to recover it.
00:28:06.745 [2024-11-28 08:26:48.862481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.745 [2024-11-28 08:26:48.862497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420
00:28:06.745 qpair failed and we were unable to recover it.
00:28:06.745 [2024-11-28 08:26:48.862581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.745 [2024-11-28 08:26:48.862597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420
00:28:06.745 qpair failed and we were unable to recover it.
00:28:06.745 [2024-11-28 08:26:48.862806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.745 [2024-11-28 08:26:48.862822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420
00:28:06.745 qpair failed and we were unable to recover it.
00:28:06.745 [2024-11-28 08:26:48.862922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.745 [2024-11-28 08:26:48.862938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420
00:28:06.745 qpair failed and we were unable to recover it.
00:28:06.745 [2024-11-28 08:26:48.863061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.745 [2024-11-28 08:26:48.863077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420
00:28:06.745 qpair failed and we were unable to recover it.
00:28:06.745 [2024-11-28 08:26:48.863226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.745 [2024-11-28 08:26:48.863241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420
00:28:06.745 qpair failed and we were unable to recover it.
00:28:06.745 [2024-11-28 08:26:48.863346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.745 [2024-11-28 08:26:48.863362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420
00:28:06.745 qpair failed and we were unable to recover it.
00:28:06.745 [2024-11-28 08:26:48.863437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.745 [2024-11-28 08:26:48.863453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420
00:28:06.745 qpair failed and we were unable to recover it.
00:28:06.745 [2024-11-28 08:26:48.863597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.745 [2024-11-28 08:26:48.863612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420
00:28:06.745 qpair failed and we were unable to recover it.
00:28:06.745 [2024-11-28 08:26:48.863713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.745 [2024-11-28 08:26:48.863729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420
00:28:06.745 qpair failed and we were unable to recover it.
00:28:06.745 [2024-11-28 08:26:48.863934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.745 [2024-11-28 08:26:48.863960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420
00:28:06.745 qpair failed and we were unable to recover it.
00:28:06.745 [2024-11-28 08:26:48.864035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.745 [2024-11-28 08:26:48.864051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420
00:28:06.745 qpair failed and we were unable to recover it.
00:28:06.745 [2024-11-28 08:26:48.864145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.745 [2024-11-28 08:26:48.864160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420
00:28:06.745 qpair failed and we were unable to recover it.
00:28:06.745 [2024-11-28 08:26:48.864323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.745 [2024-11-28 08:26:48.864339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420
00:28:06.745 qpair failed and we were unable to recover it.
00:28:06.745 [2024-11-28 08:26:48.864432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.745 [2024-11-28 08:26:48.864448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420
00:28:06.745 qpair failed and we were unable to recover it.
00:28:06.745 [2024-11-28 08:26:48.864553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.745 [2024-11-28 08:26:48.864569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420
00:28:06.745 qpair failed and we were unable to recover it.
00:28:06.745 [2024-11-28 08:26:48.864711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.745 [2024-11-28 08:26:48.864726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420
00:28:06.745 qpair failed and we were unable to recover it.
00:28:06.745 [2024-11-28 08:26:48.864865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.745 [2024-11-28 08:26:48.864881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420
00:28:06.745 qpair failed and we were unable to recover it.
00:28:06.745 [2024-11-28 08:26:48.864963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.745 [2024-11-28 08:26:48.864980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420
00:28:06.745 qpair failed and we were unable to recover it.
00:28:06.745 [2024-11-28 08:26:48.865079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.745 [2024-11-28 08:26:48.865095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420
00:28:06.745 qpair failed and we were unable to recover it.
00:28:06.745 [2024-11-28 08:26:48.865260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.745 [2024-11-28 08:26:48.865276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420
00:28:06.745 qpair failed and we were unable to recover it.
00:28:06.745 [2024-11-28 08:26:48.865425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.745 [2024-11-28 08:26:48.865441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420
00:28:06.745 qpair failed and we were unable to recover it.
00:28:06.745 [2024-11-28 08:26:48.865628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.745 [2024-11-28 08:26:48.865645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420
00:28:06.745 qpair failed and we were unable to recover it.
00:28:06.745 [2024-11-28 08:26:48.865871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.745 [2024-11-28 08:26:48.865887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420
00:28:06.745 qpair failed and we were unable to recover it.
00:28:06.745 [2024-11-28 08:26:48.866050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.745 [2024-11-28 08:26:48.866067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420
00:28:06.745 qpair failed and we were unable to recover it.
00:28:06.745 [2024-11-28 08:26:48.866147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.745 [2024-11-28 08:26:48.866163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420
00:28:06.745 qpair failed and we were unable to recover it.
00:28:06.745 [2024-11-28 08:26:48.866302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.745 [2024-11-28 08:26:48.866318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420
00:28:06.745 qpair failed and we were unable to recover it.
00:28:06.745 [2024-11-28 08:26:48.866526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.745 [2024-11-28 08:26:48.866541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420
00:28:06.745 qpair failed and we were unable to recover it.
00:28:06.745 [2024-11-28 08:26:48.866689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.745 [2024-11-28 08:26:48.866705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420
00:28:06.745 qpair failed and we were unable to recover it.
00:28:06.745 [2024-11-28 08:26:48.866940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.745 [2024-11-28 08:26:48.866961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420
00:28:06.745 qpair failed and we were unable to recover it.
00:28:06.745 [2024-11-28 08:26:48.867225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.746 [2024-11-28 08:26:48.867242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420
00:28:06.746 qpair failed and we were unable to recover it.
00:28:06.746 [2024-11-28 08:26:48.867447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.746 [2024-11-28 08:26:48.867463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420
00:28:06.746 qpair failed and we were unable to recover it.
00:28:06.746 [2024-11-28 08:26:48.867608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.746 [2024-11-28 08:26:48.867624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420
00:28:06.746 qpair failed and we were unable to recover it.
00:28:06.746 [2024-11-28 08:26:48.867782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.746 [2024-11-28 08:26:48.867798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420
00:28:06.746 qpair failed and we were unable to recover it.
00:28:06.746 [2024-11-28 08:26:48.867976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.746 [2024-11-28 08:26:48.867992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420
00:28:06.746 qpair failed and we were unable to recover it.
00:28:06.746 [2024-11-28 08:26:48.868070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.746 [2024-11-28 08:26:48.868086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420
00:28:06.746 qpair failed and we were unable to recover it.
00:28:06.746 [2024-11-28 08:26:48.868245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.746 [2024-11-28 08:26:48.868261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420
00:28:06.746 qpair failed and we were unable to recover it.
00:28:06.746 [2024-11-28 08:26:48.868512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.746 [2024-11-28 08:26:48.868532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420
00:28:06.746 qpair failed and we were unable to recover it.
00:28:06.746 [2024-11-28 08:26:48.868611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.746 [2024-11-28 08:26:48.868624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.746 qpair failed and we were unable to recover it.
00:28:06.746 [2024-11-28 08:26:48.868855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.746 [2024-11-28 08:26:48.868867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.746 qpair failed and we were unable to recover it.
00:28:06.746 [2024-11-28 08:26:48.869084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.746 [2024-11-28 08:26:48.869097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.746 qpair failed and we were unable to recover it.
00:28:06.746 [2024-11-28 08:26:48.869176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.746 [2024-11-28 08:26:48.869188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.746 qpair failed and we were unable to recover it.
00:28:06.746 [2024-11-28 08:26:48.869411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.746 [2024-11-28 08:26:48.869423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.746 qpair failed and we were unable to recover it.
00:28:06.746 [2024-11-28 08:26:48.869571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.746 [2024-11-28 08:26:48.869582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:06.746 qpair failed and we were unable to recover it.
00:28:06.746 [2024-11-28 08:26:48.869798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.746 [2024-11-28 08:26:48.869810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.746 qpair failed and we were unable to recover it. 00:28:06.746 [2024-11-28 08:26:48.870010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.746 [2024-11-28 08:26:48.870023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.746 qpair failed and we were unable to recover it. 00:28:06.746 [2024-11-28 08:26:48.870183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.746 [2024-11-28 08:26:48.870195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.746 qpair failed and we were unable to recover it. 00:28:06.746 [2024-11-28 08:26:48.870421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.746 [2024-11-28 08:26:48.870433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.746 qpair failed and we were unable to recover it. 00:28:06.746 [2024-11-28 08:26:48.870513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.746 [2024-11-28 08:26:48.870525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.746 qpair failed and we were unable to recover it. 
00:28:06.746 [2024-11-28 08:26:48.870753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.746 [2024-11-28 08:26:48.870765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.746 qpair failed and we were unable to recover it. 00:28:06.746 [2024-11-28 08:26:48.870962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.746 [2024-11-28 08:26:48.870977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.746 qpair failed and we were unable to recover it. 00:28:06.746 [2024-11-28 08:26:48.871124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.746 [2024-11-28 08:26:48.871136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.746 qpair failed and we were unable to recover it. 00:28:06.746 [2024-11-28 08:26:48.871214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.746 [2024-11-28 08:26:48.871227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.746 qpair failed and we were unable to recover it. 00:28:06.746 [2024-11-28 08:26:48.871365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.746 [2024-11-28 08:26:48.871376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.746 qpair failed and we were unable to recover it. 
00:28:06.746 [2024-11-28 08:26:48.871581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.746 [2024-11-28 08:26:48.871593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.746 qpair failed and we were unable to recover it. 00:28:06.746 [2024-11-28 08:26:48.871688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.746 [2024-11-28 08:26:48.871700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.746 qpair failed and we were unable to recover it. 00:28:06.746 [2024-11-28 08:26:48.871890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.746 [2024-11-28 08:26:48.871902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.746 qpair failed and we were unable to recover it. 00:28:06.746 [2024-11-28 08:26:48.872142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.746 [2024-11-28 08:26:48.872154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.746 qpair failed and we were unable to recover it. 00:28:06.746 [2024-11-28 08:26:48.872302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.746 [2024-11-28 08:26:48.872314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.746 qpair failed and we were unable to recover it. 
00:28:06.746 [2024-11-28 08:26:48.872521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.746 [2024-11-28 08:26:48.872533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.746 qpair failed and we were unable to recover it. 00:28:06.746 [2024-11-28 08:26:48.872693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.746 [2024-11-28 08:26:48.872705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.746 qpair failed and we were unable to recover it. 00:28:06.746 [2024-11-28 08:26:48.872918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.746 [2024-11-28 08:26:48.872930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.746 qpair failed and we were unable to recover it. 00:28:06.746 [2024-11-28 08:26:48.873086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.746 [2024-11-28 08:26:48.873098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.746 qpair failed and we were unable to recover it. 00:28:06.746 [2024-11-28 08:26:48.873321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.746 [2024-11-28 08:26:48.873333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.746 qpair failed and we were unable to recover it. 
00:28:06.746 [2024-11-28 08:26:48.873415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.746 [2024-11-28 08:26:48.873426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.746 qpair failed and we were unable to recover it. 00:28:06.746 [2024-11-28 08:26:48.873649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.746 [2024-11-28 08:26:48.873660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.746 qpair failed and we were unable to recover it. 00:28:06.746 [2024-11-28 08:26:48.873857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.746 [2024-11-28 08:26:48.873869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.746 qpair failed and we were unable to recover it. 00:28:06.746 [2024-11-28 08:26:48.874068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.746 [2024-11-28 08:26:48.874081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.746 qpair failed and we were unable to recover it. 00:28:06.746 [2024-11-28 08:26:48.874306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.747 [2024-11-28 08:26:48.874318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.747 qpair failed and we were unable to recover it. 
00:28:06.747 [2024-11-28 08:26:48.874558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.747 [2024-11-28 08:26:48.874570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.747 qpair failed and we were unable to recover it. 00:28:06.747 [2024-11-28 08:26:48.874726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.747 [2024-11-28 08:26:48.874738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.747 qpair failed and we were unable to recover it. 00:28:06.747 [2024-11-28 08:26:48.874834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.747 [2024-11-28 08:26:48.874847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.747 qpair failed and we were unable to recover it. 00:28:06.747 [2024-11-28 08:26:48.874999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.747 [2024-11-28 08:26:48.875012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.747 qpair failed and we were unable to recover it. 00:28:06.747 [2024-11-28 08:26:48.875184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.747 [2024-11-28 08:26:48.875196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.747 qpair failed and we were unable to recover it. 
00:28:06.747 [2024-11-28 08:26:48.875279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.747 [2024-11-28 08:26:48.875291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.747 qpair failed and we were unable to recover it. 00:28:06.747 [2024-11-28 08:26:48.875512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.747 [2024-11-28 08:26:48.875524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.747 qpair failed and we were unable to recover it. 00:28:06.747 [2024-11-28 08:26:48.875760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.747 [2024-11-28 08:26:48.875772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.747 qpair failed and we were unable to recover it. 00:28:06.747 [2024-11-28 08:26:48.875974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.747 [2024-11-28 08:26:48.875992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.747 qpair failed and we were unable to recover it. 00:28:06.747 [2024-11-28 08:26:48.876252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.747 [2024-11-28 08:26:48.876270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.747 qpair failed and we were unable to recover it. 
00:28:06.747 [2024-11-28 08:26:48.876444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.747 [2024-11-28 08:26:48.876460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.747 qpair failed and we were unable to recover it. 00:28:06.747 [2024-11-28 08:26:48.876627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.747 [2024-11-28 08:26:48.876642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.747 qpair failed and we were unable to recover it. 00:28:06.747 [2024-11-28 08:26:48.876875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.747 [2024-11-28 08:26:48.876890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.747 qpair failed and we were unable to recover it. 00:28:06.747 [2024-11-28 08:26:48.877109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.747 [2024-11-28 08:26:48.877125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.747 qpair failed and we were unable to recover it. 00:28:06.747 [2024-11-28 08:26:48.877356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.747 [2024-11-28 08:26:48.877372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.747 qpair failed and we were unable to recover it. 
00:28:06.747 [2024-11-28 08:26:48.877476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.747 [2024-11-28 08:26:48.877491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.747 qpair failed and we were unable to recover it. 00:28:06.747 [2024-11-28 08:26:48.877593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.747 [2024-11-28 08:26:48.877608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.747 qpair failed and we were unable to recover it. 00:28:06.747 [2024-11-28 08:26:48.877765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.747 [2024-11-28 08:26:48.877780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.747 qpair failed and we were unable to recover it. 00:28:06.747 [2024-11-28 08:26:48.877995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.747 [2024-11-28 08:26:48.878011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.747 qpair failed and we were unable to recover it. 00:28:06.747 [2024-11-28 08:26:48.878103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.747 [2024-11-28 08:26:48.878119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.747 qpair failed and we were unable to recover it. 
00:28:06.747 [2024-11-28 08:26:48.878354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.747 [2024-11-28 08:26:48.878370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.747 qpair failed and we were unable to recover it. 00:28:06.747 [2024-11-28 08:26:48.878531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.747 [2024-11-28 08:26:48.878546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.747 qpair failed and we were unable to recover it. 00:28:06.747 [2024-11-28 08:26:48.878755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.747 [2024-11-28 08:26:48.878771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.747 qpair failed and we were unable to recover it. 00:28:06.747 [2024-11-28 08:26:48.878958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.747 [2024-11-28 08:26:48.878975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.747 qpair failed and we were unable to recover it. 00:28:06.747 [2024-11-28 08:26:48.879208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.747 [2024-11-28 08:26:48.879224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.747 qpair failed and we were unable to recover it. 
00:28:06.747 [2024-11-28 08:26:48.879378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.747 [2024-11-28 08:26:48.879394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.747 qpair failed and we were unable to recover it. 00:28:06.747 [2024-11-28 08:26:48.879632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.747 [2024-11-28 08:26:48.879648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.747 qpair failed and we were unable to recover it. 00:28:06.747 [2024-11-28 08:26:48.879842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.747 [2024-11-28 08:26:48.879858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.747 qpair failed and we were unable to recover it. 00:28:06.747 [2024-11-28 08:26:48.880071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.747 [2024-11-28 08:26:48.880088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.747 qpair failed and we were unable to recover it. 00:28:06.747 [2024-11-28 08:26:48.880259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.747 [2024-11-28 08:26:48.880275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.747 qpair failed and we were unable to recover it. 
00:28:06.747 [2024-11-28 08:26:48.880503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.747 [2024-11-28 08:26:48.880519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.747 qpair failed and we were unable to recover it. 00:28:06.747 [2024-11-28 08:26:48.880682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.747 [2024-11-28 08:26:48.880698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.747 qpair failed and we were unable to recover it. 00:28:06.747 [2024-11-28 08:26:48.880906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.747 [2024-11-28 08:26:48.880921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.747 qpair failed and we were unable to recover it. 00:28:06.747 [2024-11-28 08:26:48.881160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.747 [2024-11-28 08:26:48.881178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.747 qpair failed and we were unable to recover it. 00:28:06.747 [2024-11-28 08:26:48.881355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.747 [2024-11-28 08:26:48.881371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.747 qpair failed and we were unable to recover it. 
00:28:06.747 [2024-11-28 08:26:48.881531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.747 [2024-11-28 08:26:48.881550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.747 qpair failed and we were unable to recover it. 00:28:06.747 [2024-11-28 08:26:48.881721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.747 [2024-11-28 08:26:48.881736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.747 qpair failed and we were unable to recover it. 00:28:06.747 [2024-11-28 08:26:48.881894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.747 [2024-11-28 08:26:48.881910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.748 qpair failed and we were unable to recover it. 00:28:06.748 [2024-11-28 08:26:48.882106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.748 [2024-11-28 08:26:48.882122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.748 qpair failed and we were unable to recover it. 00:28:06.748 [2024-11-28 08:26:48.882272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.748 [2024-11-28 08:26:48.882287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.748 qpair failed and we were unable to recover it. 
00:28:06.748 [2024-11-28 08:26:48.882487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.748 [2024-11-28 08:26:48.882503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.748 qpair failed and we were unable to recover it. 00:28:06.748 [2024-11-28 08:26:48.882737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.748 [2024-11-28 08:26:48.882753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.748 qpair failed and we were unable to recover it. 00:28:06.748 [2024-11-28 08:26:48.882904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.748 [2024-11-28 08:26:48.882920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.748 qpair failed and we were unable to recover it. 00:28:06.748 [2024-11-28 08:26:48.883137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.748 [2024-11-28 08:26:48.883154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.748 qpair failed and we were unable to recover it. 00:28:06.748 [2024-11-28 08:26:48.883309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.748 [2024-11-28 08:26:48.883325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.748 qpair failed and we were unable to recover it. 
00:28:06.748 [2024-11-28 08:26:48.883401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.748 [2024-11-28 08:26:48.883417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.748 qpair failed and we were unable to recover it. 00:28:06.748 [2024-11-28 08:26:48.883674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.748 [2024-11-28 08:26:48.883690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.748 qpair failed and we were unable to recover it. 00:28:06.748 [2024-11-28 08:26:48.883905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.748 [2024-11-28 08:26:48.883921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.748 qpair failed and we were unable to recover it. 00:28:06.748 [2024-11-28 08:26:48.884099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.748 [2024-11-28 08:26:48.884116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.748 qpair failed and we were unable to recover it. 00:28:06.748 [2024-11-28 08:26:48.884294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.748 [2024-11-28 08:26:48.884311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.748 qpair failed and we were unable to recover it. 
00:28:06.748 [2024-11-28 08:26:48.884455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.748 [2024-11-28 08:26:48.884470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.748 qpair failed and we were unable to recover it. 00:28:06.748 [2024-11-28 08:26:48.884631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.748 [2024-11-28 08:26:48.884647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.748 qpair failed and we were unable to recover it. 00:28:06.748 [2024-11-28 08:26:48.884825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.748 [2024-11-28 08:26:48.884841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.748 qpair failed and we were unable to recover it. 00:28:06.748 [2024-11-28 08:26:48.884934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.748 [2024-11-28 08:26:48.884956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.748 qpair failed and we were unable to recover it. 00:28:06.748 [2024-11-28 08:26:48.885186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.748 [2024-11-28 08:26:48.885202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.748 qpair failed and we were unable to recover it. 
00:28:06.748 [2024-11-28 08:26:48.885436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.748 [2024-11-28 08:26:48.885452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420
00:28:06.748 qpair failed and we were unable to recover it.
[... the same three-line failure sequence (posix.c:1054: connect() failed, errno = 111 -> nvme_tcp.c:2288: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 -> "qpair failed and we were unable to recover it.") repeats with advancing timestamps through 08:26:48.907281 ...]
00:28:06.751 [2024-11-28 08:26:48.907490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.751 [2024-11-28 08:26:48.907506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.751 qpair failed and we were unable to recover it. 00:28:06.751 [2024-11-28 08:26:48.907663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.751 [2024-11-28 08:26:48.907680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.751 qpair failed and we were unable to recover it. 00:28:06.751 [2024-11-28 08:26:48.907896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.751 [2024-11-28 08:26:48.907912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.751 qpair failed and we were unable to recover it. 00:28:06.751 [2024-11-28 08:26:48.908082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.751 [2024-11-28 08:26:48.908098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.751 qpair failed and we were unable to recover it. 00:28:06.751 [2024-11-28 08:26:48.908284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.751 [2024-11-28 08:26:48.908300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.751 qpair failed and we were unable to recover it. 
00:28:06.751 [2024-11-28 08:26:48.908524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.751 [2024-11-28 08:26:48.908540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.751 qpair failed and we were unable to recover it. 00:28:06.751 [2024-11-28 08:26:48.908706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.751 [2024-11-28 08:26:48.908725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.751 qpair failed and we were unable to recover it. 00:28:06.751 [2024-11-28 08:26:48.908915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.751 [2024-11-28 08:26:48.908931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:06.751 qpair failed and we were unable to recover it. 00:28:06.751 [2024-11-28 08:26:48.909109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.751 [2024-11-28 08:26:48.909123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.751 qpair failed and we were unable to recover it. 00:28:06.751 [2024-11-28 08:26:48.909320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.751 [2024-11-28 08:26:48.909332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.751 qpair failed and we were unable to recover it. 
00:28:06.751 [2024-11-28 08:26:48.909503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.751 [2024-11-28 08:26:48.909514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.751 qpair failed and we were unable to recover it. 00:28:06.751 [2024-11-28 08:26:48.909686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.751 [2024-11-28 08:26:48.909697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.751 qpair failed and we were unable to recover it. 00:28:06.751 [2024-11-28 08:26:48.909870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.751 [2024-11-28 08:26:48.909882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.751 qpair failed and we were unable to recover it. 00:28:06.751 [2024-11-28 08:26:48.909975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.751 [2024-11-28 08:26:48.909987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.751 qpair failed and we were unable to recover it. 00:28:06.751 [2024-11-28 08:26:48.910192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.751 [2024-11-28 08:26:48.910204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.751 qpair failed and we were unable to recover it. 
00:28:06.751 [2024-11-28 08:26:48.910422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.751 [2024-11-28 08:26:48.910434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.751 qpair failed and we were unable to recover it. 00:28:06.751 [2024-11-28 08:26:48.910604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.751 [2024-11-28 08:26:48.910616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.751 qpair failed and we were unable to recover it. 00:28:06.751 [2024-11-28 08:26:48.910853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.751 [2024-11-28 08:26:48.910865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.751 qpair failed and we were unable to recover it. 00:28:06.751 [2024-11-28 08:26:48.911102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.751 [2024-11-28 08:26:48.911114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.751 qpair failed and we were unable to recover it. 00:28:06.751 [2024-11-28 08:26:48.911283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.751 [2024-11-28 08:26:48.911295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.751 qpair failed and we were unable to recover it. 
00:28:06.751 [2024-11-28 08:26:48.911433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.751 [2024-11-28 08:26:48.911445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.751 qpair failed and we were unable to recover it. 00:28:06.752 [2024-11-28 08:26:48.911645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.752 [2024-11-28 08:26:48.911657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.752 qpair failed and we were unable to recover it. 00:28:06.752 [2024-11-28 08:26:48.911832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.752 [2024-11-28 08:26:48.911844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.752 qpair failed and we were unable to recover it. 00:28:06.752 [2024-11-28 08:26:48.911994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.752 [2024-11-28 08:26:48.912007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.752 qpair failed and we were unable to recover it. 00:28:06.752 [2024-11-28 08:26:48.912175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.752 [2024-11-28 08:26:48.912188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.752 qpair failed and we were unable to recover it. 
00:28:06.752 [2024-11-28 08:26:48.912325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.752 [2024-11-28 08:26:48.912337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.752 qpair failed and we were unable to recover it. 00:28:06.752 [2024-11-28 08:26:48.912436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.752 [2024-11-28 08:26:48.912448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.752 qpair failed and we were unable to recover it. 00:28:06.752 [2024-11-28 08:26:48.912681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.752 [2024-11-28 08:26:48.912693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.752 qpair failed and we were unable to recover it. 00:28:06.752 [2024-11-28 08:26:48.912896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.752 [2024-11-28 08:26:48.912908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.752 qpair failed and we were unable to recover it. 00:28:06.752 [2024-11-28 08:26:48.913087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.752 [2024-11-28 08:26:48.913099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.752 qpair failed and we were unable to recover it. 
00:28:06.752 [2024-11-28 08:26:48.913194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.752 [2024-11-28 08:26:48.913206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.752 qpair failed and we were unable to recover it. 00:28:06.752 [2024-11-28 08:26:48.913377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.752 [2024-11-28 08:26:48.913389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.752 qpair failed and we were unable to recover it. 00:28:06.752 [2024-11-28 08:26:48.913547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.752 [2024-11-28 08:26:48.913560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:06.752 qpair failed and we were unable to recover it. 00:28:06.752 [2024-11-28 08:26:48.913670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.752 [2024-11-28 08:26:48.913692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:06.752 qpair failed and we were unable to recover it. 00:28:06.752 [2024-11-28 08:26:48.913791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.752 [2024-11-28 08:26:48.913808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:06.752 qpair failed and we were unable to recover it. 
00:28:06.752 [2024-11-28 08:26:48.913909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.752 [2024-11-28 08:26:48.913925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:06.752 qpair failed and we were unable to recover it. 00:28:06.752 [2024-11-28 08:26:48.914085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.752 [2024-11-28 08:26:48.914101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:06.752 qpair failed and we were unable to recover it. 00:28:06.752 [2024-11-28 08:26:48.914329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.752 [2024-11-28 08:26:48.914344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:06.752 qpair failed and we were unable to recover it. 00:28:06.752 [2024-11-28 08:26:48.914566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.752 [2024-11-28 08:26:48.914582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:06.752 qpair failed and we were unable to recover it. 00:28:06.752 [2024-11-28 08:26:48.914792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.752 [2024-11-28 08:26:48.914807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:06.752 qpair failed and we were unable to recover it. 
00:28:06.752 [2024-11-28 08:26:48.915031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.752 [2024-11-28 08:26:48.915048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:06.752 qpair failed and we were unable to recover it. 00:28:06.752 [2024-11-28 08:26:48.915270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.752 [2024-11-28 08:26:48.915286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:06.752 qpair failed and we were unable to recover it. 00:28:06.752 [2024-11-28 08:26:48.915548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.752 [2024-11-28 08:26:48.915563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:06.752 qpair failed and we were unable to recover it. 00:28:06.752 [2024-11-28 08:26:48.915726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.752 [2024-11-28 08:26:48.915743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:06.752 qpair failed and we were unable to recover it. 00:28:06.752 [2024-11-28 08:26:48.915893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.752 [2024-11-28 08:26:48.915909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:06.752 qpair failed and we were unable to recover it. 
00:28:06.752 [2024-11-28 08:26:48.916123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.752 [2024-11-28 08:26:48.916138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:06.752 qpair failed and we were unable to recover it. 00:28:06.752 [2024-11-28 08:26:48.916224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.752 [2024-11-28 08:26:48.916243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:06.752 qpair failed and we were unable to recover it. 00:28:06.752 [2024-11-28 08:26:48.916406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.752 [2024-11-28 08:26:48.916422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:06.752 qpair failed and we were unable to recover it. 00:28:06.752 [2024-11-28 08:26:48.916654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.752 [2024-11-28 08:26:48.916670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:06.752 qpair failed and we were unable to recover it. 00:28:06.752 [2024-11-28 08:26:48.916903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.752 [2024-11-28 08:26:48.916919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:06.752 qpair failed and we were unable to recover it. 
00:28:06.752 [2024-11-28 08:26:48.917130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.752 [2024-11-28 08:26:48.917146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:06.752 qpair failed and we were unable to recover it. 00:28:06.752 [2024-11-28 08:26:48.917242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.752 [2024-11-28 08:26:48.917257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:06.752 qpair failed and we were unable to recover it. 00:28:06.752 [2024-11-28 08:26:48.917432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.752 [2024-11-28 08:26:48.917448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:06.752 qpair failed and we were unable to recover it. 00:28:06.752 [2024-11-28 08:26:48.917600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.752 [2024-11-28 08:26:48.917616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:06.752 qpair failed and we were unable to recover it. 00:28:06.752 [2024-11-28 08:26:48.917763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.752 [2024-11-28 08:26:48.917778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:06.752 qpair failed and we were unable to recover it. 
00:28:06.752 [2024-11-28 08:26:48.918010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.752 [2024-11-28 08:26:48.918026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:06.752 qpair failed and we were unable to recover it. 00:28:06.752 [2024-11-28 08:26:48.918118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.752 [2024-11-28 08:26:48.918134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:06.752 qpair failed and we were unable to recover it. 00:28:06.752 [2024-11-28 08:26:48.918380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.752 [2024-11-28 08:26:48.918395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:06.752 qpair failed and we were unable to recover it. 00:28:06.752 [2024-11-28 08:26:48.918579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.752 [2024-11-28 08:26:48.918594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:06.752 qpair failed and we were unable to recover it. 00:28:06.753 [2024-11-28 08:26:48.918748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.753 [2024-11-28 08:26:48.918764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:06.753 qpair failed and we were unable to recover it. 
00:28:06.753 [2024-11-28 08:26:48.919008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.753 [2024-11-28 08:26:48.919025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:06.753 qpair failed and we were unable to recover it. 00:28:06.753 [2024-11-28 08:26:48.919168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.753 [2024-11-28 08:26:48.919184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:06.753 qpair failed and we were unable to recover it. 00:28:06.753 [2024-11-28 08:26:48.919390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.753 [2024-11-28 08:26:48.919405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:06.753 qpair failed and we were unable to recover it. 00:28:06.753 [2024-11-28 08:26:48.919585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.753 [2024-11-28 08:26:48.919600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:06.753 qpair failed and we were unable to recover it. 00:28:06.753 [2024-11-28 08:26:48.919831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.753 [2024-11-28 08:26:48.919846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:06.753 qpair failed and we were unable to recover it. 
00:28:06.753 [2024-11-28 08:26:48.919940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.753 [2024-11-28 08:26:48.919960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:06.753 qpair failed and we were unable to recover it. 00:28:06.753 [2024-11-28 08:26:48.920207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.753 [2024-11-28 08:26:48.920223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:06.753 qpair failed and we were unable to recover it. 00:28:06.753 [2024-11-28 08:26:48.920465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.753 [2024-11-28 08:26:48.920480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:06.753 qpair failed and we were unable to recover it. 00:28:06.753 [2024-11-28 08:26:48.920719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.753 [2024-11-28 08:26:48.920735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:06.753 qpair failed and we were unable to recover it. 00:28:06.753 [2024-11-28 08:26:48.920894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.753 [2024-11-28 08:26:48.920910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:06.753 qpair failed and we were unable to recover it. 
00:28:06.753 [2024-11-28 08:26:48.921064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.753 [2024-11-28 08:26:48.921081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:06.753 qpair failed and we were unable to recover it. 00:28:06.753 [2024-11-28 08:26:48.921236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.753 [2024-11-28 08:26:48.921252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:06.753 qpair failed and we were unable to recover it. 00:28:06.753 [2024-11-28 08:26:48.921421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.753 [2024-11-28 08:26:48.921436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:06.753 qpair failed and we were unable to recover it. 00:28:06.753 [2024-11-28 08:26:48.921662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.753 [2024-11-28 08:26:48.921681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:06.753 qpair failed and we were unable to recover it. 00:28:06.753 [2024-11-28 08:26:48.921773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.753 [2024-11-28 08:26:48.921789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:06.753 qpair failed and we were unable to recover it. 
00:28:06.753 [2024-11-28 08:26:48.921994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.753 [2024-11-28 08:26:48.922010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:06.753 qpair failed and we were unable to recover it. 00:28:06.753 [2024-11-28 08:26:48.922192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.753 [2024-11-28 08:26:48.922207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:06.753 qpair failed and we were unable to recover it. 00:28:06.753 [2024-11-28 08:26:48.922357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.753 [2024-11-28 08:26:48.922373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:06.753 qpair failed and we were unable to recover it. 00:28:06.753 [2024-11-28 08:26:48.922480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.753 [2024-11-28 08:26:48.922496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:06.753 qpair failed and we were unable to recover it. 00:28:06.753 [2024-11-28 08:26:48.922705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.753 [2024-11-28 08:26:48.922721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:06.753 qpair failed and we were unable to recover it. 
00:28:07.040 [... identical record repeated from 08:26:48.922192 through 08:26:48.945493: connect() to tqpair=0x7f6c3c000b90 (addr=10.0.0.2, port=4420) kept failing with errno = 111 and each qpair failed and could not be recovered ...]
00:28:07.040 [2024-11-28 08:26:48.945578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.040 [2024-11-28 08:26:48.945594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.040 qpair failed and we were unable to recover it. 00:28:07.040 [2024-11-28 08:26:48.945777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.040 [2024-11-28 08:26:48.945792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.040 qpair failed and we were unable to recover it. 00:28:07.040 [2024-11-28 08:26:48.945964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.040 [2024-11-28 08:26:48.945981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.040 qpair failed and we were unable to recover it. 00:28:07.040 [2024-11-28 08:26:48.946226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.040 [2024-11-28 08:26:48.946246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.040 qpair failed and we were unable to recover it. 00:28:07.040 [2024-11-28 08:26:48.946409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.040 [2024-11-28 08:26:48.946425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.040 qpair failed and we were unable to recover it. 
00:28:07.040 [2024-11-28 08:26:48.946633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.040 [2024-11-28 08:26:48.946649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.040 qpair failed and we were unable to recover it. 00:28:07.040 [2024-11-28 08:26:48.946812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.040 [2024-11-28 08:26:48.946827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.040 qpair failed and we were unable to recover it. 00:28:07.040 [2024-11-28 08:26:48.947002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.040 [2024-11-28 08:26:48.947019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.040 qpair failed and we were unable to recover it. 00:28:07.041 [2024-11-28 08:26:48.947110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.041 [2024-11-28 08:26:48.947127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.041 qpair failed and we were unable to recover it. 00:28:07.041 [2024-11-28 08:26:48.947344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.041 [2024-11-28 08:26:48.947361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.041 qpair failed and we were unable to recover it. 
00:28:07.041 [2024-11-28 08:26:48.947454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.041 [2024-11-28 08:26:48.947470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.041 qpair failed and we were unable to recover it. 00:28:07.041 [2024-11-28 08:26:48.947551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.041 [2024-11-28 08:26:48.947567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.041 qpair failed and we were unable to recover it. 00:28:07.041 [2024-11-28 08:26:48.947784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.041 [2024-11-28 08:26:48.947799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.041 qpair failed and we were unable to recover it. 00:28:07.041 [2024-11-28 08:26:48.948042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.041 [2024-11-28 08:26:48.948059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.041 qpair failed and we were unable to recover it. 00:28:07.041 [2024-11-28 08:26:48.948218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.041 [2024-11-28 08:26:48.948234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.041 qpair failed and we were unable to recover it. 
00:28:07.041 [2024-11-28 08:26:48.948401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.041 [2024-11-28 08:26:48.948416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.041 qpair failed and we were unable to recover it. 00:28:07.041 [2024-11-28 08:26:48.948688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.041 [2024-11-28 08:26:48.948703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.041 qpair failed and we were unable to recover it. 00:28:07.041 [2024-11-28 08:26:48.948882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.041 [2024-11-28 08:26:48.948897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.041 qpair failed and we were unable to recover it. 00:28:07.041 [2024-11-28 08:26:48.949066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.041 [2024-11-28 08:26:48.949082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.041 qpair failed and we were unable to recover it. 00:28:07.041 [2024-11-28 08:26:48.949289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.041 [2024-11-28 08:26:48.949305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.041 qpair failed and we were unable to recover it. 
00:28:07.041 [2024-11-28 08:26:48.949478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.041 [2024-11-28 08:26:48.949493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.041 qpair failed and we were unable to recover it. 00:28:07.041 [2024-11-28 08:26:48.949590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.041 [2024-11-28 08:26:48.949607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.041 qpair failed and we were unable to recover it. 00:28:07.041 [2024-11-28 08:26:48.949768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.041 [2024-11-28 08:26:48.949784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.041 qpair failed and we were unable to recover it. 00:28:07.041 [2024-11-28 08:26:48.949957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.041 [2024-11-28 08:26:48.949973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.041 qpair failed and we were unable to recover it. 00:28:07.041 [2024-11-28 08:26:48.950144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.041 [2024-11-28 08:26:48.950160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.041 qpair failed and we were unable to recover it. 
00:28:07.041 [2024-11-28 08:26:48.950271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.041 [2024-11-28 08:26:48.950287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.041 qpair failed and we were unable to recover it. 00:28:07.041 [2024-11-28 08:26:48.950510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.041 [2024-11-28 08:26:48.950525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.041 qpair failed and we were unable to recover it. 00:28:07.041 [2024-11-28 08:26:48.950643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.041 [2024-11-28 08:26:48.950659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.041 qpair failed and we were unable to recover it. 00:28:07.041 [2024-11-28 08:26:48.950909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.041 [2024-11-28 08:26:48.950925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.041 qpair failed and we were unable to recover it. 00:28:07.041 [2024-11-28 08:26:48.951167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.041 [2024-11-28 08:26:48.951183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.041 qpair failed and we were unable to recover it. 
00:28:07.041 [2024-11-28 08:26:48.951343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.041 [2024-11-28 08:26:48.951359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.041 qpair failed and we were unable to recover it. 00:28:07.041 [2024-11-28 08:26:48.951532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.041 [2024-11-28 08:26:48.951548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.041 qpair failed and we were unable to recover it. 00:28:07.041 [2024-11-28 08:26:48.951762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.041 [2024-11-28 08:26:48.951777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.041 qpair failed and we were unable to recover it. 00:28:07.041 [2024-11-28 08:26:48.952013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.041 [2024-11-28 08:26:48.952029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.041 qpair failed and we were unable to recover it. 00:28:07.041 [2024-11-28 08:26:48.952214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.041 [2024-11-28 08:26:48.952230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.041 qpair failed and we were unable to recover it. 
00:28:07.041 [2024-11-28 08:26:48.952443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.041 [2024-11-28 08:26:48.952475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.041 qpair failed and we were unable to recover it. 00:28:07.041 [2024-11-28 08:26:48.952717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.041 [2024-11-28 08:26:48.952750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.042 qpair failed and we were unable to recover it. 00:28:07.042 [2024-11-28 08:26:48.952885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.042 [2024-11-28 08:26:48.952917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.042 qpair failed and we were unable to recover it. 00:28:07.042 [2024-11-28 08:26:48.953191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.042 [2024-11-28 08:26:48.953205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.042 qpair failed and we were unable to recover it. 00:28:07.042 [2024-11-28 08:26:48.953345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.042 [2024-11-28 08:26:48.953357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.042 qpair failed and we were unable to recover it. 
00:28:07.042 [2024-11-28 08:26:48.953553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.042 [2024-11-28 08:26:48.953565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.042 qpair failed and we were unable to recover it. 00:28:07.042 [2024-11-28 08:26:48.953762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.042 [2024-11-28 08:26:48.953774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.042 qpair failed and we were unable to recover it. 00:28:07.042 [2024-11-28 08:26:48.953955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.042 [2024-11-28 08:26:48.953968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.042 qpair failed and we were unable to recover it. 00:28:07.042 [2024-11-28 08:26:48.954149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.042 [2024-11-28 08:26:48.954188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.042 qpair failed and we were unable to recover it. 00:28:07.042 [2024-11-28 08:26:48.954371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.042 [2024-11-28 08:26:48.954405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.042 qpair failed and we were unable to recover it. 
00:28:07.042 [2024-11-28 08:26:48.954682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.042 [2024-11-28 08:26:48.954715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.042 qpair failed and we were unable to recover it. 00:28:07.042 [2024-11-28 08:26:48.954984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.042 [2024-11-28 08:26:48.955020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.042 qpair failed and we were unable to recover it. 00:28:07.042 [2024-11-28 08:26:48.955315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.042 [2024-11-28 08:26:48.955347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.042 qpair failed and we were unable to recover it. 00:28:07.042 [2024-11-28 08:26:48.955540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.042 [2024-11-28 08:26:48.955572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.042 qpair failed and we were unable to recover it. 00:28:07.042 [2024-11-28 08:26:48.955747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.042 [2024-11-28 08:26:48.955759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.042 qpair failed and we were unable to recover it. 
00:28:07.042 [2024-11-28 08:26:48.955843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.042 [2024-11-28 08:26:48.955854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.042 qpair failed and we were unable to recover it. 00:28:07.042 [2024-11-28 08:26:48.956071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.042 [2024-11-28 08:26:48.956083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.042 qpair failed and we were unable to recover it. 00:28:07.042 [2024-11-28 08:26:48.956177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.042 [2024-11-28 08:26:48.956189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.042 qpair failed and we were unable to recover it. 00:28:07.042 [2024-11-28 08:26:48.956332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.042 [2024-11-28 08:26:48.956372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.042 qpair failed and we were unable to recover it. 00:28:07.042 [2024-11-28 08:26:48.956571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.042 [2024-11-28 08:26:48.956603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.042 qpair failed and we were unable to recover it. 
00:28:07.042 [2024-11-28 08:26:48.956848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.042 [2024-11-28 08:26:48.956880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.042 qpair failed and we were unable to recover it. 00:28:07.042 [2024-11-28 08:26:48.957093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.042 [2024-11-28 08:26:48.957105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.042 qpair failed and we were unable to recover it. 00:28:07.042 [2024-11-28 08:26:48.957264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.042 [2024-11-28 08:26:48.957297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.042 qpair failed and we were unable to recover it. 00:28:07.042 [2024-11-28 08:26:48.957586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.042 [2024-11-28 08:26:48.957620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.042 qpair failed and we were unable to recover it. 00:28:07.042 [2024-11-28 08:26:48.957810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.042 [2024-11-28 08:26:48.957843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.042 qpair failed and we were unable to recover it. 
00:28:07.042 [2024-11-28 08:26:48.958058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.042 [2024-11-28 08:26:48.958092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.042 qpair failed and we were unable to recover it. 00:28:07.042 [2024-11-28 08:26:48.958238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.042 [2024-11-28 08:26:48.958270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.042 qpair failed and we were unable to recover it. 00:28:07.042 [2024-11-28 08:26:48.958463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.042 [2024-11-28 08:26:48.958495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.042 qpair failed and we were unable to recover it. 00:28:07.042 [2024-11-28 08:26:48.958679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.042 [2024-11-28 08:26:48.958712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.042 qpair failed and we were unable to recover it. 00:28:07.042 [2024-11-28 08:26:48.958909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.042 [2024-11-28 08:26:48.958942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.043 qpair failed and we were unable to recover it. 
00:28:07.043 [2024-11-28 08:26:48.959196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.043 [2024-11-28 08:26:48.959229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.043 qpair failed and we were unable to recover it. 00:28:07.043 [2024-11-28 08:26:48.959383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.043 [2024-11-28 08:26:48.959416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.043 qpair failed and we were unable to recover it. 00:28:07.043 [2024-11-28 08:26:48.959624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.043 [2024-11-28 08:26:48.959660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.043 qpair failed and we were unable to recover it. 00:28:07.043 [2024-11-28 08:26:48.959821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.043 [2024-11-28 08:26:48.959834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.043 qpair failed and we were unable to recover it. 00:28:07.043 [2024-11-28 08:26:48.960044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.043 [2024-11-28 08:26:48.960078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.043 qpair failed and we were unable to recover it. 
00:28:07.043 [2024-11-28 08:26:48.960216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.043 [2024-11-28 08:26:48.960250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.043 qpair failed and we were unable to recover it. 00:28:07.043 [2024-11-28 08:26:48.960518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.043 [2024-11-28 08:26:48.960550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.043 qpair failed and we were unable to recover it. 00:28:07.043 [2024-11-28 08:26:48.960812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.043 [2024-11-28 08:26:48.960845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.043 qpair failed and we were unable to recover it. 00:28:07.043 [2024-11-28 08:26:48.961081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.043 [2024-11-28 08:26:48.961093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.043 qpair failed and we were unable to recover it. 00:28:07.043 [2024-11-28 08:26:48.961268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.043 [2024-11-28 08:26:48.961280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.043 qpair failed and we were unable to recover it. 
00:28:07.043 [2024-11-28 08:26:48.961379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.043 [2024-11-28 08:26:48.961390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.043 qpair failed and we were unable to recover it.
00:28:07.043 [2024-11-28 08:26:48.961647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.043 [2024-11-28 08:26:48.961680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.043 qpair failed and we were unable to recover it.
00:28:07.043 [2024-11-28 08:26:48.961979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.043 [2024-11-28 08:26:48.962013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.043 qpair failed and we were unable to recover it.
00:28:07.043 [2024-11-28 08:26:48.962263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.043 [2024-11-28 08:26:48.962296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.043 qpair failed and we were unable to recover it.
00:28:07.043 [2024-11-28 08:26:48.962600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.043 [2024-11-28 08:26:48.962633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.043 qpair failed and we were unable to recover it.
00:28:07.043 [2024-11-28 08:26:48.962873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.043 [2024-11-28 08:26:48.962897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.043 qpair failed and we were unable to recover it.
00:28:07.043 [2024-11-28 08:26:48.963072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.043 [2024-11-28 08:26:48.963085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.043 qpair failed and we were unable to recover it.
00:28:07.043 [2024-11-28 08:26:48.963181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.043 [2024-11-28 08:26:48.963215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.043 qpair failed and we were unable to recover it.
00:28:07.043 [2024-11-28 08:26:48.963350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.043 [2024-11-28 08:26:48.963389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.043 qpair failed and we were unable to recover it.
00:28:07.043 [2024-11-28 08:26:48.963618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.043 [2024-11-28 08:26:48.963651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.043 qpair failed and we were unable to recover it.
00:28:07.043 [2024-11-28 08:26:48.963836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.043 [2024-11-28 08:26:48.963847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.043 qpair failed and we were unable to recover it.
00:28:07.043 [2024-11-28 08:26:48.964068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.043 [2024-11-28 08:26:48.964080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.043 qpair failed and we were unable to recover it.
00:28:07.043 [2024-11-28 08:26:48.964260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.043 [2024-11-28 08:26:48.964293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.043 qpair failed and we were unable to recover it.
00:28:07.043 [2024-11-28 08:26:48.964410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.043 [2024-11-28 08:26:48.964442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.043 qpair failed and we were unable to recover it.
00:28:07.043 [2024-11-28 08:26:48.964744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.043 [2024-11-28 08:26:48.964775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.043 qpair failed and we were unable to recover it.
00:28:07.043 [2024-11-28 08:26:48.964970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.043 [2024-11-28 08:26:48.964998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.043 qpair failed and we were unable to recover it.
00:28:07.043 [2024-11-28 08:26:48.965123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.043 [2024-11-28 08:26:48.965156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.043 qpair failed and we were unable to recover it.
00:28:07.043 [2024-11-28 08:26:48.965403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.043 [2024-11-28 08:26:48.965436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.043 qpair failed and we were unable to recover it.
00:28:07.043 [2024-11-28 08:26:48.965635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.043 [2024-11-28 08:26:48.965668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.043 qpair failed and we were unable to recover it.
00:28:07.044 [2024-11-28 08:26:48.965863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.044 [2024-11-28 08:26:48.965896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.044 qpair failed and we were unable to recover it.
00:28:07.044 [2024-11-28 08:26:48.966111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.044 [2024-11-28 08:26:48.966146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.044 qpair failed and we were unable to recover it.
00:28:07.044 [2024-11-28 08:26:48.966294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.044 [2024-11-28 08:26:48.966306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.044 qpair failed and we were unable to recover it.
00:28:07.044 [2024-11-28 08:26:48.966416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.044 [2024-11-28 08:26:48.966428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.044 qpair failed and we were unable to recover it.
00:28:07.044 [2024-11-28 08:26:48.966529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.044 [2024-11-28 08:26:48.966561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.044 qpair failed and we were unable to recover it.
00:28:07.044 [2024-11-28 08:26:48.966682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.044 [2024-11-28 08:26:48.966715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.044 qpair failed and we were unable to recover it.
00:28:07.044 [2024-11-28 08:26:48.966965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.044 [2024-11-28 08:26:48.966994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.044 qpair failed and we were unable to recover it.
00:28:07.044 [2024-11-28 08:26:48.967090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.044 [2024-11-28 08:26:48.967102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.044 qpair failed and we were unable to recover it.
00:28:07.044 [2024-11-28 08:26:48.967196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.044 [2024-11-28 08:26:48.967208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.044 qpair failed and we were unable to recover it.
00:28:07.044 [2024-11-28 08:26:48.967282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.044 [2024-11-28 08:26:48.967293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.044 qpair failed and we were unable to recover it.
00:28:07.044 [2024-11-28 08:26:48.967488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.044 [2024-11-28 08:26:48.967521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.044 qpair failed and we were unable to recover it.
00:28:07.044 [2024-11-28 08:26:48.967722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.044 [2024-11-28 08:26:48.967756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.044 qpair failed and we were unable to recover it.
00:28:07.044 [2024-11-28 08:26:48.967981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.044 [2024-11-28 08:26:48.968015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.044 qpair failed and we were unable to recover it.
00:28:07.044 [2024-11-28 08:26:48.968195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.044 [2024-11-28 08:26:48.968207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.044 qpair failed and we were unable to recover it.
00:28:07.044 [2024-11-28 08:26:48.968290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.044 [2024-11-28 08:26:48.968302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.044 qpair failed and we were unable to recover it.
00:28:07.044 [2024-11-28 08:26:48.968387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.044 [2024-11-28 08:26:48.968398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.044 qpair failed and we were unable to recover it.
00:28:07.044 [2024-11-28 08:26:48.968576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.044 [2024-11-28 08:26:48.968588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.044 qpair failed and we were unable to recover it.
00:28:07.044 [2024-11-28 08:26:48.968727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.044 [2024-11-28 08:26:48.968739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.044 qpair failed and we were unable to recover it.
00:28:07.044 [2024-11-28 08:26:48.968896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.044 [2024-11-28 08:26:48.968908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.044 qpair failed and we were unable to recover it.
00:28:07.044 [2024-11-28 08:26:48.969044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.044 [2024-11-28 08:26:48.969057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.044 qpair failed and we were unable to recover it.
00:28:07.044 [2024-11-28 08:26:48.969167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.044 [2024-11-28 08:26:48.969179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.044 qpair failed and we were unable to recover it.
00:28:07.044 [2024-11-28 08:26:48.969272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.044 [2024-11-28 08:26:48.969285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.044 qpair failed and we were unable to recover it.
00:28:07.044 [2024-11-28 08:26:48.969423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.044 [2024-11-28 08:26:48.969435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.044 qpair failed and we were unable to recover it.
00:28:07.044 [2024-11-28 08:26:48.969639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.044 [2024-11-28 08:26:48.969650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.044 qpair failed and we were unable to recover it.
00:28:07.044 [2024-11-28 08:26:48.969793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.044 [2024-11-28 08:26:48.969825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.044 qpair failed and we were unable to recover it.
00:28:07.044 [2024-11-28 08:26:48.970017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.044 [2024-11-28 08:26:48.970051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.044 qpair failed and we were unable to recover it.
00:28:07.044 [2024-11-28 08:26:48.970193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.044 [2024-11-28 08:26:48.970227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.044 qpair failed and we were unable to recover it.
00:28:07.044 [2024-11-28 08:26:48.970425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.044 [2024-11-28 08:26:48.970458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.044 qpair failed and we were unable to recover it.
00:28:07.044 [2024-11-28 08:26:48.970682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.044 [2024-11-28 08:26:48.970715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.045 qpair failed and we were unable to recover it.
00:28:07.045 [2024-11-28 08:26:48.970930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.045 [2024-11-28 08:26:48.970977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.045 qpair failed and we were unable to recover it.
00:28:07.045 [2024-11-28 08:26:48.971228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.045 [2024-11-28 08:26:48.971261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.045 qpair failed and we were unable to recover it.
00:28:07.045 [2024-11-28 08:26:48.971444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.045 [2024-11-28 08:26:48.971477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.045 qpair failed and we were unable to recover it.
00:28:07.045 [2024-11-28 08:26:48.971740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.045 [2024-11-28 08:26:48.971773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.045 qpair failed and we were unable to recover it.
00:28:07.045 [2024-11-28 08:26:48.971959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.045 [2024-11-28 08:26:48.971993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.045 qpair failed and we were unable to recover it.
00:28:07.045 [2024-11-28 08:26:48.972257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.045 [2024-11-28 08:26:48.972269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.045 qpair failed and we were unable to recover it.
00:28:07.045 [2024-11-28 08:26:48.972362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.045 [2024-11-28 08:26:48.972374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.045 qpair failed and we were unable to recover it.
00:28:07.045 [2024-11-28 08:26:48.972540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.045 [2024-11-28 08:26:48.972552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.045 qpair failed and we were unable to recover it.
00:28:07.045 [2024-11-28 08:26:48.972799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.045 [2024-11-28 08:26:48.972810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.045 qpair failed and we were unable to recover it.
00:28:07.045 [2024-11-28 08:26:48.973053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.045 [2024-11-28 08:26:48.973088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.045 qpair failed and we were unable to recover it.
00:28:07.045 [2024-11-28 08:26:48.973376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.045 [2024-11-28 08:26:48.973409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.045 qpair failed and we were unable to recover it.
00:28:07.045 [2024-11-28 08:26:48.973619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.045 [2024-11-28 08:26:48.973651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.045 qpair failed and we were unable to recover it.
00:28:07.045 [2024-11-28 08:26:48.973901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.045 [2024-11-28 08:26:48.973935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.045 qpair failed and we were unable to recover it.
00:28:07.045 [2024-11-28 08:26:48.974171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.045 [2024-11-28 08:26:48.974216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.045 qpair failed and we were unable to recover it.
00:28:07.045 [2024-11-28 08:26:48.974305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.045 [2024-11-28 08:26:48.974317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.045 qpair failed and we were unable to recover it.
00:28:07.045 [2024-11-28 08:26:48.974462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.045 [2024-11-28 08:26:48.974474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.045 qpair failed and we were unable to recover it.
00:28:07.045 [2024-11-28 08:26:48.974694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.045 [2024-11-28 08:26:48.974727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.045 qpair failed and we were unable to recover it.
00:28:07.045 [2024-11-28 08:26:48.974989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.045 [2024-11-28 08:26:48.975001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.045 qpair failed and we were unable to recover it.
00:28:07.045 [2024-11-28 08:26:48.975148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.045 [2024-11-28 08:26:48.975160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.045 qpair failed and we were unable to recover it.
00:28:07.045 [2024-11-28 08:26:48.975325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.045 [2024-11-28 08:26:48.975357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.045 qpair failed and we were unable to recover it.
00:28:07.045 [2024-11-28 08:26:48.975553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.045 [2024-11-28 08:26:48.975586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.045 qpair failed and we were unable to recover it.
00:28:07.045 [2024-11-28 08:26:48.975775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.045 [2024-11-28 08:26:48.975807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.045 qpair failed and we were unable to recover it.
00:28:07.045 [2024-11-28 08:26:48.975941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.045 [2024-11-28 08:26:48.975982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.045 qpair failed and we were unable to recover it.
00:28:07.045 [2024-11-28 08:26:48.976229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.045 [2024-11-28 08:26:48.976241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.045 qpair failed and we were unable to recover it.
00:28:07.045 [2024-11-28 08:26:48.976341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.045 [2024-11-28 08:26:48.976353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.045 qpair failed and we were unable to recover it.
00:28:07.045 [2024-11-28 08:26:48.976516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.045 [2024-11-28 08:26:48.976528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.045 qpair failed and we were unable to recover it.
00:28:07.045 [2024-11-28 08:26:48.976602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.045 [2024-11-28 08:26:48.976613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.045 qpair failed and we were unable to recover it.
00:28:07.045 [2024-11-28 08:26:48.976819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.045 [2024-11-28 08:26:48.976831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.045 qpair failed and we were unable to recover it.
00:28:07.046 [2024-11-28 08:26:48.977044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.046 [2024-11-28 08:26:48.977057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.046 qpair failed and we were unable to recover it.
00:28:07.046 [2024-11-28 08:26:48.977207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.046 [2024-11-28 08:26:48.977219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.046 qpair failed and we were unable to recover it.
00:28:07.046 [2024-11-28 08:26:48.977454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.046 [2024-11-28 08:26:48.977486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.046 qpair failed and we were unable to recover it.
00:28:07.046 [2024-11-28 08:26:48.977609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.046 [2024-11-28 08:26:48.977642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.046 qpair failed and we were unable to recover it.
00:28:07.046 [2024-11-28 08:26:48.977778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.046 [2024-11-28 08:26:48.977812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.046 qpair failed and we were unable to recover it.
00:28:07.046 [2024-11-28 08:26:48.978085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.046 [2024-11-28 08:26:48.978098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.046 qpair failed and we were unable to recover it.
00:28:07.046 [2024-11-28 08:26:48.978255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.046 [2024-11-28 08:26:48.978288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.046 qpair failed and we were unable to recover it.
00:28:07.046 [2024-11-28 08:26:48.978479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.046 [2024-11-28 08:26:48.978512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.046 qpair failed and we were unable to recover it.
00:28:07.046 [2024-11-28 08:26:48.978735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.046 [2024-11-28 08:26:48.978768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.046 qpair failed and we were unable to recover it.
00:28:07.046 [2024-11-28 08:26:48.978971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.046 [2024-11-28 08:26:48.979000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.046 qpair failed and we were unable to recover it.
00:28:07.046 [2024-11-28 08:26:48.979191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.046 [2024-11-28 08:26:48.979223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.046 qpair failed and we were unable to recover it.
00:28:07.046 [2024-11-28 08:26:48.979470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.046 [2024-11-28 08:26:48.979503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.046 qpair failed and we were unable to recover it.
00:28:07.046 [2024-11-28 08:26:48.979724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.046 [2024-11-28 08:26:48.979762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.046 qpair failed and we were unable to recover it.
00:28:07.046 [2024-11-28 08:26:48.980013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.046 [2024-11-28 08:26:48.980048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.046 qpair failed and we were unable to recover it.
00:28:07.046 [2024-11-28 08:26:48.980202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.046 [2024-11-28 08:26:48.980235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.046 qpair failed and we were unable to recover it.
00:28:07.046 [2024-11-28 08:26:48.980438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.046 [2024-11-28 08:26:48.980471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.046 qpair failed and we were unable to recover it.
00:28:07.046 [2024-11-28 08:26:48.980586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.046 [2024-11-28 08:26:48.980619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.046 qpair failed and we were unable to recover it.
00:28:07.046 [2024-11-28 08:26:48.980862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.046 [2024-11-28 08:26:48.980874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.046 qpair failed and we were unable to recover it.
00:28:07.046 [2024-11-28 08:26:48.981030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.046 [2024-11-28 08:26:48.981043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.046 qpair failed and we were unable to recover it.
00:28:07.046 [2024-11-28 08:26:48.981138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.046 [2024-11-28 08:26:48.981150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.046 qpair failed and we were unable to recover it.
00:28:07.046 [2024-11-28 08:26:48.981238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.046 [2024-11-28 08:26:48.981250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.046 qpair failed and we were unable to recover it.
00:28:07.046 [2024-11-28 08:26:48.981413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.046 [2024-11-28 08:26:48.981446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.046 qpair failed and we were unable to recover it.
00:28:07.046 [2024-11-28 08:26:48.981588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.046 [2024-11-28 08:26:48.981620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.046 qpair failed and we were unable to recover it.
00:28:07.046 [2024-11-28 08:26:48.981875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.046 [2024-11-28 08:26:48.981907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.046 qpair failed and we were unable to recover it.
00:28:07.047 [2024-11-28 08:26:48.982094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.047 [2024-11-28 08:26:48.982108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.047 qpair failed and we were unable to recover it.
00:28:07.047 [2024-11-28 08:26:48.982211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.047 [2024-11-28 08:26:48.982223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.047 qpair failed and we were unable to recover it.
00:28:07.047 [2024-11-28 08:26:48.982383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.047 [2024-11-28 08:26:48.982396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.047 qpair failed and we were unable to recover it.
00:28:07.047 [2024-11-28 08:26:48.982555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.047 [2024-11-28 08:26:48.982588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.047 qpair failed and we were unable to recover it.
00:28:07.047 [2024-11-28 08:26:48.982832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.047 [2024-11-28 08:26:48.982866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.047 qpair failed and we were unable to recover it.
00:28:07.047 [2024-11-28 08:26:48.983140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.047 [2024-11-28 08:26:48.983153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.047 qpair failed and we were unable to recover it.
00:28:07.047 [2024-11-28 08:26:48.983298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.047 [2024-11-28 08:26:48.983331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.047 qpair failed and we were unable to recover it.
00:28:07.047 [2024-11-28 08:26:48.983465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.047 [2024-11-28 08:26:48.983498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.047 qpair failed and we were unable to recover it.
00:28:07.047 [2024-11-28 08:26:48.983717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.047 [2024-11-28 08:26:48.983750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.047 qpair failed and we were unable to recover it.
00:28:07.047 [2024-11-28 08:26:48.983968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.047 [2024-11-28 08:26:48.983997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.047 qpair failed and we were unable to recover it.
00:28:07.047 [2024-11-28 08:26:48.984090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.047 [2024-11-28 08:26:48.984102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.047 qpair failed and we were unable to recover it.
00:28:07.047 [2024-11-28 08:26:48.984200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.047 [2024-11-28 08:26:48.984212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.047 qpair failed and we were unable to recover it.
00:28:07.047 [2024-11-28 08:26:48.984375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.047 [2024-11-28 08:26:48.984407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.047 qpair failed and we were unable to recover it.
00:28:07.047 [2024-11-28 08:26:48.984698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.047 [2024-11-28 08:26:48.984731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.047 qpair failed and we were unable to recover it.
00:28:07.047 [2024-11-28 08:26:48.984929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.047 [2024-11-28 08:26:48.984941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.047 qpair failed and we were unable to recover it.
00:28:07.047 [2024-11-28 08:26:48.985051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.047 [2024-11-28 08:26:48.985064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.047 qpair failed and we were unable to recover it. 00:28:07.047 [2024-11-28 08:26:48.985205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.047 [2024-11-28 08:26:48.985216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.047 qpair failed and we were unable to recover it. 00:28:07.047 [2024-11-28 08:26:48.985372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.047 [2024-11-28 08:26:48.985405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.047 qpair failed and we were unable to recover it. 00:28:07.047 [2024-11-28 08:26:48.985670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.047 [2024-11-28 08:26:48.985703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.047 qpair failed and we were unable to recover it. 00:28:07.047 [2024-11-28 08:26:48.985988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.047 [2024-11-28 08:26:48.986023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.047 qpair failed and we were unable to recover it. 
00:28:07.047 [2024-11-28 08:26:48.986248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.047 [2024-11-28 08:26:48.986281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.047 qpair failed and we were unable to recover it. 00:28:07.047 [2024-11-28 08:26:48.986466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.047 [2024-11-28 08:26:48.986499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.047 qpair failed and we were unable to recover it. 00:28:07.047 [2024-11-28 08:26:48.986722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.047 [2024-11-28 08:26:48.986756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.047 qpair failed and we were unable to recover it. 00:28:07.047 [2024-11-28 08:26:48.987031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.047 [2024-11-28 08:26:48.987064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.047 qpair failed and we were unable to recover it. 00:28:07.047 [2024-11-28 08:26:48.987259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.047 [2024-11-28 08:26:48.987272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.047 qpair failed and we were unable to recover it. 
00:28:07.047 [2024-11-28 08:26:48.987373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.047 [2024-11-28 08:26:48.987403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.047 qpair failed and we were unable to recover it. 00:28:07.047 [2024-11-28 08:26:48.987681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.047 [2024-11-28 08:26:48.987714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.047 qpair failed and we were unable to recover it. 00:28:07.047 [2024-11-28 08:26:48.987929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.047 [2024-11-28 08:26:48.987969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.047 qpair failed and we were unable to recover it. 00:28:07.047 [2024-11-28 08:26:48.988165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.047 [2024-11-28 08:26:48.988203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.047 qpair failed and we were unable to recover it. 00:28:07.047 [2024-11-28 08:26:48.988381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.047 [2024-11-28 08:26:48.988394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.047 qpair failed and we were unable to recover it. 
00:28:07.048 [2024-11-28 08:26:48.988497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.048 [2024-11-28 08:26:48.988530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.048 qpair failed and we were unable to recover it. 00:28:07.048 [2024-11-28 08:26:48.988730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.048 [2024-11-28 08:26:48.988763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.048 qpair failed and we were unable to recover it. 00:28:07.048 [2024-11-28 08:26:48.989020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.048 [2024-11-28 08:26:48.989055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.048 qpair failed and we were unable to recover it. 00:28:07.048 [2024-11-28 08:26:48.989188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.048 [2024-11-28 08:26:48.989201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.048 qpair failed and we were unable to recover it. 00:28:07.048 [2024-11-28 08:26:48.989366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.048 [2024-11-28 08:26:48.989399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.048 qpair failed and we were unable to recover it. 
00:28:07.048 [2024-11-28 08:26:48.989615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.048 [2024-11-28 08:26:48.989648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.048 qpair failed and we were unable to recover it. 00:28:07.048 [2024-11-28 08:26:48.989927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.048 [2024-11-28 08:26:48.989973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.048 qpair failed and we were unable to recover it. 00:28:07.048 [2024-11-28 08:26:48.990098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.048 [2024-11-28 08:26:48.990132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.048 qpair failed and we were unable to recover it. 00:28:07.048 [2024-11-28 08:26:48.990287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.048 [2024-11-28 08:26:48.990321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.048 qpair failed and we were unable to recover it. 00:28:07.048 [2024-11-28 08:26:48.990598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.048 [2024-11-28 08:26:48.990632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.048 qpair failed and we were unable to recover it. 
00:28:07.048 [2024-11-28 08:26:48.990926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.048 [2024-11-28 08:26:48.990969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.048 qpair failed and we were unable to recover it. 00:28:07.048 [2024-11-28 08:26:48.991087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.048 [2024-11-28 08:26:48.991120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.048 qpair failed and we were unable to recover it. 00:28:07.048 [2024-11-28 08:26:48.991398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.048 [2024-11-28 08:26:48.991432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.048 qpair failed and we were unable to recover it. 00:28:07.048 [2024-11-28 08:26:48.991578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.048 [2024-11-28 08:26:48.991611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.048 qpair failed and we were unable to recover it. 00:28:07.048 [2024-11-28 08:26:48.991805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.048 [2024-11-28 08:26:48.991839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.048 qpair failed and we were unable to recover it. 
00:28:07.048 [2024-11-28 08:26:48.992063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.048 [2024-11-28 08:26:48.992098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.048 qpair failed and we were unable to recover it. 00:28:07.048 [2024-11-28 08:26:48.992292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.048 [2024-11-28 08:26:48.992325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.048 qpair failed and we were unable to recover it. 00:28:07.048 [2024-11-28 08:26:48.992440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.048 [2024-11-28 08:26:48.992473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.048 qpair failed and we were unable to recover it. 00:28:07.048 [2024-11-28 08:26:48.992687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.048 [2024-11-28 08:26:48.992719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.048 qpair failed and we were unable to recover it. 00:28:07.048 [2024-11-28 08:26:48.992987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.048 [2024-11-28 08:26:48.992999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.048 qpair failed and we were unable to recover it. 
00:28:07.048 [2024-11-28 08:26:48.993177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.048 [2024-11-28 08:26:48.993190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.048 qpair failed and we were unable to recover it. 00:28:07.048 [2024-11-28 08:26:48.993346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.048 [2024-11-28 08:26:48.993378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.048 qpair failed and we were unable to recover it. 00:28:07.048 [2024-11-28 08:26:48.993657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.048 [2024-11-28 08:26:48.993691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.048 qpair failed and we were unable to recover it. 00:28:07.048 [2024-11-28 08:26:48.993935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.048 [2024-11-28 08:26:48.993957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.048 qpair failed and we were unable to recover it. 00:28:07.048 [2024-11-28 08:26:48.994104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.048 [2024-11-28 08:26:48.994116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.048 qpair failed and we were unable to recover it. 
00:28:07.048 [2024-11-28 08:26:48.994260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.048 [2024-11-28 08:26:48.994334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.048 qpair failed and we were unable to recover it. 00:28:07.048 [2024-11-28 08:26:48.994513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.048 [2024-11-28 08:26:48.994550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.048 qpair failed and we were unable to recover it. 00:28:07.048 [2024-11-28 08:26:48.994827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.048 [2024-11-28 08:26:48.994873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.048 qpair failed and we were unable to recover it. 00:28:07.048 [2024-11-28 08:26:48.995153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.048 [2024-11-28 08:26:48.995187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.048 qpair failed and we were unable to recover it. 00:28:07.048 [2024-11-28 08:26:48.995408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.048 [2024-11-28 08:26:48.995440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.048 qpair failed and we were unable to recover it. 
00:28:07.049 [2024-11-28 08:26:48.995648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.049 [2024-11-28 08:26:48.995681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.049 qpair failed and we were unable to recover it. 00:28:07.049 [2024-11-28 08:26:48.995964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.049 [2024-11-28 08:26:48.995998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.049 qpair failed and we were unable to recover it. 00:28:07.049 [2024-11-28 08:26:48.996184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.049 [2024-11-28 08:26:48.996217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.049 qpair failed and we were unable to recover it. 00:28:07.049 [2024-11-28 08:26:48.996402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.049 [2024-11-28 08:26:48.996418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.049 qpair failed and we were unable to recover it. 00:28:07.049 [2024-11-28 08:26:48.996687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.049 [2024-11-28 08:26:48.996720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.049 qpair failed and we were unable to recover it. 
00:28:07.049 [2024-11-28 08:26:48.996910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.049 [2024-11-28 08:26:48.996943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.049 qpair failed and we were unable to recover it. 00:28:07.049 [2024-11-28 08:26:48.997107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.049 [2024-11-28 08:26:48.997139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.049 qpair failed and we were unable to recover it. 00:28:07.049 [2024-11-28 08:26:48.997359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.049 [2024-11-28 08:26:48.997392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.049 qpair failed and we were unable to recover it. 00:28:07.049 [2024-11-28 08:26:48.997669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.049 [2024-11-28 08:26:48.997711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.049 qpair failed and we were unable to recover it. 00:28:07.049 [2024-11-28 08:26:48.997837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.049 [2024-11-28 08:26:48.997869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.049 qpair failed and we were unable to recover it. 
00:28:07.049 [2024-11-28 08:26:48.998070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.049 [2024-11-28 08:26:48.998119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.049 qpair failed and we were unable to recover it. 00:28:07.049 [2024-11-28 08:26:48.998351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.049 [2024-11-28 08:26:48.998367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.049 qpair failed and we were unable to recover it. 00:28:07.049 [2024-11-28 08:26:48.998529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.049 [2024-11-28 08:26:48.998561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.049 qpair failed and we were unable to recover it. 00:28:07.049 [2024-11-28 08:26:48.998848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.049 [2024-11-28 08:26:48.998881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.049 qpair failed and we were unable to recover it. 00:28:07.049 [2024-11-28 08:26:48.999158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.049 [2024-11-28 08:26:48.999193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.049 qpair failed and we were unable to recover it. 
00:28:07.049 [2024-11-28 08:26:48.999350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.049 [2024-11-28 08:26:48.999383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.049 qpair failed and we were unable to recover it. 00:28:07.049 [2024-11-28 08:26:48.999631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.049 [2024-11-28 08:26:48.999665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.049 qpair failed and we were unable to recover it. 00:28:07.049 [2024-11-28 08:26:48.999882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.049 [2024-11-28 08:26:48.999914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.049 qpair failed and we were unable to recover it. 00:28:07.049 [2024-11-28 08:26:49.000056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.049 [2024-11-28 08:26:49.000092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.049 qpair failed and we were unable to recover it. 00:28:07.049 [2024-11-28 08:26:49.000228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.049 [2024-11-28 08:26:49.000244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.049 qpair failed and we were unable to recover it. 
00:28:07.049 [2024-11-28 08:26:49.000343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.049 [2024-11-28 08:26:49.000359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.049 qpair failed and we were unable to recover it. 00:28:07.049 [2024-11-28 08:26:49.000640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.049 [2024-11-28 08:26:49.000655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.049 qpair failed and we were unable to recover it. 00:28:07.049 [2024-11-28 08:26:49.000761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.049 [2024-11-28 08:26:49.000778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.049 qpair failed and we were unable to recover it. 00:28:07.049 [2024-11-28 08:26:49.000932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.049 [2024-11-28 08:26:49.000951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.049 qpair failed and we were unable to recover it. 00:28:07.049 [2024-11-28 08:26:49.001206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.049 [2024-11-28 08:26:49.001222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.049 qpair failed and we were unable to recover it. 
00:28:07.049 [2024-11-28 08:26:49.001336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.049 [2024-11-28 08:26:49.001351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.049 qpair failed and we were unable to recover it. 00:28:07.049 [2024-11-28 08:26:49.001457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.049 [2024-11-28 08:26:49.001472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.049 qpair failed and we were unable to recover it. 00:28:07.049 [2024-11-28 08:26:49.001654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.049 [2024-11-28 08:26:49.001671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.049 qpair failed and we were unable to recover it. 00:28:07.049 [2024-11-28 08:26:49.001834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.049 [2024-11-28 08:26:49.001850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.049 qpair failed and we were unable to recover it. 00:28:07.049 [2024-11-28 08:26:49.002101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.049 [2024-11-28 08:26:49.002118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.050 qpair failed and we were unable to recover it. 
00:28:07.050 [2024-11-28 08:26:49.002346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.050 [2024-11-28 08:26:49.002361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.050 qpair failed and we were unable to recover it. 00:28:07.050 [2024-11-28 08:26:49.002607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.050 [2024-11-28 08:26:49.002622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.050 qpair failed and we were unable to recover it. 00:28:07.050 [2024-11-28 08:26:49.002833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.050 [2024-11-28 08:26:49.002849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.050 qpair failed and we were unable to recover it. 00:28:07.050 [2024-11-28 08:26:49.003095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.050 [2024-11-28 08:26:49.003112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.050 qpair failed and we were unable to recover it. 00:28:07.050 [2024-11-28 08:26:49.003276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.050 [2024-11-28 08:26:49.003292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.050 qpair failed and we were unable to recover it. 
00:28:07.054 [2024-11-28 08:26:49.022488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.054 [2024-11-28 08:26:49.022500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.054 qpair failed and we were unable to recover it. 00:28:07.054 [2024-11-28 08:26:49.022720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.054 [2024-11-28 08:26:49.022732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.054 qpair failed and we were unable to recover it. 00:28:07.054 [2024-11-28 08:26:49.022923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.054 [2024-11-28 08:26:49.022935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.054 qpair failed and we were unable to recover it. 00:28:07.054 [2024-11-28 08:26:49.023039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.054 [2024-11-28 08:26:49.023052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.054 qpair failed and we were unable to recover it. 00:28:07.054 [2024-11-28 08:26:49.023275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.054 [2024-11-28 08:26:49.023287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.054 qpair failed and we were unable to recover it. 
00:28:07.054 [2024-11-28 08:26:49.023485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.054 [2024-11-28 08:26:49.023496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.054 qpair failed and we were unable to recover it. 00:28:07.054 [2024-11-28 08:26:49.023722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.054 [2024-11-28 08:26:49.023735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.054 qpair failed and we were unable to recover it. 00:28:07.054 [2024-11-28 08:26:49.023838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.054 [2024-11-28 08:26:49.023850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.054 qpair failed and we were unable to recover it. 00:28:07.054 [2024-11-28 08:26:49.024098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.054 [2024-11-28 08:26:49.024111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.054 qpair failed and we were unable to recover it. 00:28:07.054 [2024-11-28 08:26:49.024200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.054 [2024-11-28 08:26:49.024212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.054 qpair failed and we were unable to recover it. 
00:28:07.054 [2024-11-28 08:26:49.024312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.054 [2024-11-28 08:26:49.024327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.054 qpair failed and we were unable to recover it. 00:28:07.054 [2024-11-28 08:26:49.024406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.054 [2024-11-28 08:26:49.024418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.054 qpair failed and we were unable to recover it. 00:28:07.054 [2024-11-28 08:26:49.024598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.054 [2024-11-28 08:26:49.024621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.054 qpair failed and we were unable to recover it. 00:28:07.054 [2024-11-28 08:26:49.024879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.054 [2024-11-28 08:26:49.024891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.054 qpair failed and we were unable to recover it. 00:28:07.054 [2024-11-28 08:26:49.025075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.054 [2024-11-28 08:26:49.025087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.054 qpair failed and we were unable to recover it. 
00:28:07.054 [2024-11-28 08:26:49.025152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.054 [2024-11-28 08:26:49.025165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.054 qpair failed and we were unable to recover it. 00:28:07.054 [2024-11-28 08:26:49.025306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.054 [2024-11-28 08:26:49.025319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.054 qpair failed and we were unable to recover it. 00:28:07.054 [2024-11-28 08:26:49.025472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.054 [2024-11-28 08:26:49.025484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.054 qpair failed and we were unable to recover it. 00:28:07.054 [2024-11-28 08:26:49.025584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.054 [2024-11-28 08:26:49.025595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.054 qpair failed and we were unable to recover it. 00:28:07.054 [2024-11-28 08:26:49.025758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.054 [2024-11-28 08:26:49.025770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.054 qpair failed and we were unable to recover it. 
00:28:07.054 [2024-11-28 08:26:49.026019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.054 [2024-11-28 08:26:49.026055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.054 qpair failed and we were unable to recover it. 00:28:07.054 [2024-11-28 08:26:49.026187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.054 [2024-11-28 08:26:49.026220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.055 qpair failed and we were unable to recover it. 00:28:07.055 [2024-11-28 08:26:49.026417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.055 [2024-11-28 08:26:49.026450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.055 qpair failed and we were unable to recover it. 00:28:07.055 [2024-11-28 08:26:49.026587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.055 [2024-11-28 08:26:49.026620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.055 qpair failed and we were unable to recover it. 00:28:07.055 [2024-11-28 08:26:49.026848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.055 [2024-11-28 08:26:49.026881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.055 qpair failed and we were unable to recover it. 
00:28:07.055 [2024-11-28 08:26:49.027104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.055 [2024-11-28 08:26:49.027138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.055 qpair failed and we were unable to recover it. 00:28:07.055 [2024-11-28 08:26:49.027257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.055 [2024-11-28 08:26:49.027269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.055 qpair failed and we were unable to recover it. 00:28:07.055 [2024-11-28 08:26:49.027376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.055 [2024-11-28 08:26:49.027388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.055 qpair failed and we were unable to recover it. 00:28:07.055 [2024-11-28 08:26:49.027547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.055 [2024-11-28 08:26:49.027580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.055 qpair failed and we were unable to recover it. 00:28:07.055 [2024-11-28 08:26:49.027766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.055 [2024-11-28 08:26:49.027799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.055 qpair failed and we were unable to recover it. 
00:28:07.055 [2024-11-28 08:26:49.028010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.055 [2024-11-28 08:26:49.028052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.055 qpair failed and we were unable to recover it. 00:28:07.055 [2024-11-28 08:26:49.028208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.055 [2024-11-28 08:26:49.028220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.055 qpair failed and we were unable to recover it. 00:28:07.055 [2024-11-28 08:26:49.028395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.055 [2024-11-28 08:26:49.028427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.055 qpair failed and we were unable to recover it. 00:28:07.055 [2024-11-28 08:26:49.028722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.055 [2024-11-28 08:26:49.028756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.055 qpair failed and we were unable to recover it. 00:28:07.055 [2024-11-28 08:26:49.028966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.055 [2024-11-28 08:26:49.029000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.055 qpair failed and we were unable to recover it. 
00:28:07.055 [2024-11-28 08:26:49.029269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.055 [2024-11-28 08:26:49.029302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.055 qpair failed and we were unable to recover it. 00:28:07.055 [2024-11-28 08:26:49.029507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.055 [2024-11-28 08:26:49.029541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.055 qpair failed and we were unable to recover it. 00:28:07.055 [2024-11-28 08:26:49.029793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.055 [2024-11-28 08:26:49.029826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.055 qpair failed and we were unable to recover it. 00:28:07.055 [2024-11-28 08:26:49.029938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.055 [2024-11-28 08:26:49.029955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.055 qpair failed and we were unable to recover it. 00:28:07.055 [2024-11-28 08:26:49.030044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.055 [2024-11-28 08:26:49.030056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.055 qpair failed and we were unable to recover it. 
00:28:07.055 [2024-11-28 08:26:49.030162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.055 [2024-11-28 08:26:49.030174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.055 qpair failed and we were unable to recover it. 00:28:07.055 [2024-11-28 08:26:49.030343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.055 [2024-11-28 08:26:49.030376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.055 qpair failed and we were unable to recover it. 00:28:07.055 [2024-11-28 08:26:49.030666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.055 [2024-11-28 08:26:49.030699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.055 qpair failed and we were unable to recover it. 00:28:07.055 [2024-11-28 08:26:49.030899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.055 [2024-11-28 08:26:49.030932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.055 qpair failed and we were unable to recover it. 00:28:07.055 [2024-11-28 08:26:49.031146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.055 [2024-11-28 08:26:49.031180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.055 qpair failed and we were unable to recover it. 
00:28:07.055 [2024-11-28 08:26:49.031364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.055 [2024-11-28 08:26:49.031376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.055 qpair failed and we were unable to recover it. 00:28:07.055 [2024-11-28 08:26:49.031516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.055 [2024-11-28 08:26:49.031548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.055 qpair failed and we were unable to recover it. 00:28:07.055 [2024-11-28 08:26:49.031744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.055 [2024-11-28 08:26:49.031777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.055 qpair failed and we were unable to recover it. 00:28:07.055 [2024-11-28 08:26:49.031904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.055 [2024-11-28 08:26:49.031937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.055 qpair failed and we were unable to recover it. 00:28:07.055 [2024-11-28 08:26:49.032167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.056 [2024-11-28 08:26:49.032179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.056 qpair failed and we were unable to recover it. 
00:28:07.056 [2024-11-28 08:26:49.032338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.056 [2024-11-28 08:26:49.032376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.056 qpair failed and we were unable to recover it. 00:28:07.056 [2024-11-28 08:26:49.032657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.056 [2024-11-28 08:26:49.032690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.056 qpair failed and we were unable to recover it. 00:28:07.056 [2024-11-28 08:26:49.032938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.056 [2024-11-28 08:26:49.032985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.056 qpair failed and we were unable to recover it. 00:28:07.056 [2024-11-28 08:26:49.033185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.056 [2024-11-28 08:26:49.033218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.056 qpair failed and we were unable to recover it. 00:28:07.056 [2024-11-28 08:26:49.033445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.056 [2024-11-28 08:26:49.033478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.056 qpair failed and we were unable to recover it. 
00:28:07.056 [2024-11-28 08:26:49.033635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.056 [2024-11-28 08:26:49.033669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.056 qpair failed and we were unable to recover it. 00:28:07.056 [2024-11-28 08:26:49.033857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.056 [2024-11-28 08:26:49.033891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.056 qpair failed and we were unable to recover it. 00:28:07.056 [2024-11-28 08:26:49.034084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.056 [2024-11-28 08:26:49.034119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.056 qpair failed and we were unable to recover it. 00:28:07.056 [2024-11-28 08:26:49.034330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.056 [2024-11-28 08:26:49.034364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.056 qpair failed and we were unable to recover it. 00:28:07.056 [2024-11-28 08:26:49.034560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.056 [2024-11-28 08:26:49.034594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.056 qpair failed and we were unable to recover it. 
00:28:07.056 [2024-11-28 08:26:49.034863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.056 [2024-11-28 08:26:49.034896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.056 qpair failed and we were unable to recover it. 00:28:07.056 [2024-11-28 08:26:49.035100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.056 [2024-11-28 08:26:49.035135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.056 qpair failed and we were unable to recover it. 00:28:07.056 [2024-11-28 08:26:49.035322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.056 [2024-11-28 08:26:49.035335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.056 qpair failed and we were unable to recover it. 00:28:07.056 [2024-11-28 08:26:49.035527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.056 [2024-11-28 08:26:49.035560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.056 qpair failed and we were unable to recover it. 00:28:07.056 [2024-11-28 08:26:49.035831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.056 [2024-11-28 08:26:49.035865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.056 qpair failed and we were unable to recover it. 
00:28:07.056 [2024-11-28 08:26:49.036058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.056 [2024-11-28 08:26:49.036070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.056 qpair failed and we were unable to recover it. 00:28:07.056 [2024-11-28 08:26:49.036305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.056 [2024-11-28 08:26:49.036317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.056 qpair failed and we were unable to recover it. 00:28:07.056 [2024-11-28 08:26:49.036414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.056 [2024-11-28 08:26:49.036426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.056 qpair failed and we were unable to recover it. 00:28:07.056 [2024-11-28 08:26:49.036627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.056 [2024-11-28 08:26:49.036660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.056 qpair failed and we were unable to recover it. 00:28:07.056 [2024-11-28 08:26:49.036880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.056 [2024-11-28 08:26:49.036913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.056 qpair failed and we were unable to recover it. 
00:28:07.056 [2024-11-28 08:26:49.037147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.056 [2024-11-28 08:26:49.037195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420 00:28:07.056 qpair failed and we were unable to recover it. 00:28:07.056 [2024-11-28 08:26:49.037351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.056 [2024-11-28 08:26:49.037367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420 00:28:07.056 qpair failed and we were unable to recover it. 00:28:07.056 [2024-11-28 08:26:49.037484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.056 [2024-11-28 08:26:49.037517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420 00:28:07.056 qpair failed and we were unable to recover it. 00:28:07.056 [2024-11-28 08:26:49.037782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.056 [2024-11-28 08:26:49.037814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420 00:28:07.056 qpair failed and we were unable to recover it. 00:28:07.056 [2024-11-28 08:26:49.038013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.056 [2024-11-28 08:26:49.038056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420 00:28:07.056 qpair failed and we were unable to recover it. 
00:28:07.056 [2024-11-28 08:26:49.038207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.056 [2024-11-28 08:26:49.038223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420 00:28:07.056 qpair failed and we were unable to recover it. 00:28:07.056 [2024-11-28 08:26:49.038327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.056 [2024-11-28 08:26:49.038343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420 00:28:07.056 qpair failed and we were unable to recover it. 00:28:07.056 [2024-11-28 08:26:49.038559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.056 [2024-11-28 08:26:49.038575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420 00:28:07.056 qpair failed and we were unable to recover it. 00:28:07.056 [2024-11-28 08:26:49.038675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.056 [2024-11-28 08:26:49.038691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420 00:28:07.057 qpair failed and we were unable to recover it. 00:28:07.057 [2024-11-28 08:26:49.038787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.057 [2024-11-28 08:26:49.038803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420 00:28:07.057 qpair failed and we were unable to recover it. 
00:28:07.061 [2024-11-28 08:26:49.062757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.061 [2024-11-28 08:26:49.062790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420 00:28:07.061 qpair failed and we were unable to recover it. 00:28:07.061 [2024-11-28 08:26:49.063009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.061 [2024-11-28 08:26:49.063044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420 00:28:07.061 qpair failed and we were unable to recover it. 00:28:07.061 [2024-11-28 08:26:49.063160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.061 [2024-11-28 08:26:49.063193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420 00:28:07.061 qpair failed and we were unable to recover it. 00:28:07.061 [2024-11-28 08:26:49.063390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.061 [2024-11-28 08:26:49.063424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420 00:28:07.061 qpair failed and we were unable to recover it. 00:28:07.061 [2024-11-28 08:26:49.063646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.061 [2024-11-28 08:26:49.063680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420 00:28:07.061 qpair failed and we were unable to recover it. 
00:28:07.061 [2024-11-28 08:26:49.063872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.061 [2024-11-28 08:26:49.063905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420 00:28:07.061 qpair failed and we were unable to recover it. 00:28:07.061 [2024-11-28 08:26:49.064092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.061 [2024-11-28 08:26:49.064125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420 00:28:07.061 qpair failed and we were unable to recover it. 00:28:07.061 [2024-11-28 08:26:49.064383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.061 [2024-11-28 08:26:49.064416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420 00:28:07.061 qpair failed and we were unable to recover it. 00:28:07.061 [2024-11-28 08:26:49.064640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.061 [2024-11-28 08:26:49.064673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420 00:28:07.061 qpair failed and we were unable to recover it. 00:28:07.061 [2024-11-28 08:26:49.064920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.061 [2024-11-28 08:26:49.064974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420 00:28:07.061 qpair failed and we were unable to recover it. 
00:28:07.061 [2024-11-28 08:26:49.065179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.061 [2024-11-28 08:26:49.065195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420 00:28:07.061 qpair failed and we were unable to recover it. 00:28:07.061 [2024-11-28 08:26:49.065330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.061 [2024-11-28 08:26:49.065362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420 00:28:07.061 qpair failed and we were unable to recover it. 00:28:07.061 [2024-11-28 08:26:49.065699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.061 [2024-11-28 08:26:49.065734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420 00:28:07.061 qpair failed and we were unable to recover it. 00:28:07.061 [2024-11-28 08:26:49.066010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.061 [2024-11-28 08:26:49.066044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420 00:28:07.061 qpair failed and we were unable to recover it. 00:28:07.061 [2024-11-28 08:26:49.066286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.061 [2024-11-28 08:26:49.066303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420 00:28:07.061 qpair failed and we were unable to recover it. 
00:28:07.061 [2024-11-28 08:26:49.066467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.061 [2024-11-28 08:26:49.066483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420 00:28:07.061 qpair failed and we were unable to recover it. 00:28:07.061 [2024-11-28 08:26:49.066632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.061 [2024-11-28 08:26:49.066649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420 00:28:07.061 qpair failed and we were unable to recover it. 00:28:07.061 [2024-11-28 08:26:49.066888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.061 [2024-11-28 08:26:49.066922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420 00:28:07.061 qpair failed and we were unable to recover it. 00:28:07.061 [2024-11-28 08:26:49.067160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.061 [2024-11-28 08:26:49.067195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420 00:28:07.061 qpair failed and we were unable to recover it. 00:28:07.061 [2024-11-28 08:26:49.067396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.061 [2024-11-28 08:26:49.067429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420 00:28:07.061 qpair failed and we were unable to recover it. 
00:28:07.061 [2024-11-28 08:26:49.067573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.061 [2024-11-28 08:26:49.067613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420 00:28:07.061 qpair failed and we were unable to recover it. 00:28:07.061 [2024-11-28 08:26:49.067775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.061 [2024-11-28 08:26:49.067808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420 00:28:07.061 qpair failed and we were unable to recover it. 00:28:07.061 [2024-11-28 08:26:49.068073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.061 [2024-11-28 08:26:49.068108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420 00:28:07.061 qpair failed and we were unable to recover it. 00:28:07.061 [2024-11-28 08:26:49.068299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.061 [2024-11-28 08:26:49.068316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420 00:28:07.061 qpair failed and we were unable to recover it. 00:28:07.061 [2024-11-28 08:26:49.068530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.061 [2024-11-28 08:26:49.068564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420 00:28:07.061 qpair failed and we were unable to recover it. 
00:28:07.061 [2024-11-28 08:26:49.068762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.061 [2024-11-28 08:26:49.068795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420 00:28:07.061 qpair failed and we were unable to recover it. 00:28:07.061 [2024-11-28 08:26:49.069047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.061 [2024-11-28 08:26:49.069083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420 00:28:07.061 qpair failed and we were unable to recover it. 00:28:07.062 [2024-11-28 08:26:49.069280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.062 [2024-11-28 08:26:49.069314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420 00:28:07.062 qpair failed and we were unable to recover it. 00:28:07.062 [2024-11-28 08:26:49.069587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.062 [2024-11-28 08:26:49.069620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420 00:28:07.062 qpair failed and we were unable to recover it. 00:28:07.062 [2024-11-28 08:26:49.069915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.062 [2024-11-28 08:26:49.069956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420 00:28:07.062 qpair failed and we were unable to recover it. 
00:28:07.062 [2024-11-28 08:26:49.070150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.062 [2024-11-28 08:26:49.070166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420 00:28:07.062 qpair failed and we were unable to recover it. 00:28:07.062 [2024-11-28 08:26:49.070326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.062 [2024-11-28 08:26:49.070342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420 00:28:07.062 qpair failed and we were unable to recover it. 00:28:07.062 [2024-11-28 08:26:49.070469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.062 [2024-11-28 08:26:49.070486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420 00:28:07.062 qpair failed and we were unable to recover it. 00:28:07.062 [2024-11-28 08:26:49.070772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.062 [2024-11-28 08:26:49.070789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420 00:28:07.062 qpair failed and we were unable to recover it. 00:28:07.062 [2024-11-28 08:26:49.070984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.062 [2024-11-28 08:26:49.071001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420 00:28:07.062 qpair failed and we were unable to recover it. 
00:28:07.062 [2024-11-28 08:26:49.071157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.062 [2024-11-28 08:26:49.071174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420 00:28:07.062 qpair failed and we were unable to recover it. 00:28:07.062 [2024-11-28 08:26:49.071275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.062 [2024-11-28 08:26:49.071291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420 00:28:07.062 qpair failed and we were unable to recover it. 00:28:07.062 [2024-11-28 08:26:49.071446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.062 [2024-11-28 08:26:49.071462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420 00:28:07.062 qpair failed and we were unable to recover it. 00:28:07.062 [2024-11-28 08:26:49.071557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.062 [2024-11-28 08:26:49.071574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420 00:28:07.062 qpair failed and we were unable to recover it. 00:28:07.062 [2024-11-28 08:26:49.071687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.062 [2024-11-28 08:26:49.071703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420 00:28:07.062 qpair failed and we were unable to recover it. 
00:28:07.062 [2024-11-28 08:26:49.071926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.062 [2024-11-28 08:26:49.071942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420 00:28:07.062 qpair failed and we were unable to recover it. 00:28:07.062 [2024-11-28 08:26:49.072170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.062 [2024-11-28 08:26:49.072204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420 00:28:07.062 qpair failed and we were unable to recover it. 00:28:07.062 [2024-11-28 08:26:49.072402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.062 [2024-11-28 08:26:49.072436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420 00:28:07.062 qpair failed and we were unable to recover it. 00:28:07.062 [2024-11-28 08:26:49.072647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.062 [2024-11-28 08:26:49.072680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420 00:28:07.062 qpair failed and we were unable to recover it. 00:28:07.062 [2024-11-28 08:26:49.072893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.062 [2024-11-28 08:26:49.072927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420 00:28:07.062 qpair failed and we were unable to recover it. 
00:28:07.062 [2024-11-28 08:26:49.073229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.062 [2024-11-28 08:26:49.073246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420 00:28:07.062 qpair failed and we were unable to recover it. 00:28:07.062 [2024-11-28 08:26:49.073405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.062 [2024-11-28 08:26:49.073422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420 00:28:07.062 qpair failed and we were unable to recover it. 00:28:07.062 [2024-11-28 08:26:49.073693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.062 [2024-11-28 08:26:49.073709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420 00:28:07.062 qpair failed and we were unable to recover it. 00:28:07.062 [2024-11-28 08:26:49.073867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.062 [2024-11-28 08:26:49.073884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420 00:28:07.062 qpair failed and we were unable to recover it. 00:28:07.062 [2024-11-28 08:26:49.074065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.062 [2024-11-28 08:26:49.074082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420 00:28:07.062 qpair failed and we were unable to recover it. 
00:28:07.062 [2024-11-28 08:26:49.074220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.062 [2024-11-28 08:26:49.074254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420 00:28:07.062 qpair failed and we were unable to recover it. 00:28:07.062 [2024-11-28 08:26:49.074507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.062 [2024-11-28 08:26:49.074541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420 00:28:07.062 qpair failed and we were unable to recover it. 00:28:07.062 [2024-11-28 08:26:49.074807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.062 [2024-11-28 08:26:49.074840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420 00:28:07.062 qpair failed and we were unable to recover it. 00:28:07.062 [2024-11-28 08:26:49.075119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.062 [2024-11-28 08:26:49.075153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420 00:28:07.062 qpair failed and we were unable to recover it. 00:28:07.062 [2024-11-28 08:26:49.075340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.062 [2024-11-28 08:26:49.075373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420 00:28:07.062 qpair failed and we were unable to recover it. 
00:28:07.062 [2024-11-28 08:26:49.075649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.062 [2024-11-28 08:26:49.075682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420 00:28:07.062 qpair failed and we were unable to recover it. 00:28:07.063 [2024-11-28 08:26:49.075973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.063 [2024-11-28 08:26:49.076007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420 00:28:07.063 qpair failed and we were unable to recover it. 00:28:07.063 [2024-11-28 08:26:49.076135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.063 [2024-11-28 08:26:49.076169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420 00:28:07.063 qpair failed and we were unable to recover it. 00:28:07.063 [2024-11-28 08:26:49.076349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.063 [2024-11-28 08:26:49.076382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420 00:28:07.063 qpair failed and we were unable to recover it. 00:28:07.063 [2024-11-28 08:26:49.076507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.063 [2024-11-28 08:26:49.076541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420 00:28:07.063 qpair failed and we were unable to recover it. 
00:28:07.063 [2024-11-28 08:26:49.076811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.063 [2024-11-28 08:26:49.076849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420 00:28:07.063 qpair failed and we were unable to recover it. 00:28:07.063 [2024-11-28 08:26:49.076994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.063 [2024-11-28 08:26:49.077030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420 00:28:07.063 qpair failed and we were unable to recover it. 00:28:07.063 [2024-11-28 08:26:49.077214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.063 [2024-11-28 08:26:49.077247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420 00:28:07.063 qpair failed and we were unable to recover it. 00:28:07.063 [2024-11-28 08:26:49.077368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.063 [2024-11-28 08:26:49.077385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420 00:28:07.063 qpair failed and we were unable to recover it. 00:28:07.063 [2024-11-28 08:26:49.077495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.063 [2024-11-28 08:26:49.077512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420 00:28:07.063 qpair failed and we were unable to recover it. 
00:28:07.063 [2024-11-28 08:26:49.077745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.063 [2024-11-28 08:26:49.077761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420 00:28:07.063 qpair failed and we were unable to recover it. 00:28:07.063 [2024-11-28 08:26:49.077979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.063 [2024-11-28 08:26:49.077996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420 00:28:07.063 qpair failed and we were unable to recover it. 00:28:07.063 [2024-11-28 08:26:49.078273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.063 [2024-11-28 08:26:49.078305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420 00:28:07.063 qpair failed and we were unable to recover it. 00:28:07.063 [2024-11-28 08:26:49.078484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.063 [2024-11-28 08:26:49.078518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420 00:28:07.063 qpair failed and we were unable to recover it. 00:28:07.063 [2024-11-28 08:26:49.078775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.063 [2024-11-28 08:26:49.078809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420 00:28:07.063 qpair failed and we were unable to recover it. 
00:28:07.063 [2024-11-28 08:26:49.079005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.063 [2024-11-28 08:26:49.079039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420 00:28:07.063 qpair failed and we were unable to recover it. 00:28:07.063 [2024-11-28 08:26:49.079285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.063 [2024-11-28 08:26:49.079319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420 00:28:07.063 qpair failed and we were unable to recover it. 00:28:07.063 [2024-11-28 08:26:49.079606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.063 [2024-11-28 08:26:49.079623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420 00:28:07.063 qpair failed and we were unable to recover it. 00:28:07.063 [2024-11-28 08:26:49.079856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.063 [2024-11-28 08:26:49.079873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420 00:28:07.063 qpair failed and we were unable to recover it. 00:28:07.063 [2024-11-28 08:26:49.080046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.063 [2024-11-28 08:26:49.080064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420 00:28:07.063 qpair failed and we were unable to recover it. 
00:28:07.063 [2024-11-28 08:26:49.080240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.063 [2024-11-28 08:26:49.080273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420
00:28:07.063 qpair failed and we were unable to recover it.
[... identical connect() failed (errno = 111) / sock connection error entries for tqpair=0x7f6c30000b90 repeat through 08:26:49.092401 ...]
00:28:07.065 [2024-11-28 08:26:49.092736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.065 [2024-11-28 08:26:49.092813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420
00:28:07.065 qpair failed and we were unable to recover it.
[... identical connect() failed (errno = 111) / sock connection error entries for tqpair=0x991be0 repeat through 08:26:49.108335 ...]
00:28:07.067 [2024-11-28 08:26:49.108430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.067 [2024-11-28 08:26:49.108447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.067 qpair failed and we were unable to recover it. 00:28:07.067 [2024-11-28 08:26:49.108598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.067 [2024-11-28 08:26:49.108615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.067 qpair failed and we were unable to recover it. 00:28:07.067 [2024-11-28 08:26:49.108692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.067 [2024-11-28 08:26:49.108726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.067 qpair failed and we were unable to recover it. 00:28:07.067 [2024-11-28 08:26:49.108966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.067 [2024-11-28 08:26:49.109000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.067 qpair failed and we were unable to recover it. 00:28:07.067 [2024-11-28 08:26:49.109115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.067 [2024-11-28 08:26:49.109148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.067 qpair failed and we were unable to recover it. 
00:28:07.067 [2024-11-28 08:26:49.109409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.067 [2024-11-28 08:26:49.109425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.067 qpair failed and we were unable to recover it. 00:28:07.067 [2024-11-28 08:26:49.109516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.067 [2024-11-28 08:26:49.109551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.067 qpair failed and we were unable to recover it. 00:28:07.067 [2024-11-28 08:26:49.109766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.067 [2024-11-28 08:26:49.109799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.067 qpair failed and we were unable to recover it. 00:28:07.067 [2024-11-28 08:26:49.110081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.067 [2024-11-28 08:26:49.110116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.067 qpair failed and we were unable to recover it. 00:28:07.067 [2024-11-28 08:26:49.110299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.067 [2024-11-28 08:26:49.110332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.067 qpair failed and we were unable to recover it. 
00:28:07.067 [2024-11-28 08:26:49.110590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.067 [2024-11-28 08:26:49.110606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.067 qpair failed and we were unable to recover it. 00:28:07.067 [2024-11-28 08:26:49.110821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.067 [2024-11-28 08:26:49.110855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.068 qpair failed and we were unable to recover it. 00:28:07.068 [2024-11-28 08:26:49.111121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.068 [2024-11-28 08:26:49.111155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.068 qpair failed and we were unable to recover it. 00:28:07.068 [2024-11-28 08:26:49.111288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.068 [2024-11-28 08:26:49.111321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.068 qpair failed and we were unable to recover it. 00:28:07.068 [2024-11-28 08:26:49.111590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.068 [2024-11-28 08:26:49.111624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.068 qpair failed and we were unable to recover it. 
00:28:07.068 [2024-11-28 08:26:49.111888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.068 [2024-11-28 08:26:49.111921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.068 qpair failed and we were unable to recover it. 00:28:07.068 [2024-11-28 08:26:49.112191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.068 [2024-11-28 08:26:49.112225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.068 qpair failed and we were unable to recover it. 00:28:07.068 [2024-11-28 08:26:49.112408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.068 [2024-11-28 08:26:49.112424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.068 qpair failed and we were unable to recover it. 00:28:07.068 [2024-11-28 08:26:49.112636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.068 [2024-11-28 08:26:49.112651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.068 qpair failed and we were unable to recover it. 00:28:07.068 [2024-11-28 08:26:49.112818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.068 [2024-11-28 08:26:49.112850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.068 qpair failed and we were unable to recover it. 
00:28:07.068 [2024-11-28 08:26:49.113041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.068 [2024-11-28 08:26:49.113075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.068 qpair failed and we were unable to recover it. 00:28:07.068 [2024-11-28 08:26:49.113273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.068 [2024-11-28 08:26:49.113312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.068 qpair failed and we were unable to recover it. 00:28:07.068 [2024-11-28 08:26:49.113554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.068 [2024-11-28 08:26:49.113570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.068 qpair failed and we were unable to recover it. 00:28:07.068 [2024-11-28 08:26:49.113731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.068 [2024-11-28 08:26:49.113747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.068 qpair failed and we were unable to recover it. 00:28:07.068 [2024-11-28 08:26:49.113998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.068 [2024-11-28 08:26:49.114015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.068 qpair failed and we were unable to recover it. 
00:28:07.068 [2024-11-28 08:26:49.114184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.068 [2024-11-28 08:26:49.114200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.068 qpair failed and we were unable to recover it. 00:28:07.068 [2024-11-28 08:26:49.114416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.068 [2024-11-28 08:26:49.114449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.068 qpair failed and we were unable to recover it. 00:28:07.068 [2024-11-28 08:26:49.114723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.068 [2024-11-28 08:26:49.114755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.068 qpair failed and we were unable to recover it. 00:28:07.068 [2024-11-28 08:26:49.115036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.068 [2024-11-28 08:26:49.115071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.068 qpair failed and we were unable to recover it. 00:28:07.068 [2024-11-28 08:26:49.115347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.068 [2024-11-28 08:26:49.115381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.068 qpair failed and we were unable to recover it. 
00:28:07.068 [2024-11-28 08:26:49.115499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.068 [2024-11-28 08:26:49.115532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.068 qpair failed and we were unable to recover it. 00:28:07.068 [2024-11-28 08:26:49.115780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.068 [2024-11-28 08:26:49.115813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.068 qpair failed and we were unable to recover it. 00:28:07.068 [2024-11-28 08:26:49.116099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.068 [2024-11-28 08:26:49.116140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.068 qpair failed and we were unable to recover it. 00:28:07.068 [2024-11-28 08:26:49.116349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.068 [2024-11-28 08:26:49.116382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.068 qpair failed and we were unable to recover it. 00:28:07.068 [2024-11-28 08:26:49.116630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.068 [2024-11-28 08:26:49.116646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.068 qpair failed and we were unable to recover it. 
00:28:07.068 [2024-11-28 08:26:49.116863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.068 [2024-11-28 08:26:49.116896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.068 qpair failed and we were unable to recover it. 00:28:07.068 [2024-11-28 08:26:49.117175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.068 [2024-11-28 08:26:49.117210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.068 qpair failed and we were unable to recover it. 00:28:07.068 [2024-11-28 08:26:49.117431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.068 [2024-11-28 08:26:49.117465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.068 qpair failed and we were unable to recover it. 00:28:07.068 [2024-11-28 08:26:49.117656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.068 [2024-11-28 08:26:49.117690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.068 qpair failed and we were unable to recover it. 00:28:07.068 [2024-11-28 08:26:49.117968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.068 [2024-11-28 08:26:49.118002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.068 qpair failed and we were unable to recover it. 
00:28:07.068 [2024-11-28 08:26:49.118237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.068 [2024-11-28 08:26:49.118271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.068 qpair failed and we were unable to recover it. 00:28:07.068 [2024-11-28 08:26:49.118544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.068 [2024-11-28 08:26:49.118577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.068 qpair failed and we were unable to recover it. 00:28:07.069 [2024-11-28 08:26:49.118852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.069 [2024-11-28 08:26:49.118884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.069 qpair failed and we were unable to recover it. 00:28:07.069 [2024-11-28 08:26:49.119122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.069 [2024-11-28 08:26:49.119156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.069 qpair failed and we were unable to recover it. 00:28:07.069 [2024-11-28 08:26:49.119424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.069 [2024-11-28 08:26:49.119440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.069 qpair failed and we were unable to recover it. 
00:28:07.069 [2024-11-28 08:26:49.119678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.069 [2024-11-28 08:26:49.119694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.069 qpair failed and we were unable to recover it. 00:28:07.069 [2024-11-28 08:26:49.119868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.069 [2024-11-28 08:26:49.119884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.069 qpair failed and we were unable to recover it. 00:28:07.069 [2024-11-28 08:26:49.120125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.069 [2024-11-28 08:26:49.120160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.069 qpair failed and we were unable to recover it. 00:28:07.069 [2024-11-28 08:26:49.120306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.069 [2024-11-28 08:26:49.120338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.069 qpair failed and we were unable to recover it. 00:28:07.069 [2024-11-28 08:26:49.120598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.069 [2024-11-28 08:26:49.120637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.069 qpair failed and we were unable to recover it. 
00:28:07.069 [2024-11-28 08:26:49.120849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.069 [2024-11-28 08:26:49.120865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.069 qpair failed and we were unable to recover it. 00:28:07.069 [2024-11-28 08:26:49.121078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.069 [2024-11-28 08:26:49.121095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.069 qpair failed and we were unable to recover it. 00:28:07.069 [2024-11-28 08:26:49.121335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.069 [2024-11-28 08:26:49.121352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.069 qpair failed and we were unable to recover it. 00:28:07.069 [2024-11-28 08:26:49.121457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.069 [2024-11-28 08:26:49.121473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.069 qpair failed and we were unable to recover it. 00:28:07.069 [2024-11-28 08:26:49.121663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.069 [2024-11-28 08:26:49.121696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.069 qpair failed and we were unable to recover it. 
00:28:07.069 [2024-11-28 08:26:49.121904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.069 [2024-11-28 08:26:49.121938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.069 qpair failed and we were unable to recover it. 00:28:07.069 [2024-11-28 08:26:49.122230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.069 [2024-11-28 08:26:49.122263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.069 qpair failed and we were unable to recover it. 00:28:07.069 [2024-11-28 08:26:49.122557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.069 [2024-11-28 08:26:49.122590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.069 qpair failed and we were unable to recover it. 00:28:07.069 [2024-11-28 08:26:49.122887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.069 [2024-11-28 08:26:49.122921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.069 qpair failed and we were unable to recover it. 00:28:07.069 [2024-11-28 08:26:49.123161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.069 [2024-11-28 08:26:49.123194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.069 qpair failed and we were unable to recover it. 
00:28:07.069 [2024-11-28 08:26:49.123457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.069 [2024-11-28 08:26:49.123474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.069 qpair failed and we were unable to recover it. 00:28:07.069 [2024-11-28 08:26:49.123685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.069 [2024-11-28 08:26:49.123701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.069 qpair failed and we were unable to recover it. 00:28:07.069 [2024-11-28 08:26:49.123937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.069 [2024-11-28 08:26:49.123958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.069 qpair failed and we were unable to recover it. 00:28:07.069 [2024-11-28 08:26:49.124119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.069 [2024-11-28 08:26:49.124135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.069 qpair failed and we were unable to recover it. 00:28:07.069 [2024-11-28 08:26:49.124228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.069 [2024-11-28 08:26:49.124245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.069 qpair failed and we were unable to recover it. 
00:28:07.069 [2024-11-28 08:26:49.124387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.069 [2024-11-28 08:26:49.124404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.069 qpair failed and we were unable to recover it. 00:28:07.069 [2024-11-28 08:26:49.124623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.069 [2024-11-28 08:26:49.124655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.069 qpair failed and we were unable to recover it. 00:28:07.069 [2024-11-28 08:26:49.124866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.069 [2024-11-28 08:26:49.124899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.069 qpair failed and we were unable to recover it. 00:28:07.069 [2024-11-28 08:26:49.125107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.069 [2024-11-28 08:26:49.125142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.069 qpair failed and we were unable to recover it. 00:28:07.069 [2024-11-28 08:26:49.125443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.069 [2024-11-28 08:26:49.125476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.069 qpair failed and we were unable to recover it. 
00:28:07.069 [2024-11-28 08:26:49.125678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.069 [2024-11-28 08:26:49.125695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.069 qpair failed and we were unable to recover it. 00:28:07.069 [2024-11-28 08:26:49.125928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.069 [2024-11-28 08:26:49.125945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.070 qpair failed and we were unable to recover it. 00:28:07.070 [2024-11-28 08:26:49.126035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.070 [2024-11-28 08:26:49.126052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.070 qpair failed and we were unable to recover it. 00:28:07.070 [2024-11-28 08:26:49.126248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.070 [2024-11-28 08:26:49.126264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.070 qpair failed and we were unable to recover it. 00:28:07.070 [2024-11-28 08:26:49.126508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.070 [2024-11-28 08:26:49.126541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.070 qpair failed and we were unable to recover it. 
00:28:07.070 [2024-11-28 08:26:49.126821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.070 [2024-11-28 08:26:49.126854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420
00:28:07.070 qpair failed and we were unable to recover it.
00:28:07.070 [2024-11-28 08:26:49.127059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.070 [2024-11-28 08:26:49.127093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420
00:28:07.070 qpair failed and we were unable to recover it.
00:28:07.070 [2024-11-28 08:26:49.127281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.070 [2024-11-28 08:26:49.127314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420
00:28:07.070 qpair failed and we were unable to recover it.
00:28:07.070 [2024-11-28 08:26:49.127578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.070 [2024-11-28 08:26:49.127595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420
00:28:07.070 qpair failed and we were unable to recover it.
00:28:07.070 [2024-11-28 08:26:49.127695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.070 [2024-11-28 08:26:49.127711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420
00:28:07.070 qpair failed and we were unable to recover it.
00:28:07.070 [2024-11-28 08:26:49.127867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.070 [2024-11-28 08:26:49.127883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420
00:28:07.070 qpair failed and we were unable to recover it.
00:28:07.070 [2024-11-28 08:26:49.128029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.070 [2024-11-28 08:26:49.128047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420
00:28:07.070 qpair failed and we were unable to recover it.
00:28:07.070 [2024-11-28 08:26:49.128127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.070 [2024-11-28 08:26:49.128143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420
00:28:07.070 qpair failed and we were unable to recover it.
00:28:07.070 [2024-11-28 08:26:49.128375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.070 [2024-11-28 08:26:49.128391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420
00:28:07.070 qpair failed and we were unable to recover it.
00:28:07.070 [2024-11-28 08:26:49.128634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.070 [2024-11-28 08:26:49.128651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420
00:28:07.070 qpair failed and we were unable to recover it.
00:28:07.070 [2024-11-28 08:26:49.128861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.070 [2024-11-28 08:26:49.128877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420
00:28:07.070 qpair failed and we were unable to recover it.
00:28:07.070 [2024-11-28 08:26:49.129102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.070 [2024-11-28 08:26:49.129137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420
00:28:07.070 qpair failed and we were unable to recover it.
00:28:07.070 [2024-11-28 08:26:49.129325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.070 [2024-11-28 08:26:49.129341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420
00:28:07.070 qpair failed and we were unable to recover it.
00:28:07.070 [2024-11-28 08:26:49.129377] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x99fb20 (9): Bad file descriptor
00:28:07.070 [2024-11-28 08:26:49.129655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.070 [2024-11-28 08:26:49.129689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.070 qpair failed and we were unable to recover it.
00:28:07.070 [2024-11-28 08:26:49.129978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.070 [2024-11-28 08:26:49.130018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.070 qpair failed and we were unable to recover it.
00:28:07.070 [2024-11-28 08:26:49.130264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.070 [2024-11-28 08:26:49.130298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.070 qpair failed and we were unable to recover it.
00:28:07.070 [2024-11-28 08:26:49.130493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.070 [2024-11-28 08:26:49.130533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.070 qpair failed and we were unable to recover it.
00:28:07.070 [2024-11-28 08:26:49.130764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.070 [2024-11-28 08:26:49.130776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.070 qpair failed and we were unable to recover it.
00:28:07.070 [2024-11-28 08:26:49.130958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.070 [2024-11-28 08:26:49.130970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.070 qpair failed and we were unable to recover it.
00:28:07.070 [2024-11-28 08:26:49.131115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.070 [2024-11-28 08:26:49.131148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.070 qpair failed and we were unable to recover it.
00:28:07.070 [2024-11-28 08:26:49.131399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.071 [2024-11-28 08:26:49.131432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.071 qpair failed and we were unable to recover it.
00:28:07.071 [2024-11-28 08:26:49.131714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.071 [2024-11-28 08:26:49.131747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.071 qpair failed and we were unable to recover it.
00:28:07.071 [2024-11-28 08:26:49.132069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.071 [2024-11-28 08:26:49.132104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.071 qpair failed and we were unable to recover it.
00:28:07.071 [2024-11-28 08:26:49.132334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.071 [2024-11-28 08:26:49.132367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.071 qpair failed and we were unable to recover it.
00:28:07.071 [2024-11-28 08:26:49.132568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.071 [2024-11-28 08:26:49.132600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.071 qpair failed and we were unable to recover it.
00:28:07.071 [2024-11-28 08:26:49.132744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.071 [2024-11-28 08:26:49.132779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.071 qpair failed and we were unable to recover it.
00:28:07.071 [2024-11-28 08:26:49.133029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.071 [2024-11-28 08:26:49.133063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.071 qpair failed and we were unable to recover it.
00:28:07.071 [2024-11-28 08:26:49.133348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.071 [2024-11-28 08:26:49.133359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.071 qpair failed and we were unable to recover it.
00:28:07.071 [2024-11-28 08:26:49.133593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.071 [2024-11-28 08:26:49.133629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.071 qpair failed and we were unable to recover it.
00:28:07.071 [2024-11-28 08:26:49.133837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.071 [2024-11-28 08:26:49.133870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.071 qpair failed and we were unable to recover it.
00:28:07.071 [2024-11-28 08:26:49.134150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.071 [2024-11-28 08:26:49.134185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.071 qpair failed and we were unable to recover it.
00:28:07.071 [2024-11-28 08:26:49.134436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.071 [2024-11-28 08:26:49.134449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.071 qpair failed and we were unable to recover it.
00:28:07.071 [2024-11-28 08:26:49.134660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.071 [2024-11-28 08:26:49.134672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.071 qpair failed and we were unable to recover it.
00:28:07.071 [2024-11-28 08:26:49.134915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.071 [2024-11-28 08:26:49.134927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.071 qpair failed and we were unable to recover it.
00:28:07.071 [2024-11-28 08:26:49.135204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.071 [2024-11-28 08:26:49.135239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.071 qpair failed and we were unable to recover it.
00:28:07.071 [2024-11-28 08:26:49.135435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.071 [2024-11-28 08:26:49.135468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.071 qpair failed and we were unable to recover it.
00:28:07.071 [2024-11-28 08:26:49.135735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.071 [2024-11-28 08:26:49.135747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.071 qpair failed and we were unable to recover it.
00:28:07.071 [2024-11-28 08:26:49.135955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.071 [2024-11-28 08:26:49.135968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.071 qpair failed and we were unable to recover it.
00:28:07.071 [2024-11-28 08:26:49.136175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.071 [2024-11-28 08:26:49.136188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.071 qpair failed and we were unable to recover it.
00:28:07.071 [2024-11-28 08:26:49.136404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.071 [2024-11-28 08:26:49.136437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.071 qpair failed and we were unable to recover it.
00:28:07.071 [2024-11-28 08:26:49.136666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.071 [2024-11-28 08:26:49.136700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.071 qpair failed and we were unable to recover it.
00:28:07.071 [2024-11-28 08:26:49.136989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.071 [2024-11-28 08:26:49.137024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.071 qpair failed and we were unable to recover it.
00:28:07.071 [2024-11-28 08:26:49.137302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.071 [2024-11-28 08:26:49.137335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.071 qpair failed and we were unable to recover it.
00:28:07.071 [2024-11-28 08:26:49.137587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.071 [2024-11-28 08:26:49.137620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.071 qpair failed and we were unable to recover it.
00:28:07.071 [2024-11-28 08:26:49.137813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.071 [2024-11-28 08:26:49.137846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.071 qpair failed and we were unable to recover it.
00:28:07.071 [2024-11-28 08:26:49.138068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.071 [2024-11-28 08:26:49.138103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.071 qpair failed and we were unable to recover it.
00:28:07.071 [2024-11-28 08:26:49.138303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.071 [2024-11-28 08:26:49.138315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.071 qpair failed and we were unable to recover it.
00:28:07.071 [2024-11-28 08:26:49.138468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.071 [2024-11-28 08:26:49.138500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.071 qpair failed and we were unable to recover it.
00:28:07.071 [2024-11-28 08:26:49.138702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.071 [2024-11-28 08:26:49.138735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.071 qpair failed and we were unable to recover it.
00:28:07.071 [2024-11-28 08:26:49.138984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.071 [2024-11-28 08:26:49.139018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.071 qpair failed and we were unable to recover it.
00:28:07.071 [2024-11-28 08:26:49.139227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.071 [2024-11-28 08:26:49.139261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.071 qpair failed and we were unable to recover it.
00:28:07.072 [2024-11-28 08:26:49.139526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.072 [2024-11-28 08:26:49.139565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.072 qpair failed and we were unable to recover it.
00:28:07.072 [2024-11-28 08:26:49.139754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.072 [2024-11-28 08:26:49.139787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.072 qpair failed and we were unable to recover it.
00:28:07.072 [2024-11-28 08:26:49.140075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.072 [2024-11-28 08:26:49.140110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.072 qpair failed and we were unable to recover it.
00:28:07.072 [2024-11-28 08:26:49.140344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.072 [2024-11-28 08:26:49.140377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.072 qpair failed and we were unable to recover it.
00:28:07.072 [2024-11-28 08:26:49.140532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.072 [2024-11-28 08:26:49.140566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.072 qpair failed and we were unable to recover it.
00:28:07.072 [2024-11-28 08:26:49.140823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.072 [2024-11-28 08:26:49.140835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.072 qpair failed and we were unable to recover it.
00:28:07.072 [2024-11-28 08:26:49.141010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.072 [2024-11-28 08:26:49.141022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.072 qpair failed and we were unable to recover it.
00:28:07.072 [2024-11-28 08:26:49.141199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.072 [2024-11-28 08:26:49.141232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.072 qpair failed and we were unable to recover it.
00:28:07.072 [2024-11-28 08:26:49.141432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.072 [2024-11-28 08:26:49.141444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.072 qpair failed and we were unable to recover it.
00:28:07.072 [2024-11-28 08:26:49.141676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.072 [2024-11-28 08:26:49.141709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.072 qpair failed and we were unable to recover it.
00:28:07.072 [2024-11-28 08:26:49.141964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.072 [2024-11-28 08:26:49.141999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.072 qpair failed and we were unable to recover it.
00:28:07.072 [2024-11-28 08:26:49.142186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.072 [2024-11-28 08:26:49.142198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.072 qpair failed and we were unable to recover it.
00:28:07.072 [2024-11-28 08:26:49.142340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.072 [2024-11-28 08:26:49.142352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.072 qpair failed and we were unable to recover it.
00:28:07.072 [2024-11-28 08:26:49.142589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.072 [2024-11-28 08:26:49.142622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.072 qpair failed and we were unable to recover it.
00:28:07.072 [2024-11-28 08:26:49.142826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.072 [2024-11-28 08:26:49.142860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.072 qpair failed and we were unable to recover it.
00:28:07.072 [2024-11-28 08:26:49.143114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.072 [2024-11-28 08:26:49.143148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.072 qpair failed and we were unable to recover it.
00:28:07.072 [2024-11-28 08:26:49.143391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.072 [2024-11-28 08:26:49.143403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.072 qpair failed and we were unable to recover it.
00:28:07.072 [2024-11-28 08:26:49.143668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.072 [2024-11-28 08:26:49.143700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.072 qpair failed and we were unable to recover it.
00:28:07.072 [2024-11-28 08:26:49.143969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.072 [2024-11-28 08:26:49.144003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.072 qpair failed and we were unable to recover it.
00:28:07.072 [2024-11-28 08:26:49.144212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.072 [2024-11-28 08:26:49.144244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.072 qpair failed and we were unable to recover it.
00:28:07.072 [2024-11-28 08:26:49.144437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.072 [2024-11-28 08:26:49.144470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.072 qpair failed and we were unable to recover it.
00:28:07.072 [2024-11-28 08:26:49.144722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.072 [2024-11-28 08:26:49.144754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.072 qpair failed and we were unable to recover it.
00:28:07.072 [2024-11-28 08:26:49.144936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.072 [2024-11-28 08:26:49.144983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.072 qpair failed and we were unable to recover it.
00:28:07.072 [2024-11-28 08:26:49.145171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.072 [2024-11-28 08:26:49.145205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.072 qpair failed and we were unable to recover it.
00:28:07.072 [2024-11-28 08:26:49.145451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.072 [2024-11-28 08:26:49.145463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.072 qpair failed and we were unable to recover it.
00:28:07.072 [2024-11-28 08:26:49.145629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.072 [2024-11-28 08:26:49.145641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.072 qpair failed and we were unable to recover it.
00:28:07.072 [2024-11-28 08:26:49.145815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.072 [2024-11-28 08:26:49.145848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.072 qpair failed and we were unable to recover it.
00:28:07.072 [2024-11-28 08:26:49.146058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.072 [2024-11-28 08:26:49.146093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.072 qpair failed and we were unable to recover it.
00:28:07.072 [2024-11-28 08:26:49.146399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.072 [2024-11-28 08:26:49.146432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.072 qpair failed and we were unable to recover it.
00:28:07.072 [2024-11-28 08:26:49.146724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.073 [2024-11-28 08:26:49.146757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.073 qpair failed and we were unable to recover it.
00:28:07.073 [2024-11-28 08:26:49.147036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.073 [2024-11-28 08:26:49.147071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.073 qpair failed and we were unable to recover it.
00:28:07.073 [2024-11-28 08:26:49.147259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.073 [2024-11-28 08:26:49.147293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.073 qpair failed and we were unable to recover it.
00:28:07.073 [2024-11-28 08:26:49.147437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.073 [2024-11-28 08:26:49.147448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.073 qpair failed and we were unable to recover it.
00:28:07.073 [2024-11-28 08:26:49.147593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.073 [2024-11-28 08:26:49.147629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.073 qpair failed and we were unable to recover it.
00:28:07.073 [2024-11-28 08:26:49.147908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.073 [2024-11-28 08:26:49.147942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.073 qpair failed and we were unable to recover it.
00:28:07.073 [2024-11-28 08:26:49.148093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.073 [2024-11-28 08:26:49.148126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.073 qpair failed and we were unable to recover it.
00:28:07.073 [2024-11-28 08:26:49.148307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.073 [2024-11-28 08:26:49.148320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.073 qpair failed and we were unable to recover it.
00:28:07.073 [2024-11-28 08:26:49.148546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.073 [2024-11-28 08:26:49.148579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.073 qpair failed and we were unable to recover it.
00:28:07.073 [2024-11-28 08:26:49.148766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.073 [2024-11-28 08:26:49.148800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.073 qpair failed and we were unable to recover it.
00:28:07.073 [2024-11-28 08:26:49.148987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.073 [2024-11-28 08:26:49.149022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.073 qpair failed and we were unable to recover it.
00:28:07.073 [2024-11-28 08:26:49.149300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.073 [2024-11-28 08:26:49.149340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.073 qpair failed and we were unable to recover it.
00:28:07.073 [2024-11-28 08:26:49.149638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.073 [2024-11-28 08:26:49.149671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.073 qpair failed and we were unable to recover it.
00:28:07.073 [2024-11-28 08:26:49.149872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.073 [2024-11-28 08:26:49.149906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.073 qpair failed and we were unable to recover it.
00:28:07.073 [2024-11-28 08:26:49.150172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.073 [2024-11-28 08:26:49.150208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.073 qpair failed and we were unable to recover it.
00:28:07.073 [2024-11-28 08:26:49.150461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.073 [2024-11-28 08:26:49.150488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.073 qpair failed and we were unable to recover it.
00:28:07.073 [2024-11-28 08:26:49.150744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.073 [2024-11-28 08:26:49.150778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.073 qpair failed and we were unable to recover it.
00:28:07.073 [2024-11-28 08:26:49.151038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.073 [2024-11-28 08:26:49.151073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.073 qpair failed and we were unable to recover it.
00:28:07.073 [2024-11-28 08:26:49.151276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.073 [2024-11-28 08:26:49.151310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.073 qpair failed and we were unable to recover it.
00:28:07.073 [2024-11-28 08:26:49.151520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.073 [2024-11-28 08:26:49.151532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.073 qpair failed and we were unable to recover it.
00:28:07.073 [2024-11-28 08:26:49.151761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.073 [2024-11-28 08:26:49.151773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.073 qpair failed and we were unable to recover it.
00:28:07.073 [2024-11-28 08:26:49.152045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.073 [2024-11-28 08:26:49.152080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.073 qpair failed and we were unable to recover it.
00:28:07.073 [2024-11-28 08:26:49.152343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.073 [2024-11-28 08:26:49.152376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.073 qpair failed and we were unable to recover it.
00:28:07.073 [2024-11-28 08:26:49.152621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.073 [2024-11-28 08:26:49.152633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.073 qpair failed and we were unable to recover it.
00:28:07.073 [2024-11-28 08:26:49.152792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.073 [2024-11-28 08:26:49.152805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.073 qpair failed and we were unable to recover it.
00:28:07.073 [2024-11-28 08:26:49.153011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.073 [2024-11-28 08:26:49.153025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.073 qpair failed and we were unable to recover it.
00:28:07.073 [2024-11-28 08:26:49.153255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.073 [2024-11-28 08:26:49.153267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.073 qpair failed and we were unable to recover it.
00:28:07.073 [2024-11-28 08:26:49.153432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.073 [2024-11-28 08:26:49.153465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.073 qpair failed and we were unable to recover it.
00:28:07.073 [2024-11-28 08:26:49.153733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.073 [2024-11-28 08:26:49.153765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.073 qpair failed and we were unable to recover it.
00:28:07.073 [2024-11-28 08:26:49.154055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.073 [2024-11-28 08:26:49.154089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.073 qpair failed and we were unable to recover it.
00:28:07.073 [2024-11-28 08:26:49.154368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.074 [2024-11-28 08:26:49.154401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.074 qpair failed and we were unable to recover it.
00:28:07.074 [2024-11-28 08:26:49.154596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.074 [2024-11-28 08:26:49.154608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.074 qpair failed and we were unable to recover it.
00:28:07.074 [2024-11-28 08:26:49.154830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.074 [2024-11-28 08:26:49.154863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.074 qpair failed and we were unable to recover it.
00:28:07.074 [2024-11-28 08:26:49.155146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.074 [2024-11-28 08:26:49.155181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.074 qpair failed and we were unable to recover it.
00:28:07.074 [2024-11-28 08:26:49.155411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.074 [2024-11-28 08:26:49.155423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.074 qpair failed and we were unable to recover it.
00:28:07.074 [2024-11-28 08:26:49.155638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.074 [2024-11-28 08:26:49.155671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.074 qpair failed and we were unable to recover it. 00:28:07.074 [2024-11-28 08:26:49.155939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.074 [2024-11-28 08:26:49.155983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.074 qpair failed and we were unable to recover it. 00:28:07.074 [2024-11-28 08:26:49.156235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.074 [2024-11-28 08:26:49.156268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.074 qpair failed and we were unable to recover it. 00:28:07.074 [2024-11-28 08:26:49.156525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.074 [2024-11-28 08:26:49.156559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.074 qpair failed and we were unable to recover it. 00:28:07.074 [2024-11-28 08:26:49.156855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.074 [2024-11-28 08:26:49.156888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.074 qpair failed and we were unable to recover it. 
00:28:07.074 [2024-11-28 08:26:49.157091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.074 [2024-11-28 08:26:49.157125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.074 qpair failed and we were unable to recover it. 00:28:07.074 [2024-11-28 08:26:49.157269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.074 [2024-11-28 08:26:49.157302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.074 qpair failed and we were unable to recover it. 00:28:07.074 [2024-11-28 08:26:49.157502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.074 [2024-11-28 08:26:49.157535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.074 qpair failed and we were unable to recover it. 00:28:07.074 [2024-11-28 08:26:49.157738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.074 [2024-11-28 08:26:49.157772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.074 qpair failed and we were unable to recover it. 00:28:07.074 [2024-11-28 08:26:49.158027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.074 [2024-11-28 08:26:49.158061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.074 qpair failed and we were unable to recover it. 
00:28:07.074 [2024-11-28 08:26:49.158339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.074 [2024-11-28 08:26:49.158373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.074 qpair failed and we were unable to recover it. 00:28:07.074 [2024-11-28 08:26:49.158590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.074 [2024-11-28 08:26:49.158623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.074 qpair failed and we were unable to recover it. 00:28:07.074 [2024-11-28 08:26:49.158848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.074 [2024-11-28 08:26:49.158860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.074 qpair failed and we were unable to recover it. 00:28:07.074 [2024-11-28 08:26:49.159008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.074 [2024-11-28 08:26:49.159021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.074 qpair failed and we were unable to recover it. 00:28:07.074 [2024-11-28 08:26:49.159279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.074 [2024-11-28 08:26:49.159312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.074 qpair failed and we were unable to recover it. 
00:28:07.074 [2024-11-28 08:26:49.159589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.074 [2024-11-28 08:26:49.159622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.074 qpair failed and we were unable to recover it. 00:28:07.074 [2024-11-28 08:26:49.159829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.074 [2024-11-28 08:26:49.159867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.074 qpair failed and we were unable to recover it. 00:28:07.074 [2024-11-28 08:26:49.160069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.074 [2024-11-28 08:26:49.160103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.074 qpair failed and we were unable to recover it. 00:28:07.074 [2024-11-28 08:26:49.160355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.074 [2024-11-28 08:26:49.160368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.074 qpair failed and we were unable to recover it. 00:28:07.074 [2024-11-28 08:26:49.160577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.074 [2024-11-28 08:26:49.160610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.074 qpair failed and we were unable to recover it. 
00:28:07.074 [2024-11-28 08:26:49.160831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.074 [2024-11-28 08:26:49.160864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.074 qpair failed and we were unable to recover it. 00:28:07.074 [2024-11-28 08:26:49.161139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.074 [2024-11-28 08:26:49.161174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.074 qpair failed and we were unable to recover it. 00:28:07.074 [2024-11-28 08:26:49.161419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.074 [2024-11-28 08:26:49.161452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.074 qpair failed and we were unable to recover it. 00:28:07.074 [2024-11-28 08:26:49.161657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.074 [2024-11-28 08:26:49.161690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.074 qpair failed and we were unable to recover it. 00:28:07.074 [2024-11-28 08:26:49.161968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.074 [2024-11-28 08:26:49.162003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.074 qpair failed and we were unable to recover it. 
00:28:07.074 [2024-11-28 08:26:49.162138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.074 [2024-11-28 08:26:49.162172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.074 qpair failed and we were unable to recover it. 00:28:07.074 [2024-11-28 08:26:49.162447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.075 [2024-11-28 08:26:49.162481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.075 qpair failed and we were unable to recover it. 00:28:07.075 [2024-11-28 08:26:49.162700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.075 [2024-11-28 08:26:49.162733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.075 qpair failed and we were unable to recover it. 00:28:07.075 [2024-11-28 08:26:49.162884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.075 [2024-11-28 08:26:49.162918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.075 qpair failed and we were unable to recover it. 00:28:07.075 [2024-11-28 08:26:49.163221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.075 [2024-11-28 08:26:49.163296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.075 qpair failed and we were unable to recover it. 
00:28:07.075 [2024-11-28 08:26:49.163582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.075 [2024-11-28 08:26:49.163621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.075 qpair failed and we were unable to recover it. 00:28:07.075 [2024-11-28 08:26:49.163904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.075 [2024-11-28 08:26:49.163937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.075 qpair failed and we were unable to recover it. 00:28:07.075 [2024-11-28 08:26:49.164234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.075 [2024-11-28 08:26:49.164269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.075 qpair failed and we were unable to recover it. 00:28:07.075 [2024-11-28 08:26:49.164536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.075 [2024-11-28 08:26:49.164578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.075 qpair failed and we were unable to recover it. 00:28:07.075 [2024-11-28 08:26:49.164743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.075 [2024-11-28 08:26:49.164759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.075 qpair failed and we were unable to recover it. 
00:28:07.075 [2024-11-28 08:26:49.164867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.075 [2024-11-28 08:26:49.164900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.075 qpair failed and we were unable to recover it. 00:28:07.075 [2024-11-28 08:26:49.165194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.075 [2024-11-28 08:26:49.165228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.075 qpair failed and we were unable to recover it. 00:28:07.075 [2024-11-28 08:26:49.165429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.075 [2024-11-28 08:26:49.165463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.075 qpair failed and we were unable to recover it. 00:28:07.075 [2024-11-28 08:26:49.165733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.075 [2024-11-28 08:26:49.165766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.075 qpair failed and we were unable to recover it. 00:28:07.075 [2024-11-28 08:26:49.166065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.075 [2024-11-28 08:26:49.166101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.075 qpair failed and we were unable to recover it. 
00:28:07.075 [2024-11-28 08:26:49.166304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.075 [2024-11-28 08:26:49.166338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.075 qpair failed and we were unable to recover it. 00:28:07.075 [2024-11-28 08:26:49.166590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.075 [2024-11-28 08:26:49.166624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.075 qpair failed and we were unable to recover it. 00:28:07.075 [2024-11-28 08:26:49.166923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.075 [2024-11-28 08:26:49.166978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.075 qpair failed and we were unable to recover it. 00:28:07.075 [2024-11-28 08:26:49.167258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.075 [2024-11-28 08:26:49.167304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.075 qpair failed and we were unable to recover it. 00:28:07.075 [2024-11-28 08:26:49.167577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.075 [2024-11-28 08:26:49.167610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.075 qpair failed and we were unable to recover it. 
00:28:07.075 [2024-11-28 08:26:49.167892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.075 [2024-11-28 08:26:49.167925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.075 qpair failed and we were unable to recover it. 00:28:07.075 [2024-11-28 08:26:49.168215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.075 [2024-11-28 08:26:49.168249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.075 qpair failed and we were unable to recover it. 00:28:07.075 [2024-11-28 08:26:49.168522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.075 [2024-11-28 08:26:49.168556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.075 qpair failed and we were unable to recover it. 00:28:07.075 [2024-11-28 08:26:49.168805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.075 [2024-11-28 08:26:49.168822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.075 qpair failed and we were unable to recover it. 00:28:07.075 [2024-11-28 08:26:49.169037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.075 [2024-11-28 08:26:49.169055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.075 qpair failed and we were unable to recover it. 
00:28:07.075 [2024-11-28 08:26:49.169228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.075 [2024-11-28 08:26:49.169260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.075 qpair failed and we were unable to recover it. 00:28:07.075 [2024-11-28 08:26:49.169523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.076 [2024-11-28 08:26:49.169556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.076 qpair failed and we were unable to recover it. 00:28:07.076 [2024-11-28 08:26:49.169752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.076 [2024-11-28 08:26:49.169769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.076 qpair failed and we were unable to recover it. 00:28:07.076 [2024-11-28 08:26:49.170018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.076 [2024-11-28 08:26:49.170052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.076 qpair failed and we were unable to recover it. 00:28:07.076 [2024-11-28 08:26:49.170250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.076 [2024-11-28 08:26:49.170283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.076 qpair failed and we were unable to recover it. 
00:28:07.076 [2024-11-28 08:26:49.170535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.076 [2024-11-28 08:26:49.170568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.076 qpair failed and we were unable to recover it. 00:28:07.076 [2024-11-28 08:26:49.170762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.076 [2024-11-28 08:26:49.170795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.076 qpair failed and we were unable to recover it. 00:28:07.076 [2024-11-28 08:26:49.171085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.076 [2024-11-28 08:26:49.171119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.076 qpair failed and we were unable to recover it. 00:28:07.076 [2024-11-28 08:26:49.171398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.076 [2024-11-28 08:26:49.171414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.076 qpair failed and we were unable to recover it. 00:28:07.076 [2024-11-28 08:26:49.171648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.076 [2024-11-28 08:26:49.171665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.076 qpair failed and we were unable to recover it. 
00:28:07.076 [2024-11-28 08:26:49.171907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.076 [2024-11-28 08:26:49.171923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.076 qpair failed and we were unable to recover it. 00:28:07.076 [2024-11-28 08:26:49.172033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.076 [2024-11-28 08:26:49.172051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.076 qpair failed and we were unable to recover it. 00:28:07.076 [2024-11-28 08:26:49.172287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.076 [2024-11-28 08:26:49.172320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.076 qpair failed and we were unable to recover it. 00:28:07.076 [2024-11-28 08:26:49.172576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.076 [2024-11-28 08:26:49.172610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.076 qpair failed and we were unable to recover it. 00:28:07.076 [2024-11-28 08:26:49.172832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.076 [2024-11-28 08:26:49.172864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.076 qpair failed and we were unable to recover it. 
00:28:07.076 [2024-11-28 08:26:49.173064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.076 [2024-11-28 08:26:49.173098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.076 qpair failed and we were unable to recover it. 00:28:07.076 [2024-11-28 08:26:49.173358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.076 [2024-11-28 08:26:49.173391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.076 qpair failed and we were unable to recover it. 00:28:07.076 [2024-11-28 08:26:49.173637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.076 [2024-11-28 08:26:49.173655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.076 qpair failed and we were unable to recover it. 00:28:07.076 [2024-11-28 08:26:49.173889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.076 [2024-11-28 08:26:49.173905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.076 qpair failed and we were unable to recover it. 00:28:07.076 [2024-11-28 08:26:49.174138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.076 [2024-11-28 08:26:49.174155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.076 qpair failed and we were unable to recover it. 
00:28:07.076 [2024-11-28 08:26:49.174399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.076 [2024-11-28 08:26:49.174415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.076 qpair failed and we were unable to recover it. 00:28:07.076 [2024-11-28 08:26:49.174581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.076 [2024-11-28 08:26:49.174598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.076 qpair failed and we were unable to recover it. 00:28:07.076 [2024-11-28 08:26:49.174773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.076 [2024-11-28 08:26:49.174790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.076 qpair failed and we were unable to recover it. 00:28:07.076 [2024-11-28 08:26:49.175031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.076 [2024-11-28 08:26:49.175065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.076 qpair failed and we were unable to recover it. 00:28:07.076 [2024-11-28 08:26:49.175318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.076 [2024-11-28 08:26:49.175351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.076 qpair failed and we were unable to recover it. 
00:28:07.076 [2024-11-28 08:26:49.175608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.076 [2024-11-28 08:26:49.175641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.076 qpair failed and we were unable to recover it. 00:28:07.076 [2024-11-28 08:26:49.175822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.076 [2024-11-28 08:26:49.175856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.076 qpair failed and we were unable to recover it. 00:28:07.076 [2024-11-28 08:26:49.176041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.076 [2024-11-28 08:26:49.176077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.076 qpair failed and we were unable to recover it. 00:28:07.076 [2024-11-28 08:26:49.176274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.076 [2024-11-28 08:26:49.176307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.076 qpair failed and we were unable to recover it. 00:28:07.076 [2024-11-28 08:26:49.176536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.076 [2024-11-28 08:26:49.176569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.076 qpair failed and we were unable to recover it. 
00:28:07.080 [2024-11-28 08:26:49.205251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.080 [2024-11-28 08:26:49.205284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.080 qpair failed and we were unable to recover it. 00:28:07.080 [2024-11-28 08:26:49.205562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.080 [2024-11-28 08:26:49.205597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.080 qpair failed and we were unable to recover it. 00:28:07.080 [2024-11-28 08:26:49.205798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.080 [2024-11-28 08:26:49.205832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.080 qpair failed and we were unable to recover it. 00:28:07.080 [2024-11-28 08:26:49.205984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.080 [2024-11-28 08:26:49.206020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.080 qpair failed and we were unable to recover it. 00:28:07.080 [2024-11-28 08:26:49.206161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.080 [2024-11-28 08:26:49.206198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.080 qpair failed and we were unable to recover it. 
00:28:07.080 [2024-11-28 08:26:49.206447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.080 [2024-11-28 08:26:49.206480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.080 qpair failed and we were unable to recover it. 00:28:07.080 [2024-11-28 08:26:49.206650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.080 [2024-11-28 08:26:49.206667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.080 qpair failed and we were unable to recover it. 00:28:07.080 [2024-11-28 08:26:49.206928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.080 [2024-11-28 08:26:49.206945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.080 qpair failed and we were unable to recover it. 00:28:07.080 [2024-11-28 08:26:49.207191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.080 [2024-11-28 08:26:49.207208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.080 qpair failed and we were unable to recover it. 00:28:07.080 [2024-11-28 08:26:49.207422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.080 [2024-11-28 08:26:49.207439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.080 qpair failed and we were unable to recover it. 
00:28:07.080 [2024-11-28 08:26:49.207598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.080 [2024-11-28 08:26:49.207614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.080 qpair failed and we were unable to recover it. 00:28:07.080 [2024-11-28 08:26:49.207873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.080 [2024-11-28 08:26:49.207890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.080 qpair failed and we were unable to recover it. 00:28:07.081 [2024-11-28 08:26:49.208057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.081 [2024-11-28 08:26:49.208075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.081 qpair failed and we were unable to recover it. 00:28:07.081 [2024-11-28 08:26:49.208248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.081 [2024-11-28 08:26:49.208265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.081 qpair failed and we were unable to recover it. 00:28:07.081 [2024-11-28 08:26:49.208446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.081 [2024-11-28 08:26:49.208480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.081 qpair failed and we were unable to recover it. 
00:28:07.081 [2024-11-28 08:26:49.208772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.081 [2024-11-28 08:26:49.208804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.081 qpair failed and we were unable to recover it. 00:28:07.081 [2024-11-28 08:26:49.209081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.081 [2024-11-28 08:26:49.209117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.081 qpair failed and we were unable to recover it. 00:28:07.081 [2024-11-28 08:26:49.209427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.081 [2024-11-28 08:26:49.209460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.081 qpair failed and we were unable to recover it. 00:28:07.081 [2024-11-28 08:26:49.209711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.081 [2024-11-28 08:26:49.209728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.081 qpair failed and we were unable to recover it. 00:28:07.081 [2024-11-28 08:26:49.209879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.081 [2024-11-28 08:26:49.209895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.081 qpair failed and we were unable to recover it. 
00:28:07.081 [2024-11-28 08:26:49.210117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.081 [2024-11-28 08:26:49.210152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.081 qpair failed and we were unable to recover it. 00:28:07.081 [2024-11-28 08:26:49.210432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.081 [2024-11-28 08:26:49.210473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.081 qpair failed and we were unable to recover it. 00:28:07.081 [2024-11-28 08:26:49.210739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.081 [2024-11-28 08:26:49.210775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.081 qpair failed and we were unable to recover it. 00:28:07.081 [2024-11-28 08:26:49.210979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.081 [2024-11-28 08:26:49.211013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.081 qpair failed and we were unable to recover it. 00:28:07.081 [2024-11-28 08:26:49.211293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.081 [2024-11-28 08:26:49.211327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.081 qpair failed and we were unable to recover it. 
00:28:07.081 [2024-11-28 08:26:49.211605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.081 [2024-11-28 08:26:49.211622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.081 qpair failed and we were unable to recover it. 00:28:07.081 [2024-11-28 08:26:49.211865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.081 [2024-11-28 08:26:49.211882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.081 qpair failed and we were unable to recover it. 00:28:07.081 [2024-11-28 08:26:49.212032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.081 [2024-11-28 08:26:49.212049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.081 qpair failed and we were unable to recover it. 00:28:07.081 [2024-11-28 08:26:49.212215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.081 [2024-11-28 08:26:49.212249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.081 qpair failed and we were unable to recover it. 00:28:07.081 [2024-11-28 08:26:49.212532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.081 [2024-11-28 08:26:49.212565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.081 qpair failed and we were unable to recover it. 
00:28:07.081 [2024-11-28 08:26:49.212835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.081 [2024-11-28 08:26:49.212872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.081 qpair failed and we were unable to recover it. 00:28:07.081 [2024-11-28 08:26:49.213087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.081 [2024-11-28 08:26:49.213121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.081 qpair failed and we were unable to recover it. 00:28:07.081 [2024-11-28 08:26:49.213437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.081 [2024-11-28 08:26:49.213482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.081 qpair failed and we were unable to recover it. 00:28:07.081 [2024-11-28 08:26:49.213714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.081 [2024-11-28 08:26:49.213731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.081 qpair failed and we were unable to recover it. 00:28:07.081 [2024-11-28 08:26:49.213884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.081 [2024-11-28 08:26:49.213901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.081 qpair failed and we were unable to recover it. 
00:28:07.081 [2024-11-28 08:26:49.214097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.081 [2024-11-28 08:26:49.214132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.081 qpair failed and we were unable to recover it. 00:28:07.081 [2024-11-28 08:26:49.214354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.081 [2024-11-28 08:26:49.214388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.081 qpair failed and we were unable to recover it. 00:28:07.081 [2024-11-28 08:26:49.214592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.081 [2024-11-28 08:26:49.214625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.081 qpair failed and we were unable to recover it. 00:28:07.081 [2024-11-28 08:26:49.214881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.081 [2024-11-28 08:26:49.214915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.082 qpair failed and we were unable to recover it. 00:28:07.082 [2024-11-28 08:26:49.215130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.082 [2024-11-28 08:26:49.215164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.082 qpair failed and we were unable to recover it. 
00:28:07.082 [2024-11-28 08:26:49.215445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.082 [2024-11-28 08:26:49.215479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.082 qpair failed and we were unable to recover it. 00:28:07.082 [2024-11-28 08:26:49.215715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.082 [2024-11-28 08:26:49.215748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.082 qpair failed and we were unable to recover it. 00:28:07.082 [2024-11-28 08:26:49.216003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.082 [2024-11-28 08:26:49.216021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.082 qpair failed and we were unable to recover it. 00:28:07.082 [2024-11-28 08:26:49.216239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.082 [2024-11-28 08:26:49.216256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.082 qpair failed and we were unable to recover it. 00:28:07.082 [2024-11-28 08:26:49.216545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.082 [2024-11-28 08:26:49.216578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.082 qpair failed and we were unable to recover it. 
00:28:07.082 [2024-11-28 08:26:49.216775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.082 [2024-11-28 08:26:49.216808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.082 qpair failed and we were unable to recover it. 00:28:07.082 [2024-11-28 08:26:49.217008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.082 [2024-11-28 08:26:49.217044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.082 qpair failed and we were unable to recover it. 00:28:07.082 [2024-11-28 08:26:49.217234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.082 [2024-11-28 08:26:49.217268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.082 qpair failed and we were unable to recover it. 00:28:07.082 [2024-11-28 08:26:49.217457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.082 [2024-11-28 08:26:49.217490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.082 qpair failed and we were unable to recover it. 00:28:07.082 [2024-11-28 08:26:49.217768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.082 [2024-11-28 08:26:49.217801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.082 qpair failed and we were unable to recover it. 
00:28:07.082 [2024-11-28 08:26:49.218078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.082 [2024-11-28 08:26:49.218096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.082 qpair failed and we were unable to recover it. 00:28:07.082 [2024-11-28 08:26:49.218258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.082 [2024-11-28 08:26:49.218274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.082 qpair failed and we were unable to recover it. 00:28:07.082 [2024-11-28 08:26:49.218421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.082 [2024-11-28 08:26:49.218438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.082 qpair failed and we were unable to recover it. 00:28:07.082 [2024-11-28 08:26:49.218599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.082 [2024-11-28 08:26:49.218639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.082 qpair failed and we were unable to recover it. 00:28:07.082 [2024-11-28 08:26:49.218918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.082 [2024-11-28 08:26:49.218960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.082 qpair failed and we were unable to recover it. 
00:28:07.082 [2024-11-28 08:26:49.219235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.082 [2024-11-28 08:26:49.219269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.082 qpair failed and we were unable to recover it. 00:28:07.082 [2024-11-28 08:26:49.219507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.082 [2024-11-28 08:26:49.219540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.082 qpair failed and we were unable to recover it. 00:28:07.082 [2024-11-28 08:26:49.219773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.082 [2024-11-28 08:26:49.219795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.082 qpair failed and we were unable to recover it. 00:28:07.082 [2024-11-28 08:26:49.219960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.082 [2024-11-28 08:26:49.219978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.082 qpair failed and we were unable to recover it. 00:28:07.082 [2024-11-28 08:26:49.220145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.082 [2024-11-28 08:26:49.220162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.082 qpair failed and we were unable to recover it. 
00:28:07.082 [2024-11-28 08:26:49.220415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.082 [2024-11-28 08:26:49.220448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.082 qpair failed and we were unable to recover it. 00:28:07.082 [2024-11-28 08:26:49.220733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.082 [2024-11-28 08:26:49.220766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.082 qpair failed and we were unable to recover it. 00:28:07.082 [2024-11-28 08:26:49.221045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.082 [2024-11-28 08:26:49.221063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.082 qpair failed and we were unable to recover it. 00:28:07.082 [2024-11-28 08:26:49.221299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.082 [2024-11-28 08:26:49.221316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.082 qpair failed and we were unable to recover it. 00:28:07.082 [2024-11-28 08:26:49.221499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.082 [2024-11-28 08:26:49.221516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.082 qpair failed and we were unable to recover it. 
00:28:07.082 [2024-11-28 08:26:49.221691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.082 [2024-11-28 08:26:49.221724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.082 qpair failed and we were unable to recover it. 00:28:07.082 [2024-11-28 08:26:49.222024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.082 [2024-11-28 08:26:49.222058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.082 qpair failed and we were unable to recover it. 00:28:07.082 [2024-11-28 08:26:49.222250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.082 [2024-11-28 08:26:49.222283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.082 qpair failed and we were unable to recover it. 00:28:07.082 [2024-11-28 08:26:49.222469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.082 [2024-11-28 08:26:49.222486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.082 qpair failed and we were unable to recover it. 00:28:07.082 [2024-11-28 08:26:49.222735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.082 [2024-11-28 08:26:49.222767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.082 qpair failed and we were unable to recover it. 
00:28:07.082 [2024-11-28 08:26:49.222979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.082 [2024-11-28 08:26:49.223014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.082 qpair failed and we were unable to recover it. 00:28:07.082 [2024-11-28 08:26:49.223232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.082 [2024-11-28 08:26:49.223266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.082 qpair failed and we were unable to recover it. 00:28:07.082 [2024-11-28 08:26:49.223478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.082 [2024-11-28 08:26:49.223511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.083 qpair failed and we were unable to recover it. 00:28:07.083 [2024-11-28 08:26:49.223796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.083 [2024-11-28 08:26:49.223830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.083 qpair failed and we were unable to recover it. 00:28:07.083 [2024-11-28 08:26:49.224110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.083 [2024-11-28 08:26:49.224128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.083 qpair failed and we were unable to recover it. 
00:28:07.083 [2024-11-28 08:26:49.224284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.083 [2024-11-28 08:26:49.224318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.083 qpair failed and we were unable to recover it. 00:28:07.083 [2024-11-28 08:26:49.224519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.083 [2024-11-28 08:26:49.224551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.083 qpair failed and we were unable to recover it. 00:28:07.083 [2024-11-28 08:26:49.224750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.083 [2024-11-28 08:26:49.224784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.083 qpair failed and we were unable to recover it. 00:28:07.083 [2024-11-28 08:26:49.225004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.083 [2024-11-28 08:26:49.225039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.083 qpair failed and we were unable to recover it. 00:28:07.083 [2024-11-28 08:26:49.225241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.083 [2024-11-28 08:26:49.225275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.083 qpair failed and we were unable to recover it. 
00:28:07.083 [2024-11-28 08:26:49.225532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.083 [2024-11-28 08:26:49.225565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.083 qpair failed and we were unable to recover it. 00:28:07.083 [2024-11-28 08:26:49.225837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.083 [2024-11-28 08:26:49.225871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.083 qpair failed and we were unable to recover it. 00:28:07.083 [2024-11-28 08:26:49.226163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.083 [2024-11-28 08:26:49.226198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.083 qpair failed and we were unable to recover it. 00:28:07.083 [2024-11-28 08:26:49.226426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.083 [2024-11-28 08:26:49.226459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.083 qpair failed and we were unable to recover it. 00:28:07.083 [2024-11-28 08:26:49.226650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.083 [2024-11-28 08:26:49.226685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.083 qpair failed and we were unable to recover it. 
00:28:07.083 [2024-11-28 08:26:49.226839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.083 [2024-11-28 08:26:49.226873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.083 qpair failed and we were unable to recover it. 00:28:07.083 [2024-11-28 08:26:49.227003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.083 [2024-11-28 08:26:49.227037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.083 qpair failed and we were unable to recover it. 00:28:07.083 [2024-11-28 08:26:49.227258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.083 [2024-11-28 08:26:49.227292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.083 qpair failed and we were unable to recover it. 00:28:07.083 [2024-11-28 08:26:49.227521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.083 [2024-11-28 08:26:49.227555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.083 qpair failed and we were unable to recover it. 00:28:07.083 [2024-11-28 08:26:49.227787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.083 [2024-11-28 08:26:49.227822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.083 qpair failed and we were unable to recover it. 
00:28:07.083 [2024-11-28 08:26:49.228013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.083 [2024-11-28 08:26:49.228049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.083 qpair failed and we were unable to recover it. 00:28:07.083 [2024-11-28 08:26:49.228336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.083 [2024-11-28 08:26:49.228370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.083 qpair failed and we were unable to recover it. 00:28:07.083 [2024-11-28 08:26:49.228674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.083 [2024-11-28 08:26:49.228691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.083 qpair failed and we were unable to recover it. 00:28:07.083 [2024-11-28 08:26:49.228982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.083 [2024-11-28 08:26:49.229017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.083 qpair failed and we were unable to recover it. 00:28:07.083 [2024-11-28 08:26:49.229276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.083 [2024-11-28 08:26:49.229310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.083 qpair failed and we were unable to recover it. 
00:28:07.083 [2024-11-28 08:26:49.229534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.083 [2024-11-28 08:26:49.229568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.083 qpair failed and we were unable to recover it. 00:28:07.083 [2024-11-28 08:26:49.229850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.083 [2024-11-28 08:26:49.229884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.083 qpair failed and we were unable to recover it. 00:28:07.083 [2024-11-28 08:26:49.230170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.083 [2024-11-28 08:26:49.230205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.083 qpair failed and we were unable to recover it. 00:28:07.083 [2024-11-28 08:26:49.230490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.083 [2024-11-28 08:26:49.230531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.083 qpair failed and we were unable to recover it. 00:28:07.083 [2024-11-28 08:26:49.230819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.083 [2024-11-28 08:26:49.230837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.083 qpair failed and we were unable to recover it. 
00:28:07.083 [2024-11-28 08:26:49.231001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.083 [2024-11-28 08:26:49.231019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.083 qpair failed and we were unable to recover it. 00:28:07.083 [2024-11-28 08:26:49.231263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.083 [2024-11-28 08:26:49.231280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.083 qpair failed and we were unable to recover it. 00:28:07.083 [2024-11-28 08:26:49.231582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.083 [2024-11-28 08:26:49.231599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.083 qpair failed and we were unable to recover it. 00:28:07.083 [2024-11-28 08:26:49.231848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.083 [2024-11-28 08:26:49.231888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.083 qpair failed and we were unable to recover it. 00:28:07.083 [2024-11-28 08:26:49.232220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.083 [2024-11-28 08:26:49.232257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.083 qpair failed and we were unable to recover it. 
00:28:07.083 [2024-11-28 08:26:49.232402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.084 [2024-11-28 08:26:49.232435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.084 qpair failed and we were unable to recover it. 00:28:07.084 [2024-11-28 08:26:49.232709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.084 [2024-11-28 08:26:49.232725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.084 qpair failed and we were unable to recover it. 00:28:07.084 [2024-11-28 08:26:49.232945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.084 [2024-11-28 08:26:49.232972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.084 qpair failed and we were unable to recover it. 00:28:07.084 [2024-11-28 08:26:49.233192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.084 [2024-11-28 08:26:49.233209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.084 qpair failed and we were unable to recover it. 00:28:07.084 [2024-11-28 08:26:49.233401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.084 [2024-11-28 08:26:49.233419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.084 qpair failed and we were unable to recover it. 
00:28:07.084 [2024-11-28 08:26:49.233582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.084 [2024-11-28 08:26:49.233599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.084 qpair failed and we were unable to recover it. 00:28:07.084 [2024-11-28 08:26:49.233776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.084 [2024-11-28 08:26:49.233793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.084 qpair failed and we were unable to recover it. 00:28:07.084 [2024-11-28 08:26:49.234045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.084 [2024-11-28 08:26:49.234081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.084 qpair failed and we were unable to recover it. 00:28:07.084 [2024-11-28 08:26:49.234280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.084 [2024-11-28 08:26:49.234312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.084 qpair failed and we were unable to recover it. 00:28:07.084 [2024-11-28 08:26:49.234572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.084 [2024-11-28 08:26:49.234606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.084 qpair failed and we were unable to recover it. 
00:28:07.084 [2024-11-28 08:26:49.234793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.084 [2024-11-28 08:26:49.234826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.084 qpair failed and we were unable to recover it. 00:28:07.084 [2024-11-28 08:26:49.235027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.084 [2024-11-28 08:26:49.235062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.084 qpair failed and we were unable to recover it. 00:28:07.084 [2024-11-28 08:26:49.235345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.084 [2024-11-28 08:26:49.235379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.084 qpair failed and we were unable to recover it. 00:28:07.084 [2024-11-28 08:26:49.235565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.084 [2024-11-28 08:26:49.235599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.084 qpair failed and we were unable to recover it. 00:28:07.084 [2024-11-28 08:26:49.235864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.084 [2024-11-28 08:26:49.235898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.084 qpair failed and we were unable to recover it. 
00:28:07.084 [2024-11-28 08:26:49.236195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.084 [2024-11-28 08:26:49.236230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.084 qpair failed and we were unable to recover it. 00:28:07.084 [2024-11-28 08:26:49.236503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.084 [2024-11-28 08:26:49.236538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.084 qpair failed and we were unable to recover it. 00:28:07.084 [2024-11-28 08:26:49.236733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.084 [2024-11-28 08:26:49.236767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.084 qpair failed and we were unable to recover it. 00:28:07.084 [2024-11-28 08:26:49.237035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.084 [2024-11-28 08:26:49.237070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.084 qpair failed and we were unable to recover it. 00:28:07.084 [2024-11-28 08:26:49.237356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.084 [2024-11-28 08:26:49.237390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.084 qpair failed and we were unable to recover it. 
00:28:07.084 [2024-11-28 08:26:49.237676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.084 [2024-11-28 08:26:49.237715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.084 qpair failed and we were unable to recover it. 00:28:07.084 [2024-11-28 08:26:49.237959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.084 [2024-11-28 08:26:49.237995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.084 qpair failed and we were unable to recover it. 00:28:07.084 [2024-11-28 08:26:49.238230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.084 [2024-11-28 08:26:49.238264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.084 qpair failed and we were unable to recover it. 00:28:07.084 [2024-11-28 08:26:49.238550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.084 [2024-11-28 08:26:49.238584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.084 qpair failed and we were unable to recover it. 00:28:07.084 [2024-11-28 08:26:49.238868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.084 [2024-11-28 08:26:49.238885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.084 qpair failed and we were unable to recover it. 
00:28:07.084 [2024-11-28 08:26:49.239091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.084 [2024-11-28 08:26:49.239109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.084 qpair failed and we were unable to recover it. 00:28:07.084 [2024-11-28 08:26:49.239380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.084 [2024-11-28 08:26:49.239397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.084 qpair failed and we were unable to recover it. 00:28:07.084 [2024-11-28 08:26:49.239493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.084 [2024-11-28 08:26:49.239510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.084 qpair failed and we were unable to recover it. 00:28:07.084 [2024-11-28 08:26:49.239737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.084 [2024-11-28 08:26:49.239771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.084 qpair failed and we were unable to recover it. 00:28:07.084 [2024-11-28 08:26:49.240057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.084 [2024-11-28 08:26:49.240092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.084 qpair failed and we were unable to recover it. 
00:28:07.084 [2024-11-28 08:26:49.240400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.084 [2024-11-28 08:26:49.240435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.084 qpair failed and we were unable to recover it. 00:28:07.084 [2024-11-28 08:26:49.240720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.084 [2024-11-28 08:26:49.240737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.084 qpair failed and we were unable to recover it. 00:28:07.084 [2024-11-28 08:26:49.240902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.084 [2024-11-28 08:26:49.240919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.084 qpair failed and we were unable to recover it. 00:28:07.084 [2024-11-28 08:26:49.241082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.085 [2024-11-28 08:26:49.241100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.085 qpair failed and we were unable to recover it. 00:28:07.085 [2024-11-28 08:26:49.241370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.085 [2024-11-28 08:26:49.241449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.085 qpair failed and we were unable to recover it. 
00:28:07.085 [2024-11-28 08:26:49.241765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.085 [2024-11-28 08:26:49.241803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.085 qpair failed and we were unable to recover it. 00:28:07.085 [2024-11-28 08:26:49.242129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.085 [2024-11-28 08:26:49.242166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.085 qpair failed and we were unable to recover it. 00:28:07.085 [2024-11-28 08:26:49.242456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.085 [2024-11-28 08:26:49.242491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.085 qpair failed and we were unable to recover it. 00:28:07.085 [2024-11-28 08:26:49.242696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.085 [2024-11-28 08:26:49.242729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.085 qpair failed and we were unable to recover it. 00:28:07.085 [2024-11-28 08:26:49.243017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.085 [2024-11-28 08:26:49.243053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.085 qpair failed and we were unable to recover it. 
00:28:07.085 [2024-11-28 08:26:49.243338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.085 [2024-11-28 08:26:49.243372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.085 qpair failed and we were unable to recover it. 00:28:07.085 [2024-11-28 08:26:49.243630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.085 [2024-11-28 08:26:49.243663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.085 qpair failed and we were unable to recover it. 00:28:07.085 [2024-11-28 08:26:49.243921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.085 [2024-11-28 08:26:49.243963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.085 qpair failed and we were unable to recover it. 00:28:07.085 [2024-11-28 08:26:49.244230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.085 [2024-11-28 08:26:49.244266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.085 qpair failed and we were unable to recover it. 00:28:07.085 [2024-11-28 08:26:49.244529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.085 [2024-11-28 08:26:49.244563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.085 qpair failed and we were unable to recover it. 
00:28:07.085 [2024-11-28 08:26:49.244850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.085 [2024-11-28 08:26:49.244863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.085 qpair failed and we were unable to recover it. 00:28:07.085 [2024-11-28 08:26:49.245077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.085 [2024-11-28 08:26:49.245090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.085 qpair failed and we were unable to recover it. 00:28:07.085 [2024-11-28 08:26:49.245353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.085 [2024-11-28 08:26:49.245370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.085 qpair failed and we were unable to recover it. 00:28:07.085 [2024-11-28 08:26:49.245583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.085 [2024-11-28 08:26:49.245595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.085 qpair failed and we were unable to recover it. 00:28:07.085 [2024-11-28 08:26:49.245760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.085 [2024-11-28 08:26:49.245773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.085 qpair failed and we were unable to recover it. 
00:28:07.085 [2024-11-28 08:26:49.245981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.085 [2024-11-28 08:26:49.245995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.085 qpair failed and we were unable to recover it. 00:28:07.085 [2024-11-28 08:26:49.246182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.085 [2024-11-28 08:26:49.246217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.085 qpair failed and we were unable to recover it. 00:28:07.085 [2024-11-28 08:26:49.246537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.085 [2024-11-28 08:26:49.246571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.085 qpair failed and we were unable to recover it. 00:28:07.085 [2024-11-28 08:26:49.246712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.085 [2024-11-28 08:26:49.246747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.085 qpair failed and we were unable to recover it. 00:28:07.085 [2024-11-28 08:26:49.247023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.085 [2024-11-28 08:26:49.247072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.085 qpair failed and we were unable to recover it. 
00:28:07.085 [2024-11-28 08:26:49.247373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.085 [2024-11-28 08:26:49.247407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.085 qpair failed and we were unable to recover it. 00:28:07.085 [2024-11-28 08:26:49.247543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.085 [2024-11-28 08:26:49.247577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.085 qpair failed and we were unable to recover it. 00:28:07.085 [2024-11-28 08:26:49.247685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.085 [2024-11-28 08:26:49.247698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.085 qpair failed and we were unable to recover it. 00:28:07.085 [2024-11-28 08:26:49.247860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.085 [2024-11-28 08:26:49.247873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.085 qpair failed and we were unable to recover it. 00:28:07.085 [2024-11-28 08:26:49.248169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.085 [2024-11-28 08:26:49.248205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.085 qpair failed and we were unable to recover it. 
00:28:07.085 [2024-11-28 08:26:49.248363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.085 [2024-11-28 08:26:49.248398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.085 qpair failed and we were unable to recover it. 00:28:07.085 [2024-11-28 08:26:49.248604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.085 [2024-11-28 08:26:49.248617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.085 qpair failed and we were unable to recover it. 00:28:07.085 [2024-11-28 08:26:49.248868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.085 [2024-11-28 08:26:49.248902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.085 qpair failed and we were unable to recover it. 00:28:07.085 [2024-11-28 08:26:49.249104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.085 [2024-11-28 08:26:49.249139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.085 qpair failed and we were unable to recover it. 00:28:07.085 [2024-11-28 08:26:49.249423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.085 [2024-11-28 08:26:49.249461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.085 qpair failed and we were unable to recover it. 
00:28:07.085 [2024-11-28 08:26:49.249677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.085 [2024-11-28 08:26:49.249691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.085 qpair failed and we were unable to recover it.
00:28:07.085 [2024-11-28 08:26:49.249870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.085 [2024-11-28 08:26:49.249883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.086 qpair failed and we were unable to recover it.
00:28:07.086 [2024-11-28 08:26:49.250106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.086 [2024-11-28 08:26:49.250142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.086 qpair failed and we were unable to recover it.
00:28:07.086 [2024-11-28 08:26:49.250407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.086 [2024-11-28 08:26:49.250441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.086 qpair failed and we were unable to recover it.
00:28:07.086 [2024-11-28 08:26:49.250734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.086 [2024-11-28 08:26:49.250769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.086 qpair failed and we were unable to recover it.
00:28:07.086 [2024-11-28 08:26:49.250981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.086 [2024-11-28 08:26:49.251017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.086 qpair failed and we were unable to recover it.
00:28:07.086 [2024-11-28 08:26:49.251220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.086 [2024-11-28 08:26:49.251254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.086 qpair failed and we were unable to recover it.
00:28:07.086 [2024-11-28 08:26:49.251524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.086 [2024-11-28 08:26:49.251557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.086 qpair failed and we were unable to recover it.
00:28:07.086 [2024-11-28 08:26:49.251872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.086 [2024-11-28 08:26:49.251906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.086 qpair failed and we were unable to recover it.
00:28:07.086 [2024-11-28 08:26:49.252127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.086 [2024-11-28 08:26:49.252163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.086 qpair failed and we were unable to recover it.
00:28:07.086 [2024-11-28 08:26:49.252359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.086 [2024-11-28 08:26:49.252393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.086 qpair failed and we were unable to recover it.
00:28:07.086 [2024-11-28 08:26:49.252630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.086 [2024-11-28 08:26:49.252643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.086 qpair failed and we were unable to recover it.
00:28:07.086 [2024-11-28 08:26:49.252799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.086 [2024-11-28 08:26:49.252811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.086 qpair failed and we were unable to recover it.
00:28:07.086 [2024-11-28 08:26:49.253021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.086 [2024-11-28 08:26:49.253034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.086 qpair failed and we were unable to recover it.
00:28:07.086 [2024-11-28 08:26:49.253263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.086 [2024-11-28 08:26:49.253296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.086 qpair failed and we were unable to recover it.
00:28:07.086 [2024-11-28 08:26:49.253552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.086 [2024-11-28 08:26:49.253586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.086 qpair failed and we were unable to recover it.
00:28:07.086 [2024-11-28 08:26:49.253876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.086 [2024-11-28 08:26:49.253888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.086 qpair failed and we were unable to recover it.
00:28:07.086 [2024-11-28 08:26:49.254141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.086 [2024-11-28 08:26:49.254169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.086 qpair failed and we were unable to recover it.
00:28:07.086 [2024-11-28 08:26:49.254432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.086 [2024-11-28 08:26:49.254467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.086 qpair failed and we were unable to recover it.
00:28:07.086 [2024-11-28 08:26:49.254690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.086 [2024-11-28 08:26:49.254723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.086 qpair failed and we were unable to recover it.
00:28:07.086 [2024-11-28 08:26:49.254981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.086 [2024-11-28 08:26:49.255022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.086 qpair failed and we were unable to recover it.
00:28:07.086 [2024-11-28 08:26:49.255165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.086 [2024-11-28 08:26:49.255178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.086 qpair failed and we were unable to recover it.
00:28:07.086 [2024-11-28 08:26:49.255365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.086 [2024-11-28 08:26:49.255404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.086 qpair failed and we were unable to recover it.
00:28:07.086 [2024-11-28 08:26:49.255664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.086 [2024-11-28 08:26:49.255699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.086 qpair failed and we were unable to recover it.
00:28:07.086 [2024-11-28 08:26:49.255824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.086 [2024-11-28 08:26:49.255864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.086 qpair failed and we were unable to recover it.
00:28:07.086 [2024-11-28 08:26:49.256036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.086 [2024-11-28 08:26:49.256049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.086 qpair failed and we were unable to recover it.
00:28:07.086 [2024-11-28 08:26:49.256294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.086 [2024-11-28 08:26:49.256327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.086 qpair failed and we were unable to recover it.
00:28:07.086 [2024-11-28 08:26:49.256587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.086 [2024-11-28 08:26:49.256621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.086 qpair failed and we were unable to recover it.
00:28:07.086 [2024-11-28 08:26:49.256832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.086 [2024-11-28 08:26:49.256864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.086 qpair failed and we were unable to recover it.
00:28:07.086 [2024-11-28 08:26:49.257131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.086 [2024-11-28 08:26:49.257168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.086 qpair failed and we were unable to recover it.
00:28:07.086 [2024-11-28 08:26:49.257444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.086 [2024-11-28 08:26:49.257486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.086 qpair failed and we were unable to recover it.
00:28:07.086 [2024-11-28 08:26:49.257663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.086 [2024-11-28 08:26:49.257676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.086 qpair failed and we were unable to recover it.
00:28:07.086 [2024-11-28 08:26:49.257893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.086 [2024-11-28 08:26:49.257926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.087 qpair failed and we were unable to recover it.
00:28:07.087 [2024-11-28 08:26:49.258165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.087 [2024-11-28 08:26:49.258200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.087 qpair failed and we were unable to recover it.
00:28:07.087 [2024-11-28 08:26:49.258486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.087 [2024-11-28 08:26:49.258528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.087 qpair failed and we were unable to recover it.
00:28:07.087 [2024-11-28 08:26:49.258806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.087 [2024-11-28 08:26:49.258839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.087 qpair failed and we were unable to recover it.
00:28:07.087 [2024-11-28 08:26:49.259062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.087 [2024-11-28 08:26:49.259099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.087 qpair failed and we were unable to recover it.
00:28:07.087 [2024-11-28 08:26:49.259242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.087 [2024-11-28 08:26:49.259276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.087 qpair failed and we were unable to recover it.
00:28:07.087 [2024-11-28 08:26:49.259560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.087 [2024-11-28 08:26:49.259594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.087 qpair failed and we were unable to recover it.
00:28:07.087 [2024-11-28 08:26:49.259822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.087 [2024-11-28 08:26:49.259857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.087 qpair failed and we were unable to recover it.
00:28:07.087 [2024-11-28 08:26:49.260143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.087 [2024-11-28 08:26:49.260156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.087 qpair failed and we were unable to recover it.
00:28:07.087 [2024-11-28 08:26:49.260395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.087 [2024-11-28 08:26:49.260408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.087 qpair failed and we were unable to recover it.
00:28:07.087 [2024-11-28 08:26:49.260693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.087 [2024-11-28 08:26:49.260727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.087 qpair failed and we were unable to recover it.
00:28:07.087 [2024-11-28 08:26:49.260920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.087 [2024-11-28 08:26:49.260977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.087 qpair failed and we were unable to recover it.
00:28:07.087 [2024-11-28 08:26:49.261241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.087 [2024-11-28 08:26:49.261277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.087 qpair failed and we were unable to recover it.
00:28:07.087 [2024-11-28 08:26:49.261556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.087 [2024-11-28 08:26:49.261591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.087 qpair failed and we were unable to recover it.
00:28:07.087 [2024-11-28 08:26:49.261800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.087 [2024-11-28 08:26:49.261836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.087 qpair failed and we were unable to recover it.
00:28:07.087 [2024-11-28 08:26:49.262086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.087 [2024-11-28 08:26:49.262100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.087 qpair failed and we were unable to recover it.
00:28:07.087 [2024-11-28 08:26:49.262337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.087 [2024-11-28 08:26:49.262350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.087 qpair failed and we were unable to recover it.
00:28:07.087 [2024-11-28 08:26:49.262512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.087 [2024-11-28 08:26:49.262525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.087 qpair failed and we were unable to recover it.
00:28:07.087 [2024-11-28 08:26:49.262770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.087 [2024-11-28 08:26:49.262805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.087 qpair failed and we were unable to recover it.
00:28:07.087 [2024-11-28 08:26:49.262967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.087 [2024-11-28 08:26:49.263003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.087 qpair failed and we were unable to recover it.
00:28:07.087 [2024-11-28 08:26:49.263288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.087 [2024-11-28 08:26:49.263323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.087 qpair failed and we were unable to recover it.
00:28:07.087 [2024-11-28 08:26:49.263444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.087 [2024-11-28 08:26:49.263479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.087 qpair failed and we were unable to recover it.
00:28:07.087 [2024-11-28 08:26:49.263691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.087 [2024-11-28 08:26:49.263726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.087 qpair failed and we were unable to recover it.
00:28:07.087 [2024-11-28 08:26:49.263922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.087 [2024-11-28 08:26:49.263935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.087 qpair failed and we were unable to recover it.
00:28:07.087 [2024-11-28 08:26:49.264053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.087 [2024-11-28 08:26:49.264089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.087 qpair failed and we were unable to recover it.
00:28:07.087 [2024-11-28 08:26:49.264283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.087 [2024-11-28 08:26:49.264318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.087 qpair failed and we were unable to recover it.
00:28:07.087 [2024-11-28 08:26:49.264568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.087 [2024-11-28 08:26:49.264611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.087 qpair failed and we were unable to recover it.
00:28:07.087 [2024-11-28 08:26:49.264759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.087 [2024-11-28 08:26:49.264771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.087 qpair failed and we were unable to recover it.
00:28:07.087 [2024-11-28 08:26:49.264961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.087 [2024-11-28 08:26:49.264997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.087 qpair failed and we were unable to recover it.
00:28:07.087 [2024-11-28 08:26:49.265281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.087 [2024-11-28 08:26:49.265315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.087 qpair failed and we were unable to recover it.
00:28:07.087 [2024-11-28 08:26:49.265592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.087 [2024-11-28 08:26:49.265633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.087 qpair failed and we were unable to recover it.
00:28:07.087 [2024-11-28 08:26:49.265921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.087 [2024-11-28 08:26:49.265967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.087 qpair failed and we were unable to recover it.
00:28:07.087 [2024-11-28 08:26:49.266254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.087 [2024-11-28 08:26:49.266288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.087 qpair failed and we were unable to recover it.
00:28:07.087 [2024-11-28 08:26:49.266486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.087 [2024-11-28 08:26:49.266520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.088 qpair failed and we were unable to recover it.
00:28:07.088 [2024-11-28 08:26:49.266804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.088 [2024-11-28 08:26:49.266838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.088 qpair failed and we were unable to recover it.
00:28:07.088 [2024-11-28 08:26:49.267101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.088 [2024-11-28 08:26:49.267136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.088 qpair failed and we were unable to recover it.
00:28:07.088 [2024-11-28 08:26:49.267399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.088 [2024-11-28 08:26:49.267434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.088 qpair failed and we were unable to recover it.
00:28:07.088 [2024-11-28 08:26:49.267607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.088 [2024-11-28 08:26:49.267620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.088 qpair failed and we were unable to recover it.
00:28:07.088 [2024-11-28 08:26:49.267760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.088 [2024-11-28 08:26:49.267772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.088 qpair failed and we were unable to recover it.
00:28:07.088 [2024-11-28 08:26:49.267973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.088 [2024-11-28 08:26:49.268009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.088 qpair failed and we were unable to recover it.
00:28:07.088 [2024-11-28 08:26:49.268214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.088 [2024-11-28 08:26:49.268248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.088 qpair failed and we were unable to recover it.
00:28:07.088 [2024-11-28 08:26:49.268404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.088 [2024-11-28 08:26:49.268437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.088 qpair failed and we were unable to recover it.
00:28:07.088 [2024-11-28 08:26:49.268725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.088 [2024-11-28 08:26:49.268758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.088 qpair failed and we were unable to recover it.
00:28:07.088 [2024-11-28 08:26:49.268944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.088 [2024-11-28 08:26:49.268974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.088 qpair failed and we were unable to recover it.
00:28:07.088 [2024-11-28 08:26:49.269216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.088 [2024-11-28 08:26:49.269251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.088 qpair failed and we were unable to recover it.
00:28:07.088 [2024-11-28 08:26:49.269560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.088 [2024-11-28 08:26:49.269594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.088 qpair failed and we were unable to recover it.
00:28:07.088 [2024-11-28 08:26:49.269884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.088 [2024-11-28 08:26:49.269918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.088 qpair failed and we were unable to recover it.
00:28:07.088 [2024-11-28 08:26:49.270215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.088 [2024-11-28 08:26:49.270249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.088 qpair failed and we were unable to recover it.
00:28:07.088 [2024-11-28 08:26:49.270454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.088 [2024-11-28 08:26:49.270488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.088 qpair failed and we were unable to recover it.
00:28:07.088 [2024-11-28 08:26:49.270705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.088 [2024-11-28 08:26:49.270739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.088 qpair failed and we were unable to recover it.
00:28:07.088 [2024-11-28 08:26:49.271038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.088 [2024-11-28 08:26:49.271052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.088 qpair failed and we were unable to recover it.
00:28:07.088 [2024-11-28 08:26:49.271296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.088 [2024-11-28 08:26:49.271327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.088 qpair failed and we were unable to recover it.
00:28:07.088 [2024-11-28 08:26:49.271604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.088 [2024-11-28 08:26:49.271638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.088 qpair failed and we were unable to recover it.
00:28:07.088 [2024-11-28 08:26:49.271930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.088 [2024-11-28 08:26:49.271972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.088 qpair failed and we were unable to recover it.
00:28:07.088 [2024-11-28 08:26:49.272276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.088 [2024-11-28 08:26:49.272310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.088 qpair failed and we were unable to recover it.
00:28:07.088 [2024-11-28 08:26:49.272470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.088 [2024-11-28 08:26:49.272504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.088 qpair failed and we were unable to recover it.
00:28:07.088 [2024-11-28 08:26:49.272694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.088 [2024-11-28 08:26:49.272728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.088 qpair failed and we were unable to recover it.
00:28:07.088 [2024-11-28 08:26:49.272944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.088 [2024-11-28 08:26:49.272989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.088 qpair failed and we were unable to recover it.
00:28:07.088 [2024-11-28 08:26:49.273270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.088 [2024-11-28 08:26:49.273304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.088 qpair failed and we were unable to recover it.
00:28:07.088 [2024-11-28 08:26:49.273584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.088 [2024-11-28 08:26:49.273636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.088 qpair failed and we were unable to recover it.
00:28:07.088 [2024-11-28 08:26:49.273907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.088 [2024-11-28 08:26:49.273934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.088 qpair failed and we were unable to recover it.
00:28:07.088 [2024-11-28 08:26:49.274221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.088 [2024-11-28 08:26:49.274256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.088 qpair failed and we were unable to recover it.
00:28:07.088 [2024-11-28 08:26:49.274569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.088 [2024-11-28 08:26:49.274602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.088 qpair failed and we were unable to recover it.
00:28:07.088 [2024-11-28 08:26:49.274804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.088 [2024-11-28 08:26:49.274837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.088 qpair failed and we were unable to recover it.
00:28:07.088 [2024-11-28 08:26:49.274998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.088 [2024-11-28 08:26:49.275034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.088 qpair failed and we were unable to recover it.
00:28:07.088 [2024-11-28 08:26:49.275315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.088 [2024-11-28 08:26:49.275349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.088 qpair failed and we were unable to recover it.
00:28:07.088 [2024-11-28 08:26:49.275628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.088 [2024-11-28 08:26:49.275662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.088 qpair failed and we were unable to recover it.
00:28:07.088 [2024-11-28 08:26:49.275957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.088 [2024-11-28 08:26:49.275970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.089 qpair failed and we were unable to recover it.
00:28:07.089 [2024-11-28 08:26:49.276251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.089 [2024-11-28 08:26:49.276264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.089 qpair failed and we were unable to recover it.
00:28:07.089 [2024-11-28 08:26:49.276475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.089 [2024-11-28 08:26:49.276488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.089 qpair failed and we were unable to recover it.
00:28:07.089 [2024-11-28 08:26:49.276724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.089 [2024-11-28 08:26:49.276753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.089 qpair failed and we were unable to recover it.
00:28:07.089 [2024-11-28 08:26:49.277004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.089 [2024-11-28 08:26:49.277018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.089 qpair failed and we were unable to recover it.
00:28:07.089 [2024-11-28 08:26:49.277178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.089 [2024-11-28 08:26:49.277191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.089 qpair failed and we were unable to recover it.
00:28:07.089 [2024-11-28 08:26:49.277345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.089 [2024-11-28 08:26:49.277358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.089 qpair failed and we were unable to recover it.
00:28:07.089 [2024-11-28 08:26:49.277581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.089 [2024-11-28 08:26:49.277593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.089 qpair failed and we were unable to recover it.
00:28:07.089 [2024-11-28 08:26:49.277816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.089 [2024-11-28 08:26:49.277829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.089 qpair failed and we were unable to recover it.
00:28:07.089 [2024-11-28 08:26:49.278063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.089 [2024-11-28 08:26:49.278076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.089 qpair failed and we were unable to recover it.
00:28:07.089 [2024-11-28 08:26:49.278320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.089 [2024-11-28 08:26:49.278354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.089 qpair failed and we were unable to recover it.
00:28:07.089 [2024-11-28 08:26:49.278545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.089 [2024-11-28 08:26:49.278579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.089 qpair failed and we were unable to recover it.
00:28:07.089 [2024-11-28 08:26:49.278760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.089 [2024-11-28 08:26:49.278772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.089 qpair failed and we were unable to recover it.
00:28:07.089 [2024-11-28 08:26:49.279009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.089 [2024-11-28 08:26:49.279022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.089 qpair failed and we were unable to recover it.
00:28:07.089 [2024-11-28 08:26:49.279246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.089 [2024-11-28 08:26:49.279280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.089 qpair failed and we were unable to recover it.
00:28:07.089 [2024-11-28 08:26:49.279540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.089 [2024-11-28 08:26:49.279574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.089 qpair failed and we were unable to recover it.
00:28:07.089 [2024-11-28 08:26:49.279766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.089 [2024-11-28 08:26:49.279779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.089 qpair failed and we were unable to recover it. 00:28:07.089 [2024-11-28 08:26:49.280021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.089 [2024-11-28 08:26:49.280035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.089 qpair failed and we were unable to recover it. 00:28:07.089 [2024-11-28 08:26:49.280287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.089 [2024-11-28 08:26:49.280300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.089 qpair failed and we were unable to recover it. 00:28:07.089 [2024-11-28 08:26:49.280457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.089 [2024-11-28 08:26:49.280469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.089 qpair failed and we were unable to recover it. 00:28:07.089 [2024-11-28 08:26:49.280707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.089 [2024-11-28 08:26:49.280720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.089 qpair failed and we were unable to recover it. 
00:28:07.089 [2024-11-28 08:26:49.280869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.089 [2024-11-28 08:26:49.280883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.089 qpair failed and we were unable to recover it. 00:28:07.089 [2024-11-28 08:26:49.281103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.089 [2024-11-28 08:26:49.281117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.089 qpair failed and we were unable to recover it. 00:28:07.089 [2024-11-28 08:26:49.281279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.089 [2024-11-28 08:26:49.281313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.089 qpair failed and we were unable to recover it. 00:28:07.089 [2024-11-28 08:26:49.281516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.089 [2024-11-28 08:26:49.281529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.089 qpair failed and we were unable to recover it. 00:28:07.089 [2024-11-28 08:26:49.281691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.089 [2024-11-28 08:26:49.281725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.089 qpair failed and we were unable to recover it. 
00:28:07.089 [2024-11-28 08:26:49.282011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.089 [2024-11-28 08:26:49.282046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.089 qpair failed and we were unable to recover it. 00:28:07.089 [2024-11-28 08:26:49.282256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.089 [2024-11-28 08:26:49.282290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.089 qpair failed and we were unable to recover it. 00:28:07.089 [2024-11-28 08:26:49.282557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.089 [2024-11-28 08:26:49.282590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.089 qpair failed and we were unable to recover it. 00:28:07.089 [2024-11-28 08:26:49.282852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.089 [2024-11-28 08:26:49.282887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.089 qpair failed and we were unable to recover it. 00:28:07.089 [2024-11-28 08:26:49.283029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.089 [2024-11-28 08:26:49.283044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.089 qpair failed and we were unable to recover it. 
00:28:07.369 [2024-11-28 08:26:49.283301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.369 [2024-11-28 08:26:49.283314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.369 qpair failed and we were unable to recover it. 00:28:07.369 [2024-11-28 08:26:49.283482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.369 [2024-11-28 08:26:49.283496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.369 qpair failed and we were unable to recover it. 00:28:07.369 [2024-11-28 08:26:49.283728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.369 [2024-11-28 08:26:49.283742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.369 qpair failed and we were unable to recover it. 00:28:07.369 [2024-11-28 08:26:49.283831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.369 [2024-11-28 08:26:49.283844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.369 qpair failed and we were unable to recover it. 00:28:07.369 [2024-11-28 08:26:49.284082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.369 [2024-11-28 08:26:49.284095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.369 qpair failed and we were unable to recover it. 
00:28:07.369 [2024-11-28 08:26:49.284259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.370 [2024-11-28 08:26:49.284273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.370 qpair failed and we were unable to recover it. 00:28:07.370 [2024-11-28 08:26:49.284379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.370 [2024-11-28 08:26:49.284392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.370 qpair failed and we were unable to recover it. 00:28:07.370 [2024-11-28 08:26:49.284629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.370 [2024-11-28 08:26:49.284642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.370 qpair failed and we were unable to recover it. 00:28:07.370 [2024-11-28 08:26:49.284803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.370 [2024-11-28 08:26:49.284815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.370 qpair failed and we were unable to recover it. 00:28:07.370 [2024-11-28 08:26:49.285059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.370 [2024-11-28 08:26:49.285072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.370 qpair failed and we were unable to recover it. 
00:28:07.370 [2024-11-28 08:26:49.285227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.370 [2024-11-28 08:26:49.285240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.370 qpair failed and we were unable to recover it. 00:28:07.370 [2024-11-28 08:26:49.285424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.370 [2024-11-28 08:26:49.285438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.370 qpair failed and we were unable to recover it. 00:28:07.370 [2024-11-28 08:26:49.285697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.370 [2024-11-28 08:26:49.285710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.370 qpair failed and we were unable to recover it. 00:28:07.370 [2024-11-28 08:26:49.285878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.370 [2024-11-28 08:26:49.285891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.370 qpair failed and we were unable to recover it. 00:28:07.370 [2024-11-28 08:26:49.286039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.370 [2024-11-28 08:26:49.286053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.370 qpair failed and we were unable to recover it. 
00:28:07.370 [2024-11-28 08:26:49.286208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.370 [2024-11-28 08:26:49.286220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.370 qpair failed and we were unable to recover it. 00:28:07.370 [2024-11-28 08:26:49.286459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.370 [2024-11-28 08:26:49.286472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.370 qpair failed and we were unable to recover it. 00:28:07.370 [2024-11-28 08:26:49.286732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.370 [2024-11-28 08:26:49.286745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.370 qpair failed and we were unable to recover it. 00:28:07.370 [2024-11-28 08:26:49.286889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.370 [2024-11-28 08:26:49.286901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.370 qpair failed and we were unable to recover it. 00:28:07.370 [2024-11-28 08:26:49.287147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.370 [2024-11-28 08:26:49.287161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.370 qpair failed and we were unable to recover it. 
00:28:07.370 [2024-11-28 08:26:49.287259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.370 [2024-11-28 08:26:49.287272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.370 qpair failed and we were unable to recover it. 00:28:07.370 [2024-11-28 08:26:49.287505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.370 [2024-11-28 08:26:49.287518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.370 qpair failed and we were unable to recover it. 00:28:07.370 [2024-11-28 08:26:49.287612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.370 [2024-11-28 08:26:49.287625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.370 qpair failed and we were unable to recover it. 00:28:07.370 [2024-11-28 08:26:49.287765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.370 [2024-11-28 08:26:49.287778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.370 qpair failed and we were unable to recover it. 00:28:07.370 [2024-11-28 08:26:49.287940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.370 [2024-11-28 08:26:49.287958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.370 qpair failed and we were unable to recover it. 
00:28:07.370 [2024-11-28 08:26:49.288140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.370 [2024-11-28 08:26:49.288154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.370 qpair failed and we were unable to recover it. 00:28:07.370 [2024-11-28 08:26:49.288408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.370 [2024-11-28 08:26:49.288422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.370 qpair failed and we were unable to recover it. 00:28:07.370 [2024-11-28 08:26:49.288657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.370 [2024-11-28 08:26:49.288669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.370 qpair failed and we were unable to recover it. 00:28:07.370 [2024-11-28 08:26:49.288830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.370 [2024-11-28 08:26:49.288843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.370 qpair failed and we were unable to recover it. 00:28:07.370 [2024-11-28 08:26:49.289082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.370 [2024-11-28 08:26:49.289096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.370 qpair failed and we were unable to recover it. 
00:28:07.370 [2024-11-28 08:26:49.289326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.370 [2024-11-28 08:26:49.289361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.370 qpair failed and we were unable to recover it. 00:28:07.370 [2024-11-28 08:26:49.289669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.370 [2024-11-28 08:26:49.289703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.370 qpair failed and we were unable to recover it. 00:28:07.370 [2024-11-28 08:26:49.289892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.370 [2024-11-28 08:26:49.289925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.370 qpair failed and we were unable to recover it. 00:28:07.370 [2024-11-28 08:26:49.290124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.370 [2024-11-28 08:26:49.290158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.370 qpair failed and we were unable to recover it. 00:28:07.370 [2024-11-28 08:26:49.290422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.370 [2024-11-28 08:26:49.290456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.370 qpair failed and we were unable to recover it. 
00:28:07.371 [2024-11-28 08:26:49.290668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.371 [2024-11-28 08:26:49.290702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.371 qpair failed and we were unable to recover it. 00:28:07.371 [2024-11-28 08:26:49.290890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.371 [2024-11-28 08:26:49.290903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.371 qpair failed and we were unable to recover it. 00:28:07.371 [2024-11-28 08:26:49.291152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.371 [2024-11-28 08:26:49.291188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.371 qpair failed and we were unable to recover it. 00:28:07.371 [2024-11-28 08:26:49.291425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.371 [2024-11-28 08:26:49.291459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.371 qpair failed and we were unable to recover it. 00:28:07.371 [2024-11-28 08:26:49.291742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.371 [2024-11-28 08:26:49.291757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.371 qpair failed and we were unable to recover it. 
00:28:07.371 [2024-11-28 08:26:49.292008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.371 [2024-11-28 08:26:49.292022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.371 qpair failed and we were unable to recover it. 00:28:07.371 [2024-11-28 08:26:49.292283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.371 [2024-11-28 08:26:49.292321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.371 qpair failed and we were unable to recover it. 00:28:07.371 [2024-11-28 08:26:49.292640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.371 [2024-11-28 08:26:49.292674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.371 qpair failed and we were unable to recover it. 00:28:07.371 [2024-11-28 08:26:49.292932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.371 [2024-11-28 08:26:49.292990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.371 qpair failed and we were unable to recover it. 00:28:07.371 [2024-11-28 08:26:49.293142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.371 [2024-11-28 08:26:49.293176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.371 qpair failed and we were unable to recover it. 
00:28:07.371 [2024-11-28 08:26:49.293389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.371 [2024-11-28 08:26:49.293423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.371 qpair failed and we were unable to recover it. 00:28:07.371 [2024-11-28 08:26:49.293624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.371 [2024-11-28 08:26:49.293658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.371 qpair failed and we were unable to recover it. 00:28:07.371 [2024-11-28 08:26:49.293973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.371 [2024-11-28 08:26:49.294008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.371 qpair failed and we were unable to recover it. 00:28:07.371 [2024-11-28 08:26:49.294290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.371 [2024-11-28 08:26:49.294325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.371 qpair failed and we were unable to recover it. 00:28:07.371 [2024-11-28 08:26:49.294628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.371 [2024-11-28 08:26:49.294663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.371 qpair failed and we were unable to recover it. 
00:28:07.371 [2024-11-28 08:26:49.294895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.371 [2024-11-28 08:26:49.294929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.371 qpair failed and we were unable to recover it. 00:28:07.371 [2024-11-28 08:26:49.295150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.371 [2024-11-28 08:26:49.295185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.371 qpair failed and we were unable to recover it. 00:28:07.371 [2024-11-28 08:26:49.295315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.371 [2024-11-28 08:26:49.295348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.371 qpair failed and we were unable to recover it. 00:28:07.371 [2024-11-28 08:26:49.295615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.371 [2024-11-28 08:26:49.295650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.371 qpair failed and we were unable to recover it. 00:28:07.371 [2024-11-28 08:26:49.295927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.371 [2024-11-28 08:26:49.295975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.371 qpair failed and we were unable to recover it. 
00:28:07.371 [2024-11-28 08:26:49.296255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.371 [2024-11-28 08:26:49.296289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.371 qpair failed and we were unable to recover it. 00:28:07.371 [2024-11-28 08:26:49.296591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.371 [2024-11-28 08:26:49.296624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.371 qpair failed and we were unable to recover it. 00:28:07.371 [2024-11-28 08:26:49.296904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.371 [2024-11-28 08:26:49.296917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.371 qpair failed and we were unable to recover it. 00:28:07.371 [2024-11-28 08:26:49.297090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.371 [2024-11-28 08:26:49.297103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.371 qpair failed and we were unable to recover it. 00:28:07.371 [2024-11-28 08:26:49.297273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.371 [2024-11-28 08:26:49.297307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.371 qpair failed and we were unable to recover it. 
00:28:07.371 [2024-11-28 08:26:49.297570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.371 [2024-11-28 08:26:49.297605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.371 qpair failed and we were unable to recover it. 
00:28:07.371 [... identical connect() failures (errno = 111, connection refused) for tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 repeat continuously through 08:26:49.325838; every attempt ends with "qpair failed and we were unable to recover it." ...]
00:28:07.375 [2024-11-28 08:26:49.326115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.375 [2024-11-28 08:26:49.326129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.376 qpair failed and we were unable to recover it. 00:28:07.376 [2024-11-28 08:26:49.326290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.376 [2024-11-28 08:26:49.326323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.376 qpair failed and we were unable to recover it. 00:28:07.376 [2024-11-28 08:26:49.326611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.376 [2024-11-28 08:26:49.326645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.376 qpair failed and we were unable to recover it. 00:28:07.376 [2024-11-28 08:26:49.326942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.376 [2024-11-28 08:26:49.326960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.376 qpair failed and we were unable to recover it. 00:28:07.376 [2024-11-28 08:26:49.327238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.376 [2024-11-28 08:26:49.327251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.376 qpair failed and we were unable to recover it. 
00:28:07.376 [2024-11-28 08:26:49.327464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.376 [2024-11-28 08:26:49.327477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.376 qpair failed and we were unable to recover it. 00:28:07.376 [2024-11-28 08:26:49.327717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.376 [2024-11-28 08:26:49.327730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.376 qpair failed and we were unable to recover it. 00:28:07.376 [2024-11-28 08:26:49.327916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.376 [2024-11-28 08:26:49.327928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.376 qpair failed and we were unable to recover it. 00:28:07.376 [2024-11-28 08:26:49.328091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.376 [2024-11-28 08:26:49.328105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.376 qpair failed and we were unable to recover it. 00:28:07.376 [2024-11-28 08:26:49.328271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.376 [2024-11-28 08:26:49.328285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.376 qpair failed and we were unable to recover it. 
00:28:07.376 [2024-11-28 08:26:49.328501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.376 [2024-11-28 08:26:49.328513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.376 qpair failed and we were unable to recover it. 00:28:07.376 [2024-11-28 08:26:49.328664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.376 [2024-11-28 08:26:49.328676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.376 qpair failed and we were unable to recover it. 00:28:07.376 [2024-11-28 08:26:49.328937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.376 [2024-11-28 08:26:49.328983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.376 qpair failed and we were unable to recover it. 00:28:07.376 [2024-11-28 08:26:49.329220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.376 [2024-11-28 08:26:49.329254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.376 qpair failed and we were unable to recover it. 00:28:07.376 [2024-11-28 08:26:49.329562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.376 [2024-11-28 08:26:49.329596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.376 qpair failed and we were unable to recover it. 
00:28:07.376 [2024-11-28 08:26:49.329833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.376 [2024-11-28 08:26:49.329866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.376 qpair failed and we were unable to recover it. 00:28:07.376 [2024-11-28 08:26:49.330006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.376 [2024-11-28 08:26:49.330042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.376 qpair failed and we were unable to recover it. 00:28:07.376 [2024-11-28 08:26:49.330288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.376 [2024-11-28 08:26:49.330301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.376 qpair failed and we were unable to recover it. 00:28:07.376 [2024-11-28 08:26:49.330462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.376 [2024-11-28 08:26:49.330475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.376 qpair failed and we were unable to recover it. 00:28:07.376 [2024-11-28 08:26:49.330700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.376 [2024-11-28 08:26:49.330714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.376 qpair failed and we were unable to recover it. 
00:28:07.376 [2024-11-28 08:26:49.330869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.376 [2024-11-28 08:26:49.330902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.376 qpair failed and we were unable to recover it. 00:28:07.376 [2024-11-28 08:26:49.331244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.376 [2024-11-28 08:26:49.331279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.376 qpair failed and we were unable to recover it. 00:28:07.376 [2024-11-28 08:26:49.331569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.376 [2024-11-28 08:26:49.331602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.376 qpair failed and we were unable to recover it. 00:28:07.376 [2024-11-28 08:26:49.331923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.376 [2024-11-28 08:26:49.331935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.376 qpair failed and we were unable to recover it. 00:28:07.376 [2024-11-28 08:26:49.332127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.376 [2024-11-28 08:26:49.332143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.376 qpair failed and we were unable to recover it. 
00:28:07.376 [2024-11-28 08:26:49.332310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.376 [2024-11-28 08:26:49.332343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.376 qpair failed and we were unable to recover it. 00:28:07.376 [2024-11-28 08:26:49.332560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.376 [2024-11-28 08:26:49.332594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.376 qpair failed and we were unable to recover it. 00:28:07.376 [2024-11-28 08:26:49.332820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.376 [2024-11-28 08:26:49.332853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.376 qpair failed and we were unable to recover it. 00:28:07.376 [2024-11-28 08:26:49.333142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.376 [2024-11-28 08:26:49.333177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.376 qpair failed and we were unable to recover it. 00:28:07.376 [2024-11-28 08:26:49.333397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.376 [2024-11-28 08:26:49.333431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.376 qpair failed and we were unable to recover it. 
00:28:07.376 [2024-11-28 08:26:49.333720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.376 [2024-11-28 08:26:49.333753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.377 qpair failed and we were unable to recover it. 00:28:07.377 [2024-11-28 08:26:49.333903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.377 [2024-11-28 08:26:49.333953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.377 qpair failed and we were unable to recover it. 00:28:07.377 [2024-11-28 08:26:49.334120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.377 [2024-11-28 08:26:49.334133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.377 qpair failed and we were unable to recover it. 00:28:07.377 [2024-11-28 08:26:49.334276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.377 [2024-11-28 08:26:49.334288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.377 qpair failed and we were unable to recover it. 00:28:07.377 [2024-11-28 08:26:49.334543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.377 [2024-11-28 08:26:49.334577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.377 qpair failed and we were unable to recover it. 
00:28:07.377 [2024-11-28 08:26:49.334883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.377 [2024-11-28 08:26:49.334915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.377 qpair failed and we were unable to recover it. 00:28:07.377 [2024-11-28 08:26:49.335215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.377 [2024-11-28 08:26:49.335228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.377 qpair failed and we were unable to recover it. 00:28:07.377 [2024-11-28 08:26:49.335392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.377 [2024-11-28 08:26:49.335405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.377 qpair failed and we were unable to recover it. 00:28:07.377 [2024-11-28 08:26:49.335649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.377 [2024-11-28 08:26:49.335662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.377 qpair failed and we were unable to recover it. 00:28:07.377 [2024-11-28 08:26:49.335924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.377 [2024-11-28 08:26:49.335937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.377 qpair failed and we were unable to recover it. 
00:28:07.377 [2024-11-28 08:26:49.336101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.377 [2024-11-28 08:26:49.336115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.377 qpair failed and we were unable to recover it. 00:28:07.377 [2024-11-28 08:26:49.336276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.377 [2024-11-28 08:26:49.336289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.377 qpair failed and we were unable to recover it. 00:28:07.377 [2024-11-28 08:26:49.336517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.377 [2024-11-28 08:26:49.336529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.377 qpair failed and we were unable to recover it. 00:28:07.377 [2024-11-28 08:26:49.336643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.377 [2024-11-28 08:26:49.336656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.377 qpair failed and we were unable to recover it. 00:28:07.377 [2024-11-28 08:26:49.336900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.377 [2024-11-28 08:26:49.336933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.377 qpair failed and we were unable to recover it. 
00:28:07.377 [2024-11-28 08:26:49.337266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.377 [2024-11-28 08:26:49.337300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.377 qpair failed and we were unable to recover it. 00:28:07.377 [2024-11-28 08:26:49.337588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.377 [2024-11-28 08:26:49.337624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.377 qpair failed and we were unable to recover it. 00:28:07.377 [2024-11-28 08:26:49.337904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.377 [2024-11-28 08:26:49.337938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.377 qpair failed and we were unable to recover it. 00:28:07.377 [2024-11-28 08:26:49.338167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.377 [2024-11-28 08:26:49.338201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.377 qpair failed and we were unable to recover it. 00:28:07.377 [2024-11-28 08:26:49.338526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.377 [2024-11-28 08:26:49.338560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.377 qpair failed and we were unable to recover it. 
00:28:07.377 [2024-11-28 08:26:49.338852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.377 [2024-11-28 08:26:49.338886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.377 qpair failed and we were unable to recover it. 00:28:07.377 [2024-11-28 08:26:49.339160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.377 [2024-11-28 08:26:49.339196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.377 qpair failed and we were unable to recover it. 00:28:07.377 [2024-11-28 08:26:49.339485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.377 [2024-11-28 08:26:49.339518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.377 qpair failed and we were unable to recover it. 00:28:07.377 [2024-11-28 08:26:49.339798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.377 [2024-11-28 08:26:49.339810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.377 qpair failed and we were unable to recover it. 00:28:07.377 [2024-11-28 08:26:49.340043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.377 [2024-11-28 08:26:49.340078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.377 qpair failed and we were unable to recover it. 
00:28:07.377 [2024-11-28 08:26:49.340393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.377 [2024-11-28 08:26:49.340426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.377 qpair failed and we were unable to recover it. 00:28:07.377 [2024-11-28 08:26:49.340665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.377 [2024-11-28 08:26:49.340699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.377 qpair failed and we were unable to recover it. 00:28:07.377 [2024-11-28 08:26:49.340913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.377 [2024-11-28 08:26:49.340973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.377 qpair failed and we were unable to recover it. 00:28:07.377 [2024-11-28 08:26:49.341234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.377 [2024-11-28 08:26:49.341268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.377 qpair failed and we were unable to recover it. 00:28:07.377 [2024-11-28 08:26:49.341570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.377 [2024-11-28 08:26:49.341604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.377 qpair failed and we were unable to recover it. 
00:28:07.377 [2024-11-28 08:26:49.341875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.377 [2024-11-28 08:26:49.341909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.378 qpair failed and we were unable to recover it. 00:28:07.378 [2024-11-28 08:26:49.342120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.378 [2024-11-28 08:26:49.342133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.378 qpair failed and we were unable to recover it. 00:28:07.378 [2024-11-28 08:26:49.342279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.378 [2024-11-28 08:26:49.342292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.378 qpair failed and we were unable to recover it. 00:28:07.378 [2024-11-28 08:26:49.342504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.378 [2024-11-28 08:26:49.342517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.378 qpair failed and we were unable to recover it. 00:28:07.378 [2024-11-28 08:26:49.342661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.378 [2024-11-28 08:26:49.342676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.378 qpair failed and we were unable to recover it. 
00:28:07.378 [2024-11-28 08:26:49.342900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.378 [2024-11-28 08:26:49.342934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.378 qpair failed and we were unable to recover it. 00:28:07.378 [2024-11-28 08:26:49.343231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.378 [2024-11-28 08:26:49.343265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.378 qpair failed and we were unable to recover it. 00:28:07.378 [2024-11-28 08:26:49.343474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.378 [2024-11-28 08:26:49.343507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.378 qpair failed and we were unable to recover it. 00:28:07.378 [2024-11-28 08:26:49.343714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.378 [2024-11-28 08:26:49.343747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.378 qpair failed and we were unable to recover it. 00:28:07.378 [2024-11-28 08:26:49.343959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.378 [2024-11-28 08:26:49.343989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.378 qpair failed and we were unable to recover it. 
00:28:07.378 [2024-11-28 08:26:49.344166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.378 [2024-11-28 08:26:49.344178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.378 qpair failed and we were unable to recover it. 00:28:07.378 [2024-11-28 08:26:49.344340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.378 [2024-11-28 08:26:49.344373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.378 qpair failed and we were unable to recover it. 00:28:07.378 [2024-11-28 08:26:49.344663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.378 [2024-11-28 08:26:49.344696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.378 qpair failed and we were unable to recover it. 00:28:07.378 [2024-11-28 08:26:49.344910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.378 [2024-11-28 08:26:49.344944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.378 qpair failed and we were unable to recover it. 00:28:07.378 [2024-11-28 08:26:49.345167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.378 [2024-11-28 08:26:49.345201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.378 qpair failed and we were unable to recover it. 
00:28:07.378 [2024-11-28 08:26:49.345465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.378 [2024-11-28 08:26:49.345499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.378 qpair failed and we were unable to recover it.
[the three messages above repeat for tqpair=0x7f6c34000b90 through 08:26:49.359483]
00:28:07.380 [2024-11-28 08:26:49.360108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.380 [2024-11-28 08:26:49.360185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420 00:28:07.380 qpair failed and we were unable to recover it.
[the three messages above repeat for tqpair=0x7f6c30000b90 through 08:26:49.375570]
00:28:07.382 [2024-11-28 08:26:49.375837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.382 [2024-11-28 08:26:49.375872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420 00:28:07.382 qpair failed and we were unable to recover it. 00:28:07.382 [2024-11-28 08:26:49.376133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.382 [2024-11-28 08:26:49.376150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420 00:28:07.382 qpair failed and we were unable to recover it. 00:28:07.382 [2024-11-28 08:26:49.376395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.382 [2024-11-28 08:26:49.376428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420 00:28:07.382 qpair failed and we were unable to recover it. 00:28:07.382 [2024-11-28 08:26:49.376653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.382 [2024-11-28 08:26:49.376687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420 00:28:07.382 qpair failed and we were unable to recover it. 00:28:07.382 [2024-11-28 08:26:49.376837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.382 [2024-11-28 08:26:49.376870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420 00:28:07.382 qpair failed and we were unable to recover it. 
00:28:07.382 [2024-11-28 08:26:49.377076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.382 [2024-11-28 08:26:49.377094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420 00:28:07.382 qpair failed and we were unable to recover it. 00:28:07.382 [2024-11-28 08:26:49.377302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.382 [2024-11-28 08:26:49.377336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420 00:28:07.382 qpair failed and we were unable to recover it. 00:28:07.382 [2024-11-28 08:26:49.377568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.382 [2024-11-28 08:26:49.377602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420 00:28:07.382 qpair failed and we were unable to recover it. 00:28:07.382 [2024-11-28 08:26:49.377755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.382 [2024-11-28 08:26:49.377790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420 00:28:07.382 qpair failed and we were unable to recover it. 00:28:07.382 [2024-11-28 08:26:49.378051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.382 [2024-11-28 08:26:49.378088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420 00:28:07.382 qpair failed and we were unable to recover it. 
00:28:07.382 [2024-11-28 08:26:49.378381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.382 [2024-11-28 08:26:49.378416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420 00:28:07.382 qpair failed and we were unable to recover it. 00:28:07.382 [2024-11-28 08:26:49.378615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.382 [2024-11-28 08:26:49.378650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420 00:28:07.383 qpair failed and we were unable to recover it. 00:28:07.383 [2024-11-28 08:26:49.378856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.383 [2024-11-28 08:26:49.378891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420 00:28:07.383 qpair failed and we were unable to recover it. 00:28:07.383 [2024-11-28 08:26:49.379134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.383 [2024-11-28 08:26:49.379171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420 00:28:07.383 qpair failed and we were unable to recover it. 00:28:07.383 [2024-11-28 08:26:49.379458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.383 [2024-11-28 08:26:49.379493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420 00:28:07.383 qpair failed and we were unable to recover it. 
00:28:07.383 [2024-11-28 08:26:49.379720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.383 [2024-11-28 08:26:49.379753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420 00:28:07.383 qpair failed and we were unable to recover it. 00:28:07.383 [2024-11-28 08:26:49.380060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.383 [2024-11-28 08:26:49.380096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420 00:28:07.383 qpair failed and we were unable to recover it. 00:28:07.383 [2024-11-28 08:26:49.380258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.383 [2024-11-28 08:26:49.380292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420 00:28:07.383 qpair failed and we were unable to recover it. 00:28:07.383 [2024-11-28 08:26:49.380578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.383 [2024-11-28 08:26:49.380611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420 00:28:07.383 qpair failed and we were unable to recover it. 00:28:07.383 [2024-11-28 08:26:49.380902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.383 [2024-11-28 08:26:49.380943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420 00:28:07.383 qpair failed and we were unable to recover it. 
00:28:07.383 [2024-11-28 08:26:49.381198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.383 [2024-11-28 08:26:49.381235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420 00:28:07.383 qpair failed and we were unable to recover it. 00:28:07.383 [2024-11-28 08:26:49.381583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.383 [2024-11-28 08:26:49.381664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.383 qpair failed and we were unable to recover it. 00:28:07.383 [2024-11-28 08:26:49.381974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.383 [2024-11-28 08:26:49.382055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.383 qpair failed and we were unable to recover it. 00:28:07.383 [2024-11-28 08:26:49.382413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.383 [2024-11-28 08:26:49.382494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.383 qpair failed and we were unable to recover it. 00:28:07.383 [2024-11-28 08:26:49.382785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.383 [2024-11-28 08:26:49.382824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.383 qpair failed and we were unable to recover it. 
00:28:07.383 [2024-11-28 08:26:49.383123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.383 [2024-11-28 08:26:49.383164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.383 qpair failed and we were unable to recover it. 00:28:07.383 [2024-11-28 08:26:49.383328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.383 [2024-11-28 08:26:49.383362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.383 qpair failed and we were unable to recover it. 00:28:07.383 [2024-11-28 08:26:49.383560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.383 [2024-11-28 08:26:49.383595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.383 qpair failed and we were unable to recover it. 00:28:07.383 [2024-11-28 08:26:49.383856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.383 [2024-11-28 08:26:49.383891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.383 qpair failed and we were unable to recover it. 00:28:07.383 [2024-11-28 08:26:49.384039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.383 [2024-11-28 08:26:49.384074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.383 qpair failed and we were unable to recover it. 
00:28:07.383 [2024-11-28 08:26:49.384352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.383 [2024-11-28 08:26:49.384386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.383 qpair failed and we were unable to recover it. 00:28:07.383 [2024-11-28 08:26:49.384580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.383 [2024-11-28 08:26:49.384614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.383 qpair failed and we were unable to recover it. 00:28:07.383 [2024-11-28 08:26:49.384877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.383 [2024-11-28 08:26:49.384917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.383 qpair failed and we were unable to recover it. 00:28:07.383 [2024-11-28 08:26:49.385095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.383 [2024-11-28 08:26:49.385112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.383 qpair failed and we were unable to recover it. 00:28:07.383 [2024-11-28 08:26:49.385343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.383 [2024-11-28 08:26:49.385376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.383 qpair failed and we were unable to recover it. 
00:28:07.383 [2024-11-28 08:26:49.385603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.383 [2024-11-28 08:26:49.385637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.383 qpair failed and we were unable to recover it. 00:28:07.383 [2024-11-28 08:26:49.385846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.383 [2024-11-28 08:26:49.385879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.383 qpair failed and we were unable to recover it. 00:28:07.383 [2024-11-28 08:26:49.386187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.383 [2024-11-28 08:26:49.386223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.383 qpair failed and we were unable to recover it. 00:28:07.383 [2024-11-28 08:26:49.386509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.383 [2024-11-28 08:26:49.386542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.383 qpair failed and we were unable to recover it. 00:28:07.383 [2024-11-28 08:26:49.386857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.383 [2024-11-28 08:26:49.386891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.383 qpair failed and we were unable to recover it. 
00:28:07.383 [2024-11-28 08:26:49.387201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.384 [2024-11-28 08:26:49.387219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.384 qpair failed and we were unable to recover it. 00:28:07.384 [2024-11-28 08:26:49.387457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.384 [2024-11-28 08:26:49.387474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.384 qpair failed and we were unable to recover it. 00:28:07.384 [2024-11-28 08:26:49.387730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.384 [2024-11-28 08:26:49.387764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.384 qpair failed and we were unable to recover it. 00:28:07.384 [2024-11-28 08:26:49.388000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.384 [2024-11-28 08:26:49.388036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.384 qpair failed and we were unable to recover it. 00:28:07.384 [2024-11-28 08:26:49.388252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.384 [2024-11-28 08:26:49.388286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.384 qpair failed and we were unable to recover it. 
00:28:07.384 [2024-11-28 08:26:49.388598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.384 [2024-11-28 08:26:49.388631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.384 qpair failed and we were unable to recover it. 00:28:07.384 [2024-11-28 08:26:49.388820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.384 [2024-11-28 08:26:49.388839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.384 qpair failed and we were unable to recover it. 00:28:07.384 [2024-11-28 08:26:49.388996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.384 [2024-11-28 08:26:49.389014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.384 qpair failed and we were unable to recover it. 00:28:07.384 [2024-11-28 08:26:49.389222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.384 [2024-11-28 08:26:49.389261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.384 qpair failed and we were unable to recover it. 00:28:07.384 [2024-11-28 08:26:49.389496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.384 [2024-11-28 08:26:49.389529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.384 qpair failed and we were unable to recover it. 
00:28:07.384 [2024-11-28 08:26:49.389763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.384 [2024-11-28 08:26:49.389797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.384 qpair failed and we were unable to recover it. 00:28:07.384 [2024-11-28 08:26:49.390089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.384 [2024-11-28 08:26:49.390126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.384 qpair failed and we were unable to recover it. 00:28:07.384 [2024-11-28 08:26:49.390359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.384 [2024-11-28 08:26:49.390392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.384 qpair failed and we were unable to recover it. 00:28:07.384 [2024-11-28 08:26:49.390694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.384 [2024-11-28 08:26:49.390728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.384 qpair failed and we were unable to recover it. 00:28:07.384 [2024-11-28 08:26:49.391000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.384 [2024-11-28 08:26:49.391036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.384 qpair failed and we were unable to recover it. 
00:28:07.384 [2024-11-28 08:26:49.391267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.384 [2024-11-28 08:26:49.391301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.384 qpair failed and we were unable to recover it. 00:28:07.384 [2024-11-28 08:26:49.391517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.384 [2024-11-28 08:26:49.391551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.384 qpair failed and we were unable to recover it. 00:28:07.384 [2024-11-28 08:26:49.391842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.384 [2024-11-28 08:26:49.391876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.384 qpair failed and we were unable to recover it. 00:28:07.384 [2024-11-28 08:26:49.392153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.384 [2024-11-28 08:26:49.392188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.384 qpair failed and we were unable to recover it. 00:28:07.384 [2024-11-28 08:26:49.392401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.384 [2024-11-28 08:26:49.392434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.384 qpair failed and we were unable to recover it. 
00:28:07.384 [2024-11-28 08:26:49.392746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.384 [2024-11-28 08:26:49.392779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.384 qpair failed and we were unable to recover it. 00:28:07.384 [2024-11-28 08:26:49.393071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.384 [2024-11-28 08:26:49.393106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.384 qpair failed and we were unable to recover it. 00:28:07.384 [2024-11-28 08:26:49.393405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.384 [2024-11-28 08:26:49.393439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.384 qpair failed and we were unable to recover it. 00:28:07.384 [2024-11-28 08:26:49.393636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.384 [2024-11-28 08:26:49.393670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.384 qpair failed and we were unable to recover it. 00:28:07.384 [2024-11-28 08:26:49.393898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.384 [2024-11-28 08:26:49.393932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.384 qpair failed and we were unable to recover it. 
00:28:07.384 [2024-11-28 08:26:49.394231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.384 [2024-11-28 08:26:49.394250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.384 qpair failed and we were unable to recover it. 00:28:07.384 [2024-11-28 08:26:49.394501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.384 [2024-11-28 08:26:49.394534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.384 qpair failed and we were unable to recover it. 00:28:07.384 [2024-11-28 08:26:49.394728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.384 [2024-11-28 08:26:49.394762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.384 qpair failed and we were unable to recover it. 00:28:07.384 [2024-11-28 08:26:49.395028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.384 [2024-11-28 08:26:49.395064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.384 qpair failed and we were unable to recover it. 00:28:07.384 [2024-11-28 08:26:49.395349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.384 [2024-11-28 08:26:49.395381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.384 qpair failed and we were unable to recover it. 
00:28:07.384 [2024-11-28 08:26:49.395648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.384 [2024-11-28 08:26:49.395681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.384 qpair failed and we were unable to recover it. 00:28:07.384 [2024-11-28 08:26:49.395909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.385 [2024-11-28 08:26:49.395942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.385 qpair failed and we were unable to recover it. 00:28:07.385 [2024-11-28 08:26:49.396217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.385 [2024-11-28 08:26:49.396251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.385 qpair failed and we were unable to recover it. 00:28:07.385 [2024-11-28 08:26:49.396537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.385 [2024-11-28 08:26:49.396571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.385 qpair failed and we were unable to recover it. 00:28:07.385 [2024-11-28 08:26:49.396860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.385 [2024-11-28 08:26:49.396893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.385 qpair failed and we were unable to recover it. 
00:28:07.385 [2024-11-28 08:26:49.397176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.385 [2024-11-28 08:26:49.397216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.385 qpair failed and we were unable to recover it.
[... the same pair of errors — posix.c:1054:posix_sock_create: connect() failed, errno = 111, followed by nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 — repeats 115 times between 08:26:49.397 and 08:26:49.428; every attempt ends with "qpair failed and we were unable to recover it." ...]
00:28:07.389 [2024-11-28 08:26:49.428022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.389 [2024-11-28 08:26:49.428058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.389 qpair failed and we were unable to recover it. 00:28:07.389 [2024-11-28 08:26:49.428351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.389 [2024-11-28 08:26:49.428368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.389 qpair failed and we were unable to recover it. 00:28:07.389 [2024-11-28 08:26:49.428592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.389 [2024-11-28 08:26:49.428609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.389 qpair failed and we were unable to recover it. 00:28:07.389 [2024-11-28 08:26:49.428862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.389 [2024-11-28 08:26:49.428901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.389 qpair failed and we were unable to recover it. 00:28:07.389 [2024-11-28 08:26:49.429174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.389 [2024-11-28 08:26:49.429209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.389 qpair failed and we were unable to recover it. 
00:28:07.389 [2024-11-28 08:26:49.429474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.389 [2024-11-28 08:26:49.429508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.389 qpair failed and we were unable to recover it. 00:28:07.389 [2024-11-28 08:26:49.429799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.389 [2024-11-28 08:26:49.429833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.389 qpair failed and we were unable to recover it. 00:28:07.389 [2024-11-28 08:26:49.430115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.389 [2024-11-28 08:26:49.430151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.389 qpair failed and we were unable to recover it. 00:28:07.389 [2024-11-28 08:26:49.430437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.389 [2024-11-28 08:26:49.430454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.389 qpair failed and we were unable to recover it. 00:28:07.389 [2024-11-28 08:26:49.430628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.389 [2024-11-28 08:26:49.430646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.389 qpair failed and we were unable to recover it. 
00:28:07.389 [2024-11-28 08:26:49.430822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.389 [2024-11-28 08:26:49.430839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.389 qpair failed and we were unable to recover it. 00:28:07.389 [2024-11-28 08:26:49.431019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.389 [2024-11-28 08:26:49.431036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.389 qpair failed and we were unable to recover it. 00:28:07.389 [2024-11-28 08:26:49.431209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.389 [2024-11-28 08:26:49.431227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.389 qpair failed and we were unable to recover it. 00:28:07.389 [2024-11-28 08:26:49.431405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.389 [2024-11-28 08:26:49.431438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.389 qpair failed and we were unable to recover it. 00:28:07.389 [2024-11-28 08:26:49.431569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.389 [2024-11-28 08:26:49.431603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.389 qpair failed and we were unable to recover it. 
00:28:07.389 [2024-11-28 08:26:49.431866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.389 [2024-11-28 08:26:49.431899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.389 qpair failed and we were unable to recover it. 00:28:07.389 [2024-11-28 08:26:49.432170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.389 [2024-11-28 08:26:49.432206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.389 qpair failed and we were unable to recover it. 00:28:07.389 [2024-11-28 08:26:49.432500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.389 [2024-11-28 08:26:49.432534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.389 qpair failed and we were unable to recover it. 00:28:07.389 [2024-11-28 08:26:49.432848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.389 [2024-11-28 08:26:49.432883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.389 qpair failed and we were unable to recover it. 00:28:07.389 [2024-11-28 08:26:49.433108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.389 [2024-11-28 08:26:49.433144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.389 qpair failed and we were unable to recover it. 
00:28:07.389 [2024-11-28 08:26:49.433336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.389 [2024-11-28 08:26:49.433371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.389 qpair failed and we were unable to recover it. 00:28:07.389 [2024-11-28 08:26:49.433582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.389 [2024-11-28 08:26:49.433615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.389 qpair failed and we were unable to recover it. 00:28:07.389 [2024-11-28 08:26:49.433818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.389 [2024-11-28 08:26:49.433853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.389 qpair failed and we were unable to recover it. 00:28:07.389 [2024-11-28 08:26:49.434158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.389 [2024-11-28 08:26:49.434176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.389 qpair failed and we were unable to recover it. 00:28:07.389 [2024-11-28 08:26:49.434365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.389 [2024-11-28 08:26:49.434382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.389 qpair failed and we were unable to recover it. 
00:28:07.389 [2024-11-28 08:26:49.434661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.390 [2024-11-28 08:26:49.434695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.390 qpair failed and we were unable to recover it. 00:28:07.390 [2024-11-28 08:26:49.434913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.390 [2024-11-28 08:26:49.434959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.390 qpair failed and we were unable to recover it. 00:28:07.390 [2024-11-28 08:26:49.435183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.390 [2024-11-28 08:26:49.435201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.390 qpair failed and we were unable to recover it. 00:28:07.390 [2024-11-28 08:26:49.435455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.390 [2024-11-28 08:26:49.435488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.390 qpair failed and we were unable to recover it. 00:28:07.390 [2024-11-28 08:26:49.435772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.390 [2024-11-28 08:26:49.435806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.390 qpair failed and we were unable to recover it. 
00:28:07.390 [2024-11-28 08:26:49.436061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.390 [2024-11-28 08:26:49.436079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.390 qpair failed and we were unable to recover it. 00:28:07.390 [2024-11-28 08:26:49.436305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.390 [2024-11-28 08:26:49.436323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.390 qpair failed and we were unable to recover it. 00:28:07.390 [2024-11-28 08:26:49.436546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.390 [2024-11-28 08:26:49.436563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.390 qpair failed and we were unable to recover it. 00:28:07.390 [2024-11-28 08:26:49.436816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.390 [2024-11-28 08:26:49.436857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.390 qpair failed and we were unable to recover it. 00:28:07.390 [2024-11-28 08:26:49.437177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.390 [2024-11-28 08:26:49.437213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.390 qpair failed and we were unable to recover it. 
00:28:07.390 [2024-11-28 08:26:49.437513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.390 [2024-11-28 08:26:49.437554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.390 qpair failed and we were unable to recover it. 00:28:07.390 [2024-11-28 08:26:49.437843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.390 [2024-11-28 08:26:49.437877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.390 qpair failed and we were unable to recover it. 00:28:07.390 [2024-11-28 08:26:49.438078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.390 [2024-11-28 08:26:49.438114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.390 qpair failed and we were unable to recover it. 00:28:07.390 [2024-11-28 08:26:49.438312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.390 [2024-11-28 08:26:49.438346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.390 qpair failed and we were unable to recover it. 00:28:07.390 [2024-11-28 08:26:49.438577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.390 [2024-11-28 08:26:49.438611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.390 qpair failed and we were unable to recover it. 
00:28:07.390 [2024-11-28 08:26:49.438876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.390 [2024-11-28 08:26:49.438909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.390 qpair failed and we were unable to recover it. 00:28:07.390 [2024-11-28 08:26:49.439111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.390 [2024-11-28 08:26:49.439129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.390 qpair failed and we were unable to recover it. 00:28:07.390 [2024-11-28 08:26:49.439403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.390 [2024-11-28 08:26:49.439435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.390 qpair failed and we were unable to recover it. 00:28:07.390 [2024-11-28 08:26:49.439720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.390 [2024-11-28 08:26:49.439753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.390 qpair failed and we were unable to recover it. 00:28:07.390 [2024-11-28 08:26:49.439959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.390 [2024-11-28 08:26:49.439976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.390 qpair failed and we were unable to recover it. 
00:28:07.390 [2024-11-28 08:26:49.440199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.390 [2024-11-28 08:26:49.440217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.390 qpair failed and we were unable to recover it. 00:28:07.390 [2024-11-28 08:26:49.440439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.390 [2024-11-28 08:26:49.440456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.390 qpair failed and we were unable to recover it. 00:28:07.390 [2024-11-28 08:26:49.440726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.390 [2024-11-28 08:26:49.440744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.390 qpair failed and we were unable to recover it. 00:28:07.390 [2024-11-28 08:26:49.440986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.390 [2024-11-28 08:26:49.441004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.390 qpair failed and we were unable to recover it. 00:28:07.390 [2024-11-28 08:26:49.441181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.390 [2024-11-28 08:26:49.441198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.390 qpair failed and we were unable to recover it. 
00:28:07.390 [2024-11-28 08:26:49.441386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.390 [2024-11-28 08:26:49.441419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.390 qpair failed and we were unable to recover it. 00:28:07.390 [2024-11-28 08:26:49.441577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.390 [2024-11-28 08:26:49.441610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.390 qpair failed and we were unable to recover it. 00:28:07.390 [2024-11-28 08:26:49.441892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.390 [2024-11-28 08:26:49.441926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.390 qpair failed and we were unable to recover it. 00:28:07.390 [2024-11-28 08:26:49.442222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.390 [2024-11-28 08:26:49.442257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.390 qpair failed and we were unable to recover it. 00:28:07.390 [2024-11-28 08:26:49.442527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.391 [2024-11-28 08:26:49.442545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.391 qpair failed and we were unable to recover it. 
00:28:07.391 [2024-11-28 08:26:49.442727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.391 [2024-11-28 08:26:49.442744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.391 qpair failed and we were unable to recover it. 00:28:07.391 [2024-11-28 08:26:49.442970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.391 [2024-11-28 08:26:49.442989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.391 qpair failed and we were unable to recover it. 00:28:07.391 [2024-11-28 08:26:49.443181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.391 [2024-11-28 08:26:49.443214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.391 qpair failed and we were unable to recover it. 00:28:07.391 [2024-11-28 08:26:49.443438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.391 [2024-11-28 08:26:49.443471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.391 qpair failed and we were unable to recover it. 00:28:07.391 [2024-11-28 08:26:49.443700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.391 [2024-11-28 08:26:49.443734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.391 qpair failed and we were unable to recover it. 
00:28:07.391 [2024-11-28 08:26:49.443885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.391 [2024-11-28 08:26:49.443919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.391 qpair failed and we were unable to recover it. 00:28:07.391 [2024-11-28 08:26:49.444216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.391 [2024-11-28 08:26:49.444250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.391 qpair failed and we were unable to recover it. 00:28:07.391 [2024-11-28 08:26:49.444453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.391 [2024-11-28 08:26:49.444473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.391 qpair failed and we were unable to recover it. 00:28:07.391 [2024-11-28 08:26:49.444637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.391 [2024-11-28 08:26:49.444670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.391 qpair failed and we were unable to recover it. 00:28:07.391 [2024-11-28 08:26:49.444892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.391 [2024-11-28 08:26:49.444925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.391 qpair failed and we were unable to recover it. 
00:28:07.391 [2024-11-28 08:26:49.445199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.391 [2024-11-28 08:26:49.445234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.391 qpair failed and we were unable to recover it. 00:28:07.391 [2024-11-28 08:26:49.445445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.391 [2024-11-28 08:26:49.445479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.391 qpair failed and we were unable to recover it. 00:28:07.391 [2024-11-28 08:26:49.445689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.391 [2024-11-28 08:26:49.445723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.391 qpair failed and we were unable to recover it. 00:28:07.391 [2024-11-28 08:26:49.445994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.391 [2024-11-28 08:26:49.446031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.391 qpair failed and we were unable to recover it. 00:28:07.391 [2024-11-28 08:26:49.446241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.391 [2024-11-28 08:26:49.446275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.391 qpair failed and we were unable to recover it. 
00:28:07.391 [2024-11-28 08:26:49.446481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.391 [2024-11-28 08:26:49.446499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.391 qpair failed and we were unable to recover it. 00:28:07.391 [2024-11-28 08:26:49.446655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.391 [2024-11-28 08:26:49.446672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.391 qpair failed and we were unable to recover it. 00:28:07.391 [2024-11-28 08:26:49.446865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.391 [2024-11-28 08:26:49.446900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.391 qpair failed and we were unable to recover it. 00:28:07.391 [2024-11-28 08:26:49.447142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.391 [2024-11-28 08:26:49.447178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.391 qpair failed and we were unable to recover it. 00:28:07.391 [2024-11-28 08:26:49.447489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.391 [2024-11-28 08:26:49.447522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.391 qpair failed and we were unable to recover it. 
00:28:07.391 [2024-11-28 08:26:49.447725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.391 [2024-11-28 08:26:49.447760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420
00:28:07.391 qpair failed and we were unable to recover it.
00:28:07.391 [2024-11-28 08:26:49.447966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.391 [2024-11-28 08:26:49.448003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420
00:28:07.391 qpair failed and we were unable to recover it.
00:28:07.391 [2024-11-28 08:26:49.448250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.391 [2024-11-28 08:26:49.448267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420
00:28:07.391 qpair failed and we were unable to recover it.
00:28:07.391 [2024-11-28 08:26:49.448422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.391 [2024-11-28 08:26:49.448438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420
00:28:07.391 qpair failed and we were unable to recover it.
00:28:07.391 [2024-11-28 08:26:49.448552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.391 [2024-11-28 08:26:49.448569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420
00:28:07.391 qpair failed and we were unable to recover it.
00:28:07.391 [2024-11-28 08:26:49.448814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.391 [2024-11-28 08:26:49.448847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420
00:28:07.391 qpair failed and we were unable to recover it.
00:28:07.391 [2024-11-28 08:26:49.449052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.391 [2024-11-28 08:26:49.449088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420
00:28:07.391 qpair failed and we were unable to recover it.
00:28:07.391 [2024-11-28 08:26:49.449299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.391 [2024-11-28 08:26:49.449332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420
00:28:07.392 qpair failed and we were unable to recover it.
00:28:07.392 [2024-11-28 08:26:49.449612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.392 [2024-11-28 08:26:49.449647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420
00:28:07.392 qpair failed and we were unable to recover it.
00:28:07.392 [2024-11-28 08:26:49.449841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.392 [2024-11-28 08:26:49.449882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420
00:28:07.392 qpair failed and we were unable to recover it.
00:28:07.392 [2024-11-28 08:26:49.450130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.392 [2024-11-28 08:26:49.450148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420
00:28:07.392 qpair failed and we were unable to recover it.
00:28:07.392 [2024-11-28 08:26:49.450379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.392 [2024-11-28 08:26:49.450396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420
00:28:07.392 qpair failed and we were unable to recover it.
00:28:07.392 [2024-11-28 08:26:49.450618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.392 [2024-11-28 08:26:49.450635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420
00:28:07.392 qpair failed and we were unable to recover it.
00:28:07.392 [2024-11-28 08:26:49.450923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.392 [2024-11-28 08:26:49.450967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420
00:28:07.392 qpair failed and we were unable to recover it.
00:28:07.392 [2024-11-28 08:26:49.451261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.392 [2024-11-28 08:26:49.451296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420
00:28:07.392 qpair failed and we were unable to recover it.
00:28:07.392 [2024-11-28 08:26:49.451560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.392 [2024-11-28 08:26:49.451594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420
00:28:07.392 qpair failed and we were unable to recover it.
00:28:07.392 [2024-11-28 08:26:49.451861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.392 [2024-11-28 08:26:49.451909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420
00:28:07.392 qpair failed and we were unable to recover it.
00:28:07.392 [2024-11-28 08:26:49.452181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.392 [2024-11-28 08:26:49.452199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420
00:28:07.392 qpair failed and we were unable to recover it.
00:28:07.392 [2024-11-28 08:26:49.452374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.392 [2024-11-28 08:26:49.452391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420
00:28:07.392 qpair failed and we were unable to recover it.
00:28:07.392 [2024-11-28 08:26:49.452501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.392 [2024-11-28 08:26:49.452517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420
00:28:07.392 qpair failed and we were unable to recover it.
00:28:07.392 [2024-11-28 08:26:49.452782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.392 [2024-11-28 08:26:49.452816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420
00:28:07.392 qpair failed and we were unable to recover it.
00:28:07.392 [2024-11-28 08:26:49.453104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.392 [2024-11-28 08:26:49.453140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420
00:28:07.392 qpair failed and we were unable to recover it.
00:28:07.392 [2024-11-28 08:26:49.453345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.392 [2024-11-28 08:26:49.453379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420
00:28:07.392 qpair failed and we were unable to recover it.
00:28:07.392 [2024-11-28 08:26:49.453664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.392 [2024-11-28 08:26:49.453697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420
00:28:07.392 qpair failed and we were unable to recover it.
00:28:07.392 [2024-11-28 08:26:49.453980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.392 [2024-11-28 08:26:49.454016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420
00:28:07.392 qpair failed and we were unable to recover it.
00:28:07.392 [2024-11-28 08:26:49.454302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.392 [2024-11-28 08:26:49.454336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420
00:28:07.392 qpair failed and we were unable to recover it.
00:28:07.392 [2024-11-28 08:26:49.454621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.392 [2024-11-28 08:26:49.454655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420
00:28:07.392 qpair failed and we were unable to recover it.
00:28:07.392 [2024-11-28 08:26:49.454972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.392 [2024-11-28 08:26:49.455008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420
00:28:07.392 qpair failed and we were unable to recover it.
00:28:07.392 [2024-11-28 08:26:49.455275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.392 [2024-11-28 08:26:49.455295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420
00:28:07.392 qpair failed and we were unable to recover it.
00:28:07.392 [2024-11-28 08:26:49.455470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.392 [2024-11-28 08:26:49.455487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420
00:28:07.392 qpair failed and we were unable to recover it.
00:28:07.392 [2024-11-28 08:26:49.455663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.392 [2024-11-28 08:26:49.455698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420
00:28:07.392 qpair failed and we were unable to recover it.
00:28:07.392 [2024-11-28 08:26:49.456020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.392 [2024-11-28 08:26:49.456056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420
00:28:07.392 qpair failed and we were unable to recover it.
00:28:07.392 [2024-11-28 08:26:49.456327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.392 [2024-11-28 08:26:49.456362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420
00:28:07.392 qpair failed and we were unable to recover it.
00:28:07.392 [2024-11-28 08:26:49.456660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.392 [2024-11-28 08:26:49.456695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420
00:28:07.392 qpair failed and we were unable to recover it.
00:28:07.392 [2024-11-28 08:26:49.456969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.392 [2024-11-28 08:26:49.457004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420
00:28:07.392 qpair failed and we were unable to recover it.
00:28:07.392 [2024-11-28 08:26:49.457292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.392 [2024-11-28 08:26:49.457311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420
00:28:07.392 qpair failed and we were unable to recover it.
00:28:07.392 [2024-11-28 08:26:49.457497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.392 [2024-11-28 08:26:49.457514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420
00:28:07.392 qpair failed and we were unable to recover it.
00:28:07.393 [2024-11-28 08:26:49.457669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.393 [2024-11-28 08:26:49.457686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420
00:28:07.393 qpair failed and we were unable to recover it.
00:28:07.393 [2024-11-28 08:26:49.457959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.393 [2024-11-28 08:26:49.457977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420
00:28:07.393 qpair failed and we were unable to recover it.
00:28:07.393 [2024-11-28 08:26:49.458230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.393 [2024-11-28 08:26:49.458248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420
00:28:07.393 qpair failed and we were unable to recover it.
00:28:07.393 [2024-11-28 08:26:49.458473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.393 [2024-11-28 08:26:49.458490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420
00:28:07.393 qpair failed and we were unable to recover it.
00:28:07.393 [2024-11-28 08:26:49.458602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.393 [2024-11-28 08:26:49.458619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420
00:28:07.393 qpair failed and we were unable to recover it.
00:28:07.393 [2024-11-28 08:26:49.458788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.393 [2024-11-28 08:26:49.458823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420
00:28:07.393 qpair failed and we were unable to recover it.
00:28:07.393 [2024-11-28 08:26:49.459120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.393 [2024-11-28 08:26:49.459156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420
00:28:07.393 qpair failed and we were unable to recover it.
00:28:07.393 [2024-11-28 08:26:49.459427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.393 [2024-11-28 08:26:49.459461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420
00:28:07.393 qpair failed and we were unable to recover it.
00:28:07.393 [2024-11-28 08:26:49.459762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.393 [2024-11-28 08:26:49.459796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420
00:28:07.393 qpair failed and we were unable to recover it.
00:28:07.393 [2024-11-28 08:26:49.459999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.393 [2024-11-28 08:26:49.460035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420
00:28:07.393 qpair failed and we were unable to recover it.
00:28:07.393 [2024-11-28 08:26:49.460296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.393 [2024-11-28 08:26:49.460329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420
00:28:07.393 qpair failed and we were unable to recover it.
00:28:07.393 [2024-11-28 08:26:49.460632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.393 [2024-11-28 08:26:49.460649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420
00:28:07.393 qpair failed and we were unable to recover it.
00:28:07.393 [2024-11-28 08:26:49.460876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.393 [2024-11-28 08:26:49.460894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420
00:28:07.393 qpair failed and we were unable to recover it.
00:28:07.393 [2024-11-28 08:26:49.461003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.393 [2024-11-28 08:26:49.461021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420
00:28:07.393 qpair failed and we were unable to recover it.
00:28:07.393 [2024-11-28 08:26:49.461198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.393 [2024-11-28 08:26:49.461217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420
00:28:07.393 qpair failed and we were unable to recover it.
00:28:07.393 [2024-11-28 08:26:49.461410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.393 [2024-11-28 08:26:49.461444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420
00:28:07.393 qpair failed and we were unable to recover it.
00:28:07.393 [2024-11-28 08:26:49.461705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.393 [2024-11-28 08:26:49.461739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420
00:28:07.393 qpair failed and we were unable to recover it.
00:28:07.393 [2024-11-28 08:26:49.462052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.393 [2024-11-28 08:26:49.462089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420
00:28:07.393 qpair failed and we were unable to recover it.
00:28:07.393 [2024-11-28 08:26:49.462352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.393 [2024-11-28 08:26:49.462392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420
00:28:07.393 qpair failed and we were unable to recover it.
00:28:07.393 [2024-11-28 08:26:49.462594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.393 [2024-11-28 08:26:49.462628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420
00:28:07.393 qpair failed and we were unable to recover it.
00:28:07.393 [2024-11-28 08:26:49.462827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.393 [2024-11-28 08:26:49.462860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420
00:28:07.393 qpair failed and we were unable to recover it.
00:28:07.393 [2024-11-28 08:26:49.463146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.393 [2024-11-28 08:26:49.463182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420
00:28:07.393 qpair failed and we were unable to recover it.
00:28:07.393 [2024-11-28 08:26:49.463398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.393 [2024-11-28 08:26:49.463433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420
00:28:07.393 qpair failed and we were unable to recover it.
00:28:07.393 [2024-11-28 08:26:49.463696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.393 [2024-11-28 08:26:49.463730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420
00:28:07.393 qpair failed and we were unable to recover it.
00:28:07.393 [2024-11-28 08:26:49.463883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.393 [2024-11-28 08:26:49.463917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420
00:28:07.393 qpair failed and we were unable to recover it.
00:28:07.393 [2024-11-28 08:26:49.464234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.394 [2024-11-28 08:26:49.464252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420
00:28:07.394 qpair failed and we were unable to recover it.
00:28:07.394 [2024-11-28 08:26:49.464477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.394 [2024-11-28 08:26:49.464494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420
00:28:07.394 qpair failed and we were unable to recover it.
00:28:07.394 [2024-11-28 08:26:49.464652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.394 [2024-11-28 08:26:49.464686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420
00:28:07.394 qpair failed and we were unable to recover it.
00:28:07.394 [2024-11-28 08:26:49.464900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.394 [2024-11-28 08:26:49.464933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420
00:28:07.394 qpair failed and we were unable to recover it.
00:28:07.394 [2024-11-28 08:26:49.465257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.394 [2024-11-28 08:26:49.465293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420
00:28:07.394 qpair failed and we were unable to recover it.
00:28:07.394 [2024-11-28 08:26:49.465521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.394 [2024-11-28 08:26:49.465554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420
00:28:07.394 qpair failed and we were unable to recover it.
00:28:07.394 [2024-11-28 08:26:49.465704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.394 [2024-11-28 08:26:49.465737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420
00:28:07.394 qpair failed and we were unable to recover it.
00:28:07.394 [2024-11-28 08:26:49.466106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.394 [2024-11-28 08:26:49.466170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420
00:28:07.394 qpair failed and we were unable to recover it.
00:28:07.394 [2024-11-28 08:26:49.466481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.394 [2024-11-28 08:26:49.466522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.394 qpair failed and we were unable to recover it.
00:28:07.394 [2024-11-28 08:26:49.466726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.394 [2024-11-28 08:26:49.466764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.394 qpair failed and we were unable to recover it.
00:28:07.394 [2024-11-28 08:26:49.467065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.394 [2024-11-28 08:26:49.467102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.394 qpair failed and we were unable to recover it.
00:28:07.394 [2024-11-28 08:26:49.467320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.394 [2024-11-28 08:26:49.467354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.394 qpair failed and we were unable to recover it.
00:28:07.394 [2024-11-28 08:26:49.467650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.394 [2024-11-28 08:26:49.467684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.394 qpair failed and we were unable to recover it.
00:28:07.394 [2024-11-28 08:26:49.467984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.394 [2024-11-28 08:26:49.468019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.394 qpair failed and we were unable to recover it.
00:28:07.394 [2024-11-28 08:26:49.468294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.394 [2024-11-28 08:26:49.468329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.394 qpair failed and we were unable to recover it.
00:28:07.394 [2024-11-28 08:26:49.468593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.394 [2024-11-28 08:26:49.468627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.394 qpair failed and we were unable to recover it.
00:28:07.394 [2024-11-28 08:26:49.468840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.394 [2024-11-28 08:26:49.468874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.394 qpair failed and we were unable to recover it.
00:28:07.394 [2024-11-28 08:26:49.469131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.394 [2024-11-28 08:26:49.469145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.394 qpair failed and we were unable to recover it.
00:28:07.394 [2024-11-28 08:26:49.469252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.394 [2024-11-28 08:26:49.469264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.394 qpair failed and we were unable to recover it.
00:28:07.394 [2024-11-28 08:26:49.469442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.394 [2024-11-28 08:26:49.469476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.394 qpair failed and we were unable to recover it.
00:28:07.394 [2024-11-28 08:26:49.469776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.394 [2024-11-28 08:26:49.469819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.394 qpair failed and we were unable to recover it.
00:28:07.394 [2024-11-28 08:26:49.470104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.394 [2024-11-28 08:26:49.470118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.394 qpair failed and we were unable to recover it.
00:28:07.394 [2024-11-28 08:26:49.470268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.394 [2024-11-28 08:26:49.470302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.394 qpair failed and we were unable to recover it.
00:28:07.394 [2024-11-28 08:26:49.470511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.394 [2024-11-28 08:26:49.470544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.394 qpair failed and we were unable to recover it.
00:28:07.394 [2024-11-28 08:26:49.470805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.394 [2024-11-28 08:26:49.470839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.394 qpair failed and we were unable to recover it.
00:28:07.394 [2024-11-28 08:26:49.470989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.394 [2024-11-28 08:26:49.471025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.394 qpair failed and we were unable to recover it.
00:28:07.394 [2024-11-28 08:26:49.471290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.394 [2024-11-28 08:26:49.471326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.394 qpair failed and we were unable to recover it.
00:28:07.394 [2024-11-28 08:26:49.471585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.394 [2024-11-28 08:26:49.471597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.394 qpair failed and we were unable to recover it.
00:28:07.394 [2024-11-28 08:26:49.471837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.394 [2024-11-28 08:26:49.471865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.394 qpair failed and we were unable to recover it.
00:28:07.394 [2024-11-28 08:26:49.472145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.395 [2024-11-28 08:26:49.472181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.395 qpair failed and we were unable to recover it.
00:28:07.395 [2024-11-28 08:26:49.472470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.395 [2024-11-28 08:26:49.472504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.395 qpair failed and we were unable to recover it.
00:28:07.395 [2024-11-28 08:26:49.472716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.395 [2024-11-28 08:26:49.472751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.395 qpair failed and we were unable to recover it.
00:28:07.395 [2024-11-28 08:26:49.473014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.395 [2024-11-28 08:26:49.473049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.395 qpair failed and we were unable to recover it.
00:28:07.395 [2024-11-28 08:26:49.473268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.395 [2024-11-28 08:26:49.473302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.395 qpair failed and we were unable to recover it.
00:28:07.395 [2024-11-28 08:26:49.473577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.395 [2024-11-28 08:26:49.473611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.395 qpair failed and we were unable to recover it.
00:28:07.395 [2024-11-28 08:26:49.473904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.395 [2024-11-28 08:26:49.473940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.395 qpair failed and we were unable to recover it.
00:28:07.395 [2024-11-28 08:26:49.474202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.395 [2024-11-28 08:26:49.474215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.395 qpair failed and we were unable to recover it.
00:28:07.395 [2024-11-28 08:26:49.474451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.395 [2024-11-28 08:26:49.474486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.395 qpair failed and we were unable to recover it.
00:28:07.395 [2024-11-28 08:26:49.474771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.395 [2024-11-28 08:26:49.474805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.395 qpair failed and we were unable to recover it.
00:28:07.395 [2024-11-28 08:26:49.475121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.395 [2024-11-28 08:26:49.475156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.395 qpair failed and we were unable to recover it.
00:28:07.395 [2024-11-28 08:26:49.475420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.395 [2024-11-28 08:26:49.475455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.395 qpair failed and we were unable to recover it.
00:28:07.395 [2024-11-28 08:26:49.475722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.395 [2024-11-28 08:26:49.475755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.395 qpair failed and we were unable to recover it.
00:28:07.395 [2024-11-28 08:26:49.476020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.395 [2024-11-28 08:26:49.476056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.395 qpair failed and we were unable to recover it.
00:28:07.395 [2024-11-28 08:26:49.476351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.395 [2024-11-28 08:26:49.476386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.395 qpair failed and we were unable to recover it.
00:28:07.395 [2024-11-28 08:26:49.476671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.395 [2024-11-28 08:26:49.476705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.395 qpair failed and we were unable to recover it.
00:28:07.395 [2024-11-28 08:26:49.476993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.395 [2024-11-28 08:26:49.477028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.395 qpair failed and we were unable to recover it.
00:28:07.395 [2024-11-28 08:26:49.477311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.395 [2024-11-28 08:26:49.477345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.395 qpair failed and we were unable to recover it.
00:28:07.395 [2024-11-28 08:26:49.477647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.395 [2024-11-28 08:26:49.477728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420
00:28:07.395 qpair failed and we were unable to recover it.
00:28:07.395 [2024-11-28 08:26:49.478045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.395 [2024-11-28 08:26:49.478084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420
00:28:07.395 qpair failed and we were unable to recover it.
00:28:07.395 [2024-11-28 08:26:49.478355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.395 [2024-11-28 08:26:49.478390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420
00:28:07.395 qpair failed and we were unable to recover it.
00:28:07.395 [2024-11-28 08:26:49.478679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.395 [2024-11-28 08:26:49.478713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420
00:28:07.395 qpair failed and we were unable to recover it.
00:28:07.395 [2024-11-28 08:26:49.478996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.395 [2024-11-28 08:26:49.479032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420
00:28:07.395 qpair failed and we were unable to recover it.
00:28:07.395 [2024-11-28 08:26:49.479198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.395 [2024-11-28 08:26:49.479231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420
00:28:07.395 qpair failed and we were unable to recover it.
00:28:07.395 [2024-11-28 08:26:49.479451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.395 [2024-11-28 08:26:49.479485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.395 qpair failed and we were unable to recover it. 00:28:07.395 [2024-11-28 08:26:49.479793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.395 [2024-11-28 08:26:49.479827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.395 qpair failed and we were unable to recover it. 00:28:07.395 [2024-11-28 08:26:49.480122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.395 [2024-11-28 08:26:49.480157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.395 qpair failed and we were unable to recover it. 00:28:07.395 [2024-11-28 08:26:49.480424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.395 [2024-11-28 08:26:49.480441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.395 qpair failed and we were unable to recover it. 00:28:07.395 [2024-11-28 08:26:49.480598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.395 [2024-11-28 08:26:49.480614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.395 qpair failed and we were unable to recover it. 
00:28:07.395 [2024-11-28 08:26:49.480841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.396 [2024-11-28 08:26:49.480858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.396 qpair failed and we were unable to recover it. 00:28:07.396 [2024-11-28 08:26:49.481031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.396 [2024-11-28 08:26:49.481049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.396 qpair failed and we were unable to recover it. 00:28:07.396 [2024-11-28 08:26:49.481329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.396 [2024-11-28 08:26:49.481372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.396 qpair failed and we were unable to recover it. 00:28:07.396 [2024-11-28 08:26:49.481571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.396 [2024-11-28 08:26:49.481604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.396 qpair failed and we were unable to recover it. 00:28:07.396 [2024-11-28 08:26:49.481916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.396 [2024-11-28 08:26:49.481973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.396 qpair failed and we were unable to recover it. 
00:28:07.396 [2024-11-28 08:26:49.482186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.396 [2024-11-28 08:26:49.482221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.396 qpair failed and we were unable to recover it. 00:28:07.396 [2024-11-28 08:26:49.482465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.396 [2024-11-28 08:26:49.482481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.396 qpair failed and we were unable to recover it. 00:28:07.396 [2024-11-28 08:26:49.482673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.396 [2024-11-28 08:26:49.482691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.396 qpair failed and we were unable to recover it. 00:28:07.396 [2024-11-28 08:26:49.482862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.396 [2024-11-28 08:26:49.482879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.396 qpair failed and we were unable to recover it. 00:28:07.396 [2024-11-28 08:26:49.483092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.396 [2024-11-28 08:26:49.483127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.396 qpair failed and we were unable to recover it. 
00:28:07.396 [2024-11-28 08:26:49.483333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.396 [2024-11-28 08:26:49.483367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.396 qpair failed and we were unable to recover it. 00:28:07.396 [2024-11-28 08:26:49.483644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.396 [2024-11-28 08:26:49.483678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.396 qpair failed and we were unable to recover it. 00:28:07.396 [2024-11-28 08:26:49.483898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.396 [2024-11-28 08:26:49.483932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.396 qpair failed and we were unable to recover it. 00:28:07.396 [2024-11-28 08:26:49.484199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.396 [2024-11-28 08:26:49.484217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.396 qpair failed and we were unable to recover it. 00:28:07.396 [2024-11-28 08:26:49.484438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.396 [2024-11-28 08:26:49.484454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.396 qpair failed and we were unable to recover it. 
00:28:07.396 [2024-11-28 08:26:49.484630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.396 [2024-11-28 08:26:49.484647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.396 qpair failed and we were unable to recover it. 00:28:07.396 [2024-11-28 08:26:49.484831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.396 [2024-11-28 08:26:49.484865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.396 qpair failed and we were unable to recover it. 00:28:07.396 [2024-11-28 08:26:49.485149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.396 [2024-11-28 08:26:49.485185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.396 qpair failed and we were unable to recover it. 00:28:07.396 [2024-11-28 08:26:49.485328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.396 [2024-11-28 08:26:49.485363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.396 qpair failed and we were unable to recover it. 00:28:07.396 [2024-11-28 08:26:49.485627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.396 [2024-11-28 08:26:49.485661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.396 qpair failed and we were unable to recover it. 
00:28:07.396 [2024-11-28 08:26:49.485940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.396 [2024-11-28 08:26:49.485983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.396 qpair failed and we were unable to recover it. 00:28:07.396 [2024-11-28 08:26:49.486267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.396 [2024-11-28 08:26:49.486302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.396 qpair failed and we were unable to recover it. 00:28:07.396 [2024-11-28 08:26:49.486428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.396 [2024-11-28 08:26:49.486462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.396 qpair failed and we were unable to recover it. 00:28:07.396 [2024-11-28 08:26:49.486655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.396 [2024-11-28 08:26:49.486688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.396 qpair failed and we were unable to recover it. 00:28:07.396 [2024-11-28 08:26:49.486902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.396 [2024-11-28 08:26:49.486935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.396 qpair failed and we were unable to recover it. 
00:28:07.396 [2024-11-28 08:26:49.487153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.396 [2024-11-28 08:26:49.487189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.396 qpair failed and we were unable to recover it. 00:28:07.396 [2024-11-28 08:26:49.487398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.396 [2024-11-28 08:26:49.487431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.396 qpair failed and we were unable to recover it. 00:28:07.396 [2024-11-28 08:26:49.487577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.396 [2024-11-28 08:26:49.487611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.396 qpair failed and we were unable to recover it. 00:28:07.396 [2024-11-28 08:26:49.487899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.396 [2024-11-28 08:26:49.487934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.396 qpair failed and we were unable to recover it. 00:28:07.396 [2024-11-28 08:26:49.488154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.397 [2024-11-28 08:26:49.488189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.397 qpair failed and we were unable to recover it. 
00:28:07.397 [2024-11-28 08:26:49.488448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.397 [2024-11-28 08:26:49.488493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.397 qpair failed and we were unable to recover it. 00:28:07.397 [2024-11-28 08:26:49.488739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.397 [2024-11-28 08:26:49.488757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.397 qpair failed and we were unable to recover it. 00:28:07.397 [2024-11-28 08:26:49.489011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.397 [2024-11-28 08:26:49.489047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.397 qpair failed and we were unable to recover it. 00:28:07.397 [2024-11-28 08:26:49.489182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.397 [2024-11-28 08:26:49.489217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.397 qpair failed and we were unable to recover it. 00:28:07.397 [2024-11-28 08:26:49.489505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.397 [2024-11-28 08:26:49.489541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.397 qpair failed and we were unable to recover it. 
00:28:07.397 [2024-11-28 08:26:49.489774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.397 [2024-11-28 08:26:49.489808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.397 qpair failed and we were unable to recover it. 00:28:07.397 [2024-11-28 08:26:49.490008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.397 [2024-11-28 08:26:49.490043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.397 qpair failed and we were unable to recover it. 00:28:07.397 [2024-11-28 08:26:49.490308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.397 [2024-11-28 08:26:49.490325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.397 qpair failed and we were unable to recover it. 00:28:07.397 [2024-11-28 08:26:49.490547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.397 [2024-11-28 08:26:49.490565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.397 qpair failed and we were unable to recover it. 00:28:07.397 [2024-11-28 08:26:49.490851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.397 [2024-11-28 08:26:49.490884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.397 qpair failed and we were unable to recover it. 
00:28:07.397 [2024-11-28 08:26:49.491108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.397 [2024-11-28 08:26:49.491143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.397 qpair failed and we were unable to recover it. 00:28:07.397 [2024-11-28 08:26:49.491471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.397 [2024-11-28 08:26:49.491504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.397 qpair failed and we were unable to recover it. 00:28:07.397 [2024-11-28 08:26:49.491788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.397 [2024-11-28 08:26:49.491830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.397 qpair failed and we were unable to recover it. 00:28:07.397 [2024-11-28 08:26:49.492136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.397 [2024-11-28 08:26:49.492154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.397 qpair failed and we were unable to recover it. 00:28:07.397 [2024-11-28 08:26:49.492341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.397 [2024-11-28 08:26:49.492357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.397 qpair failed and we were unable to recover it. 
00:28:07.397 [2024-11-28 08:26:49.492508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.397 [2024-11-28 08:26:49.492525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.397 qpair failed and we were unable to recover it. 00:28:07.397 [2024-11-28 08:26:49.492777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.397 [2024-11-28 08:26:49.492810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.397 qpair failed and we were unable to recover it. 00:28:07.397 [2024-11-28 08:26:49.493043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.397 [2024-11-28 08:26:49.493078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.397 qpair failed and we were unable to recover it. 00:28:07.397 [2024-11-28 08:26:49.493387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.397 [2024-11-28 08:26:49.493420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.397 qpair failed and we were unable to recover it. 00:28:07.397 [2024-11-28 08:26:49.493704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.397 [2024-11-28 08:26:49.493739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.397 qpair failed and we were unable to recover it. 
00:28:07.397 [2024-11-28 08:26:49.494026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.397 [2024-11-28 08:26:49.494064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.397 qpair failed and we were unable to recover it. 00:28:07.397 [2024-11-28 08:26:49.494250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.397 [2024-11-28 08:26:49.494268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.397 qpair failed and we were unable to recover it. 00:28:07.397 [2024-11-28 08:26:49.494521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.397 [2024-11-28 08:26:49.494556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.397 qpair failed and we were unable to recover it. 00:28:07.397 [2024-11-28 08:26:49.494698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.397 [2024-11-28 08:26:49.494732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.397 qpair failed and we were unable to recover it. 00:28:07.397 [2024-11-28 08:26:49.495018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.397 [2024-11-28 08:26:49.495054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.397 qpair failed and we were unable to recover it. 
00:28:07.397 [2024-11-28 08:26:49.495340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.397 [2024-11-28 08:26:49.495374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.397 qpair failed and we were unable to recover it. 00:28:07.397 [2024-11-28 08:26:49.495657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.397 [2024-11-28 08:26:49.495691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.397 qpair failed and we were unable to recover it. 00:28:07.397 [2024-11-28 08:26:49.495829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.397 [2024-11-28 08:26:49.495864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.397 qpair failed and we were unable to recover it. 00:28:07.397 [2024-11-28 08:26:49.496176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.398 [2024-11-28 08:26:49.496211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.398 qpair failed and we were unable to recover it. 00:28:07.398 [2024-11-28 08:26:49.496504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.398 [2024-11-28 08:26:49.496539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.398 qpair failed and we were unable to recover it. 
00:28:07.398 [2024-11-28 08:26:49.496814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.398 [2024-11-28 08:26:49.496848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.398 qpair failed and we were unable to recover it. 00:28:07.398 [2024-11-28 08:26:49.497080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.398 [2024-11-28 08:26:49.497115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.398 qpair failed and we were unable to recover it. 00:28:07.398 [2024-11-28 08:26:49.497373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.398 [2024-11-28 08:26:49.497391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.398 qpair failed and we were unable to recover it. 00:28:07.398 [2024-11-28 08:26:49.497617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.398 [2024-11-28 08:26:49.497652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.398 qpair failed and we were unable to recover it. 00:28:07.398 [2024-11-28 08:26:49.497841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.398 [2024-11-28 08:26:49.497875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.398 qpair failed and we were unable to recover it. 
00:28:07.398 [2024-11-28 08:26:49.498113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.398 [2024-11-28 08:26:49.498149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.398 qpair failed and we were unable to recover it. 00:28:07.398 [2024-11-28 08:26:49.498271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.398 [2024-11-28 08:26:49.498305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.398 qpair failed and we were unable to recover it. 00:28:07.398 [2024-11-28 08:26:49.498576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.398 [2024-11-28 08:26:49.498610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.398 qpair failed and we were unable to recover it. 00:28:07.398 [2024-11-28 08:26:49.498894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.398 [2024-11-28 08:26:49.498928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.398 qpair failed and we were unable to recover it. 00:28:07.398 [2024-11-28 08:26:49.499145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.398 [2024-11-28 08:26:49.499163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.398 qpair failed and we were unable to recover it. 
00:28:07.398 [2024-11-28 08:26:49.499329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.398 [2024-11-28 08:26:49.499362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.398 qpair failed and we were unable to recover it. 00:28:07.398 [2024-11-28 08:26:49.499576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.398 [2024-11-28 08:26:49.499611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.398 qpair failed and we were unable to recover it. 00:28:07.398 [2024-11-28 08:26:49.499813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.398 [2024-11-28 08:26:49.499846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.398 qpair failed and we were unable to recover it. 00:28:07.398 [2024-11-28 08:26:49.500133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.398 [2024-11-28 08:26:49.500172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.398 qpair failed and we were unable to recover it. 00:28:07.398 [2024-11-28 08:26:49.500276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.398 [2024-11-28 08:26:49.500292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.398 qpair failed and we were unable to recover it. 
00:28:07.398 [2024-11-28 08:26:49.500468 .. 08:26:49.528846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.402 nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.402 qpair failed and we were unable to recover it. [same three-line error sequence repeated for every reconnect attempt in this interval; duplicate entries elided]
00:28:07.402 [2024-11-28 08:26:49.529069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.402 [2024-11-28 08:26:49.529087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.402 qpair failed and we were unable to recover it. 00:28:07.402 [2024-11-28 08:26:49.529339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.402 [2024-11-28 08:26:49.529377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.402 qpair failed and we were unable to recover it. 00:28:07.402 [2024-11-28 08:26:49.529586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.402 [2024-11-28 08:26:49.529620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.402 qpair failed and we were unable to recover it. 00:28:07.402 [2024-11-28 08:26:49.529906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.402 [2024-11-28 08:26:49.529939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.402 qpair failed and we were unable to recover it. 00:28:07.402 [2024-11-28 08:26:49.530227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.402 [2024-11-28 08:26:49.530267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.402 qpair failed and we were unable to recover it. 
00:28:07.402 [2024-11-28 08:26:49.530543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.402 [2024-11-28 08:26:49.530586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.402 qpair failed and we were unable to recover it. 00:28:07.402 [2024-11-28 08:26:49.530813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.402 [2024-11-28 08:26:49.530830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.402 qpair failed and we were unable to recover it. 00:28:07.402 [2024-11-28 08:26:49.530996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.402 [2024-11-28 08:26:49.531013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.402 qpair failed and we were unable to recover it. 00:28:07.402 [2024-11-28 08:26:49.531124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.402 [2024-11-28 08:26:49.531142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.402 qpair failed and we were unable to recover it. 00:28:07.402 [2024-11-28 08:26:49.531307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.402 [2024-11-28 08:26:49.531324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.402 qpair failed and we were unable to recover it. 
00:28:07.402 [2024-11-28 08:26:49.531568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.402 [2024-11-28 08:26:49.531584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.402 qpair failed and we were unable to recover it. 00:28:07.402 [2024-11-28 08:26:49.531832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.402 [2024-11-28 08:26:49.531849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.402 qpair failed and we were unable to recover it. 00:28:07.402 [2024-11-28 08:26:49.532091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.402 [2024-11-28 08:26:49.532108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.402 qpair failed and we were unable to recover it. 00:28:07.402 [2024-11-28 08:26:49.532281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.402 [2024-11-28 08:26:49.532298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.402 qpair failed and we were unable to recover it. 00:28:07.402 [2024-11-28 08:26:49.532550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.402 [2024-11-28 08:26:49.532584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.402 qpair failed and we were unable to recover it. 
00:28:07.402 [2024-11-28 08:26:49.532874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.402 [2024-11-28 08:26:49.532908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.402 qpair failed and we were unable to recover it. 00:28:07.402 [2024-11-28 08:26:49.533191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.402 [2024-11-28 08:26:49.533227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.402 qpair failed and we were unable to recover it. 00:28:07.402 [2024-11-28 08:26:49.533515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.403 [2024-11-28 08:26:49.533550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.403 qpair failed and we were unable to recover it. 00:28:07.403 [2024-11-28 08:26:49.533763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.403 [2024-11-28 08:26:49.533797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.403 qpair failed and we were unable to recover it. 00:28:07.403 [2024-11-28 08:26:49.534087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.403 [2024-11-28 08:26:49.534124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.403 qpair failed and we were unable to recover it. 
00:28:07.403 [2024-11-28 08:26:49.534401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.403 [2024-11-28 08:26:49.534419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.403 qpair failed and we were unable to recover it. 00:28:07.403 [2024-11-28 08:26:49.534664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.403 [2024-11-28 08:26:49.534681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.403 qpair failed and we were unable to recover it. 00:28:07.403 [2024-11-28 08:26:49.534848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.403 [2024-11-28 08:26:49.534865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.403 qpair failed and we were unable to recover it. 00:28:07.403 [2024-11-28 08:26:49.535040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.403 [2024-11-28 08:26:49.535058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.403 qpair failed and we were unable to recover it. 00:28:07.403 [2024-11-28 08:26:49.535253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.403 [2024-11-28 08:26:49.535287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.403 qpair failed and we were unable to recover it. 
00:28:07.403 [2024-11-28 08:26:49.535548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.403 [2024-11-28 08:26:49.535581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.403 qpair failed and we were unable to recover it. 00:28:07.403 [2024-11-28 08:26:49.535890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.403 [2024-11-28 08:26:49.535924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.403 qpair failed and we were unable to recover it. 00:28:07.403 [2024-11-28 08:26:49.536141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.403 [2024-11-28 08:26:49.536175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.403 qpair failed and we were unable to recover it. 00:28:07.403 [2024-11-28 08:26:49.536460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.403 [2024-11-28 08:26:49.536478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.403 qpair failed and we were unable to recover it. 00:28:07.403 [2024-11-28 08:26:49.536679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.403 [2024-11-28 08:26:49.536712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.403 qpair failed and we were unable to recover it. 
00:28:07.403 [2024-11-28 08:26:49.536904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.403 [2024-11-28 08:26:49.536937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.403 qpair failed and we were unable to recover it. 00:28:07.403 [2024-11-28 08:26:49.537237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.403 [2024-11-28 08:26:49.537271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.403 qpair failed and we were unable to recover it. 00:28:07.403 [2024-11-28 08:26:49.537536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.403 [2024-11-28 08:26:49.537570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.403 qpair failed and we were unable to recover it. 00:28:07.403 [2024-11-28 08:26:49.537744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.403 [2024-11-28 08:26:49.537761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.403 qpair failed and we were unable to recover it. 00:28:07.403 [2024-11-28 08:26:49.537929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.403 [2024-11-28 08:26:49.537954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.403 qpair failed and we were unable to recover it. 
00:28:07.403 [2024-11-28 08:26:49.538204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.403 [2024-11-28 08:26:49.538221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.403 qpair failed and we were unable to recover it. 00:28:07.403 [2024-11-28 08:26:49.538400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.403 [2024-11-28 08:26:49.538434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.403 qpair failed and we were unable to recover it. 00:28:07.403 [2024-11-28 08:26:49.538727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.403 [2024-11-28 08:26:49.538760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.403 qpair failed and we were unable to recover it. 00:28:07.403 [2024-11-28 08:26:49.538971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.403 [2024-11-28 08:26:49.538988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.403 qpair failed and we were unable to recover it. 00:28:07.403 [2024-11-28 08:26:49.539170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.403 [2024-11-28 08:26:49.539204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.403 qpair failed and we were unable to recover it. 
00:28:07.403 [2024-11-28 08:26:49.539487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.403 [2024-11-28 08:26:49.539522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.403 qpair failed and we were unable to recover it. 00:28:07.403 [2024-11-28 08:26:49.539729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.403 [2024-11-28 08:26:49.539764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.403 qpair failed and we were unable to recover it. 00:28:07.403 [2024-11-28 08:26:49.540053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.403 [2024-11-28 08:26:49.540089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.403 qpair failed and we were unable to recover it. 00:28:07.403 [2024-11-28 08:26:49.540355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.403 [2024-11-28 08:26:49.540389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.403 qpair failed and we were unable to recover it. 00:28:07.403 [2024-11-28 08:26:49.540599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.403 [2024-11-28 08:26:49.540643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.403 qpair failed and we were unable to recover it. 
00:28:07.403 [2024-11-28 08:26:49.540866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.403 [2024-11-28 08:26:49.540882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.403 qpair failed and we were unable to recover it. 00:28:07.403 [2024-11-28 08:26:49.541065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.403 [2024-11-28 08:26:49.541083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.403 qpair failed and we were unable to recover it. 00:28:07.403 [2024-11-28 08:26:49.541401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.403 [2024-11-28 08:26:49.541435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.403 qpair failed and we were unable to recover it. 00:28:07.404 [2024-11-28 08:26:49.541727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.404 [2024-11-28 08:26:49.541761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.404 qpair failed and we were unable to recover it. 00:28:07.404 [2024-11-28 08:26:49.541987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.404 [2024-11-28 08:26:49.542022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.404 qpair failed and we were unable to recover it. 
00:28:07.404 [2024-11-28 08:26:49.542299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.404 [2024-11-28 08:26:49.542333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.404 qpair failed and we were unable to recover it. 00:28:07.404 [2024-11-28 08:26:49.542595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.404 [2024-11-28 08:26:49.542628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.404 qpair failed and we were unable to recover it. 00:28:07.404 [2024-11-28 08:26:49.542894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.404 [2024-11-28 08:26:49.542927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.404 qpair failed and we were unable to recover it. 00:28:07.404 [2024-11-28 08:26:49.543150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.404 [2024-11-28 08:26:49.543167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.404 qpair failed and we were unable to recover it. 00:28:07.404 [2024-11-28 08:26:49.543444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.404 [2024-11-28 08:26:49.543477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.404 qpair failed and we were unable to recover it. 
00:28:07.404 [2024-11-28 08:26:49.543734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.404 [2024-11-28 08:26:49.543769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.404 qpair failed and we were unable to recover it. 00:28:07.404 [2024-11-28 08:26:49.544053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.404 [2024-11-28 08:26:49.544088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.404 qpair failed and we were unable to recover it. 00:28:07.404 [2024-11-28 08:26:49.544234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.404 [2024-11-28 08:26:49.544268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.404 qpair failed and we were unable to recover it. 00:28:07.404 [2024-11-28 08:26:49.544487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.404 [2024-11-28 08:26:49.544522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.404 qpair failed and we were unable to recover it. 00:28:07.404 [2024-11-28 08:26:49.544712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.404 [2024-11-28 08:26:49.544746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.404 qpair failed and we were unable to recover it. 
00:28:07.404 [2024-11-28 08:26:49.545013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.404 [2024-11-28 08:26:49.545049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.404 qpair failed and we were unable to recover it. 00:28:07.404 [2024-11-28 08:26:49.545329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.404 [2024-11-28 08:26:49.545351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.404 qpair failed and we were unable to recover it. 00:28:07.404 [2024-11-28 08:26:49.545604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.404 [2024-11-28 08:26:49.545622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.404 qpair failed and we were unable to recover it. 00:28:07.404 [2024-11-28 08:26:49.545794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.404 [2024-11-28 08:26:49.545810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.404 qpair failed and we were unable to recover it. 00:28:07.404 [2024-11-28 08:26:49.546106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.404 [2024-11-28 08:26:49.546142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.404 qpair failed and we were unable to recover it. 
00:28:07.404 [2024-11-28 08:26:49.546338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.404 [2024-11-28 08:26:49.546355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.404 qpair failed and we were unable to recover it. 00:28:07.404 [2024-11-28 08:26:49.546598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.404 [2024-11-28 08:26:49.546631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.404 qpair failed and we were unable to recover it. 00:28:07.404 [2024-11-28 08:26:49.546831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.404 [2024-11-28 08:26:49.546864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.404 qpair failed and we were unable to recover it. 00:28:07.404 [2024-11-28 08:26:49.547157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.404 [2024-11-28 08:26:49.547192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.404 qpair failed and we were unable to recover it. 00:28:07.404 [2024-11-28 08:26:49.547411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.404 [2024-11-28 08:26:49.547445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.404 qpair failed and we were unable to recover it. 
00:28:07.404 [2024-11-28 08:26:49.547723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.404 [2024-11-28 08:26:49.547739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.404 qpair failed and we were unable to recover it. 00:28:07.404 [2024-11-28 08:26:49.547855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.404 [2024-11-28 08:26:49.547872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.404 qpair failed and we were unable to recover it. 00:28:07.404 [2024-11-28 08:26:49.548071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.404 [2024-11-28 08:26:49.548106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.404 qpair failed and we were unable to recover it. 00:28:07.404 [2024-11-28 08:26:49.548348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.404 [2024-11-28 08:26:49.548381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.404 qpair failed and we were unable to recover it. 00:28:07.404 [2024-11-28 08:26:49.548593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.404 [2024-11-28 08:26:49.548626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.404 qpair failed and we were unable to recover it. 
00:28:07.404 [2024-11-28 08:26:49.548912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.404 [2024-11-28 08:26:49.548946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.404 qpair failed and we were unable to recover it. 00:28:07.404 [2024-11-28 08:26:49.549257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.404 [2024-11-28 08:26:49.549291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.404 qpair failed and we were unable to recover it. 00:28:07.404 [2024-11-28 08:26:49.549501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.404 [2024-11-28 08:26:49.549535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.404 qpair failed and we were unable to recover it. 00:28:07.404 [2024-11-28 08:26:49.549715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.404 [2024-11-28 08:26:49.549732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.404 qpair failed and we were unable to recover it. 00:28:07.404 [2024-11-28 08:26:49.549937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.405 [2024-11-28 08:26:49.549981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.405 qpair failed and we were unable to recover it. 
00:28:07.409 [2024-11-28 08:26:49.579438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.409 [2024-11-28 08:26:49.579455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.409 qpair failed and we were unable to recover it. 00:28:07.409 [2024-11-28 08:26:49.579657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.409 [2024-11-28 08:26:49.579690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.409 qpair failed and we were unable to recover it. 00:28:07.409 [2024-11-28 08:26:49.579936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.409 [2024-11-28 08:26:49.580000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.409 qpair failed and we were unable to recover it. 00:28:07.409 [2024-11-28 08:26:49.580264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.409 [2024-11-28 08:26:49.580296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.409 qpair failed and we were unable to recover it. 00:28:07.409 [2024-11-28 08:26:49.580552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.409 [2024-11-28 08:26:49.580586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.409 qpair failed and we were unable to recover it. 
00:28:07.409 [2024-11-28 08:26:49.580893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.409 [2024-11-28 08:26:49.580926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.409 qpair failed and we were unable to recover it. 00:28:07.409 [2024-11-28 08:26:49.581224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.409 [2024-11-28 08:26:49.581258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.409 qpair failed and we were unable to recover it. 00:28:07.409 [2024-11-28 08:26:49.581464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.409 [2024-11-28 08:26:49.581503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.409 qpair failed and we were unable to recover it. 00:28:07.409 [2024-11-28 08:26:49.581669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.409 [2024-11-28 08:26:49.581687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.409 qpair failed and we were unable to recover it. 00:28:07.409 [2024-11-28 08:26:49.581914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.409 [2024-11-28 08:26:49.581975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.409 qpair failed and we were unable to recover it. 
00:28:07.409 [2024-11-28 08:26:49.582178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.409 [2024-11-28 08:26:49.582212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.409 qpair failed and we were unable to recover it. 00:28:07.409 [2024-11-28 08:26:49.582443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.409 [2024-11-28 08:26:49.582460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.409 qpair failed and we were unable to recover it. 00:28:07.409 [2024-11-28 08:26:49.582657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.409 [2024-11-28 08:26:49.582689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.409 qpair failed and we were unable to recover it. 00:28:07.409 [2024-11-28 08:26:49.582910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.409 [2024-11-28 08:26:49.582944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.409 qpair failed and we were unable to recover it. 00:28:07.409 [2024-11-28 08:26:49.583192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.409 [2024-11-28 08:26:49.583231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.409 qpair failed and we were unable to recover it. 
00:28:07.409 [2024-11-28 08:26:49.583447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.409 [2024-11-28 08:26:49.583479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.409 qpair failed and we were unable to recover it. 00:28:07.409 [2024-11-28 08:26:49.583761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.409 [2024-11-28 08:26:49.583794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.409 qpair failed and we were unable to recover it. 00:28:07.409 [2024-11-28 08:26:49.584089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.409 [2024-11-28 08:26:49.584124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.409 qpair failed and we were unable to recover it. 00:28:07.409 [2024-11-28 08:26:49.584322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.409 [2024-11-28 08:26:49.584339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.409 qpair failed and we were unable to recover it. 00:28:07.409 [2024-11-28 08:26:49.584566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.409 [2024-11-28 08:26:49.584599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.409 qpair failed and we were unable to recover it. 
00:28:07.409 [2024-11-28 08:26:49.584794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.409 [2024-11-28 08:26:49.584827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.409 qpair failed and we were unable to recover it. 00:28:07.409 [2024-11-28 08:26:49.584956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.409 [2024-11-28 08:26:49.584991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.409 qpair failed and we were unable to recover it. 00:28:07.409 [2024-11-28 08:26:49.585276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.409 [2024-11-28 08:26:49.585310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.409 qpair failed and we were unable to recover it. 00:28:07.409 [2024-11-28 08:26:49.585593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.409 [2024-11-28 08:26:49.585626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.409 qpair failed and we were unable to recover it. 00:28:07.409 [2024-11-28 08:26:49.585915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.409 [2024-11-28 08:26:49.585960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.409 qpair failed and we were unable to recover it. 
00:28:07.409 [2024-11-28 08:26:49.586235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.409 [2024-11-28 08:26:49.586269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.409 qpair failed and we were unable to recover it. 00:28:07.409 [2024-11-28 08:26:49.586487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.409 [2024-11-28 08:26:49.586520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.409 qpair failed and we were unable to recover it. 00:28:07.410 [2024-11-28 08:26:49.586708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.410 [2024-11-28 08:26:49.586725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.410 qpair failed and we were unable to recover it. 00:28:07.410 [2024-11-28 08:26:49.586906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.410 [2024-11-28 08:26:49.586940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.410 qpair failed and we were unable to recover it. 00:28:07.410 [2024-11-28 08:26:49.587153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.410 [2024-11-28 08:26:49.587186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.410 qpair failed and we were unable to recover it. 
00:28:07.410 [2024-11-28 08:26:49.587472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.410 [2024-11-28 08:26:49.587506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.410 qpair failed and we were unable to recover it. 00:28:07.410 [2024-11-28 08:26:49.587803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.410 [2024-11-28 08:26:49.587836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.410 qpair failed and we were unable to recover it. 00:28:07.410 [2024-11-28 08:26:49.588109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.410 [2024-11-28 08:26:49.588144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.410 qpair failed and we were unable to recover it. 00:28:07.410 [2024-11-28 08:26:49.588438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.410 [2024-11-28 08:26:49.588476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.410 qpair failed and we were unable to recover it. 00:28:07.410 [2024-11-28 08:26:49.588746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.410 [2024-11-28 08:26:49.588780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.410 qpair failed and we were unable to recover it. 
00:28:07.410 [2024-11-28 08:26:49.588971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.410 [2024-11-28 08:26:49.589006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.410 qpair failed and we were unable to recover it. 00:28:07.410 [2024-11-28 08:26:49.589279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.410 [2024-11-28 08:26:49.589312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.410 qpair failed and we were unable to recover it. 00:28:07.410 [2024-11-28 08:26:49.589602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.410 [2024-11-28 08:26:49.589636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.410 qpair failed and we were unable to recover it. 00:28:07.410 [2024-11-28 08:26:49.589842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.410 [2024-11-28 08:26:49.589875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.410 qpair failed and we were unable to recover it. 00:28:07.410 [2024-11-28 08:26:49.590075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.410 [2024-11-28 08:26:49.590110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.410 qpair failed and we were unable to recover it. 
00:28:07.410 [2024-11-28 08:26:49.590371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.410 [2024-11-28 08:26:49.590405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.410 qpair failed and we were unable to recover it. 00:28:07.410 [2024-11-28 08:26:49.590722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.410 [2024-11-28 08:26:49.590756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.410 qpair failed and we were unable to recover it. 00:28:07.410 [2024-11-28 08:26:49.590894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.410 [2024-11-28 08:26:49.590929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.410 qpair failed and we were unable to recover it. 00:28:07.410 [2024-11-28 08:26:49.591220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.410 [2024-11-28 08:26:49.591254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.410 qpair failed and we were unable to recover it. 00:28:07.410 [2024-11-28 08:26:49.591446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.410 [2024-11-28 08:26:49.591463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.410 qpair failed and we were unable to recover it. 
00:28:07.410 [2024-11-28 08:26:49.591691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.410 [2024-11-28 08:26:49.591724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.410 qpair failed and we were unable to recover it. 00:28:07.410 [2024-11-28 08:26:49.591940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.410 [2024-11-28 08:26:49.591989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.410 qpair failed and we were unable to recover it. 00:28:07.410 [2024-11-28 08:26:49.592288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.410 [2024-11-28 08:26:49.592322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.410 qpair failed and we were unable to recover it. 00:28:07.410 [2024-11-28 08:26:49.592600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.410 [2024-11-28 08:26:49.592618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.410 qpair failed and we were unable to recover it. 00:28:07.410 [2024-11-28 08:26:49.592790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.410 [2024-11-28 08:26:49.592808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.410 qpair failed and we were unable to recover it. 
00:28:07.410 [2024-11-28 08:26:49.593054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.410 [2024-11-28 08:26:49.593071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.410 qpair failed and we were unable to recover it. 00:28:07.410 [2024-11-28 08:26:49.593245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.410 [2024-11-28 08:26:49.593278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.410 qpair failed and we were unable to recover it. 00:28:07.410 [2024-11-28 08:26:49.593539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.410 [2024-11-28 08:26:49.593573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.410 qpair failed and we were unable to recover it. 00:28:07.410 [2024-11-28 08:26:49.593793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.410 [2024-11-28 08:26:49.593810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.410 qpair failed and we were unable to recover it. 00:28:07.410 [2024-11-28 08:26:49.594039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.410 [2024-11-28 08:26:49.594080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.410 qpair failed and we were unable to recover it. 
00:28:07.410 [2024-11-28 08:26:49.594272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.410 [2024-11-28 08:26:49.594307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.410 qpair failed and we were unable to recover it. 00:28:07.411 [2024-11-28 08:26:49.594581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.411 [2024-11-28 08:26:49.594615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.411 qpair failed and we were unable to recover it. 00:28:07.411 [2024-11-28 08:26:49.594869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.411 [2024-11-28 08:26:49.594903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.411 qpair failed and we were unable to recover it. 00:28:07.411 [2024-11-28 08:26:49.595194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.411 [2024-11-28 08:26:49.595229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.411 qpair failed and we were unable to recover it. 00:28:07.411 [2024-11-28 08:26:49.595451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.411 [2024-11-28 08:26:49.595469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.411 qpair failed and we were unable to recover it. 
00:28:07.411 [2024-11-28 08:26:49.595663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.411 [2024-11-28 08:26:49.595697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.411 qpair failed and we were unable to recover it. 00:28:07.411 [2024-11-28 08:26:49.595899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.411 [2024-11-28 08:26:49.595933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.411 qpair failed and we were unable to recover it. 00:28:07.411 [2024-11-28 08:26:49.596262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.411 [2024-11-28 08:26:49.596297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.411 qpair failed and we were unable to recover it. 00:28:07.411 [2024-11-28 08:26:49.596524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.411 [2024-11-28 08:26:49.596558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.411 qpair failed and we were unable to recover it. 00:28:07.411 [2024-11-28 08:26:49.596792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.411 [2024-11-28 08:26:49.596826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.411 qpair failed and we were unable to recover it. 
00:28:07.411 [2024-11-28 08:26:49.596972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.411 [2024-11-28 08:26:49.597008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.411 qpair failed and we were unable to recover it. 00:28:07.411 [2024-11-28 08:26:49.597212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.411 [2024-11-28 08:26:49.597246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.411 qpair failed and we were unable to recover it. 00:28:07.411 [2024-11-28 08:26:49.597530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.411 [2024-11-28 08:26:49.597563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.411 qpair failed and we were unable to recover it. 00:28:07.411 [2024-11-28 08:26:49.597757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.411 [2024-11-28 08:26:49.597774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.411 qpair failed and we were unable to recover it. 00:28:07.411 [2024-11-28 08:26:49.597964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.411 [2024-11-28 08:26:49.597999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.411 qpair failed and we were unable to recover it. 
00:28:07.411 [2024-11-28 08:26:49.598235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.411 [2024-11-28 08:26:49.598269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.411 qpair failed and we were unable to recover it. 00:28:07.411 [2024-11-28 08:26:49.598460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.411 [2024-11-28 08:26:49.598495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.411 qpair failed and we were unable to recover it. 00:28:07.411 [2024-11-28 08:26:49.598774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.411 [2024-11-28 08:26:49.598808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.411 qpair failed and we were unable to recover it. 00:28:07.411 [2024-11-28 08:26:49.599071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.411 [2024-11-28 08:26:49.599107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.411 qpair failed and we were unable to recover it. 00:28:07.411 [2024-11-28 08:26:49.599313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.411 [2024-11-28 08:26:49.599346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.411 qpair failed and we were unable to recover it. 
00:28:07.411 [2024-11-28 08:26:49.599630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.411 [2024-11-28 08:26:49.599663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420
00:28:07.411 qpair failed and we were unable to recover it.
[... the three-line error record above repeats ~114 more times (driver timestamps 2024-11-28 08:26:49.599961 through 08:26:49.628818, console timestamps 00:28:07.411-00:28:07.706): the same "connect() failed, errno = 111" from posix.c:1054:posix_sock_create against addr=10.0.0.2, port=4420, alternating between tqpair=0x7f6c3c000b90 and tqpair=0x7f6c34000b90, each followed by "qpair failed and we were unable to recover it." ...]
00:28:07.706 [2024-11-28 08:26:49.629076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.706 [2024-11-28 08:26:49.629111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.706 qpair failed and we were unable to recover it. 00:28:07.706 [2024-11-28 08:26:49.629418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.706 [2024-11-28 08:26:49.629452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.706 qpair failed and we were unable to recover it. 00:28:07.706 [2024-11-28 08:26:49.629706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.707 [2024-11-28 08:26:49.629740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.707 qpair failed and we were unable to recover it. 00:28:07.707 [2024-11-28 08:26:49.630009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.707 [2024-11-28 08:26:49.630044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.707 qpair failed and we were unable to recover it. 00:28:07.707 [2024-11-28 08:26:49.630336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.707 [2024-11-28 08:26:49.630370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.707 qpair failed and we were unable to recover it. 
00:28:07.707 [2024-11-28 08:26:49.630674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.707 [2024-11-28 08:26:49.630707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.707 qpair failed and we were unable to recover it. 00:28:07.707 [2024-11-28 08:26:49.631020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.707 [2024-11-28 08:26:49.631055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.707 qpair failed and we were unable to recover it. 00:28:07.707 [2024-11-28 08:26:49.631215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.707 [2024-11-28 08:26:49.631250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.707 qpair failed and we were unable to recover it. 00:28:07.707 [2024-11-28 08:26:49.631506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.707 [2024-11-28 08:26:49.631539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.707 qpair failed and we were unable to recover it. 00:28:07.707 [2024-11-28 08:26:49.631743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.707 [2024-11-28 08:26:49.631778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.707 qpair failed and we were unable to recover it. 
00:28:07.707 [2024-11-28 08:26:49.632054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.707 [2024-11-28 08:26:49.632097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.707 qpair failed and we were unable to recover it. 00:28:07.707 [2024-11-28 08:26:49.632290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.707 [2024-11-28 08:26:49.632307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.707 qpair failed and we were unable to recover it. 00:28:07.707 [2024-11-28 08:26:49.632528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.707 [2024-11-28 08:26:49.632545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.707 qpair failed and we were unable to recover it. 00:28:07.707 [2024-11-28 08:26:49.632658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.707 [2024-11-28 08:26:49.632690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.707 qpair failed and we were unable to recover it. 00:28:07.707 [2024-11-28 08:26:49.632899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.707 [2024-11-28 08:26:49.632933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.707 qpair failed and we were unable to recover it. 
00:28:07.707 [2024-11-28 08:26:49.633149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.707 [2024-11-28 08:26:49.633183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.707 qpair failed and we were unable to recover it. 00:28:07.707 [2024-11-28 08:26:49.633414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.707 [2024-11-28 08:26:49.633447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.707 qpair failed and we were unable to recover it. 00:28:07.707 [2024-11-28 08:26:49.633655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.707 [2024-11-28 08:26:49.633688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.707 qpair failed and we were unable to recover it. 00:28:07.707 [2024-11-28 08:26:49.633892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.707 [2024-11-28 08:26:49.633909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.707 qpair failed and we were unable to recover it. 00:28:07.707 [2024-11-28 08:26:49.633996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.707 [2024-11-28 08:26:49.634013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.707 qpair failed and we were unable to recover it. 
00:28:07.707 [2024-11-28 08:26:49.634170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.707 [2024-11-28 08:26:49.634187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.707 qpair failed and we were unable to recover it. 00:28:07.707 [2024-11-28 08:26:49.634300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.707 [2024-11-28 08:26:49.634333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.707 qpair failed and we were unable to recover it. 00:28:07.707 [2024-11-28 08:26:49.634556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.707 [2024-11-28 08:26:49.634588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.707 qpair failed and we were unable to recover it. 00:28:07.707 [2024-11-28 08:26:49.634849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.707 [2024-11-28 08:26:49.634882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.707 qpair failed and we were unable to recover it. 00:28:07.707 [2024-11-28 08:26:49.635108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.707 [2024-11-28 08:26:49.635149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.707 qpair failed and we were unable to recover it. 
00:28:07.707 [2024-11-28 08:26:49.635386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.707 [2024-11-28 08:26:49.635419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.707 qpair failed and we were unable to recover it. 00:28:07.707 [2024-11-28 08:26:49.635721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.707 [2024-11-28 08:26:49.635758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.707 qpair failed and we were unable to recover it. 00:28:07.707 [2024-11-28 08:26:49.636029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.707 [2024-11-28 08:26:49.636065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.707 qpair failed and we were unable to recover it. 00:28:07.707 [2024-11-28 08:26:49.636357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.707 [2024-11-28 08:26:49.636390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.707 qpair failed and we were unable to recover it. 00:28:07.707 [2024-11-28 08:26:49.636656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.707 [2024-11-28 08:26:49.636691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.707 qpair failed and we were unable to recover it. 
00:28:07.707 [2024-11-28 08:26:49.636987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.707 [2024-11-28 08:26:49.637022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.707 qpair failed and we were unable to recover it. 00:28:07.707 [2024-11-28 08:26:49.637229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.707 [2024-11-28 08:26:49.637263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.707 qpair failed and we were unable to recover it. 00:28:07.707 [2024-11-28 08:26:49.637487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.707 [2024-11-28 08:26:49.637504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.707 qpair failed and we were unable to recover it. 00:28:07.707 [2024-11-28 08:26:49.637671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.707 [2024-11-28 08:26:49.637688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.707 qpair failed and we were unable to recover it. 00:28:07.707 [2024-11-28 08:26:49.637871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.707 [2024-11-28 08:26:49.637903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.707 qpair failed and we were unable to recover it. 
00:28:07.707 [2024-11-28 08:26:49.638171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.707 [2024-11-28 08:26:49.638206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.707 qpair failed and we were unable to recover it. 00:28:07.707 [2024-11-28 08:26:49.638504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.707 [2024-11-28 08:26:49.638537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.707 qpair failed and we were unable to recover it. 00:28:07.707 [2024-11-28 08:26:49.638736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.707 [2024-11-28 08:26:49.638769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.707 qpair failed and we were unable to recover it. 00:28:07.707 [2024-11-28 08:26:49.639061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.707 [2024-11-28 08:26:49.639097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.707 qpair failed and we were unable to recover it. 00:28:07.707 [2024-11-28 08:26:49.639375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.707 [2024-11-28 08:26:49.639419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.707 qpair failed and we were unable to recover it. 
00:28:07.707 [2024-11-28 08:26:49.639687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.708 [2024-11-28 08:26:49.639723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.708 qpair failed and we were unable to recover it. 00:28:07.708 [2024-11-28 08:26:49.640016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.708 [2024-11-28 08:26:49.640051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.708 qpair failed and we were unable to recover it. 00:28:07.708 [2024-11-28 08:26:49.640329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.708 [2024-11-28 08:26:49.640362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.708 qpair failed and we were unable to recover it. 00:28:07.708 [2024-11-28 08:26:49.640652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.708 [2024-11-28 08:26:49.640686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.708 qpair failed and we were unable to recover it. 00:28:07.708 [2024-11-28 08:26:49.640963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.708 [2024-11-28 08:26:49.640981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.708 qpair failed and we were unable to recover it. 
00:28:07.708 [2024-11-28 08:26:49.641176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.708 [2024-11-28 08:26:49.641193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.708 qpair failed and we were unable to recover it. 00:28:07.708 [2024-11-28 08:26:49.641417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.708 [2024-11-28 08:26:49.641450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.708 qpair failed and we were unable to recover it. 00:28:07.708 [2024-11-28 08:26:49.641597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.708 [2024-11-28 08:26:49.641630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.708 qpair failed and we were unable to recover it. 00:28:07.708 [2024-11-28 08:26:49.641851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.708 [2024-11-28 08:26:49.641883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.708 qpair failed and we were unable to recover it. 00:28:07.708 [2024-11-28 08:26:49.642169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.708 [2024-11-28 08:26:49.642203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.708 qpair failed and we were unable to recover it. 
00:28:07.708 [2024-11-28 08:26:49.642498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.708 [2024-11-28 08:26:49.642531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.708 qpair failed and we were unable to recover it. 00:28:07.708 [2024-11-28 08:26:49.642818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.708 [2024-11-28 08:26:49.642900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.708 qpair failed and we were unable to recover it. 00:28:07.708 [2024-11-28 08:26:49.643260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.708 [2024-11-28 08:26:49.643340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420 00:28:07.708 qpair failed and we were unable to recover it. 00:28:07.708 [2024-11-28 08:26:49.643743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.708 [2024-11-28 08:26:49.643825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.708 qpair failed and we were unable to recover it. 00:28:07.708 [2024-11-28 08:26:49.644116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.708 [2024-11-28 08:26:49.644157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.708 qpair failed and we were unable to recover it. 
00:28:07.708 [2024-11-28 08:26:49.644453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.708 [2024-11-28 08:26:49.644487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.708 qpair failed and we were unable to recover it. 00:28:07.708 [2024-11-28 08:26:49.644691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.708 [2024-11-28 08:26:49.644724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.708 qpair failed and we were unable to recover it. 00:28:07.708 [2024-11-28 08:26:49.645008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.708 [2024-11-28 08:26:49.645026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.708 qpair failed and we were unable to recover it. 00:28:07.708 [2024-11-28 08:26:49.645245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.708 [2024-11-28 08:26:49.645262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.708 qpair failed and we were unable to recover it. 00:28:07.708 [2024-11-28 08:26:49.645516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.708 [2024-11-28 08:26:49.645550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.708 qpair failed and we were unable to recover it. 
00:28:07.708 [2024-11-28 08:26:49.645763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.708 [2024-11-28 08:26:49.645797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.708 qpair failed and we were unable to recover it. 00:28:07.708 [2024-11-28 08:26:49.646049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.708 [2024-11-28 08:26:49.646085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.708 qpair failed and we were unable to recover it. 00:28:07.708 [2024-11-28 08:26:49.646305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.708 [2024-11-28 08:26:49.646339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.708 qpair failed and we were unable to recover it. 00:28:07.708 [2024-11-28 08:26:49.646606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.708 [2024-11-28 08:26:49.646650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.708 qpair failed and we were unable to recover it. 00:28:07.708 [2024-11-28 08:26:49.646843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.708 [2024-11-28 08:26:49.646860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.708 qpair failed and we were unable to recover it. 
00:28:07.708 [2024-11-28 08:26:49.647139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.708 [2024-11-28 08:26:49.647157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.708 qpair failed and we were unable to recover it. 00:28:07.708 [2024-11-28 08:26:49.647269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.708 [2024-11-28 08:26:49.647286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.708 qpair failed and we were unable to recover it. 00:28:07.708 [2024-11-28 08:26:49.647474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.708 [2024-11-28 08:26:49.647506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.708 qpair failed and we were unable to recover it. 00:28:07.708 [2024-11-28 08:26:49.647784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.708 [2024-11-28 08:26:49.647818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.708 qpair failed and we were unable to recover it. 00:28:07.708 [2024-11-28 08:26:49.648061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.708 [2024-11-28 08:26:49.648097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.708 qpair failed and we were unable to recover it. 
00:28:07.708 [2024-11-28 08:26:49.648355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.708 [2024-11-28 08:26:49.648389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.708 qpair failed and we were unable to recover it. 00:28:07.708 [2024-11-28 08:26:49.648698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.708 [2024-11-28 08:26:49.648732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.708 qpair failed and we were unable to recover it. 00:28:07.708 [2024-11-28 08:26:49.648995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.708 [2024-11-28 08:26:49.649029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.708 qpair failed and we were unable to recover it. 00:28:07.708 [2024-11-28 08:26:49.649237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.708 [2024-11-28 08:26:49.649272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.708 qpair failed and we were unable to recover it. 00:28:07.708 [2024-11-28 08:26:49.649464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.708 [2024-11-28 08:26:49.649481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.708 qpair failed and we were unable to recover it. 
00:28:07.708 [2024-11-28 08:26:49.649667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.708 [2024-11-28 08:26:49.649700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420
00:28:07.708 qpair failed and we were unable to recover it.
00:28:07.708 [2024-11-28 08:26:49.649969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.708 [2024-11-28 08:26:49.650003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420
00:28:07.708 qpair failed and we were unable to recover it.
00:28:07.708 [2024-11-28 08:26:49.650219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.708 [2024-11-28 08:26:49.650252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420
00:28:07.708 qpair failed and we were unable to recover it.
00:28:07.708 [2024-11-28 08:26:49.650547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.708 [2024-11-28 08:26:49.650587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420
00:28:07.709 qpair failed and we were unable to recover it.
00:28:07.709 [2024-11-28 08:26:49.650891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.709 [2024-11-28 08:26:49.650924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420
00:28:07.709 qpair failed and we were unable to recover it.
00:28:07.709 [2024-11-28 08:26:49.651215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.709 [2024-11-28 08:26:49.651249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420
00:28:07.709 qpair failed and we were unable to recover it.
00:28:07.709 [2024-11-28 08:26:49.651532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.709 [2024-11-28 08:26:49.651565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420
00:28:07.709 qpair failed and we were unable to recover it.
00:28:07.709 [2024-11-28 08:26:49.651779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.709 [2024-11-28 08:26:49.651813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420
00:28:07.709 qpair failed and we were unable to recover it.
00:28:07.709 [2024-11-28 08:26:49.652076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.709 [2024-11-28 08:26:49.652112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420
00:28:07.709 qpair failed and we were unable to recover it.
00:28:07.709 [2024-11-28 08:26:49.652346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.709 [2024-11-28 08:26:49.652380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420
00:28:07.709 qpair failed and we were unable to recover it.
00:28:07.709 [2024-11-28 08:26:49.652700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.709 [2024-11-28 08:26:49.652734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420
00:28:07.709 qpair failed and we were unable to recover it.
00:28:07.709 [2024-11-28 08:26:49.652959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.709 [2024-11-28 08:26:49.652976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420
00:28:07.709 qpair failed and we were unable to recover it.
00:28:07.709 [2024-11-28 08:26:49.653219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.709 [2024-11-28 08:26:49.653236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420
00:28:07.709 qpair failed and we were unable to recover it.
00:28:07.709 [2024-11-28 08:26:49.653482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.709 [2024-11-28 08:26:49.653500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420
00:28:07.709 qpair failed and we were unable to recover it.
00:28:07.709 [2024-11-28 08:26:49.653675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.709 [2024-11-28 08:26:49.653710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420
00:28:07.709 qpair failed and we were unable to recover it.
00:28:07.709 [2024-11-28 08:26:49.653970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.709 [2024-11-28 08:26:49.654006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420
00:28:07.709 qpair failed and we were unable to recover it.
00:28:07.709 [2024-11-28 08:26:49.654220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.709 [2024-11-28 08:26:49.654254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420
00:28:07.709 qpair failed and we were unable to recover it.
00:28:07.709 [2024-11-28 08:26:49.654538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.709 [2024-11-28 08:26:49.654572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420
00:28:07.709 qpair failed and we were unable to recover it.
00:28:07.709 [2024-11-28 08:26:49.654872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.709 [2024-11-28 08:26:49.654914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420
00:28:07.709 qpair failed and we were unable to recover it.
00:28:07.709 [2024-11-28 08:26:49.655183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.709 [2024-11-28 08:26:49.655263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.709 qpair failed and we were unable to recover it.
00:28:07.709 [2024-11-28 08:26:49.655576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.709 [2024-11-28 08:26:49.655614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.709 qpair failed and we were unable to recover it.
00:28:07.709 [2024-11-28 08:26:49.655930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.709 [2024-11-28 08:26:49.655978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.709 qpair failed and we were unable to recover it.
00:28:07.709 [2024-11-28 08:26:49.656176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.709 [2024-11-28 08:26:49.656211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.709 qpair failed and we were unable to recover it.
00:28:07.709 [2024-11-28 08:26:49.656453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.709 [2024-11-28 08:26:49.656487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.709 qpair failed and we were unable to recover it.
00:28:07.709 [2024-11-28 08:26:49.656695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.709 [2024-11-28 08:26:49.656730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.709 qpair failed and we were unable to recover it.
00:28:07.709 [2024-11-28 08:26:49.657016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.709 [2024-11-28 08:26:49.657052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.709 qpair failed and we were unable to recover it.
00:28:07.709 [2024-11-28 08:26:49.657255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.709 [2024-11-28 08:26:49.657289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.709 qpair failed and we were unable to recover it.
00:28:07.709 [2024-11-28 08:26:49.657581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.709 [2024-11-28 08:26:49.657615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.709 qpair failed and we were unable to recover it.
00:28:07.709 [2024-11-28 08:26:49.657918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.709 [2024-11-28 08:26:49.657960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.709 qpair failed and we were unable to recover it.
00:28:07.709 [2024-11-28 08:26:49.658211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.709 [2024-11-28 08:26:49.658244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.709 qpair failed and we were unable to recover it.
00:28:07.709 [2024-11-28 08:26:49.658525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.709 [2024-11-28 08:26:49.658568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.709 qpair failed and we were unable to recover it.
00:28:07.709 [2024-11-28 08:26:49.658799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.709 [2024-11-28 08:26:49.658834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.709 qpair failed and we were unable to recover it.
00:28:07.709 [2024-11-28 08:26:49.659068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.709 [2024-11-28 08:26:49.659081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.709 qpair failed and we were unable to recover it.
00:28:07.709 [2024-11-28 08:26:49.659237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.709 [2024-11-28 08:26:49.659250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.709 qpair failed and we were unable to recover it.
00:28:07.709 [2024-11-28 08:26:49.659437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.709 [2024-11-28 08:26:49.659470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.709 qpair failed and we were unable to recover it.
00:28:07.709 [2024-11-28 08:26:49.659708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.709 [2024-11-28 08:26:49.659742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.709 qpair failed and we were unable to recover it.
00:28:07.709 [2024-11-28 08:26:49.659964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.709 [2024-11-28 08:26:49.660001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.709 qpair failed and we were unable to recover it.
00:28:07.709 [2024-11-28 08:26:49.660261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.709 [2024-11-28 08:26:49.660296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.709 qpair failed and we were unable to recover it.
00:28:07.709 [2024-11-28 08:26:49.660491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.709 [2024-11-28 08:26:49.660525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.709 qpair failed and we were unable to recover it.
00:28:07.709 [2024-11-28 08:26:49.660727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.709 [2024-11-28 08:26:49.660762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.709 qpair failed and we were unable to recover it.
00:28:07.709 [2024-11-28 08:26:49.661024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.709 [2024-11-28 08:26:49.661058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.709 qpair failed and we were unable to recover it.
00:28:07.709 [2024-11-28 08:26:49.661351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.709 [2024-11-28 08:26:49.661385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.710 qpair failed and we were unable to recover it.
00:28:07.710 [2024-11-28 08:26:49.661692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.710 [2024-11-28 08:26:49.661727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.710 qpair failed and we were unable to recover it.
00:28:07.710 [2024-11-28 08:26:49.661870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.710 [2024-11-28 08:26:49.661904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.710 qpair failed and we were unable to recover it.
00:28:07.710 [2024-11-28 08:26:49.662203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.710 [2024-11-28 08:26:49.662239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.710 qpair failed and we were unable to recover it.
00:28:07.710 [2024-11-28 08:26:49.662511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.710 [2024-11-28 08:26:49.662545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.710 qpair failed and we were unable to recover it.
00:28:07.710 [2024-11-28 08:26:49.662756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.710 [2024-11-28 08:26:49.662790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.710 qpair failed and we were unable to recover it.
00:28:07.710 [2024-11-28 08:26:49.663101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.710 [2024-11-28 08:26:49.663136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.710 qpair failed and we were unable to recover it.
00:28:07.710 [2024-11-28 08:26:49.663416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.710 [2024-11-28 08:26:49.663451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.710 qpair failed and we were unable to recover it.
00:28:07.710 [2024-11-28 08:26:49.663659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.710 [2024-11-28 08:26:49.663694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.710 qpair failed and we were unable to recover it.
00:28:07.710 [2024-11-28 08:26:49.663913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.710 [2024-11-28 08:26:49.663958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.710 qpair failed and we were unable to recover it.
00:28:07.710 [2024-11-28 08:26:49.664220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.710 [2024-11-28 08:26:49.664255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.710 qpair failed and we were unable to recover it.
00:28:07.710 [2024-11-28 08:26:49.664540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.710 [2024-11-28 08:26:49.664574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.710 qpair failed and we were unable to recover it.
00:28:07.710 [2024-11-28 08:26:49.664899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.710 [2024-11-28 08:26:49.664934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.710 qpair failed and we were unable to recover it.
00:28:07.710 [2024-11-28 08:26:49.665237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.710 [2024-11-28 08:26:49.665271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.710 qpair failed and we were unable to recover it.
00:28:07.710 [2024-11-28 08:26:49.665538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.710 [2024-11-28 08:26:49.665572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.710 qpair failed and we were unable to recover it.
00:28:07.710 [2024-11-28 08:26:49.665790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.710 [2024-11-28 08:26:49.665825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.710 qpair failed and we were unable to recover it.
00:28:07.710 [2024-11-28 08:26:49.666035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.710 [2024-11-28 08:26:49.666071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.710 qpair failed and we were unable to recover it.
00:28:07.710 [2024-11-28 08:26:49.666268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.710 [2024-11-28 08:26:49.666281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.710 qpair failed and we were unable to recover it.
00:28:07.710 [2024-11-28 08:26:49.666458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.710 [2024-11-28 08:26:49.666492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.710 qpair failed and we were unable to recover it.
00:28:07.710 [2024-11-28 08:26:49.666775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.710 [2024-11-28 08:26:49.666808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.710 qpair failed and we were unable to recover it.
00:28:07.710 [2024-11-28 08:26:49.667100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.710 [2024-11-28 08:26:49.667137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.710 qpair failed and we were unable to recover it.
00:28:07.710 [2024-11-28 08:26:49.667335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.710 [2024-11-28 08:26:49.667369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.710 qpair failed and we were unable to recover it.
00:28:07.710 [2024-11-28 08:26:49.667656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.710 [2024-11-28 08:26:49.667690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.710 qpair failed and we were unable to recover it.
00:28:07.710 [2024-11-28 08:26:49.667885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.710 [2024-11-28 08:26:49.667898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.710 qpair failed and we were unable to recover it.
00:28:07.710 [2024-11-28 08:26:49.667997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.710 [2024-11-28 08:26:49.668010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.710 qpair failed and we were unable to recover it.
00:28:07.710 [2024-11-28 08:26:49.668231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.710 [2024-11-28 08:26:49.668244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.710 qpair failed and we were unable to recover it.
00:28:07.710 [2024-11-28 08:26:49.668458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.710 [2024-11-28 08:26:49.668471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.710 qpair failed and we were unable to recover it.
00:28:07.710 [2024-11-28 08:26:49.668714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.710 [2024-11-28 08:26:49.668728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.710 qpair failed and we were unable to recover it.
00:28:07.710 [2024-11-28 08:26:49.668904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.710 [2024-11-28 08:26:49.668938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.710 qpair failed and we were unable to recover it.
00:28:07.710 [2024-11-28 08:26:49.669137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.710 [2024-11-28 08:26:49.669177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.710 qpair failed and we were unable to recover it.
00:28:07.710 [2024-11-28 08:26:49.669437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.710 [2024-11-28 08:26:49.669471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.710 qpair failed and we were unable to recover it.
00:28:07.710 [2024-11-28 08:26:49.669742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.710 [2024-11-28 08:26:49.669754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.710 qpair failed and we were unable to recover it.
00:28:07.711 [2024-11-28 08:26:49.669979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.711 [2024-11-28 08:26:49.670016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.711 qpair failed and we were unable to recover it.
00:28:07.711 [2024-11-28 08:26:49.670223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.711 [2024-11-28 08:26:49.670258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.711 qpair failed and we were unable to recover it.
00:28:07.711 [2024-11-28 08:26:49.670547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.711 [2024-11-28 08:26:49.670581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.711 qpair failed and we were unable to recover it.
00:28:07.711 [2024-11-28 08:26:49.670796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.711 [2024-11-28 08:26:49.670831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.711 qpair failed and we were unable to recover it.
00:28:07.711 [2024-11-28 08:26:49.671000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.711 [2024-11-28 08:26:49.671013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.711 qpair failed and we were unable to recover it.
00:28:07.711 [2024-11-28 08:26:49.671105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.711 [2024-11-28 08:26:49.671117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.711 qpair failed and we were unable to recover it.
00:28:07.711 [2024-11-28 08:26:49.671334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.711 [2024-11-28 08:26:49.671346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.711 qpair failed and we were unable to recover it.
00:28:07.711 [2024-11-28 08:26:49.671490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.711 [2024-11-28 08:26:49.671503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.711 qpair failed and we were unable to recover it.
00:28:07.711 [2024-11-28 08:26:49.671665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.711 [2024-11-28 08:26:49.671699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.711 qpair failed and we were unable to recover it.
00:28:07.711 [2024-11-28 08:26:49.671910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.711 [2024-11-28 08:26:49.671944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.711 qpair failed and we were unable to recover it.
00:28:07.711 [2024-11-28 08:26:49.672176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.711 [2024-11-28 08:26:49.672212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.711 qpair failed and we were unable to recover it.
00:28:07.711 [2024-11-28 08:26:49.672502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.711 [2024-11-28 08:26:49.672537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.711 qpair failed and we were unable to recover it.
00:28:07.711 [2024-11-28 08:26:49.672819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.711 [2024-11-28 08:26:49.672853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.711 qpair failed and we were unable to recover it.
00:28:07.711 [2024-11-28 08:26:49.673112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.711 [2024-11-28 08:26:49.673148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.711 qpair failed and we were unable to recover it.
00:28:07.711 [2024-11-28 08:26:49.673348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.711 [2024-11-28 08:26:49.673383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.711 qpair failed and we were unable to recover it.
00:28:07.711 [2024-11-28 08:26:49.673654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.711 [2024-11-28 08:26:49.673689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.711 qpair failed and we were unable to recover it.
00:28:07.711 [2024-11-28 08:26:49.673869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.711 [2024-11-28 08:26:49.673882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.711 qpair failed and we were unable to recover it.
00:28:07.711 [2024-11-28 08:26:49.674050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.711 [2024-11-28 08:26:49.674064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.711 qpair failed and we were unable to recover it.
00:28:07.711 [2024-11-28 08:26:49.674167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.711 [2024-11-28 08:26:49.674181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.711 qpair failed and we were unable to recover it.
00:28:07.711 [2024-11-28 08:26:49.674419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.711 [2024-11-28 08:26:49.674447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.711 qpair failed and we were unable to recover it.
00:28:07.711 [2024-11-28 08:26:49.674649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.711 [2024-11-28 08:26:49.674684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.711 qpair failed and we were unable to recover it.
00:28:07.711 [2024-11-28 08:26:49.674874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.711 [2024-11-28 08:26:49.674908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.711 qpair failed and we were unable to recover it.
00:28:07.711 [2024-11-28 08:26:49.675197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.711 [2024-11-28 08:26:49.675234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.711 qpair failed and we were unable to recover it.
00:28:07.711 [2024-11-28 08:26:49.675546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.711 [2024-11-28 08:26:49.675581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.711 qpair failed and we were unable to recover it.
00:28:07.711 [2024-11-28 08:26:49.675793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.711 [2024-11-28 08:26:49.675828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.711 qpair failed and we were unable to recover it.
00:28:07.711 [2024-11-28 08:26:49.676041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.711 [2024-11-28 08:26:49.676077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.711 qpair failed and we were unable to recover it.
00:28:07.711 [2024-11-28 08:26:49.676270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.711 [2024-11-28 08:26:49.676304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.711 qpair failed and we were unable to recover it.
00:28:07.711 [2024-11-28 08:26:49.676499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.711 [2024-11-28 08:26:49.676512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.711 qpair failed and we were unable to recover it.
00:28:07.711 [2024-11-28 08:26:49.676754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.711 [2024-11-28 08:26:49.676788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.711 qpair failed and we were unable to recover it.
00:28:07.711 [2024-11-28 08:26:49.676992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.711 [2024-11-28 08:26:49.677028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.711 qpair failed and we were unable to recover it.
00:28:07.711 [2024-11-28 08:26:49.677315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.711 [2024-11-28 08:26:49.677349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.711 qpair failed and we were unable to recover it.
00:28:07.711 [2024-11-28 08:26:49.677653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.711 [2024-11-28 08:26:49.677687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.711 qpair failed and we were unable to recover it.
00:28:07.711 [2024-11-28 08:26:49.677966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.711 [2024-11-28 08:26:49.678002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.711 qpair failed and we were unable to recover it.
00:28:07.711 [2024-11-28 08:26:49.678207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.711 [2024-11-28 08:26:49.678241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.711 qpair failed and we were unable to recover it.
00:28:07.711 [2024-11-28 08:26:49.678554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.711 [2024-11-28 08:26:49.678589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.711 qpair failed and we were unable to recover it.
00:28:07.711 [2024-11-28 08:26:49.678880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.711 [2024-11-28 08:26:49.678914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.711 qpair failed and we were unable to recover it.
00:28:07.711 [2024-11-28 08:26:49.679159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.711 [2024-11-28 08:26:49.679194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.711 qpair failed and we were unable to recover it.
00:28:07.711 [2024-11-28 08:26:49.679510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.711 [2024-11-28 08:26:49.679550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.711 qpair failed and we were unable to recover it.
00:28:07.712 [2024-11-28 08:26:49.679758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.712 [2024-11-28 08:26:49.679791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.712 qpair failed and we were unable to recover it.
00:28:07.712 [2024-11-28 08:26:49.680005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.712 [2024-11-28 08:26:49.680040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.712 qpair failed and we were unable to recover it.
00:28:07.712 [2024-11-28 08:26:49.680268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.712 [2024-11-28 08:26:49.680303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.712 qpair failed and we were unable to recover it.
00:28:07.712 [2024-11-28 08:26:49.680593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.712 [2024-11-28 08:26:49.680627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.712 qpair failed and we were unable to recover it.
00:28:07.712 [2024-11-28 08:26:49.680843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.712 [2024-11-28 08:26:49.680856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.712 qpair failed and we were unable to recover it. 00:28:07.712 [2024-11-28 08:26:49.681110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.712 [2024-11-28 08:26:49.681124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.712 qpair failed and we were unable to recover it. 00:28:07.712 [2024-11-28 08:26:49.681363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.712 [2024-11-28 08:26:49.681375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.712 qpair failed and we were unable to recover it. 00:28:07.712 [2024-11-28 08:26:49.681535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.712 [2024-11-28 08:26:49.681549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.712 qpair failed and we were unable to recover it. 00:28:07.712 [2024-11-28 08:26:49.681707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.712 [2024-11-28 08:26:49.681721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.712 qpair failed and we were unable to recover it. 
00:28:07.712 [2024-11-28 08:26:49.681962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.712 [2024-11-28 08:26:49.681976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.712 qpair failed and we were unable to recover it. 00:28:07.712 [2024-11-28 08:26:49.682219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.712 [2024-11-28 08:26:49.682232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.712 qpair failed and we were unable to recover it. 00:28:07.712 [2024-11-28 08:26:49.682479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.712 [2024-11-28 08:26:49.682492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.712 qpair failed and we were unable to recover it. 00:28:07.712 [2024-11-28 08:26:49.682731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.712 [2024-11-28 08:26:49.682744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.712 qpair failed and we were unable to recover it. 00:28:07.712 [2024-11-28 08:26:49.682907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.712 [2024-11-28 08:26:49.682921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.712 qpair failed and we were unable to recover it. 
00:28:07.712 [2024-11-28 08:26:49.683155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.712 [2024-11-28 08:26:49.683191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.712 qpair failed and we were unable to recover it. 00:28:07.712 [2024-11-28 08:26:49.683506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.712 [2024-11-28 08:26:49.683540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.712 qpair failed and we were unable to recover it. 00:28:07.712 [2024-11-28 08:26:49.683738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.712 [2024-11-28 08:26:49.683751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.712 qpair failed and we were unable to recover it. 00:28:07.712 [2024-11-28 08:26:49.684019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.712 [2024-11-28 08:26:49.684032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.712 qpair failed and we were unable to recover it. 00:28:07.712 [2024-11-28 08:26:49.684244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.712 [2024-11-28 08:26:49.684257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.712 qpair failed and we were unable to recover it. 
00:28:07.712 [2024-11-28 08:26:49.684447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.712 [2024-11-28 08:26:49.684460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.712 qpair failed and we were unable to recover it. 00:28:07.712 [2024-11-28 08:26:49.684621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.712 [2024-11-28 08:26:49.684634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.712 qpair failed and we were unable to recover it. 00:28:07.712 [2024-11-28 08:26:49.684852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.712 [2024-11-28 08:26:49.684865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.712 qpair failed and we were unable to recover it. 00:28:07.712 [2024-11-28 08:26:49.685094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.712 [2024-11-28 08:26:49.685130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.712 qpair failed and we were unable to recover it. 00:28:07.712 [2024-11-28 08:26:49.685357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.712 [2024-11-28 08:26:49.685392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.712 qpair failed and we were unable to recover it. 
00:28:07.712 [2024-11-28 08:26:49.685649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.712 [2024-11-28 08:26:49.685662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.712 qpair failed and we were unable to recover it. 00:28:07.712 [2024-11-28 08:26:49.685849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.712 [2024-11-28 08:26:49.685862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.712 qpair failed and we were unable to recover it. 00:28:07.712 [2024-11-28 08:26:49.686080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.712 [2024-11-28 08:26:49.686093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.712 qpair failed and we were unable to recover it. 00:28:07.712 [2024-11-28 08:26:49.686199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.712 [2024-11-28 08:26:49.686212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.712 qpair failed and we were unable to recover it. 00:28:07.712 [2024-11-28 08:26:49.686482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.712 [2024-11-28 08:26:49.686494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.712 qpair failed and we were unable to recover it. 
00:28:07.712 [2024-11-28 08:26:49.686587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.712 [2024-11-28 08:26:49.686600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.712 qpair failed and we were unable to recover it. 00:28:07.712 [2024-11-28 08:26:49.686823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.712 [2024-11-28 08:26:49.686856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.712 qpair failed and we were unable to recover it. 00:28:07.712 [2024-11-28 08:26:49.687075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.712 [2024-11-28 08:26:49.687112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.712 qpair failed and we were unable to recover it. 00:28:07.712 [2024-11-28 08:26:49.687379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.712 [2024-11-28 08:26:49.687414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.712 qpair failed and we were unable to recover it. 00:28:07.712 [2024-11-28 08:26:49.687706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.712 [2024-11-28 08:26:49.687740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.712 qpair failed and we were unable to recover it. 
00:28:07.712 [2024-11-28 08:26:49.688025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.712 [2024-11-28 08:26:49.688061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.712 qpair failed and we were unable to recover it. 00:28:07.712 [2024-11-28 08:26:49.688349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.712 [2024-11-28 08:26:49.688383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.712 qpair failed and we were unable to recover it. 00:28:07.712 [2024-11-28 08:26:49.688618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.712 [2024-11-28 08:26:49.688653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.712 qpair failed and we were unable to recover it. 00:28:07.712 [2024-11-28 08:26:49.688928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.712 [2024-11-28 08:26:49.688971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.712 qpair failed and we were unable to recover it. 00:28:07.713 [2024-11-28 08:26:49.689262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.713 [2024-11-28 08:26:49.689297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.713 qpair failed and we were unable to recover it. 
00:28:07.713 [2024-11-28 08:26:49.689572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.713 [2024-11-28 08:26:49.689612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.713 qpair failed and we were unable to recover it. 00:28:07.713 [2024-11-28 08:26:49.689875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.713 [2024-11-28 08:26:49.689909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.713 qpair failed and we were unable to recover it. 00:28:07.713 [2024-11-28 08:26:49.690191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.713 [2024-11-28 08:26:49.690227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.713 qpair failed and we were unable to recover it. 00:28:07.713 [2024-11-28 08:26:49.690512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.713 [2024-11-28 08:26:49.690547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.713 qpair failed and we were unable to recover it. 00:28:07.713 [2024-11-28 08:26:49.690748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.713 [2024-11-28 08:26:49.690782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.713 qpair failed and we were unable to recover it. 
00:28:07.713 [2024-11-28 08:26:49.690985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.713 [2024-11-28 08:26:49.691020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.713 qpair failed and we were unable to recover it. 00:28:07.713 [2024-11-28 08:26:49.691230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.713 [2024-11-28 08:26:49.691264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.713 qpair failed and we were unable to recover it. 00:28:07.713 [2024-11-28 08:26:49.691528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.713 [2024-11-28 08:26:49.691562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.713 qpair failed and we were unable to recover it. 00:28:07.713 [2024-11-28 08:26:49.691756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.713 [2024-11-28 08:26:49.691790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.713 qpair failed and we were unable to recover it. 00:28:07.713 [2024-11-28 08:26:49.692074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.713 [2024-11-28 08:26:49.692110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.713 qpair failed and we were unable to recover it. 
00:28:07.713 [2024-11-28 08:26:49.692390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.713 [2024-11-28 08:26:49.692424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.713 qpair failed and we were unable to recover it. 00:28:07.713 [2024-11-28 08:26:49.692634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.713 [2024-11-28 08:26:49.692668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.713 qpair failed and we were unable to recover it. 00:28:07.713 [2024-11-28 08:26:49.692952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.713 [2024-11-28 08:26:49.692987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.713 qpair failed and we were unable to recover it. 00:28:07.713 [2024-11-28 08:26:49.693223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.713 [2024-11-28 08:26:49.693258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.713 qpair failed and we were unable to recover it. 00:28:07.713 [2024-11-28 08:26:49.693552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.713 [2024-11-28 08:26:49.693588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.713 qpair failed and we were unable to recover it. 
00:28:07.713 [2024-11-28 08:26:49.693838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.713 [2024-11-28 08:26:49.693851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.713 qpair failed and we were unable to recover it. 00:28:07.713 [2024-11-28 08:26:49.694094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.713 [2024-11-28 08:26:49.694108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.713 qpair failed and we were unable to recover it. 00:28:07.713 [2024-11-28 08:26:49.694352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.713 [2024-11-28 08:26:49.694386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.713 qpair failed and we were unable to recover it. 00:28:07.713 [2024-11-28 08:26:49.694681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.713 [2024-11-28 08:26:49.694715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.713 qpair failed and we were unable to recover it. 00:28:07.713 [2024-11-28 08:26:49.695011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.713 [2024-11-28 08:26:49.695047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.713 qpair failed and we were unable to recover it. 
00:28:07.713 [2024-11-28 08:26:49.695341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.713 [2024-11-28 08:26:49.695375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.713 qpair failed and we were unable to recover it. 00:28:07.713 [2024-11-28 08:26:49.695650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.713 [2024-11-28 08:26:49.695684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.713 qpair failed and we were unable to recover it. 00:28:07.713 [2024-11-28 08:26:49.695979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.713 [2024-11-28 08:26:49.695992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.713 qpair failed and we were unable to recover it. 00:28:07.713 [2024-11-28 08:26:49.696154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.713 [2024-11-28 08:26:49.696167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.713 qpair failed and we were unable to recover it. 00:28:07.713 [2024-11-28 08:26:49.696380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.713 [2024-11-28 08:26:49.696394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.713 qpair failed and we were unable to recover it. 
00:28:07.713 [2024-11-28 08:26:49.696544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.713 [2024-11-28 08:26:49.696557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.713 qpair failed and we were unable to recover it. 00:28:07.713 [2024-11-28 08:26:49.696798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.713 [2024-11-28 08:26:49.696833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.713 qpair failed and we were unable to recover it. 00:28:07.713 [2024-11-28 08:26:49.697002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.713 [2024-11-28 08:26:49.697039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.713 qpair failed and we were unable to recover it. 00:28:07.713 [2024-11-28 08:26:49.697328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.713 [2024-11-28 08:26:49.697364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.713 qpair failed and we were unable to recover it. 00:28:07.713 [2024-11-28 08:26:49.697576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.713 [2024-11-28 08:26:49.697611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.713 qpair failed and we were unable to recover it. 
00:28:07.713 [2024-11-28 08:26:49.697894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.713 [2024-11-28 08:26:49.697929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.713 qpair failed and we were unable to recover it. 00:28:07.713 [2024-11-28 08:26:49.698152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.713 [2024-11-28 08:26:49.698188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.713 qpair failed and we were unable to recover it. 00:28:07.713 [2024-11-28 08:26:49.698446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.713 [2024-11-28 08:26:49.698481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.713 qpair failed and we were unable to recover it. 00:28:07.713 [2024-11-28 08:26:49.698791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.713 [2024-11-28 08:26:49.698805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.713 qpair failed and we were unable to recover it. 00:28:07.713 [2024-11-28 08:26:49.698961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.713 [2024-11-28 08:26:49.698975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.713 qpair failed and we were unable to recover it. 
00:28:07.713 [2024-11-28 08:26:49.699149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.713 [2024-11-28 08:26:49.699183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.713 qpair failed and we were unable to recover it. 00:28:07.713 [2024-11-28 08:26:49.699468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.713 [2024-11-28 08:26:49.699503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.713 qpair failed and we were unable to recover it. 00:28:07.714 [2024-11-28 08:26:49.699791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.714 [2024-11-28 08:26:49.699825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.714 qpair failed and we were unable to recover it. 00:28:07.714 [2024-11-28 08:26:49.700109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.714 [2024-11-28 08:26:49.700145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.714 qpair failed and we were unable to recover it. 00:28:07.714 [2024-11-28 08:26:49.700346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.714 [2024-11-28 08:26:49.700379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.714 qpair failed and we were unable to recover it. 
00:28:07.714 [2024-11-28 08:26:49.700536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.714 [2024-11-28 08:26:49.700552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:07.714 qpair failed and we were unable to recover it.
[... the three-line error sequence above repeats with tqpair=0x7f6c34000b90 through 08:26:49.708204, then continues identically with tqpair=0x991be0 ...]
00:28:07.717 [2024-11-28 08:26:49.731159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.717 [2024-11-28 08:26:49.731178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420
00:28:07.717 qpair failed and we were unable to recover it.
00:28:07.717 [2024-11-28 08:26:49.731440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.717 [2024-11-28 08:26:49.731456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.717 qpair failed and we were unable to recover it. 00:28:07.717 [2024-11-28 08:26:49.731708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.717 [2024-11-28 08:26:49.731725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.717 qpair failed and we were unable to recover it. 00:28:07.717 [2024-11-28 08:26:49.731957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.717 [2024-11-28 08:26:49.731974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.717 qpair failed and we were unable to recover it. 00:28:07.717 [2024-11-28 08:26:49.732224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.717 [2024-11-28 08:26:49.732242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.717 qpair failed and we were unable to recover it. 00:28:07.717 [2024-11-28 08:26:49.732358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.717 [2024-11-28 08:26:49.732375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.717 qpair failed and we were unable to recover it. 
00:28:07.717 [2024-11-28 08:26:49.732487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.717 [2024-11-28 08:26:49.732521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.717 qpair failed and we were unable to recover it. 00:28:07.717 [2024-11-28 08:26:49.732713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.717 [2024-11-28 08:26:49.732747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.717 qpair failed and we were unable to recover it. 00:28:07.717 [2024-11-28 08:26:49.732971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.717 [2024-11-28 08:26:49.733012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.717 qpair failed and we were unable to recover it. 00:28:07.717 [2024-11-28 08:26:49.733239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.717 [2024-11-28 08:26:49.733274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.717 qpair failed and we were unable to recover it. 00:28:07.717 [2024-11-28 08:26:49.733580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.717 [2024-11-28 08:26:49.733614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.717 qpair failed and we were unable to recover it. 
00:28:07.717 [2024-11-28 08:26:49.733900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.717 [2024-11-28 08:26:49.733935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.717 qpair failed and we were unable to recover it. 00:28:07.717 [2024-11-28 08:26:49.734173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.717 [2024-11-28 08:26:49.734208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.717 qpair failed and we were unable to recover it. 00:28:07.717 [2024-11-28 08:26:49.734489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.717 [2024-11-28 08:26:49.734522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.717 qpair failed and we were unable to recover it. 00:28:07.717 [2024-11-28 08:26:49.734782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.717 [2024-11-28 08:26:49.734816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.717 qpair failed and we were unable to recover it. 00:28:07.717 [2024-11-28 08:26:49.735100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.717 [2024-11-28 08:26:49.735136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.717 qpair failed and we were unable to recover it. 
00:28:07.717 [2024-11-28 08:26:49.735340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.717 [2024-11-28 08:26:49.735374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.717 qpair failed and we were unable to recover it. 00:28:07.717 [2024-11-28 08:26:49.735643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.717 [2024-11-28 08:26:49.735677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.717 qpair failed and we were unable to recover it. 00:28:07.717 [2024-11-28 08:26:49.735957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.717 [2024-11-28 08:26:49.735992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.717 qpair failed and we were unable to recover it. 00:28:07.717 [2024-11-28 08:26:49.736202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.717 [2024-11-28 08:26:49.736235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.717 qpair failed and we were unable to recover it. 00:28:07.717 [2024-11-28 08:26:49.736476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.717 [2024-11-28 08:26:49.736510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.717 qpair failed and we were unable to recover it. 
00:28:07.717 [2024-11-28 08:26:49.736702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.717 [2024-11-28 08:26:49.736735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.717 qpair failed and we were unable to recover it. 00:28:07.717 [2024-11-28 08:26:49.736880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.717 [2024-11-28 08:26:49.736914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.717 qpair failed and we were unable to recover it. 00:28:07.717 [2024-11-28 08:26:49.737324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.717 [2024-11-28 08:26:49.737406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.717 qpair failed and we were unable to recover it. 00:28:07.717 [2024-11-28 08:26:49.737719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.717 [2024-11-28 08:26:49.737757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.717 qpair failed and we were unable to recover it. 00:28:07.717 [2024-11-28 08:26:49.738032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.717 [2024-11-28 08:26:49.738070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.717 qpair failed and we were unable to recover it. 
00:28:07.717 [2024-11-28 08:26:49.738278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.717 [2024-11-28 08:26:49.738311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.717 qpair failed and we were unable to recover it. 00:28:07.717 [2024-11-28 08:26:49.738435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.717 [2024-11-28 08:26:49.738469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.717 qpair failed and we were unable to recover it. 00:28:07.717 [2024-11-28 08:26:49.738753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.717 [2024-11-28 08:26:49.738788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.717 qpair failed and we were unable to recover it. 00:28:07.717 [2024-11-28 08:26:49.739096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.717 [2024-11-28 08:26:49.739130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.717 qpair failed and we were unable to recover it. 00:28:07.717 [2024-11-28 08:26:49.739331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.717 [2024-11-28 08:26:49.739366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.717 qpair failed and we were unable to recover it. 
00:28:07.717 [2024-11-28 08:26:49.739576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.717 [2024-11-28 08:26:49.739610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.717 qpair failed and we were unable to recover it. 00:28:07.718 [2024-11-28 08:26:49.739891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.718 [2024-11-28 08:26:49.739925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.718 qpair failed and we were unable to recover it. 00:28:07.718 [2024-11-28 08:26:49.740144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.718 [2024-11-28 08:26:49.740178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.718 qpair failed and we were unable to recover it. 00:28:07.718 [2024-11-28 08:26:49.740461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.718 [2024-11-28 08:26:49.740494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.718 qpair failed and we were unable to recover it. 00:28:07.718 [2024-11-28 08:26:49.740774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.718 [2024-11-28 08:26:49.740818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.718 qpair failed and we were unable to recover it. 
00:28:07.718 [2024-11-28 08:26:49.741098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.718 [2024-11-28 08:26:49.741134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.718 qpair failed and we were unable to recover it. 00:28:07.718 [2024-11-28 08:26:49.741410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.718 [2024-11-28 08:26:49.741444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.718 qpair failed and we were unable to recover it. 00:28:07.718 [2024-11-28 08:26:49.741741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.718 [2024-11-28 08:26:49.741775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.718 qpair failed and we were unable to recover it. 00:28:07.718 [2024-11-28 08:26:49.742030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.718 [2024-11-28 08:26:49.742065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.718 qpair failed and we were unable to recover it. 00:28:07.718 [2024-11-28 08:26:49.742376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.718 [2024-11-28 08:26:49.742411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.718 qpair failed and we were unable to recover it. 
00:28:07.718 [2024-11-28 08:26:49.742667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.718 [2024-11-28 08:26:49.742701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.718 qpair failed and we were unable to recover it. 00:28:07.718 [2024-11-28 08:26:49.742986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.718 [2024-11-28 08:26:49.743022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.718 qpair failed and we were unable to recover it. 00:28:07.718 [2024-11-28 08:26:49.743323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.718 [2024-11-28 08:26:49.743357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.718 qpair failed and we were unable to recover it. 00:28:07.718 [2024-11-28 08:26:49.743627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.718 [2024-11-28 08:26:49.743661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.718 qpair failed and we were unable to recover it. 00:28:07.718 [2024-11-28 08:26:49.743929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.718 [2024-11-28 08:26:49.743951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.718 qpair failed and we were unable to recover it. 
00:28:07.718 [2024-11-28 08:26:49.744208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.718 [2024-11-28 08:26:49.744225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.718 qpair failed and we were unable to recover it. 00:28:07.718 [2024-11-28 08:26:49.744363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.718 [2024-11-28 08:26:49.744397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.718 qpair failed and we were unable to recover it. 00:28:07.718 [2024-11-28 08:26:49.744663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.718 [2024-11-28 08:26:49.744698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.718 qpair failed and we were unable to recover it. 00:28:07.718 [2024-11-28 08:26:49.744932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.718 [2024-11-28 08:26:49.744973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.718 qpair failed and we were unable to recover it. 00:28:07.718 [2024-11-28 08:26:49.745262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.718 [2024-11-28 08:26:49.745297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.718 qpair failed and we were unable to recover it. 
00:28:07.718 [2024-11-28 08:26:49.745534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.718 [2024-11-28 08:26:49.745569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.718 qpair failed and we were unable to recover it. 00:28:07.718 [2024-11-28 08:26:49.745766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.718 [2024-11-28 08:26:49.745801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.718 qpair failed and we were unable to recover it. 00:28:07.718 [2024-11-28 08:26:49.746104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.718 [2024-11-28 08:26:49.746123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.718 qpair failed and we were unable to recover it. 00:28:07.718 [2024-11-28 08:26:49.746299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.718 [2024-11-28 08:26:49.746317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.718 qpair failed and we were unable to recover it. 00:28:07.718 [2024-11-28 08:26:49.746546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.718 [2024-11-28 08:26:49.746579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.718 qpair failed and we were unable to recover it. 
00:28:07.718 [2024-11-28 08:26:49.746811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.718 [2024-11-28 08:26:49.746846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.718 qpair failed and we were unable to recover it. 00:28:07.718 [2024-11-28 08:26:49.747041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.718 [2024-11-28 08:26:49.747058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.718 qpair failed and we were unable to recover it. 00:28:07.718 [2024-11-28 08:26:49.747258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.718 [2024-11-28 08:26:49.747293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.718 qpair failed and we were unable to recover it. 00:28:07.718 [2024-11-28 08:26:49.747496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.718 [2024-11-28 08:26:49.747530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.718 qpair failed and we were unable to recover it. 00:28:07.718 [2024-11-28 08:26:49.747813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.718 [2024-11-28 08:26:49.747847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.718 qpair failed and we were unable to recover it. 
00:28:07.718 [2024-11-28 08:26:49.748060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.718 [2024-11-28 08:26:49.748095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.718 qpair failed and we were unable to recover it. 00:28:07.718 [2024-11-28 08:26:49.748388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.718 [2024-11-28 08:26:49.748471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.718 qpair failed and we were unable to recover it. 00:28:07.718 [2024-11-28 08:26:49.748733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.718 [2024-11-28 08:26:49.748770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.718 qpair failed and we were unable to recover it. 00:28:07.718 [2024-11-28 08:26:49.749000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.718 [2024-11-28 08:26:49.749019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.718 qpair failed and we were unable to recover it. 00:28:07.718 [2024-11-28 08:26:49.749202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.718 [2024-11-28 08:26:49.749220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.718 qpair failed and we were unable to recover it. 
00:28:07.718 [2024-11-28 08:26:49.749505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.718 [2024-11-28 08:26:49.749540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.718 qpair failed and we were unable to recover it. 00:28:07.718 [2024-11-28 08:26:49.749765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.718 [2024-11-28 08:26:49.749783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.718 qpair failed and we were unable to recover it. 00:28:07.718 [2024-11-28 08:26:49.749940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.718 [2024-11-28 08:26:49.749965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.718 qpair failed and we were unable to recover it. 00:28:07.718 [2024-11-28 08:26:49.750075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.718 [2024-11-28 08:26:49.750092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.718 qpair failed and we were unable to recover it. 00:28:07.719 [2024-11-28 08:26:49.750283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.719 [2024-11-28 08:26:49.750317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.719 qpair failed and we were unable to recover it. 
00:28:07.719 [2024-11-28 08:26:49.750529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.719 [2024-11-28 08:26:49.750564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.719 qpair failed and we were unable to recover it. 00:28:07.719 [2024-11-28 08:26:49.750838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.719 [2024-11-28 08:26:49.750873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.719 qpair failed and we were unable to recover it. 00:28:07.719 [2024-11-28 08:26:49.751138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.719 [2024-11-28 08:26:49.751175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.719 qpair failed and we were unable to recover it. 00:28:07.719 [2024-11-28 08:26:49.751397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.719 [2024-11-28 08:26:49.751432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.719 qpair failed and we were unable to recover it. 00:28:07.719 [2024-11-28 08:26:49.751718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.719 [2024-11-28 08:26:49.751753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.719 qpair failed and we were unable to recover it. 
00:28:07.719 [2024-11-28 08:26:49.752072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.719 [2024-11-28 08:26:49.752091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420
00:28:07.719 qpair failed and we were unable to recover it.
[... the same connect() failure (errno = 111, ECONNREFUSED) and unrecoverable qpair error for tqpair=0x991be0 (addr=10.0.0.2, port=4420) repeats continuously from 08:26:49.752072 through 08:26:49.783008 ...]
00:28:07.722 [2024-11-28 08:26:49.783288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.722 [2024-11-28 08:26:49.783305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.722 qpair failed and we were unable to recover it. 00:28:07.722 [2024-11-28 08:26:49.783526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.722 [2024-11-28 08:26:49.783544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.722 qpair failed and we were unable to recover it. 00:28:07.722 [2024-11-28 08:26:49.783792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.722 [2024-11-28 08:26:49.783828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.722 qpair failed and we were unable to recover it. 00:28:07.722 [2024-11-28 08:26:49.784057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.722 [2024-11-28 08:26:49.784092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.722 qpair failed and we were unable to recover it. 00:28:07.722 [2024-11-28 08:26:49.784420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.722 [2024-11-28 08:26:49.784454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.722 qpair failed and we were unable to recover it. 
00:28:07.722 [2024-11-28 08:26:49.784745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.722 [2024-11-28 08:26:49.784778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.722 qpair failed and we were unable to recover it. 00:28:07.722 [2024-11-28 08:26:49.785062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.722 [2024-11-28 08:26:49.785098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.722 qpair failed and we were unable to recover it. 00:28:07.722 [2024-11-28 08:26:49.785305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.722 [2024-11-28 08:26:49.785323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.722 qpair failed and we were unable to recover it. 00:28:07.722 [2024-11-28 08:26:49.785552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.722 [2024-11-28 08:26:49.785586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.722 qpair failed and we were unable to recover it. 00:28:07.722 [2024-11-28 08:26:49.785845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.722 [2024-11-28 08:26:49.785879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.722 qpair failed and we were unable to recover it. 
00:28:07.722 [2024-11-28 08:26:49.786163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.722 [2024-11-28 08:26:49.786181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.722 qpair failed and we were unable to recover it. 00:28:07.722 [2024-11-28 08:26:49.786335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.722 [2024-11-28 08:26:49.786377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.722 qpair failed and we were unable to recover it. 00:28:07.722 [2024-11-28 08:26:49.786579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.722 [2024-11-28 08:26:49.786613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.722 qpair failed and we were unable to recover it. 00:28:07.722 [2024-11-28 08:26:49.786941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.722 [2024-11-28 08:26:49.787005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.722 qpair failed and we were unable to recover it. 00:28:07.722 [2024-11-28 08:26:49.787215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.722 [2024-11-28 08:26:49.787250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.722 qpair failed and we were unable to recover it. 
00:28:07.722 [2024-11-28 08:26:49.787494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.722 [2024-11-28 08:26:49.787528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.722 qpair failed and we were unable to recover it. 00:28:07.722 [2024-11-28 08:26:49.787667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.722 [2024-11-28 08:26:49.787701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.722 qpair failed and we were unable to recover it. 00:28:07.722 [2024-11-28 08:26:49.787917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.722 [2024-11-28 08:26:49.787963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.722 qpair failed and we were unable to recover it. 00:28:07.722 [2024-11-28 08:26:49.788284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.722 [2024-11-28 08:26:49.788317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.722 qpair failed and we were unable to recover it. 00:28:07.722 [2024-11-28 08:26:49.788581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.722 [2024-11-28 08:26:49.788616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.722 qpair failed and we were unable to recover it. 
00:28:07.722 [2024-11-28 08:26:49.788904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.722 [2024-11-28 08:26:49.788939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.722 qpair failed and we were unable to recover it. 00:28:07.722 [2024-11-28 08:26:49.789261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.722 [2024-11-28 08:26:49.789295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.722 qpair failed and we were unable to recover it. 00:28:07.722 [2024-11-28 08:26:49.789538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.722 [2024-11-28 08:26:49.789572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.722 qpair failed and we were unable to recover it. 00:28:07.722 [2024-11-28 08:26:49.789837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.722 [2024-11-28 08:26:49.789871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.722 qpair failed and we were unable to recover it. 00:28:07.722 [2024-11-28 08:26:49.790107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.722 [2024-11-28 08:26:49.790125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.722 qpair failed and we were unable to recover it. 
00:28:07.722 [2024-11-28 08:26:49.790378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.722 [2024-11-28 08:26:49.790412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.722 qpair failed and we were unable to recover it. 00:28:07.722 [2024-11-28 08:26:49.790620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.722 [2024-11-28 08:26:49.790655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.722 qpair failed and we were unable to recover it. 00:28:07.722 [2024-11-28 08:26:49.790851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.722 [2024-11-28 08:26:49.790886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.722 qpair failed and we were unable to recover it. 00:28:07.722 [2024-11-28 08:26:49.791197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.722 [2024-11-28 08:26:49.791216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.722 qpair failed and we were unable to recover it. 00:28:07.723 [2024-11-28 08:26:49.791438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.723 [2024-11-28 08:26:49.791455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.723 qpair failed and we were unable to recover it. 
00:28:07.723 [2024-11-28 08:26:49.791698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.723 [2024-11-28 08:26:49.791716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.723 qpair failed and we were unable to recover it. 00:28:07.723 [2024-11-28 08:26:49.791971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.723 [2024-11-28 08:26:49.792011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.723 qpair failed and we were unable to recover it. 00:28:07.723 [2024-11-28 08:26:49.792328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.723 [2024-11-28 08:26:49.792362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.723 qpair failed and we were unable to recover it. 00:28:07.723 [2024-11-28 08:26:49.792643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.723 [2024-11-28 08:26:49.792677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.723 qpair failed and we were unable to recover it. 00:28:07.723 [2024-11-28 08:26:49.792968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.723 [2024-11-28 08:26:49.793009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.723 qpair failed and we were unable to recover it. 
00:28:07.723 [2024-11-28 08:26:49.793278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.723 [2024-11-28 08:26:49.793313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.723 qpair failed and we were unable to recover it. 00:28:07.723 [2024-11-28 08:26:49.793607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.723 [2024-11-28 08:26:49.793641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.723 qpair failed and we were unable to recover it. 00:28:07.723 [2024-11-28 08:26:49.793913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.723 [2024-11-28 08:26:49.793930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.723 qpair failed and we were unable to recover it. 00:28:07.723 [2024-11-28 08:26:49.794130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.723 [2024-11-28 08:26:49.794150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.723 qpair failed and we were unable to recover it. 00:28:07.723 [2024-11-28 08:26:49.794400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.723 [2024-11-28 08:26:49.794436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.723 qpair failed and we were unable to recover it. 
00:28:07.723 [2024-11-28 08:26:49.794723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.723 [2024-11-28 08:26:49.794758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.723 qpair failed and we were unable to recover it. 00:28:07.723 [2024-11-28 08:26:49.794976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.723 [2024-11-28 08:26:49.794994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.723 qpair failed and we were unable to recover it. 00:28:07.723 [2024-11-28 08:26:49.795245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.723 [2024-11-28 08:26:49.795283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.723 qpair failed and we were unable to recover it. 00:28:07.723 [2024-11-28 08:26:49.795497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.723 [2024-11-28 08:26:49.795532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.723 qpair failed and we were unable to recover it. 00:28:07.723 [2024-11-28 08:26:49.795741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.723 [2024-11-28 08:26:49.795776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.723 qpair failed and we were unable to recover it. 
00:28:07.723 [2024-11-28 08:26:49.796062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.723 [2024-11-28 08:26:49.796098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.723 qpair failed and we were unable to recover it. 00:28:07.723 [2024-11-28 08:26:49.796382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.723 [2024-11-28 08:26:49.796401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.723 qpair failed and we were unable to recover it. 00:28:07.723 [2024-11-28 08:26:49.796574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.723 [2024-11-28 08:26:49.796591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.723 qpair failed and we were unable to recover it. 00:28:07.723 [2024-11-28 08:26:49.796816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.723 [2024-11-28 08:26:49.796834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.723 qpair failed and we were unable to recover it. 00:28:07.723 [2024-11-28 08:26:49.796998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.723 [2024-11-28 08:26:49.797033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.723 qpair failed and we were unable to recover it. 
00:28:07.723 [2024-11-28 08:26:49.797262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.723 [2024-11-28 08:26:49.797296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.723 qpair failed and we were unable to recover it. 00:28:07.723 [2024-11-28 08:26:49.797487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.723 [2024-11-28 08:26:49.797522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.723 qpair failed and we were unable to recover it. 00:28:07.723 [2024-11-28 08:26:49.797808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.723 [2024-11-28 08:26:49.797843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.723 qpair failed and we were unable to recover it. 00:28:07.723 [2024-11-28 08:26:49.798141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.723 [2024-11-28 08:26:49.798160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.723 qpair failed and we were unable to recover it. 00:28:07.723 [2024-11-28 08:26:49.798272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.723 [2024-11-28 08:26:49.798289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.723 qpair failed and we were unable to recover it. 
00:28:07.723 [2024-11-28 08:26:49.798477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.723 [2024-11-28 08:26:49.798511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.723 qpair failed and we were unable to recover it. 00:28:07.723 [2024-11-28 08:26:49.798795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.723 [2024-11-28 08:26:49.798829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.723 qpair failed and we were unable to recover it. 00:28:07.723 [2024-11-28 08:26:49.798986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.723 [2024-11-28 08:26:49.799023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.723 qpair failed and we were unable to recover it. 00:28:07.723 [2024-11-28 08:26:49.799225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.723 [2024-11-28 08:26:49.799242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.723 qpair failed and we were unable to recover it. 00:28:07.723 [2024-11-28 08:26:49.799495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.723 [2024-11-28 08:26:49.799530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.723 qpair failed and we were unable to recover it. 
00:28:07.723 [2024-11-28 08:26:49.799789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.723 [2024-11-28 08:26:49.799824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.723 qpair failed and we were unable to recover it. 00:28:07.723 [2024-11-28 08:26:49.800125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.723 [2024-11-28 08:26:49.800143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.723 qpair failed and we were unable to recover it. 00:28:07.723 [2024-11-28 08:26:49.800371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.723 [2024-11-28 08:26:49.800389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.724 qpair failed and we were unable to recover it. 00:28:07.724 [2024-11-28 08:26:49.800615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.724 [2024-11-28 08:26:49.800632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.724 qpair failed and we were unable to recover it. 00:28:07.724 [2024-11-28 08:26:49.800854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.724 [2024-11-28 08:26:49.800872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.724 qpair failed and we were unable to recover it. 
00:28:07.724 [2024-11-28 08:26:49.801070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.724 [2024-11-28 08:26:49.801088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.724 qpair failed and we were unable to recover it. 00:28:07.724 [2024-11-28 08:26:49.801311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.724 [2024-11-28 08:26:49.801328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.724 qpair failed and we were unable to recover it. 00:28:07.724 [2024-11-28 08:26:49.801482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.724 [2024-11-28 08:26:49.801499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.724 qpair failed and we were unable to recover it. 00:28:07.724 [2024-11-28 08:26:49.801692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.724 [2024-11-28 08:26:49.801710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.724 qpair failed and we were unable to recover it. 00:28:07.724 [2024-11-28 08:26:49.801891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.724 [2024-11-28 08:26:49.801908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.724 qpair failed and we were unable to recover it. 
00:28:07.724 [2024-11-28 08:26:49.802084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.724 [2024-11-28 08:26:49.802119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.724 qpair failed and we were unable to recover it. 00:28:07.724 [2024-11-28 08:26:49.802439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.724 [2024-11-28 08:26:49.802474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.724 qpair failed and we were unable to recover it. 00:28:07.724 [2024-11-28 08:26:49.802735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.724 [2024-11-28 08:26:49.802768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.724 qpair failed and we were unable to recover it. 00:28:07.724 [2024-11-28 08:26:49.803014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.724 [2024-11-28 08:26:49.803033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.724 qpair failed and we were unable to recover it. 00:28:07.724 [2024-11-28 08:26:49.803274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.724 [2024-11-28 08:26:49.803291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.724 qpair failed and we were unable to recover it. 
00:28:07.724 [2024-11-28 08:26:49.803465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.724 [2024-11-28 08:26:49.803482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.724 qpair failed and we were unable to recover it. 00:28:07.724 [2024-11-28 08:26:49.803711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.724 [2024-11-28 08:26:49.803744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.724 qpair failed and we were unable to recover it. 00:28:07.724 [2024-11-28 08:26:49.804027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.724 [2024-11-28 08:26:49.804045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.724 qpair failed and we were unable to recover it. 00:28:07.724 [2024-11-28 08:26:49.804271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.724 [2024-11-28 08:26:49.804289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.724 qpair failed and we were unable to recover it. 00:28:07.724 [2024-11-28 08:26:49.804471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.724 [2024-11-28 08:26:49.804505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.724 qpair failed and we were unable to recover it. 
00:28:07.726 [2024-11-28 08:26:49.820971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.726 [2024-11-28 08:26:49.821006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420
00:28:07.726 qpair failed and we were unable to recover it.
00:28:07.726 [2024-11-28 08:26:49.821228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.726 [2024-11-28 08:26:49.821264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420
00:28:07.726 qpair failed and we were unable to recover it.
00:28:07.726 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 1519586 Killed "${NVMF_APP[@]}" "$@"
00:28:07.726 [2024-11-28 08:26:49.821528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.726 [2024-11-28 08:26:49.821563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420
00:28:07.726 qpair failed and we were unable to recover it.
00:28:07.726 [2024-11-28 08:26:49.821782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.726 [2024-11-28 08:26:49.821817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420
00:28:07.726 qpair failed and we were unable to recover it.
00:28:07.726 [2024-11-28 08:26:49.822059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.726 [2024-11-28 08:26:49.822094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420
00:28:07.726 qpair failed and we were unable to recover it.
00:28:07.726 [2024-11-28 08:26:49.822309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.726 [2024-11-28 08:26:49.822328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420
00:28:07.726 08:26:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2
00:28:07.726 qpair failed and we were unable to recover it.
00:28:07.726 [2024-11-28 08:26:49.822485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.726 [2024-11-28 08:26:49.822503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420
00:28:07.726 qpair failed and we were unable to recover it.
00:28:07.726 08:26:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0
00:28:07.726 [2024-11-28 08:26:49.822754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.726 [2024-11-28 08:26:49.822773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420
00:28:07.726 qpair failed and we were unable to recover it.
00:28:07.726 [2024-11-28 08:26:49.823001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.726 [2024-11-28 08:26:49.823020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420
00:28:07.726 qpair failed and we were unable to recover it.
00:28:07.726 08:26:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:28:07.726 [2024-11-28 08:26:49.823274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.726 [2024-11-28 08:26:49.823294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420
00:28:07.726 qpair failed and we were unable to recover it.
00:28:07.726 08:26:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable
00:28:07.726 [2024-11-28 08:26:49.823426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.726 [2024-11-28 08:26:49.823445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420
00:28:07.726 qpair failed and we were unable to recover it.
00:28:07.726 08:26:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:28:07.726 [2024-11-28 08:26:49.823710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.726 [2024-11-28 08:26:49.823729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420
00:28:07.726 qpair failed and we were unable to recover it.
00:28:07.726 [2024-11-28 08:26:49.823911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.726 [2024-11-28 08:26:49.823928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420
00:28:07.726 qpair failed and we were unable to recover it.
00:28:07.727 [2024-11-28 08:26:49.830549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.727 [2024-11-28 08:26:49.830566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.727 qpair failed and we were unable to recover it. 00:28:07.727 [2024-11-28 08:26:49.830736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.727 [2024-11-28 08:26:49.830753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.727 qpair failed and we were unable to recover it. 00:28:07.727 08:26:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=1520311 00:28:07.727 [2024-11-28 08:26:49.830912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.727 [2024-11-28 08:26:49.830933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.727 qpair failed and we were unable to recover it. 00:28:07.727 [2024-11-28 08:26:49.831114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.727 08:26:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 1520311 00:28:07.727 [2024-11-28 08:26:49.831133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.727 qpair failed and we were unable to recover it. 
00:28:07.727 08:26:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:28:07.727 [2024-11-28 08:26:49.831286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.727 [2024-11-28 08:26:49.831304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.727 qpair failed and we were unable to recover it. 00:28:07.727 [2024-11-28 08:26:49.831412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.727 [2024-11-28 08:26:49.831432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.727 qpair failed and we were unable to recover it. 00:28:07.727 08:26:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # '[' -z 1520311 ']' 00:28:07.727 [2024-11-28 08:26:49.831540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.727 [2024-11-28 08:26:49.831561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.727 qpair failed and we were unable to recover it. 00:28:07.727 [2024-11-28 08:26:49.831713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.727 [2024-11-28 08:26:49.831732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.727 qpair failed and we were unable to recover it. 
00:28:07.727 08:26:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:07.727 [2024-11-28 08:26:49.831890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.727 [2024-11-28 08:26:49.831909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.727 qpair failed and we were unable to recover it. 00:28:07.727 08:26:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:07.727 [2024-11-28 08:26:49.832137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.727 [2024-11-28 08:26:49.832156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.727 qpair failed and we were unable to recover it. 00:28:07.727 [2024-11-28 08:26:49.832355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.727 [2024-11-28 08:26:49.832373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.727 qpair failed and we were unable to recover it. 00:28:07.727 08:26:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:07.727 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:07.727 [2024-11-28 08:26:49.832562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.727 [2024-11-28 08:26:49.832580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.727 qpair failed and we were unable to recover it. 
00:28:07.727 08:26:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:07.727 [2024-11-28 08:26:49.832775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.727 [2024-11-28 08:26:49.832794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.727 qpair failed and we were unable to recover it. 00:28:07.727 08:26:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:07.727 [2024-11-28 08:26:49.833047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.727 [2024-11-28 08:26:49.833066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.727 qpair failed and we were unable to recover it. 00:28:07.727 [2024-11-28 08:26:49.833190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.727 [2024-11-28 08:26:49.833208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.727 qpair failed and we were unable to recover it. 00:28:07.727 [2024-11-28 08:26:49.833386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.727 [2024-11-28 08:26:49.833403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.727 qpair failed and we were unable to recover it. 00:28:07.727 [2024-11-28 08:26:49.833576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.727 [2024-11-28 08:26:49.833593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.727 qpair failed and we were unable to recover it. 
00:28:07.727 [2024-11-28 08:26:49.833833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.727 [2024-11-28 08:26:49.833850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.727 qpair failed and we were unable to recover it. 00:28:07.727 [2024-11-28 08:26:49.834101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.727 [2024-11-28 08:26:49.834120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.727 qpair failed and we were unable to recover it. 00:28:07.727 [2024-11-28 08:26:49.834301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.727 [2024-11-28 08:26:49.834318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.727 qpair failed and we were unable to recover it. 00:28:07.727 [2024-11-28 08:26:49.834480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.727 [2024-11-28 08:26:49.834497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.727 qpair failed and we were unable to recover it. 00:28:07.727 [2024-11-28 08:26:49.834764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.728 [2024-11-28 08:26:49.834781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.728 qpair failed and we were unable to recover it. 
00:28:07.728 [2024-11-28 08:26:49.835005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.728 [2024-11-28 08:26:49.835023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.728 qpair failed and we were unable to recover it. 00:28:07.728 [2024-11-28 08:26:49.835249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.728 [2024-11-28 08:26:49.835267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.728 qpair failed and we were unable to recover it. 00:28:07.728 [2024-11-28 08:26:49.835368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.728 [2024-11-28 08:26:49.835385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.728 qpair failed and we were unable to recover it. 00:28:07.728 [2024-11-28 08:26:49.835493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.728 [2024-11-28 08:26:49.835510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.728 qpair failed and we were unable to recover it. 00:28:07.728 [2024-11-28 08:26:49.835665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.728 [2024-11-28 08:26:49.835683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.728 qpair failed and we were unable to recover it. 
00:28:07.728 [2024-11-28 08:26:49.835838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.728 [2024-11-28 08:26:49.835855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.728 qpair failed and we were unable to recover it. 00:28:07.728 [2024-11-28 08:26:49.836122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.728 [2024-11-28 08:26:49.836141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.728 qpair failed and we were unable to recover it. 00:28:07.728 [2024-11-28 08:26:49.836310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.728 [2024-11-28 08:26:49.836330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.728 qpair failed and we were unable to recover it. 00:28:07.728 [2024-11-28 08:26:49.836491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.728 [2024-11-28 08:26:49.836509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.728 qpair failed and we were unable to recover it. 00:28:07.728 [2024-11-28 08:26:49.836755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.728 [2024-11-28 08:26:49.836772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.728 qpair failed and we were unable to recover it. 
00:28:07.728 [2024-11-28 08:26:49.837017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.728 [2024-11-28 08:26:49.837035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.728 qpair failed and we were unable to recover it. 00:28:07.728 [2024-11-28 08:26:49.837215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.728 [2024-11-28 08:26:49.837233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.728 qpair failed and we were unable to recover it. 00:28:07.728 [2024-11-28 08:26:49.837317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.728 [2024-11-28 08:26:49.837335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.728 qpair failed and we were unable to recover it. 00:28:07.728 [2024-11-28 08:26:49.837505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.728 [2024-11-28 08:26:49.837522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.728 qpair failed and we were unable to recover it. 00:28:07.728 [2024-11-28 08:26:49.837693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.728 [2024-11-28 08:26:49.837710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.728 qpair failed and we were unable to recover it. 
00:28:07.728 [2024-11-28 08:26:49.837877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.728 [2024-11-28 08:26:49.837893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.728 qpair failed and we were unable to recover it. 00:28:07.728 [2024-11-28 08:26:49.838152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.728 [2024-11-28 08:26:49.838170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.728 qpair failed and we were unable to recover it. 00:28:07.728 [2024-11-28 08:26:49.838301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.728 [2024-11-28 08:26:49.838319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.728 qpair failed and we were unable to recover it. 00:28:07.728 [2024-11-28 08:26:49.838500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.728 [2024-11-28 08:26:49.838518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.728 qpair failed and we were unable to recover it. 00:28:07.728 [2024-11-28 08:26:49.838797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.728 [2024-11-28 08:26:49.838814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.728 qpair failed and we were unable to recover it. 
00:28:07.728 [2024-11-28 08:26:49.839064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.728 [2024-11-28 08:26:49.839082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.728 qpair failed and we were unable to recover it. 00:28:07.728 [2024-11-28 08:26:49.839269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.728 [2024-11-28 08:26:49.839286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.728 qpair failed and we were unable to recover it. 00:28:07.728 [2024-11-28 08:26:49.839456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.728 [2024-11-28 08:26:49.839473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.728 qpair failed and we were unable to recover it. 00:28:07.728 [2024-11-28 08:26:49.839742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.728 [2024-11-28 08:26:49.839759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.728 qpair failed and we were unable to recover it. 00:28:07.728 [2024-11-28 08:26:49.839938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.728 [2024-11-28 08:26:49.839964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.728 qpair failed and we were unable to recover it. 
00:28:07.728 [2024-11-28 08:26:49.840086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.728 [2024-11-28 08:26:49.840103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.728 qpair failed and we were unable to recover it. 00:28:07.728 [2024-11-28 08:26:49.840216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.728 [2024-11-28 08:26:49.840234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.728 qpair failed and we were unable to recover it. 00:28:07.728 [2024-11-28 08:26:49.840424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.728 [2024-11-28 08:26:49.840441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.728 qpair failed and we were unable to recover it. 00:28:07.728 [2024-11-28 08:26:49.840664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.728 [2024-11-28 08:26:49.840681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.728 qpair failed and we were unable to recover it. 00:28:07.728 [2024-11-28 08:26:49.840940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.728 [2024-11-28 08:26:49.840965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.728 qpair failed and we were unable to recover it. 
00:28:07.728 [2024-11-28 08:26:49.841162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.728 [2024-11-28 08:26:49.841181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.728 qpair failed and we were unable to recover it. 00:28:07.728 [2024-11-28 08:26:49.841282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.728 [2024-11-28 08:26:49.841300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.728 qpair failed and we were unable to recover it. 00:28:07.728 [2024-11-28 08:26:49.841476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.728 [2024-11-28 08:26:49.841494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.728 qpair failed and we were unable to recover it. 00:28:07.728 [2024-11-28 08:26:49.841701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.729 [2024-11-28 08:26:49.841719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.729 qpair failed and we were unable to recover it. 00:28:07.729 [2024-11-28 08:26:49.841990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.729 [2024-11-28 08:26:49.842011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.729 qpair failed and we were unable to recover it. 
00:28:07.729 [2024-11-28 08:26:49.842194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.729 [2024-11-28 08:26:49.842211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.729 qpair failed and we were unable to recover it. 00:28:07.729 [2024-11-28 08:26:49.842365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.729 [2024-11-28 08:26:49.842382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.729 qpair failed and we were unable to recover it. 00:28:07.729 [2024-11-28 08:26:49.842628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.729 [2024-11-28 08:26:49.842645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.729 qpair failed and we were unable to recover it. 00:28:07.729 [2024-11-28 08:26:49.842888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.729 [2024-11-28 08:26:49.842905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.729 qpair failed and we were unable to recover it. 00:28:07.729 [2024-11-28 08:26:49.843103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.729 [2024-11-28 08:26:49.843121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.729 qpair failed and we were unable to recover it. 
00:28:07.729 [2024-11-28 08:26:49.843286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.729 [2024-11-28 08:26:49.843304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.729 qpair failed and we were unable to recover it. 00:28:07.729 [2024-11-28 08:26:49.843385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.729 [2024-11-28 08:26:49.843403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.729 qpair failed and we were unable to recover it. 00:28:07.729 [2024-11-28 08:26:49.843522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.729 [2024-11-28 08:26:49.843540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.729 qpair failed and we were unable to recover it. 00:28:07.729 [2024-11-28 08:26:49.843739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.729 [2024-11-28 08:26:49.843758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.729 qpair failed and we were unable to recover it. 00:28:07.729 [2024-11-28 08:26:49.843923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.729 [2024-11-28 08:26:49.843940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.729 qpair failed and we were unable to recover it. 
00:28:07.729 [2024-11-28 08:26:49.844207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.729 [2024-11-28 08:26:49.844225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.729 qpair failed and we were unable to recover it. 00:28:07.729 [2024-11-28 08:26:49.844398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.729 [2024-11-28 08:26:49.844415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.729 qpair failed and we were unable to recover it. 00:28:07.729 [2024-11-28 08:26:49.844574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.729 [2024-11-28 08:26:49.844592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.729 qpair failed and we were unable to recover it. 00:28:07.729 [2024-11-28 08:26:49.844881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.729 [2024-11-28 08:26:49.844929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.729 qpair failed and we were unable to recover it. 00:28:07.729 [2024-11-28 08:26:49.845240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.729 [2024-11-28 08:26:49.845281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.729 qpair failed and we were unable to recover it. 
00:28:07.729 [2024-11-28 08:26:49.845447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.729 [2024-11-28 08:26:49.845462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.729 qpair failed and we were unable to recover it. 00:28:07.729 [2024-11-28 08:26:49.845685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.729 [2024-11-28 08:26:49.845699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.729 qpair failed and we were unable to recover it. 00:28:07.729 [2024-11-28 08:26:49.845963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.729 [2024-11-28 08:26:49.845977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.729 qpair failed and we were unable to recover it. 00:28:07.729 [2024-11-28 08:26:49.846148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.729 [2024-11-28 08:26:49.846161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.729 qpair failed and we were unable to recover it. 00:28:07.729 [2024-11-28 08:26:49.846323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.729 [2024-11-28 08:26:49.846335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.729 qpair failed and we were unable to recover it. 
00:28:07.729 [2024-11-28 08:26:49.846520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.729 [2024-11-28 08:26:49.846533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.729 qpair failed and we were unable to recover it. 00:28:07.729 [2024-11-28 08:26:49.846727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.729 [2024-11-28 08:26:49.846742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.729 qpair failed and we were unable to recover it. 00:28:07.729 [2024-11-28 08:26:49.846924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.729 [2024-11-28 08:26:49.846937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.729 qpair failed and we were unable to recover it. 00:28:07.729 [2024-11-28 08:26:49.847041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.729 [2024-11-28 08:26:49.847054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.729 qpair failed and we were unable to recover it. 00:28:07.729 [2024-11-28 08:26:49.847235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.729 [2024-11-28 08:26:49.847249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.729 qpair failed and we were unable to recover it. 
00:28:07.729 [2024-11-28 08:26:49.847412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.729 [2024-11-28 08:26:49.847425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.729 qpair failed and we were unable to recover it. 00:28:07.729 [2024-11-28 08:26:49.847509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.729 [2024-11-28 08:26:49.847526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.729 qpair failed and we were unable to recover it. 00:28:07.729 [2024-11-28 08:26:49.847724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.729 [2024-11-28 08:26:49.847738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.729 qpair failed and we were unable to recover it. 00:28:07.729 [2024-11-28 08:26:49.847887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.729 [2024-11-28 08:26:49.847900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.729 qpair failed and we were unable to recover it. 00:28:07.729 [2024-11-28 08:26:49.848070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.729 [2024-11-28 08:26:49.848084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.729 qpair failed and we were unable to recover it. 
00:28:07.729 [2024-11-28 08:26:49.848226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.729 [2024-11-28 08:26:49.848239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.729 qpair failed and we were unable to recover it. 00:28:07.729 [2024-11-28 08:26:49.848404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.729 [2024-11-28 08:26:49.848418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.729 qpair failed and we were unable to recover it. 00:28:07.729 [2024-11-28 08:26:49.848631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.729 [2024-11-28 08:26:49.848644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.729 qpair failed and we were unable to recover it. 00:28:07.729 [2024-11-28 08:26:49.848739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.729 [2024-11-28 08:26:49.848751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.729 qpair failed and we were unable to recover it. 00:28:07.729 [2024-11-28 08:26:49.848894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.729 [2024-11-28 08:26:49.848909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.729 qpair failed and we were unable to recover it. 
00:28:07.729 [2024-11-28 08:26:49.849128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.729 [2024-11-28 08:26:49.849144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.729 qpair failed and we were unable to recover it. 00:28:07.729 [2024-11-28 08:26:49.849294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.730 [2024-11-28 08:26:49.849307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.730 qpair failed and we were unable to recover it. 00:28:07.730 [2024-11-28 08:26:49.849518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.730 [2024-11-28 08:26:49.849531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.730 qpair failed and we were unable to recover it. 00:28:07.730 [2024-11-28 08:26:49.849708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.730 [2024-11-28 08:26:49.849721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.730 qpair failed and we were unable to recover it. 00:28:07.730 [2024-11-28 08:26:49.849796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.730 [2024-11-28 08:26:49.849812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.730 qpair failed and we were unable to recover it. 
00:28:07.730 [2024-11-28 08:26:49.850076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.730 [2024-11-28 08:26:49.850092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.730 qpair failed and we were unable to recover it. 00:28:07.730 [2024-11-28 08:26:49.850202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.730 [2024-11-28 08:26:49.850216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.730 qpair failed and we were unable to recover it. 00:28:07.730 [2024-11-28 08:26:49.850364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.730 [2024-11-28 08:26:49.850378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.730 qpair failed and we were unable to recover it. 00:28:07.730 [2024-11-28 08:26:49.850617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.730 [2024-11-28 08:26:49.850632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.730 qpair failed and we were unable to recover it. 00:28:07.730 [2024-11-28 08:26:49.850824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.730 [2024-11-28 08:26:49.850838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.730 qpair failed and we were unable to recover it. 
00:28:07.730 [2024-11-28 08:26:49.851072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.730 [2024-11-28 08:26:49.851086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.730 qpair failed and we were unable to recover it. 00:28:07.730 [2024-11-28 08:26:49.851310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.730 [2024-11-28 08:26:49.851324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.730 qpair failed and we were unable to recover it. 00:28:07.730 [2024-11-28 08:26:49.851433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.730 [2024-11-28 08:26:49.851447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.730 qpair failed and we were unable to recover it. 00:28:07.730 [2024-11-28 08:26:49.851617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.730 [2024-11-28 08:26:49.851631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.730 qpair failed and we were unable to recover it. 00:28:07.730 [2024-11-28 08:26:49.851854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.730 [2024-11-28 08:26:49.851868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.730 qpair failed and we were unable to recover it. 
00:28:07.730 [2024-11-28 08:26:49.852018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.730 [2024-11-28 08:26:49.852033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.730 qpair failed and we were unable to recover it. 00:28:07.730 [2024-11-28 08:26:49.852195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.730 [2024-11-28 08:26:49.852208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.730 qpair failed and we were unable to recover it. 00:28:07.730 [2024-11-28 08:26:49.852367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.730 [2024-11-28 08:26:49.852381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.730 qpair failed and we were unable to recover it. 00:28:07.730 [2024-11-28 08:26:49.852540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.730 [2024-11-28 08:26:49.852553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.730 qpair failed and we were unable to recover it. 00:28:07.730 [2024-11-28 08:26:49.852815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.730 [2024-11-28 08:26:49.852829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.730 qpair failed and we were unable to recover it. 
00:28:07.730 [2024-11-28 08:26:49.853043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.730 [2024-11-28 08:26:49.853058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.730 qpair failed and we were unable to recover it. 00:28:07.730 [2024-11-28 08:26:49.853279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.730 [2024-11-28 08:26:49.853293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.730 qpair failed and we were unable to recover it. 00:28:07.730 [2024-11-28 08:26:49.853532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.730 [2024-11-28 08:26:49.853545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.730 qpair failed and we were unable to recover it. 00:28:07.730 [2024-11-28 08:26:49.853700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.730 [2024-11-28 08:26:49.853713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.730 qpair failed and we were unable to recover it. 00:28:07.730 [2024-11-28 08:26:49.853949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.730 [2024-11-28 08:26:49.853963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.730 qpair failed and we were unable to recover it. 
00:28:07.730 [2024-11-28 08:26:49.854199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.730 [2024-11-28 08:26:49.854212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.730 qpair failed and we were unable to recover it. 00:28:07.730 [2024-11-28 08:26:49.854379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.730 [2024-11-28 08:26:49.854393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.730 qpair failed and we were unable to recover it. 00:28:07.730 [2024-11-28 08:26:49.854535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.730 [2024-11-28 08:26:49.854548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.730 qpair failed and we were unable to recover it. 00:28:07.730 [2024-11-28 08:26:49.854788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.730 [2024-11-28 08:26:49.854801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.730 qpair failed and we were unable to recover it. 00:28:07.730 [2024-11-28 08:26:49.854892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.730 [2024-11-28 08:26:49.854904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.730 qpair failed and we were unable to recover it. 
00:28:07.730 [2024-11-28 08:26:49.855146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.730 [2024-11-28 08:26:49.855160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.730 qpair failed and we were unable to recover it. 00:28:07.730 [2024-11-28 08:26:49.855247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.730 [2024-11-28 08:26:49.855263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.730 qpair failed and we were unable to recover it. 00:28:07.730 [2024-11-28 08:26:49.855373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.730 [2024-11-28 08:26:49.855387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.730 qpair failed and we were unable to recover it. 00:28:07.730 [2024-11-28 08:26:49.855555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.730 [2024-11-28 08:26:49.855568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.730 qpair failed and we were unable to recover it. 00:28:07.730 [2024-11-28 08:26:49.855752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.730 [2024-11-28 08:26:49.855766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.730 qpair failed and we were unable to recover it. 
00:28:07.730 [2024-11-28 08:26:49.855918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.730 [2024-11-28 08:26:49.855931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.730 qpair failed and we were unable to recover it. 00:28:07.730 [2024-11-28 08:26:49.856105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.730 [2024-11-28 08:26:49.856119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.730 qpair failed and we were unable to recover it. 00:28:07.730 [2024-11-28 08:26:49.856335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.730 [2024-11-28 08:26:49.856348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.730 qpair failed and we were unable to recover it. 00:28:07.730 [2024-11-28 08:26:49.856448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.730 [2024-11-28 08:26:49.856461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.730 qpair failed and we were unable to recover it. 00:28:07.730 [2024-11-28 08:26:49.856701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.731 [2024-11-28 08:26:49.856715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.731 qpair failed and we were unable to recover it. 
00:28:07.731 [2024-11-28 08:26:49.856880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.731 [2024-11-28 08:26:49.856893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.731 qpair failed and we were unable to recover it. 00:28:07.731 [2024-11-28 08:26:49.857150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.731 [2024-11-28 08:26:49.857164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.731 qpair failed and we were unable to recover it. 00:28:07.731 [2024-11-28 08:26:49.857408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.731 [2024-11-28 08:26:49.857421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.731 qpair failed and we were unable to recover it. 00:28:07.731 [2024-11-28 08:26:49.857661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.731 [2024-11-28 08:26:49.857674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.731 qpair failed and we were unable to recover it. 00:28:07.731 [2024-11-28 08:26:49.857884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.731 [2024-11-28 08:26:49.857897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.731 qpair failed and we were unable to recover it. 
00:28:07.731 [2024-11-28 08:26:49.858086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.731 [2024-11-28 08:26:49.858100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.731 qpair failed and we were unable to recover it. 00:28:07.731 [2024-11-28 08:26:49.858261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.731 [2024-11-28 08:26:49.858274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.731 qpair failed and we were unable to recover it. 00:28:07.731 [2024-11-28 08:26:49.858477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.731 [2024-11-28 08:26:49.858490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.731 qpair failed and we were unable to recover it. 00:28:07.731 [2024-11-28 08:26:49.858578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.731 [2024-11-28 08:26:49.858591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.731 qpair failed and we were unable to recover it. 00:28:07.731 [2024-11-28 08:26:49.858824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.731 [2024-11-28 08:26:49.858837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.731 qpair failed and we were unable to recover it. 
00:28:07.731 [2024-11-28 08:26:49.859070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.731 [2024-11-28 08:26:49.859084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.731 qpair failed and we were unable to recover it. 00:28:07.731 [2024-11-28 08:26:49.859241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.731 [2024-11-28 08:26:49.859254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.731 qpair failed and we were unable to recover it. 00:28:07.731 [2024-11-28 08:26:49.859439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.731 [2024-11-28 08:26:49.859453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.731 qpair failed and we were unable to recover it. 00:28:07.731 [2024-11-28 08:26:49.859667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.731 [2024-11-28 08:26:49.859680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.731 qpair failed and we were unable to recover it. 00:28:07.731 [2024-11-28 08:26:49.859859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.731 [2024-11-28 08:26:49.859872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.731 qpair failed and we were unable to recover it. 
00:28:07.731 [2024-11-28 08:26:49.860129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.731 [2024-11-28 08:26:49.860142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.731 qpair failed and we were unable to recover it. 00:28:07.731 [2024-11-28 08:26:49.860316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.731 [2024-11-28 08:26:49.860330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.731 qpair failed and we were unable to recover it. 00:28:07.731 [2024-11-28 08:26:49.860546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.731 [2024-11-28 08:26:49.860558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.731 qpair failed and we were unable to recover it. 00:28:07.731 [2024-11-28 08:26:49.860797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.731 [2024-11-28 08:26:49.860841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420 00:28:07.731 qpair failed and we were unable to recover it. 00:28:07.731 [2024-11-28 08:26:49.861118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.731 [2024-11-28 08:26:49.861140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.731 qpair failed and we were unable to recover it. 
00:28:07.731 [2024-11-28 08:26:49.861390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.731 [2024-11-28 08:26:49.861408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.731 qpair failed and we were unable to recover it. 00:28:07.731 [2024-11-28 08:26:49.861606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.731 [2024-11-28 08:26:49.861622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.731 qpair failed and we were unable to recover it. 00:28:07.731 [2024-11-28 08:26:49.861893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.731 [2024-11-28 08:26:49.861910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.731 qpair failed and we were unable to recover it. 00:28:07.731 [2024-11-28 08:26:49.862076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.731 [2024-11-28 08:26:49.862093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.731 qpair failed and we were unable to recover it. 00:28:07.731 [2024-11-28 08:26:49.862273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.731 [2024-11-28 08:26:49.862290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.731 qpair failed and we were unable to recover it. 
00:28:07.731 [2024-11-28 08:26:49.862458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.731 [2024-11-28 08:26:49.862476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.731 qpair failed and we were unable to recover it. 00:28:07.731 [2024-11-28 08:26:49.862649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.731 [2024-11-28 08:26:49.862666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.731 qpair failed and we were unable to recover it. 00:28:07.731 [2024-11-28 08:26:49.862853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.731 [2024-11-28 08:26:49.862870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.731 qpair failed and we were unable to recover it. 00:28:07.731 [2024-11-28 08:26:49.863075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.731 [2024-11-28 08:26:49.863094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.731 qpair failed and we were unable to recover it. 00:28:07.731 [2024-11-28 08:26:49.863327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.731 [2024-11-28 08:26:49.863344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.731 qpair failed and we were unable to recover it. 
00:28:07.731 [2024-11-28 08:26:49.863550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.731 [2024-11-28 08:26:49.863567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.731 qpair failed and we were unable to recover it. 00:28:07.731 [2024-11-28 08:26:49.863749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.731 [2024-11-28 08:26:49.863767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.731 qpair failed and we were unable to recover it. 00:28:07.731 [2024-11-28 08:26:49.864005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.731 [2024-11-28 08:26:49.864024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.731 qpair failed and we were unable to recover it. 00:28:07.731 [2024-11-28 08:26:49.864189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.731 [2024-11-28 08:26:49.864206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.731 qpair failed and we were unable to recover it. 00:28:07.731 [2024-11-28 08:26:49.864383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.731 [2024-11-28 08:26:49.864400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.731 qpair failed and we were unable to recover it. 
00:28:07.731 [2024-11-28 08:26:49.864667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.731 [2024-11-28 08:26:49.864684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.731 qpair failed and we were unable to recover it. 00:28:07.731 [2024-11-28 08:26:49.864915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.731 [2024-11-28 08:26:49.864932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.731 qpair failed and we were unable to recover it. 00:28:07.731 [2024-11-28 08:26:49.865191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.731 [2024-11-28 08:26:49.865209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.732 qpair failed and we were unable to recover it. 00:28:07.732 [2024-11-28 08:26:49.865458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.732 [2024-11-28 08:26:49.865475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.732 qpair failed and we were unable to recover it. 00:28:07.732 [2024-11-28 08:26:49.865656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.732 [2024-11-28 08:26:49.865673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.732 qpair failed and we were unable to recover it. 
00:28:07.732 [2024-11-28 08:26:49.865856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.732 [2024-11-28 08:26:49.865873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.732 qpair failed and we were unable to recover it. 00:28:07.732 [2024-11-28 08:26:49.866083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.732 [2024-11-28 08:26:49.866101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.732 qpair failed and we were unable to recover it. 00:28:07.732 [2024-11-28 08:26:49.866207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.732 [2024-11-28 08:26:49.866224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.732 qpair failed and we were unable to recover it. 00:28:07.732 [2024-11-28 08:26:49.866491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.732 [2024-11-28 08:26:49.866509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.732 qpair failed and we were unable to recover it. 00:28:07.732 [2024-11-28 08:26:49.866688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.732 [2024-11-28 08:26:49.866704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:07.732 qpair failed and we were unable to recover it. 
00:28:07.732 [2024-11-28 08:26:49.866969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.732 [2024-11-28 08:26:49.866989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420
00:28:07.732 qpair failed and we were unable to recover it.
00:28:07.732 [2024-11-28 08:26:49.867163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.732 [2024-11-28 08:26:49.867180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420
00:28:07.732 qpair failed and we were unable to recover it.
00:28:07.732 [2024-11-28 08:26:49.867409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.732 [2024-11-28 08:26:49.867426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420
00:28:07.732 qpair failed and we were unable to recover it.
00:28:07.732 [2024-11-28 08:26:49.867574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.732 [2024-11-28 08:26:49.867590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420
00:28:07.732 qpair failed and we were unable to recover it.
00:28:07.732 [2024-11-28 08:26:49.867807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.732 [2024-11-28 08:26:49.867824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420
00:28:07.732 qpair failed and we were unable to recover it.
00:28:07.732 [2024-11-28 08:26:49.867999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.732 [2024-11-28 08:26:49.868016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420
00:28:07.732 qpair failed and we were unable to recover it.
00:28:07.732 [2024-11-28 08:26:49.868233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.732 [2024-11-28 08:26:49.868249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420
00:28:07.732 qpair failed and we were unable to recover it.
00:28:07.732 [2024-11-28 08:26:49.868333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.732 [2024-11-28 08:26:49.868350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420
00:28:07.732 qpair failed and we were unable to recover it.
00:28:07.732 [2024-11-28 08:26:49.868519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.732 [2024-11-28 08:26:49.868535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420
00:28:07.732 qpair failed and we were unable to recover it.
00:28:07.732 [2024-11-28 08:26:49.868777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.732 [2024-11-28 08:26:49.868794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420
00:28:07.732 qpair failed and we were unable to recover it.
00:28:07.732 [2024-11-28 08:26:49.868946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.732 [2024-11-28 08:26:49.868968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420
00:28:07.732 qpair failed and we were unable to recover it.
00:28:07.732 [2024-11-28 08:26:49.869134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.732 [2024-11-28 08:26:49.869152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420
00:28:07.732 qpair failed and we were unable to recover it.
00:28:07.732 [2024-11-28 08:26:49.869379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.732 [2024-11-28 08:26:49.869396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420
00:28:07.732 qpair failed and we were unable to recover it.
00:28:07.732 [2024-11-28 08:26:49.869500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.732 [2024-11-28 08:26:49.869517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420
00:28:07.732 qpair failed and we were unable to recover it.
00:28:07.732 [2024-11-28 08:26:49.869812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.732 [2024-11-28 08:26:49.869828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420
00:28:07.732 qpair failed and we were unable to recover it.
00:28:07.732 [2024-11-28 08:26:49.870097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.732 [2024-11-28 08:26:49.870114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420
00:28:07.732 qpair failed and we were unable to recover it.
00:28:07.732 [2024-11-28 08:26:49.870282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.732 [2024-11-28 08:26:49.870299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420
00:28:07.732 qpair failed and we were unable to recover it.
00:28:07.732 [2024-11-28 08:26:49.870582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.732 [2024-11-28 08:26:49.870599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420
00:28:07.732 qpair failed and we were unable to recover it.
00:28:07.732 [2024-11-28 08:26:49.870788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.732 [2024-11-28 08:26:49.870805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420
00:28:07.732 qpair failed and we were unable to recover it.
00:28:07.732 [2024-11-28 08:26:49.870962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.732 [2024-11-28 08:26:49.870980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420
00:28:07.732 qpair failed and we were unable to recover it.
00:28:07.732 [2024-11-28 08:26:49.871220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.732 [2024-11-28 08:26:49.871236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420
00:28:07.732 qpair failed and we were unable to recover it.
00:28:07.732 [2024-11-28 08:26:49.871423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.732 [2024-11-28 08:26:49.871439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420
00:28:07.732 qpair failed and we were unable to recover it.
00:28:07.732 [2024-11-28 08:26:49.871661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.732 [2024-11-28 08:26:49.871677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420
00:28:07.732 qpair failed and we were unable to recover it.
00:28:07.732 [2024-11-28 08:26:49.871957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.732 [2024-11-28 08:26:49.871975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420
00:28:07.732 qpair failed and we were unable to recover it.
00:28:07.732 [2024-11-28 08:26:49.872233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.732 [2024-11-28 08:26:49.872250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420
00:28:07.732 qpair failed and we were unable to recover it.
00:28:07.732 [2024-11-28 08:26:49.872475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.732 [2024-11-28 08:26:49.872491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420
00:28:07.732 qpair failed and we were unable to recover it.
00:28:07.732 [2024-11-28 08:26:49.872652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.732 [2024-11-28 08:26:49.872669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420
00:28:07.732 qpair failed and we were unable to recover it.
00:28:07.732 [2024-11-28 08:26:49.872814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.732 [2024-11-28 08:26:49.872831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420
00:28:07.732 qpair failed and we were unable to recover it.
00:28:07.732 [2024-11-28 08:26:49.873012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.733 [2024-11-28 08:26:49.873030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420
00:28:07.733 qpair failed and we were unable to recover it.
00:28:07.733 [2024-11-28 08:26:49.873225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.733 [2024-11-28 08:26:49.873242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420
00:28:07.733 qpair failed and we were unable to recover it.
00:28:07.733 [2024-11-28 08:26:49.873399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.733 [2024-11-28 08:26:49.873415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420
00:28:07.733 qpair failed and we were unable to recover it.
00:28:07.733 [2024-11-28 08:26:49.873647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.733 [2024-11-28 08:26:49.873663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420
00:28:07.733 qpair failed and we were unable to recover it.
00:28:07.733 [2024-11-28 08:26:49.873845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.733 [2024-11-28 08:26:49.873862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420
00:28:07.733 qpair failed and we were unable to recover it.
00:28:07.733 [2024-11-28 08:26:49.874130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.733 [2024-11-28 08:26:49.874147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420
00:28:07.733 qpair failed and we were unable to recover it.
00:28:07.733 [2024-11-28 08:26:49.874341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.733 [2024-11-28 08:26:49.874357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420
00:28:07.733 qpair failed and we were unable to recover it.
00:28:07.733 [2024-11-28 08:26:49.874468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.733 [2024-11-28 08:26:49.874485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420
00:28:07.733 qpair failed and we were unable to recover it.
00:28:07.733 [2024-11-28 08:26:49.874631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.733 [2024-11-28 08:26:49.874648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420
00:28:07.733 qpair failed and we were unable to recover it.
00:28:07.733 [2024-11-28 08:26:49.874835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.733 [2024-11-28 08:26:49.874852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420
00:28:07.733 qpair failed and we were unable to recover it.
00:28:07.733 [2024-11-28 08:26:49.874967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.733 [2024-11-28 08:26:49.874984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420
00:28:07.733 qpair failed and we were unable to recover it.
00:28:07.733 [2024-11-28 08:26:49.875200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.733 [2024-11-28 08:26:49.875217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420
00:28:07.733 qpair failed and we were unable to recover it.
00:28:07.733 [2024-11-28 08:26:49.875405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.733 [2024-11-28 08:26:49.875421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420
00:28:07.733 qpair failed and we were unable to recover it.
00:28:07.733 [2024-11-28 08:26:49.875618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.733 [2024-11-28 08:26:49.875642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420
00:28:07.733 qpair failed and we were unable to recover it.
00:28:07.733 [2024-11-28 08:26:49.875868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.733 [2024-11-28 08:26:49.875886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420
00:28:07.733 qpair failed and we were unable to recover it.
00:28:07.733 [2024-11-28 08:26:49.875977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.733 [2024-11-28 08:26:49.875994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420
00:28:07.733 qpair failed and we were unable to recover it.
00:28:07.733 [2024-11-28 08:26:49.876208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.733 [2024-11-28 08:26:49.876226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420
00:28:07.733 qpair failed and we were unable to recover it.
00:28:07.733 [2024-11-28 08:26:49.876396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.733 [2024-11-28 08:26:49.876412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420
00:28:07.733 qpair failed and we were unable to recover it.
00:28:07.733 [2024-11-28 08:26:49.876572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.733 [2024-11-28 08:26:49.876589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420
00:28:07.733 qpair failed and we were unable to recover it.
00:28:07.733 [2024-11-28 08:26:49.876807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.733 [2024-11-28 08:26:49.876823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420
00:28:07.733 qpair failed and we were unable to recover it.
00:28:07.733 [2024-11-28 08:26:49.877035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.733 [2024-11-28 08:26:49.877054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420
00:28:07.733 qpair failed and we were unable to recover it.
00:28:07.733 [2024-11-28 08:26:49.877243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.733 [2024-11-28 08:26:49.877260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420
00:28:07.733 qpair failed and we were unable to recover it.
00:28:07.733 [2024-11-28 08:26:49.877459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.733 [2024-11-28 08:26:49.877475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420
00:28:07.733 qpair failed and we were unable to recover it.
00:28:07.733 [2024-11-28 08:26:49.877691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.733 [2024-11-28 08:26:49.877708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420
00:28:07.733 qpair failed and we were unable to recover it.
00:28:07.733 [2024-11-28 08:26:49.877899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.733 [2024-11-28 08:26:49.877915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420
00:28:07.733 qpair failed and we were unable to recover it.
00:28:07.733 [2024-11-28 08:26:49.878001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.733 [2024-11-28 08:26:49.878017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420
00:28:07.733 qpair failed and we were unable to recover it.
00:28:07.733 [2024-11-28 08:26:49.878204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.733 [2024-11-28 08:26:49.878220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420
00:28:07.733 qpair failed and we were unable to recover it.
00:28:07.733 [2024-11-28 08:26:49.878440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.733 [2024-11-28 08:26:49.878456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420
00:28:07.733 qpair failed and we were unable to recover it.
00:28:07.733 [2024-11-28 08:26:49.878559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.733 [2024-11-28 08:26:49.878576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420
00:28:07.733 qpair failed and we were unable to recover it.
00:28:07.733 [2024-11-28 08:26:49.878833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.733 [2024-11-28 08:26:49.878850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420
00:28:07.733 qpair failed and we were unable to recover it.
00:28:07.733 [2024-11-28 08:26:49.878936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.733 [2024-11-28 08:26:49.878959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420
00:28:07.733 qpair failed and we were unable to recover it.
00:28:07.733 [2024-11-28 08:26:49.879132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.733 [2024-11-28 08:26:49.879149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420
00:28:07.733 qpair failed and we were unable to recover it.
00:28:07.733 [2024-11-28 08:26:49.879307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.733 [2024-11-28 08:26:49.879324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420
00:28:07.733 qpair failed and we were unable to recover it.
00:28:07.733 [2024-11-28 08:26:49.879597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.733 [2024-11-28 08:26:49.879614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420
00:28:07.733 qpair failed and we were unable to recover it.
00:28:07.733 [2024-11-28 08:26:49.879799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.733 [2024-11-28 08:26:49.879815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420
00:28:07.733 qpair failed and we were unable to recover it.
00:28:07.733 [2024-11-28 08:26:49.879937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.733 [2024-11-28 08:26:49.879958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420
00:28:07.733 qpair failed and we were unable to recover it.
00:28:07.733 [2024-11-28 08:26:49.880188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.733 [2024-11-28 08:26:49.880206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420
00:28:07.734 qpair failed and we were unable to recover it.
00:28:07.734 [2024-11-28 08:26:49.880375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.734 [2024-11-28 08:26:49.880392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420
00:28:07.734 qpair failed and we were unable to recover it.
00:28:07.734 [2024-11-28 08:26:49.880609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.734 [2024-11-28 08:26:49.880627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420
00:28:07.734 qpair failed and we were unable to recover it.
00:28:07.734 [2024-11-28 08:26:49.880791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.734 [2024-11-28 08:26:49.880810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420
00:28:07.734 qpair failed and we were unable to recover it.
00:28:07.734 [2024-11-28 08:26:49.881072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.734 [2024-11-28 08:26:49.881089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420
00:28:07.734 qpair failed and we were unable to recover it.
00:28:07.734 [2024-11-28 08:26:49.881275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.734 [2024-11-28 08:26:49.881292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420
00:28:07.734 qpair failed and we were unable to recover it.
00:28:07.734 [2024-11-28 08:26:49.881453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.734 [2024-11-28 08:26:49.881469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420
00:28:07.734 qpair failed and we were unable to recover it.
00:28:07.734 [2024-11-28 08:26:49.881728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.734 [2024-11-28 08:26:49.881744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420
00:28:07.734 qpair failed and we were unable to recover it.
00:28:07.734 [2024-11-28 08:26:49.881757] Starting SPDK v25.01-pre git sha1 27aaaa748 / DPDK 24.03.0 initialization...
00:28:07.734 [2024-11-28 08:26:49.881801] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:28:07.734 [2024-11-28 08:26:49.881904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.734 [2024-11-28 08:26:49.881919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420
00:28:07.734 qpair failed and we were unable to recover it.
00:28:07.734 [2024-11-28 08:26:49.882110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.734 [2024-11-28 08:26:49.882124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420
00:28:07.734 qpair failed and we were unable to recover it.
00:28:07.734 [2024-11-28 08:26:49.882309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.734 [2024-11-28 08:26:49.882322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420
00:28:07.734 qpair failed and we were unable to recover it.
00:28:07.734 [2024-11-28 08:26:49.882480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.734 [2024-11-28 08:26:49.882494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420
00:28:07.734 qpair failed and we were unable to recover it.
00:28:07.734 [2024-11-28 08:26:49.882712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.734 [2024-11-28 08:26:49.882728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420
00:28:07.734 qpair failed and we were unable to recover it.
00:28:07.734 [2024-11-28 08:26:49.882969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.734 [2024-11-28 08:26:49.882986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420
00:28:07.734 qpair failed and we were unable to recover it.
00:28:07.734 [2024-11-28 08:26:49.883138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.734 [2024-11-28 08:26:49.883155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420
00:28:07.734 qpair failed and we were unable to recover it.
00:28:07.734 [2024-11-28 08:26:49.883325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.734 [2024-11-28 08:26:49.883341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420
00:28:07.734 qpair failed and we were unable to recover it.
00:28:07.734 [2024-11-28 08:26:49.883592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.734 [2024-11-28 08:26:49.883609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420
00:28:07.734 qpair failed and we were unable to recover it.
00:28:07.734 [2024-11-28 08:26:49.883775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.734 [2024-11-28 08:26:49.883791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420
00:28:07.734 qpair failed and we were unable to recover it.
00:28:07.734 [2024-11-28 08:26:49.884030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.734 [2024-11-28 08:26:49.884047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420
00:28:07.734 qpair failed and we were unable to recover it.
00:28:07.734 [2024-11-28 08:26:49.884229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.734 [2024-11-28 08:26:49.884245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420
00:28:07.734 qpair failed and we were unable to recover it.
00:28:07.734 [2024-11-28 08:26:49.884463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.734 [2024-11-28 08:26:49.884480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420
00:28:07.734 qpair failed and we were unable to recover it.
00:28:07.734 [2024-11-28 08:26:49.884764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.734 [2024-11-28 08:26:49.884781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420
00:28:07.734 qpair failed and we were unable to recover it.
00:28:07.734 [2024-11-28 08:26:49.884955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.734 [2024-11-28 08:26:49.884973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420
00:28:07.734 qpair failed and we were unable to recover it.
00:28:07.734 [2024-11-28 08:26:49.885148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.734 [2024-11-28 08:26:49.885164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420
00:28:07.734 qpair failed and we were unable to recover it.
00:28:07.734 [2024-11-28 08:26:49.885335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.734 [2024-11-28 08:26:49.885352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420
00:28:07.734 qpair failed and we were unable to recover it.
00:28:07.734 [2024-11-28 08:26:49.885442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.734 [2024-11-28 08:26:49.885459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420
00:28:07.734 qpair failed and we were unable to recover it.
00:28:07.734 [2024-11-28 08:26:49.885631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.734 [2024-11-28 08:26:49.885648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420
00:28:07.734 qpair failed and we were unable to recover it.
00:28:07.734 [2024-11-28 08:26:49.885831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.734 [2024-11-28 08:26:49.885850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420
00:28:07.734 qpair failed and we were unable to recover it.
00:28:07.734 [2024-11-28 08:26:49.885981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.734 [2024-11-28 08:26:49.885999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420
00:28:07.734 qpair failed and we were unable to recover it.
00:28:07.734 [2024-11-28 08:26:49.886243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.734 [2024-11-28 08:26:49.886263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420
00:28:07.734 qpair failed and we were unable to recover it.
00:28:07.734 [2024-11-28 08:26:49.886457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.734 [2024-11-28 08:26:49.886474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.734 qpair failed and we were unable to recover it. 00:28:07.734 [2024-11-28 08:26:49.886636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.734 [2024-11-28 08:26:49.886653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.734 qpair failed and we were unable to recover it. 00:28:07.734 [2024-11-28 08:26:49.886884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.734 [2024-11-28 08:26:49.886901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.734 qpair failed and we were unable to recover it. 00:28:07.734 [2024-11-28 08:26:49.887153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.734 [2024-11-28 08:26:49.887170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.734 qpair failed and we were unable to recover it. 00:28:07.734 [2024-11-28 08:26:49.887413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.734 [2024-11-28 08:26:49.887430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.734 qpair failed and we were unable to recover it. 
00:28:07.734 [2024-11-28 08:26:49.887662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.734 [2024-11-28 08:26:49.887679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.734 qpair failed and we were unable to recover it. 00:28:07.734 [2024-11-28 08:26:49.887863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.734 [2024-11-28 08:26:49.887879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.734 qpair failed and we were unable to recover it. 00:28:07.734 [2024-11-28 08:26:49.888029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.735 [2024-11-28 08:26:49.888046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.735 qpair failed and we were unable to recover it. 00:28:07.735 [2024-11-28 08:26:49.888277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.735 [2024-11-28 08:26:49.888294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.735 qpair failed and we were unable to recover it. 00:28:07.735 [2024-11-28 08:26:49.888443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.735 [2024-11-28 08:26:49.888460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.735 qpair failed and we were unable to recover it. 
00:28:07.735 [2024-11-28 08:26:49.888656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.735 [2024-11-28 08:26:49.888673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.735 qpair failed and we were unable to recover it. 00:28:07.735 [2024-11-28 08:26:49.888901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.735 [2024-11-28 08:26:49.888918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.735 qpair failed and we were unable to recover it. 00:28:07.735 [2024-11-28 08:26:49.889102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.735 [2024-11-28 08:26:49.889119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.735 qpair failed and we were unable to recover it. 00:28:07.735 [2024-11-28 08:26:49.889294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.735 [2024-11-28 08:26:49.889311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.735 qpair failed and we were unable to recover it. 00:28:07.735 [2024-11-28 08:26:49.889431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.735 [2024-11-28 08:26:49.889448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.735 qpair failed and we were unable to recover it. 
00:28:07.735 [2024-11-28 08:26:49.889555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.735 [2024-11-28 08:26:49.889571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.735 qpair failed and we were unable to recover it. 00:28:07.735 [2024-11-28 08:26:49.889735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.735 [2024-11-28 08:26:49.889752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.735 qpair failed and we were unable to recover it. 00:28:07.735 [2024-11-28 08:26:49.889934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.735 [2024-11-28 08:26:49.889958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.735 qpair failed and we were unable to recover it. 00:28:07.735 [2024-11-28 08:26:49.890125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.735 [2024-11-28 08:26:49.890141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.735 qpair failed and we were unable to recover it. 00:28:07.735 [2024-11-28 08:26:49.890311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.735 [2024-11-28 08:26:49.890328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.735 qpair failed and we were unable to recover it. 
00:28:07.735 [2024-11-28 08:26:49.890509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.735 [2024-11-28 08:26:49.890525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.735 qpair failed and we were unable to recover it. 00:28:07.735 [2024-11-28 08:26:49.890671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.735 [2024-11-28 08:26:49.890687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.735 qpair failed and we were unable to recover it. 00:28:07.735 [2024-11-28 08:26:49.890790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.735 [2024-11-28 08:26:49.890806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.735 qpair failed and we were unable to recover it. 00:28:07.735 [2024-11-28 08:26:49.890960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.735 [2024-11-28 08:26:49.890978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.735 qpair failed and we were unable to recover it. 00:28:07.735 [2024-11-28 08:26:49.891215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.735 [2024-11-28 08:26:49.891231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.735 qpair failed and we were unable to recover it. 
00:28:07.735 [2024-11-28 08:26:49.891374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.735 [2024-11-28 08:26:49.891390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.735 qpair failed and we were unable to recover it. 00:28:07.735 [2024-11-28 08:26:49.891489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.735 [2024-11-28 08:26:49.891506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.735 qpair failed and we were unable to recover it. 00:28:07.735 [2024-11-28 08:26:49.891603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.735 [2024-11-28 08:26:49.891619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.735 qpair failed and we were unable to recover it. 00:28:07.735 [2024-11-28 08:26:49.891711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.735 [2024-11-28 08:26:49.891727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.735 qpair failed and we were unable to recover it. 00:28:07.735 [2024-11-28 08:26:49.891892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.735 [2024-11-28 08:26:49.891909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.735 qpair failed and we were unable to recover it. 
00:28:07.735 [2024-11-28 08:26:49.892008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.735 [2024-11-28 08:26:49.892025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.735 qpair failed and we were unable to recover it. 00:28:07.735 [2024-11-28 08:26:49.892244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.735 [2024-11-28 08:26:49.892260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.735 qpair failed and we were unable to recover it. 00:28:07.735 [2024-11-28 08:26:49.892460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.735 [2024-11-28 08:26:49.892476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.735 qpair failed and we were unable to recover it. 00:28:07.735 [2024-11-28 08:26:49.892728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.735 [2024-11-28 08:26:49.892744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.735 qpair failed and we were unable to recover it. 00:28:07.735 [2024-11-28 08:26:49.892901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.735 [2024-11-28 08:26:49.892917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.735 qpair failed and we were unable to recover it. 
00:28:07.735 [2024-11-28 08:26:49.893156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.735 [2024-11-28 08:26:49.893173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.735 qpair failed and we were unable to recover it. 00:28:07.735 [2024-11-28 08:26:49.893338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.735 [2024-11-28 08:26:49.893354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.735 qpair failed and we were unable to recover it. 00:28:07.735 [2024-11-28 08:26:49.893638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.735 [2024-11-28 08:26:49.893655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.735 qpair failed and we were unable to recover it. 00:28:07.735 [2024-11-28 08:26:49.893895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.735 [2024-11-28 08:26:49.893911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.735 qpair failed and we were unable to recover it. 00:28:07.735 [2024-11-28 08:26:49.894056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.735 [2024-11-28 08:26:49.894076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.735 qpair failed and we were unable to recover it. 
00:28:07.735 [2024-11-28 08:26:49.894242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.735 [2024-11-28 08:26:49.894259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.735 qpair failed and we were unable to recover it. 00:28:07.735 [2024-11-28 08:26:49.894430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.735 [2024-11-28 08:26:49.894445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.735 qpair failed and we were unable to recover it. 00:28:07.735 [2024-11-28 08:26:49.894717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.735 [2024-11-28 08:26:49.894733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.735 qpair failed and we were unable to recover it. 00:28:07.735 [2024-11-28 08:26:49.894917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.735 [2024-11-28 08:26:49.894934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.735 qpair failed and we were unable to recover it. 00:28:07.735 [2024-11-28 08:26:49.895040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.735 [2024-11-28 08:26:49.895056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.735 qpair failed and we were unable to recover it. 
00:28:07.735 [2024-11-28 08:26:49.895165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.736 [2024-11-28 08:26:49.895181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.736 qpair failed and we were unable to recover it. 00:28:07.736 [2024-11-28 08:26:49.895345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.736 [2024-11-28 08:26:49.895362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.736 qpair failed and we were unable to recover it. 00:28:07.736 [2024-11-28 08:26:49.895512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.736 [2024-11-28 08:26:49.895528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.736 qpair failed and we were unable to recover it. 00:28:07.736 [2024-11-28 08:26:49.895637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.736 [2024-11-28 08:26:49.895654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.736 qpair failed and we were unable to recover it. 00:28:07.736 [2024-11-28 08:26:49.895753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.736 [2024-11-28 08:26:49.895770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.736 qpair failed and we were unable to recover it. 
00:28:07.736 [2024-11-28 08:26:49.896005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.736 [2024-11-28 08:26:49.896022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.736 qpair failed and we were unable to recover it. 00:28:07.736 [2024-11-28 08:26:49.896185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.736 [2024-11-28 08:26:49.896201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.736 qpair failed and we were unable to recover it. 00:28:07.736 [2024-11-28 08:26:49.896308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.736 [2024-11-28 08:26:49.896325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.736 qpair failed and we were unable to recover it. 00:28:07.736 [2024-11-28 08:26:49.896475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.736 [2024-11-28 08:26:49.896492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.736 qpair failed and we were unable to recover it. 00:28:07.736 [2024-11-28 08:26:49.896706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.736 [2024-11-28 08:26:49.896722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.736 qpair failed and we were unable to recover it. 
00:28:07.736 [2024-11-28 08:26:49.896896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.736 [2024-11-28 08:26:49.896913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.736 qpair failed and we were unable to recover it. 00:28:07.736 [2024-11-28 08:26:49.897009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.736 [2024-11-28 08:26:49.897025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.736 qpair failed and we were unable to recover it. 00:28:07.736 [2024-11-28 08:26:49.897101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.736 [2024-11-28 08:26:49.897118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.736 qpair failed and we were unable to recover it. 00:28:07.736 [2024-11-28 08:26:49.897207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.736 [2024-11-28 08:26:49.897224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.736 qpair failed and we were unable to recover it. 00:28:07.736 [2024-11-28 08:26:49.897399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.736 [2024-11-28 08:26:49.897416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.736 qpair failed and we were unable to recover it. 
00:28:07.736 [2024-11-28 08:26:49.897589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.736 [2024-11-28 08:26:49.897606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.736 qpair failed and we were unable to recover it. 00:28:07.736 [2024-11-28 08:26:49.897699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.736 [2024-11-28 08:26:49.897715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.736 qpair failed and we were unable to recover it. 00:28:07.736 [2024-11-28 08:26:49.897817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.736 [2024-11-28 08:26:49.897834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.736 qpair failed and we were unable to recover it. 00:28:07.736 [2024-11-28 08:26:49.897924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.736 [2024-11-28 08:26:49.897940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.736 qpair failed and we were unable to recover it. 00:28:07.736 [2024-11-28 08:26:49.898052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.736 [2024-11-28 08:26:49.898068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.736 qpair failed and we were unable to recover it. 
00:28:07.736 [2024-11-28 08:26:49.898228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.736 [2024-11-28 08:26:49.898244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:07.736 qpair failed and we were unable to recover it. 00:28:07.736 [2024-11-28 08:26:49.898419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.736 [2024-11-28 08:26:49.898441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420 00:28:07.736 qpair failed and we were unable to recover it. 00:28:07.736 [2024-11-28 08:26:49.898600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.736 [2024-11-28 08:26:49.898618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420 00:28:07.736 qpair failed and we were unable to recover it. 00:28:07.736 [2024-11-28 08:26:49.898711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.736 [2024-11-28 08:26:49.898728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420 00:28:07.736 qpair failed and we were unable to recover it. 00:28:07.736 [2024-11-28 08:26:49.898880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.736 [2024-11-28 08:26:49.898896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420 00:28:07.736 qpair failed and we were unable to recover it. 
00:28:07.736 [2024-11-28 08:26:49.898988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.736 [2024-11-28 08:26:49.899005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420 00:28:07.736 qpair failed and we were unable to recover it. 00:28:07.736 [2024-11-28 08:26:49.899154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.736 [2024-11-28 08:26:49.899171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420 00:28:07.736 qpair failed and we were unable to recover it. 00:28:07.736 [2024-11-28 08:26:49.899271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.736 [2024-11-28 08:26:49.899288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420 00:28:07.736 qpair failed and we were unable to recover it. 00:28:07.736 [2024-11-28 08:26:49.899452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.736 [2024-11-28 08:26:49.899469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420 00:28:07.736 qpair failed and we were unable to recover it. 00:28:07.736 [2024-11-28 08:26:49.899558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.736 [2024-11-28 08:26:49.899574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420 00:28:07.736 qpair failed and we were unable to recover it. 
00:28:07.736 [2024-11-28 08:26:49.899720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.736 [2024-11-28 08:26:49.899736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420 00:28:07.736 qpair failed and we were unable to recover it. 00:28:07.736 [2024-11-28 08:26:49.899815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.736 [2024-11-28 08:26:49.899832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420 00:28:07.736 qpair failed and we were unable to recover it. 00:28:07.736 [2024-11-28 08:26:49.899983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.736 [2024-11-28 08:26:49.899999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420 00:28:07.736 qpair failed and we were unable to recover it. 00:28:07.736 [2024-11-28 08:26:49.900092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.736 [2024-11-28 08:26:49.900109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420 00:28:07.736 qpair failed and we were unable to recover it. 00:28:07.736 [2024-11-28 08:26:49.900203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.736 [2024-11-28 08:26:49.900225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420 00:28:07.736 qpair failed and we were unable to recover it. 
00:28:07.736 [2024-11-28 08:26:49.900325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.736 [2024-11-28 08:26:49.900342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420 00:28:07.736 qpair failed and we were unable to recover it. 00:28:07.736 [2024-11-28 08:26:49.900491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.736 [2024-11-28 08:26:49.900508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420 00:28:07.736 qpair failed and we were unable to recover it. 00:28:07.736 [2024-11-28 08:26:49.900676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.736 [2024-11-28 08:26:49.900693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420 00:28:07.736 qpair failed and we were unable to recover it. 00:28:07.736 [2024-11-28 08:26:49.900973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.737 [2024-11-28 08:26:49.900991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420 00:28:07.737 qpair failed and we were unable to recover it. 00:28:07.737 [2024-11-28 08:26:49.901169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.737 [2024-11-28 08:26:49.901186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420 00:28:07.737 qpair failed and we were unable to recover it. 
00:28:07.737 [2024-11-28 08:26:49.901286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.737 [2024-11-28 08:26:49.901302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420
00:28:07.737 qpair failed and we were unable to recover it.
[... the same three-line failure pattern — connect() failed with errno = 111 (ECONNREFUSED), the resulting sock connection error, and "qpair failed and we were unable to recover it." — repeats continuously from 08:26:49.901 through 08:26:49.917, for tqpair=0x7f6c30000b90, tqpair=0x991be0, and tqpair=0x7f6c34000b90, all attempting addr=10.0.0.2, port=4420 ...]
00:28:07.740 [2024-11-28 08:26:49.917575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.740 [2024-11-28 08:26:49.917588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.740 qpair failed and we were unable to recover it. 00:28:07.740 [2024-11-28 08:26:49.917695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.740 [2024-11-28 08:26:49.917708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.740 qpair failed and we were unable to recover it. 00:28:07.740 [2024-11-28 08:26:49.917792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.740 [2024-11-28 08:26:49.917804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.740 qpair failed and we were unable to recover it. 00:28:07.740 [2024-11-28 08:26:49.917939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.740 [2024-11-28 08:26:49.917955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.740 qpair failed and we were unable to recover it. 00:28:07.740 [2024-11-28 08:26:49.918029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.740 [2024-11-28 08:26:49.918041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.740 qpair failed and we were unable to recover it. 
00:28:07.740 [2024-11-28 08:26:49.918176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.740 [2024-11-28 08:26:49.918189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.740 qpair failed and we were unable to recover it. 00:28:07.740 [2024-11-28 08:26:49.918281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.740 [2024-11-28 08:26:49.918293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.740 qpair failed and we were unable to recover it. 00:28:07.740 [2024-11-28 08:26:49.918376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.740 [2024-11-28 08:26:49.918389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.740 qpair failed and we were unable to recover it. 00:28:07.740 [2024-11-28 08:26:49.918469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.740 [2024-11-28 08:26:49.918482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.740 qpair failed and we were unable to recover it. 00:28:07.740 [2024-11-28 08:26:49.918619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.740 [2024-11-28 08:26:49.918631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.740 qpair failed and we were unable to recover it. 
00:28:07.740 [2024-11-28 08:26:49.918790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.740 [2024-11-28 08:26:49.918802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.740 qpair failed and we were unable to recover it. 00:28:07.740 [2024-11-28 08:26:49.918956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.740 [2024-11-28 08:26:49.918969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.740 qpair failed and we were unable to recover it. 00:28:07.740 [2024-11-28 08:26:49.919113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.740 [2024-11-28 08:26:49.919125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.740 qpair failed and we were unable to recover it. 00:28:07.740 [2024-11-28 08:26:49.919223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.740 [2024-11-28 08:26:49.919235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.740 qpair failed and we were unable to recover it. 00:28:07.740 [2024-11-28 08:26:49.919317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.740 [2024-11-28 08:26:49.919330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.740 qpair failed and we were unable to recover it. 
00:28:07.740 [2024-11-28 08:26:49.919470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.740 [2024-11-28 08:26:49.919482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.740 qpair failed and we were unable to recover it. 00:28:07.740 [2024-11-28 08:26:49.919581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.740 [2024-11-28 08:26:49.919594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.740 qpair failed and we were unable to recover it. 00:28:07.740 [2024-11-28 08:26:49.919744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.740 [2024-11-28 08:26:49.919756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.740 qpair failed and we were unable to recover it. 00:28:07.740 [2024-11-28 08:26:49.919862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.740 [2024-11-28 08:26:49.919875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.740 qpair failed and we were unable to recover it. 00:28:07.740 [2024-11-28 08:26:49.920046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.740 [2024-11-28 08:26:49.920060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.740 qpair failed and we were unable to recover it. 
00:28:07.740 [2024-11-28 08:26:49.920200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.740 [2024-11-28 08:26:49.920212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.740 qpair failed and we were unable to recover it. 00:28:07.740 [2024-11-28 08:26:49.920287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.740 [2024-11-28 08:26:49.920300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.740 qpair failed and we were unable to recover it. 00:28:07.740 [2024-11-28 08:26:49.920441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.740 [2024-11-28 08:26:49.920454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.740 qpair failed and we were unable to recover it. 00:28:07.740 [2024-11-28 08:26:49.920599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.740 [2024-11-28 08:26:49.920612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.740 qpair failed and we were unable to recover it. 00:28:07.740 [2024-11-28 08:26:49.920692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.740 [2024-11-28 08:26:49.920711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.740 qpair failed and we were unable to recover it. 
00:28:07.740 [2024-11-28 08:26:49.920774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.740 [2024-11-28 08:26:49.920787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.740 qpair failed and we were unable to recover it. 00:28:07.740 [2024-11-28 08:26:49.920856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.740 [2024-11-28 08:26:49.920869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.740 qpair failed and we were unable to recover it. 00:28:07.740 [2024-11-28 08:26:49.921101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.740 [2024-11-28 08:26:49.921115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.740 qpair failed and we were unable to recover it. 00:28:07.740 [2024-11-28 08:26:49.921198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.740 [2024-11-28 08:26:49.921211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.740 qpair failed and we were unable to recover it. 00:28:07.740 [2024-11-28 08:26:49.921365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.740 [2024-11-28 08:26:49.921377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.740 qpair failed and we were unable to recover it. 
00:28:07.740 [2024-11-28 08:26:49.921531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.740 [2024-11-28 08:26:49.921543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.740 qpair failed and we were unable to recover it. 00:28:07.740 [2024-11-28 08:26:49.921683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.741 [2024-11-28 08:26:49.921695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.741 qpair failed and we were unable to recover it. 00:28:07.741 [2024-11-28 08:26:49.921850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.741 [2024-11-28 08:26:49.921863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.741 qpair failed and we were unable to recover it. 00:28:07.741 [2024-11-28 08:26:49.922072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.741 [2024-11-28 08:26:49.922085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.741 qpair failed and we were unable to recover it. 00:28:07.741 [2024-11-28 08:26:49.922181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.741 [2024-11-28 08:26:49.922194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.741 qpair failed and we were unable to recover it. 
00:28:07.741 [2024-11-28 08:26:49.922333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.741 [2024-11-28 08:26:49.922345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.741 qpair failed and we were unable to recover it. 00:28:07.741 [2024-11-28 08:26:49.922436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.741 [2024-11-28 08:26:49.922449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.741 qpair failed and we were unable to recover it. 00:28:07.741 [2024-11-28 08:26:49.922593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.741 [2024-11-28 08:26:49.922606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.741 qpair failed and we were unable to recover it. 00:28:07.741 [2024-11-28 08:26:49.922753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.741 [2024-11-28 08:26:49.922766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.741 qpair failed and we were unable to recover it. 00:28:07.741 [2024-11-28 08:26:49.922902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.741 [2024-11-28 08:26:49.922914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.741 qpair failed and we were unable to recover it. 
00:28:07.741 [2024-11-28 08:26:49.923006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.741 [2024-11-28 08:26:49.923019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.741 qpair failed and we were unable to recover it. 00:28:07.741 [2024-11-28 08:26:49.923170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.741 [2024-11-28 08:26:49.923183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.741 qpair failed and we were unable to recover it. 00:28:07.741 [2024-11-28 08:26:49.923262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.741 [2024-11-28 08:26:49.923275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.741 qpair failed and we were unable to recover it. 00:28:07.741 [2024-11-28 08:26:49.923355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.741 [2024-11-28 08:26:49.923368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.741 qpair failed and we were unable to recover it. 00:28:07.741 [2024-11-28 08:26:49.923443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.741 [2024-11-28 08:26:49.923456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.741 qpair failed and we were unable to recover it. 
00:28:07.741 [2024-11-28 08:26:49.923592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.741 [2024-11-28 08:26:49.923605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.741 qpair failed and we were unable to recover it. 00:28:07.741 [2024-11-28 08:26:49.923691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.741 [2024-11-28 08:26:49.923704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.741 qpair failed and we were unable to recover it. 00:28:07.741 [2024-11-28 08:26:49.923786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.741 [2024-11-28 08:26:49.923799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.741 qpair failed and we were unable to recover it. 00:28:07.741 [2024-11-28 08:26:49.923871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.741 [2024-11-28 08:26:49.923884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.741 qpair failed and we were unable to recover it. 00:28:07.741 [2024-11-28 08:26:49.924023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.741 [2024-11-28 08:26:49.924036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.741 qpair failed and we were unable to recover it. 
00:28:07.741 [2024-11-28 08:26:49.924116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.741 [2024-11-28 08:26:49.924129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.741 qpair failed and we were unable to recover it. 00:28:07.741 [2024-11-28 08:26:49.924196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.741 [2024-11-28 08:26:49.924208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.741 qpair failed and we were unable to recover it. 00:28:07.741 [2024-11-28 08:26:49.924286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.741 [2024-11-28 08:26:49.924313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.741 qpair failed and we were unable to recover it. 00:28:07.741 [2024-11-28 08:26:49.924561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.741 [2024-11-28 08:26:49.924573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.741 qpair failed and we were unable to recover it. 00:28:07.741 [2024-11-28 08:26:49.924639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.741 [2024-11-28 08:26:49.924651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.741 qpair failed and we were unable to recover it. 
00:28:07.741 [2024-11-28 08:26:49.924750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.741 [2024-11-28 08:26:49.924762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.741 qpair failed and we were unable to recover it. 00:28:07.741 [2024-11-28 08:26:49.924898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.741 [2024-11-28 08:26:49.924910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.741 qpair failed and we were unable to recover it. 00:28:07.741 [2024-11-28 08:26:49.925061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.741 [2024-11-28 08:26:49.925074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.741 qpair failed and we were unable to recover it. 00:28:07.741 [2024-11-28 08:26:49.925213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.741 [2024-11-28 08:26:49.925225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.741 qpair failed and we were unable to recover it. 00:28:07.741 [2024-11-28 08:26:49.925299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.741 [2024-11-28 08:26:49.925311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.741 qpair failed and we were unable to recover it. 
00:28:07.741 [2024-11-28 08:26:49.925385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.741 [2024-11-28 08:26:49.925397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.741 qpair failed and we were unable to recover it. 00:28:07.741 [2024-11-28 08:26:49.925466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.741 [2024-11-28 08:26:49.925478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.741 qpair failed and we were unable to recover it. 00:28:07.741 [2024-11-28 08:26:49.925559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.741 [2024-11-28 08:26:49.925571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.741 qpair failed and we were unable to recover it. 00:28:07.741 [2024-11-28 08:26:49.925628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.741 [2024-11-28 08:26:49.925640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.741 qpair failed and we were unable to recover it. 00:28:07.741 [2024-11-28 08:26:49.925715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.741 [2024-11-28 08:26:49.925727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.741 qpair failed and we were unable to recover it. 
00:28:07.741 [2024-11-28 08:26:49.925867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.741 [2024-11-28 08:26:49.925879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.741 qpair failed and we were unable to recover it. 00:28:07.741 [2024-11-28 08:26:49.925953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.741 [2024-11-28 08:26:49.925967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.741 qpair failed and we were unable to recover it. 00:28:07.741 [2024-11-28 08:26:49.926114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.741 [2024-11-28 08:26:49.926127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.741 qpair failed and we were unable to recover it. 00:28:07.741 [2024-11-28 08:26:49.926299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.741 [2024-11-28 08:26:49.926312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.741 qpair failed and we were unable to recover it. 00:28:07.742 [2024-11-28 08:26:49.926466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.742 [2024-11-28 08:26:49.926479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.742 qpair failed and we were unable to recover it. 
00:28:07.742 [2024-11-28 08:26:49.926574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.742 [2024-11-28 08:26:49.926586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.742 qpair failed and we were unable to recover it. 00:28:07.742 [2024-11-28 08:26:49.926657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.742 [2024-11-28 08:26:49.926668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.742 qpair failed and we were unable to recover it. 00:28:07.742 [2024-11-28 08:26:49.926822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.742 [2024-11-28 08:26:49.926834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.742 qpair failed and we were unable to recover it. 00:28:07.742 [2024-11-28 08:26:49.927018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.742 [2024-11-28 08:26:49.927031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.742 qpair failed and we were unable to recover it. 00:28:07.742 [2024-11-28 08:26:49.927107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.742 [2024-11-28 08:26:49.927120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.742 qpair failed and we were unable to recover it. 
00:28:07.742 [2024-11-28 08:26:49.927322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.742 [2024-11-28 08:26:49.927335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:07.742 qpair failed and we were unable to recover it.
[... identical connect()/qpair error pair repeated for every retry from 08:26:49.927448 through 08:26:49.942938: each connect() to 10.0.0.2 port 4420 on tqpair=0x7f6c34000b90 failed with errno = 111 (ECONNREFUSED) and the qpair could not be recovered ...]
00:28:08.045 [2024-11-28 08:26:49.943105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.045 [2024-11-28 08:26:49.943117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.045 qpair failed and we were unable to recover it. 00:28:08.045 [2024-11-28 08:26:49.943256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.045 [2024-11-28 08:26:49.943268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.045 qpair failed and we were unable to recover it. 00:28:08.045 [2024-11-28 08:26:49.943411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.045 [2024-11-28 08:26:49.943423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.045 qpair failed and we were unable to recover it. 00:28:08.045 [2024-11-28 08:26:49.943525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.045 [2024-11-28 08:26:49.943537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.045 qpair failed and we were unable to recover it. 00:28:08.045 [2024-11-28 08:26:49.943616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.045 [2024-11-28 08:26:49.943628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.045 qpair failed and we were unable to recover it. 
00:28:08.045 [2024-11-28 08:26:49.943767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.045 [2024-11-28 08:26:49.943779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.045 qpair failed and we were unable to recover it. 00:28:08.045 [2024-11-28 08:26:49.943858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.045 [2024-11-28 08:26:49.943870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.045 qpair failed and we were unable to recover it. 00:28:08.045 [2024-11-28 08:26:49.944016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.045 [2024-11-28 08:26:49.944028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.045 qpair failed and we were unable to recover it. 00:28:08.045 [2024-11-28 08:26:49.944258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.045 [2024-11-28 08:26:49.944270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.045 qpair failed and we were unable to recover it. 00:28:08.045 [2024-11-28 08:26:49.944337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.045 [2024-11-28 08:26:49.944349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.045 qpair failed and we were unable to recover it. 
00:28:08.045 [2024-11-28 08:26:49.944428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.045 [2024-11-28 08:26:49.944440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.045 qpair failed and we were unable to recover it. 00:28:08.045 [2024-11-28 08:26:49.944573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.045 [2024-11-28 08:26:49.944585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.045 qpair failed and we were unable to recover it. 00:28:08.045 [2024-11-28 08:26:49.944666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.045 [2024-11-28 08:26:49.944678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.045 qpair failed and we were unable to recover it. 00:28:08.045 [2024-11-28 08:26:49.944815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.045 [2024-11-28 08:26:49.944826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.045 qpair failed and we were unable to recover it. 00:28:08.045 [2024-11-28 08:26:49.944967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.045 [2024-11-28 08:26:49.944979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.045 qpair failed and we were unable to recover it. 
00:28:08.045 [2024-11-28 08:26:49.945094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.045 [2024-11-28 08:26:49.945105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.045 qpair failed and we were unable to recover it. 00:28:08.045 [2024-11-28 08:26:49.945260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.045 [2024-11-28 08:26:49.945272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.045 qpair failed and we were unable to recover it. 00:28:08.045 [2024-11-28 08:26:49.945366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.045 [2024-11-28 08:26:49.945378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.045 qpair failed and we were unable to recover it. 00:28:08.045 [2024-11-28 08:26:49.945451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.045 [2024-11-28 08:26:49.945463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.045 qpair failed and we were unable to recover it. 00:28:08.045 [2024-11-28 08:26:49.945597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.045 [2024-11-28 08:26:49.945608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.045 qpair failed and we were unable to recover it. 
00:28:08.045 [2024-11-28 08:26:49.945776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.045 [2024-11-28 08:26:49.945788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.045 qpair failed and we were unable to recover it. 00:28:08.045 [2024-11-28 08:26:49.945941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.045 [2024-11-28 08:26:49.945957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.045 qpair failed and we were unable to recover it. 00:28:08.045 [2024-11-28 08:26:49.946043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.045 [2024-11-28 08:26:49.946055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.045 qpair failed and we were unable to recover it. 00:28:08.045 [2024-11-28 08:26:49.946141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.045 [2024-11-28 08:26:49.946153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.045 qpair failed and we were unable to recover it. 00:28:08.046 [2024-11-28 08:26:49.946233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.046 [2024-11-28 08:26:49.946246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.046 qpair failed and we were unable to recover it. 
00:28:08.046 [2024-11-28 08:26:49.946311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.046 [2024-11-28 08:26:49.946323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.046 qpair failed and we were unable to recover it. 00:28:08.046 [2024-11-28 08:26:49.946460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.046 [2024-11-28 08:26:49.946472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.046 qpair failed and we were unable to recover it. 00:28:08.046 [2024-11-28 08:26:49.946538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.046 [2024-11-28 08:26:49.946551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.046 qpair failed and we were unable to recover it. 00:28:08.046 [2024-11-28 08:26:49.946641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.046 [2024-11-28 08:26:49.946653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.046 qpair failed and we were unable to recover it. 00:28:08.046 [2024-11-28 08:26:49.946842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.046 [2024-11-28 08:26:49.946854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.046 qpair failed and we were unable to recover it. 
00:28:08.046 [2024-11-28 08:26:49.946957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.046 [2024-11-28 08:26:49.946970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.046 qpair failed and we were unable to recover it. 00:28:08.046 [2024-11-28 08:26:49.947194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.046 [2024-11-28 08:26:49.947206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.046 qpair failed and we were unable to recover it. 00:28:08.046 [2024-11-28 08:26:49.947283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.046 [2024-11-28 08:26:49.947295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.046 qpair failed and we were unable to recover it. 00:28:08.046 [2024-11-28 08:26:49.947380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.046 [2024-11-28 08:26:49.947392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.046 qpair failed and we were unable to recover it. 00:28:08.046 [2024-11-28 08:26:49.947597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.046 [2024-11-28 08:26:49.947610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.046 qpair failed and we were unable to recover it. 
00:28:08.046 [2024-11-28 08:26:49.947679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.046 [2024-11-28 08:26:49.947692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.046 qpair failed and we were unable to recover it. 00:28:08.046 [2024-11-28 08:26:49.947769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.046 [2024-11-28 08:26:49.947782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.046 qpair failed and we were unable to recover it. 00:28:08.046 [2024-11-28 08:26:49.947872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.046 [2024-11-28 08:26:49.947883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.046 qpair failed and we were unable to recover it. 00:28:08.046 [2024-11-28 08:26:49.947961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.046 [2024-11-28 08:26:49.947973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.046 qpair failed and we were unable to recover it. 00:28:08.046 [2024-11-28 08:26:49.948100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.046 [2024-11-28 08:26:49.948113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.046 qpair failed and we were unable to recover it. 
00:28:08.046 [2024-11-28 08:26:49.948254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.046 [2024-11-28 08:26:49.948265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.046 qpair failed and we were unable to recover it. 00:28:08.046 [2024-11-28 08:26:49.948359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.046 [2024-11-28 08:26:49.948371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.046 qpair failed and we were unable to recover it. 00:28:08.046 [2024-11-28 08:26:49.948446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.046 [2024-11-28 08:26:49.948458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.046 qpair failed and we were unable to recover it. 00:28:08.046 [2024-11-28 08:26:49.948619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.046 [2024-11-28 08:26:49.948631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.046 qpair failed and we were unable to recover it. 00:28:08.046 [2024-11-28 08:26:49.948725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.046 [2024-11-28 08:26:49.948737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.046 qpair failed and we were unable to recover it. 
00:28:08.046 [2024-11-28 08:26:49.948800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.046 [2024-11-28 08:26:49.948811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.046 qpair failed and we were unable to recover it. 00:28:08.046 [2024-11-28 08:26:49.948969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.046 [2024-11-28 08:26:49.948982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.046 qpair failed and we were unable to recover it. 00:28:08.046 [2024-11-28 08:26:49.949048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.046 [2024-11-28 08:26:49.949060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.046 qpair failed and we were unable to recover it. 00:28:08.046 [2024-11-28 08:26:49.949119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.046 [2024-11-28 08:26:49.949130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.046 qpair failed and we were unable to recover it. 00:28:08.046 [2024-11-28 08:26:49.949194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.046 [2024-11-28 08:26:49.949206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.046 qpair failed and we were unable to recover it. 
00:28:08.046 [2024-11-28 08:26:49.949364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.046 [2024-11-28 08:26:49.949375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.046 qpair failed and we were unable to recover it. 00:28:08.046 [2024-11-28 08:26:49.949516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.046 [2024-11-28 08:26:49.949527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.046 qpair failed and we were unable to recover it. 00:28:08.046 [2024-11-28 08:26:49.949602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.046 [2024-11-28 08:26:49.949614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.046 qpair failed and we were unable to recover it. 00:28:08.046 [2024-11-28 08:26:49.949706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.046 [2024-11-28 08:26:49.949718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.046 qpair failed and we were unable to recover it. 00:28:08.046 [2024-11-28 08:26:49.949857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.046 [2024-11-28 08:26:49.949869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.046 qpair failed and we were unable to recover it. 
00:28:08.046 [2024-11-28 08:26:49.949996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.046 [2024-11-28 08:26:49.950008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.046 qpair failed and we were unable to recover it. 00:28:08.046 [2024-11-28 08:26:49.950161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.046 [2024-11-28 08:26:49.950173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.046 qpair failed and we were unable to recover it. 00:28:08.046 [2024-11-28 08:26:49.950306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.046 [2024-11-28 08:26:49.950317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.046 qpair failed and we were unable to recover it. 00:28:08.046 [2024-11-28 08:26:49.950459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.046 [2024-11-28 08:26:49.950470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.046 qpair failed and we were unable to recover it. 00:28:08.046 [2024-11-28 08:26:49.950541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.046 [2024-11-28 08:26:49.950553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.046 qpair failed and we were unable to recover it. 
00:28:08.046 [2024-11-28 08:26:49.950706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.046 [2024-11-28 08:26:49.950718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.046 qpair failed and we were unable to recover it. 00:28:08.046 [2024-11-28 08:26:49.950894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.046 [2024-11-28 08:26:49.950906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.046 qpair failed and we were unable to recover it. 00:28:08.047 [2024-11-28 08:26:49.951052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.047 [2024-11-28 08:26:49.951064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.047 qpair failed and we were unable to recover it. 00:28:08.047 [2024-11-28 08:26:49.951141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.047 [2024-11-28 08:26:49.951153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.047 qpair failed and we were unable to recover it. 00:28:08.047 [2024-11-28 08:26:49.951304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.047 [2024-11-28 08:26:49.951316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.047 qpair failed and we were unable to recover it. 
00:28:08.047 [2024-11-28 08:26:49.951396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.047 [2024-11-28 08:26:49.951408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.047 qpair failed and we were unable to recover it. 00:28:08.047 [2024-11-28 08:26:49.951487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.047 [2024-11-28 08:26:49.951499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.047 qpair failed and we were unable to recover it. 00:28:08.047 [2024-11-28 08:26:49.951649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.047 [2024-11-28 08:26:49.951661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.047 qpair failed and we were unable to recover it. 00:28:08.047 [2024-11-28 08:26:49.951729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.047 [2024-11-28 08:26:49.951740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.047 qpair failed and we were unable to recover it. 00:28:08.047 [2024-11-28 08:26:49.951874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.047 [2024-11-28 08:26:49.951886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.047 qpair failed and we were unable to recover it. 
00:28:08.047 [2024-11-28 08:26:49.951964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.047 [2024-11-28 08:26:49.951977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.047 qpair failed and we were unable to recover it. 00:28:08.047 [2024-11-28 08:26:49.952122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.047 [2024-11-28 08:26:49.952135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.047 qpair failed and we were unable to recover it. 00:28:08.047 [2024-11-28 08:26:49.952292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.047 [2024-11-28 08:26:49.952304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.047 qpair failed and we were unable to recover it. 00:28:08.047 [2024-11-28 08:26:49.952360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.047 [2024-11-28 08:26:49.952372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.047 qpair failed and we were unable to recover it. 00:28:08.047 [2024-11-28 08:26:49.952511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.047 [2024-11-28 08:26:49.952524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.047 qpair failed and we were unable to recover it. 
00:28:08.047 [2024-11-28 08:26:49.952620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.047 [2024-11-28 08:26:49.952632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.047 qpair failed and we were unable to recover it. 00:28:08.047 [2024-11-28 08:26:49.952792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.047 [2024-11-28 08:26:49.952805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.047 qpair failed and we were unable to recover it. 00:28:08.047 [2024-11-28 08:26:49.953008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.047 [2024-11-28 08:26:49.953021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.047 qpair failed and we were unable to recover it. 00:28:08.047 [2024-11-28 08:26:49.953109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.047 [2024-11-28 08:26:49.953121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.047 qpair failed and we were unable to recover it. 00:28:08.047 [2024-11-28 08:26:49.953211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.047 [2024-11-28 08:26:49.953223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.047 qpair failed and we were unable to recover it. 
00:28:08.049 [2024-11-28 08:26:49.962787] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:28:08.049 [2024-11-28 08:26:49.963052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:08.049 [2024-11-28 08:26:49.963064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:08.049 qpair failed and we were unable to recover it.
00:28:08.049 [2024-11-28 08:26:49.963155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:08.049 [2024-11-28 08:26:49.963177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420
00:28:08.049 qpair failed and we were unable to recover it.
00:28:08.049 [2024-11-28 08:26:49.963330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:08.049 [2024-11-28 08:26:49.963346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420
00:28:08.049 qpair failed and we were unable to recover it.
00:28:08.049 [2024-11-28 08:26:49.963494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:08.049 [2024-11-28 08:26:49.963511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420
00:28:08.049 qpair failed and we were unable to recover it.
00:28:08.049 [2024-11-28 08:26:49.963647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:08.049 [2024-11-28 08:26:49.963663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420
00:28:08.049 qpair failed and we were unable to recover it.
00:28:08.049 [2024-11-28 08:26:49.963762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.049 [2024-11-28 08:26:49.963777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:08.049 qpair failed and we were unable to recover it. 00:28:08.049 [2024-11-28 08:26:49.963921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.049 [2024-11-28 08:26:49.963937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:08.049 qpair failed and we were unable to recover it. 00:28:08.049 [2024-11-28 08:26:49.964128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.049 [2024-11-28 08:26:49.964145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:08.049 qpair failed and we were unable to recover it. 00:28:08.049 [2024-11-28 08:26:49.964296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.049 [2024-11-28 08:26:49.964311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:08.049 qpair failed and we were unable to recover it. 00:28:08.049 [2024-11-28 08:26:49.964381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.049 [2024-11-28 08:26:49.964397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:08.049 qpair failed and we were unable to recover it. 
00:28:08.049 [2024-11-28 08:26:49.964472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.049 [2024-11-28 08:26:49.964488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:08.049 qpair failed and we were unable to recover it. 00:28:08.049 [2024-11-28 08:26:49.964573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.049 [2024-11-28 08:26:49.964588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:08.049 qpair failed and we were unable to recover it. 00:28:08.049 [2024-11-28 08:26:49.964661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.049 [2024-11-28 08:26:49.964675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:08.049 qpair failed and we were unable to recover it. 00:28:08.049 [2024-11-28 08:26:49.964763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.049 [2024-11-28 08:26:49.964779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:08.049 qpair failed and we were unable to recover it. 00:28:08.049 [2024-11-28 08:26:49.964934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.049 [2024-11-28 08:26:49.964958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:08.050 qpair failed and we were unable to recover it. 
00:28:08.050 [2024-11-28 08:26:49.965106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.050 [2024-11-28 08:26:49.965122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:08.050 qpair failed and we were unable to recover it. 00:28:08.050 [2024-11-28 08:26:49.965226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.050 [2024-11-28 08:26:49.965242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:08.050 qpair failed and we were unable to recover it. 00:28:08.050 [2024-11-28 08:26:49.965390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.050 [2024-11-28 08:26:49.965405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:08.050 qpair failed and we were unable to recover it. 00:28:08.050 [2024-11-28 08:26:49.965558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.050 [2024-11-28 08:26:49.965574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:08.050 qpair failed and we were unable to recover it. 00:28:08.050 [2024-11-28 08:26:49.965777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.050 [2024-11-28 08:26:49.965793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:08.050 qpair failed and we were unable to recover it. 
00:28:08.050 [2024-11-28 08:26:49.968935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:08.050 [2024-11-28 08:26:49.968954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420
00:28:08.050 qpair failed and we were unable to recover it.
00:28:08.050 [2024-11-28 08:26:49.969127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:08.050 [2024-11-28 08:26:49.969148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420
00:28:08.050 qpair failed and we were unable to recover it.
00:28:08.050 [2024-11-28 08:26:49.969235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:08.050 [2024-11-28 08:26:49.969251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420
00:28:08.050 qpair failed and we were unable to recover it.
00:28:08.050 [2024-11-28 08:26:49.969452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:08.050 [2024-11-28 08:26:49.969468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420
00:28:08.050 qpair failed and we were unable to recover it.
00:28:08.050 [2024-11-28 08:26:49.969610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:08.050 [2024-11-28 08:26:49.969627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420
00:28:08.050 qpair failed and we were unable to recover it.
00:28:08.051 [2024-11-28 08:26:49.971173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:08.051 [2024-11-28 08:26:49.971191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:08.051 qpair failed and we were unable to recover it.
00:28:08.051 [2024-11-28 08:26:49.971266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:08.051 [2024-11-28 08:26:49.971279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:08.051 qpair failed and we were unable to recover it.
00:28:08.051 [2024-11-28 08:26:49.971435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:08.051 [2024-11-28 08:26:49.971447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:08.051 qpair failed and we were unable to recover it.
00:28:08.051 [2024-11-28 08:26:49.971515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:08.051 [2024-11-28 08:26:49.971528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:08.051 qpair failed and we were unable to recover it.
00:28:08.051 [2024-11-28 08:26:49.971668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:08.051 [2024-11-28 08:26:49.971681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:08.051 qpair failed and we were unable to recover it.
00:28:08.051 [2024-11-28 08:26:49.974372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:08.051 [2024-11-28 08:26:49.974384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:08.051 qpair failed and we were unable to recover it.
00:28:08.051 [2024-11-28 08:26:49.974502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:08.051 [2024-11-28 08:26:49.974532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420
00:28:08.051 qpair failed and we were unable to recover it.
00:28:08.051 [2024-11-28 08:26:49.974617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:08.051 [2024-11-28 08:26:49.974638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420
00:28:08.051 qpair failed and we were unable to recover it.
00:28:08.051 [2024-11-28 08:26:49.974741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:08.051 [2024-11-28 08:26:49.974761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420
00:28:08.051 qpair failed and we were unable to recover it.
00:28:08.051 [2024-11-28 08:26:49.974860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:08.051 [2024-11-28 08:26:49.974873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:08.051 qpair failed and we were unable to recover it.
00:28:08.053 [2024-11-28 08:26:49.980118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.053 [2024-11-28 08:26:49.980130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.053 qpair failed and we were unable to recover it. 00:28:08.053 [2024-11-28 08:26:49.980274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.053 [2024-11-28 08:26:49.980286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.053 qpair failed and we were unable to recover it. 00:28:08.053 [2024-11-28 08:26:49.980416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.053 [2024-11-28 08:26:49.980428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.053 qpair failed and we were unable to recover it. 00:28:08.053 [2024-11-28 08:26:49.980518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.053 [2024-11-28 08:26:49.980530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.053 qpair failed and we were unable to recover it. 00:28:08.053 [2024-11-28 08:26:49.980705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.053 [2024-11-28 08:26:49.980718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.053 qpair failed and we were unable to recover it. 
00:28:08.053 [2024-11-28 08:26:49.980872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.053 [2024-11-28 08:26:49.980884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.053 qpair failed and we were unable to recover it. 00:28:08.053 [2024-11-28 08:26:49.981089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.053 [2024-11-28 08:26:49.981101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.053 qpair failed and we were unable to recover it. 00:28:08.053 [2024-11-28 08:26:49.981180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.053 [2024-11-28 08:26:49.981192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.053 qpair failed and we were unable to recover it. 00:28:08.053 [2024-11-28 08:26:49.981346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.053 [2024-11-28 08:26:49.981358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.053 qpair failed and we were unable to recover it. 00:28:08.053 [2024-11-28 08:26:49.981495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.053 [2024-11-28 08:26:49.981507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.053 qpair failed and we were unable to recover it. 
00:28:08.053 [2024-11-28 08:26:49.981573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.053 [2024-11-28 08:26:49.981584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.053 qpair failed and we were unable to recover it. 00:28:08.053 [2024-11-28 08:26:49.981757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.053 [2024-11-28 08:26:49.981769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.053 qpair failed and we were unable to recover it. 00:28:08.053 [2024-11-28 08:26:49.981913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.053 [2024-11-28 08:26:49.981926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.053 qpair failed and we were unable to recover it. 00:28:08.053 [2024-11-28 08:26:49.982075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.053 [2024-11-28 08:26:49.982088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.053 qpair failed and we were unable to recover it. 00:28:08.053 [2024-11-28 08:26:49.982289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.053 [2024-11-28 08:26:49.982300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.053 qpair failed and we were unable to recover it. 
00:28:08.053 [2024-11-28 08:26:49.982448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.053 [2024-11-28 08:26:49.982460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.053 qpair failed and we were unable to recover it. 00:28:08.053 [2024-11-28 08:26:49.982594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.053 [2024-11-28 08:26:49.982605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.053 qpair failed and we were unable to recover it. 00:28:08.053 [2024-11-28 08:26:49.982684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.053 [2024-11-28 08:26:49.982696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.053 qpair failed and we were unable to recover it. 00:28:08.053 [2024-11-28 08:26:49.982767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.053 [2024-11-28 08:26:49.982779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.053 qpair failed and we were unable to recover it. 00:28:08.053 [2024-11-28 08:26:49.982911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.053 [2024-11-28 08:26:49.982923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.053 qpair failed and we were unable to recover it. 
00:28:08.053 [2024-11-28 08:26:49.983000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.053 [2024-11-28 08:26:49.983012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.053 qpair failed and we were unable to recover it. 00:28:08.053 [2024-11-28 08:26:49.983158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.053 [2024-11-28 08:26:49.983170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.053 qpair failed and we were unable to recover it. 00:28:08.053 [2024-11-28 08:26:49.983263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.053 [2024-11-28 08:26:49.983275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.053 qpair failed and we were unable to recover it. 00:28:08.053 [2024-11-28 08:26:49.983346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.053 [2024-11-28 08:26:49.983358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.053 qpair failed and we were unable to recover it. 00:28:08.053 [2024-11-28 08:26:49.983434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.053 [2024-11-28 08:26:49.983446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.053 qpair failed and we were unable to recover it. 
00:28:08.053 [2024-11-28 08:26:49.983518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.053 [2024-11-28 08:26:49.983530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.053 qpair failed and we were unable to recover it. 00:28:08.053 [2024-11-28 08:26:49.983596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.053 [2024-11-28 08:26:49.983608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.053 qpair failed and we were unable to recover it. 00:28:08.053 [2024-11-28 08:26:49.983702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.053 [2024-11-28 08:26:49.983714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.053 qpair failed and we were unable to recover it. 00:28:08.053 [2024-11-28 08:26:49.983796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.053 [2024-11-28 08:26:49.983808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.053 qpair failed and we were unable to recover it. 00:28:08.053 [2024-11-28 08:26:49.983936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.053 [2024-11-28 08:26:49.983955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.053 qpair failed and we were unable to recover it. 
00:28:08.053 [2024-11-28 08:26:49.984103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.054 [2024-11-28 08:26:49.984115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.054 qpair failed and we were unable to recover it. 00:28:08.054 [2024-11-28 08:26:49.984176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.054 [2024-11-28 08:26:49.984190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.054 qpair failed and we were unable to recover it. 00:28:08.054 [2024-11-28 08:26:49.984333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.054 [2024-11-28 08:26:49.984345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.054 qpair failed and we were unable to recover it. 00:28:08.054 [2024-11-28 08:26:49.984433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.054 [2024-11-28 08:26:49.984445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.054 qpair failed and we were unable to recover it. 00:28:08.054 [2024-11-28 08:26:49.984512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.054 [2024-11-28 08:26:49.984524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.054 qpair failed and we were unable to recover it. 
00:28:08.054 [2024-11-28 08:26:49.984595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.054 [2024-11-28 08:26:49.984607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.054 qpair failed and we were unable to recover it. 00:28:08.054 [2024-11-28 08:26:49.984675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.054 [2024-11-28 08:26:49.984687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.054 qpair failed and we were unable to recover it. 00:28:08.054 [2024-11-28 08:26:49.984831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.054 [2024-11-28 08:26:49.984842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.054 qpair failed and we were unable to recover it. 00:28:08.054 [2024-11-28 08:26:49.985008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.054 [2024-11-28 08:26:49.985020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.054 qpair failed and we were unable to recover it. 00:28:08.054 [2024-11-28 08:26:49.985165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.054 [2024-11-28 08:26:49.985177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.054 qpair failed and we were unable to recover it. 
00:28:08.054 [2024-11-28 08:26:49.985260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.054 [2024-11-28 08:26:49.985271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.054 qpair failed and we were unable to recover it. 00:28:08.054 [2024-11-28 08:26:49.985489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.054 [2024-11-28 08:26:49.985501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.054 qpair failed and we were unable to recover it. 00:28:08.054 [2024-11-28 08:26:49.985641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.054 [2024-11-28 08:26:49.985653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.054 qpair failed and we were unable to recover it. 00:28:08.054 [2024-11-28 08:26:49.985788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.054 [2024-11-28 08:26:49.985800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.054 qpair failed and we were unable to recover it. 00:28:08.054 [2024-11-28 08:26:49.985875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.054 [2024-11-28 08:26:49.985887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.054 qpair failed and we were unable to recover it. 
00:28:08.054 [2024-11-28 08:26:49.986056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.054 [2024-11-28 08:26:49.986069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.054 qpair failed and we were unable to recover it. 00:28:08.054 [2024-11-28 08:26:49.986307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.054 [2024-11-28 08:26:49.986319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.054 qpair failed and we were unable to recover it. 00:28:08.054 [2024-11-28 08:26:49.986412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.054 [2024-11-28 08:26:49.986424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.054 qpair failed and we were unable to recover it. 00:28:08.054 [2024-11-28 08:26:49.986500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.054 [2024-11-28 08:26:49.986512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.054 qpair failed and we were unable to recover it. 00:28:08.054 [2024-11-28 08:26:49.986660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.054 [2024-11-28 08:26:49.986672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.054 qpair failed and we were unable to recover it. 
00:28:08.054 [2024-11-28 08:26:49.986871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.054 [2024-11-28 08:26:49.986883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.054 qpair failed and we were unable to recover it. 00:28:08.054 [2024-11-28 08:26:49.987021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.054 [2024-11-28 08:26:49.987034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.054 qpair failed and we were unable to recover it. 00:28:08.054 [2024-11-28 08:26:49.987122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.054 [2024-11-28 08:26:49.987134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.054 qpair failed and we were unable to recover it. 00:28:08.054 [2024-11-28 08:26:49.987228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.054 [2024-11-28 08:26:49.987239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.054 qpair failed and we were unable to recover it. 00:28:08.054 [2024-11-28 08:26:49.987320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.054 [2024-11-28 08:26:49.987331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.054 qpair failed and we were unable to recover it. 
00:28:08.054 [2024-11-28 08:26:49.987408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.054 [2024-11-28 08:26:49.987420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.054 qpair failed and we were unable to recover it. 00:28:08.054 [2024-11-28 08:26:49.987503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.054 [2024-11-28 08:26:49.987516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.054 qpair failed and we were unable to recover it. 00:28:08.054 [2024-11-28 08:26:49.987651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.054 [2024-11-28 08:26:49.987663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.054 qpair failed and we were unable to recover it. 00:28:08.054 [2024-11-28 08:26:49.987816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.054 [2024-11-28 08:26:49.987828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.054 qpair failed and we were unable to recover it. 00:28:08.054 [2024-11-28 08:26:49.987907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.054 [2024-11-28 08:26:49.987920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.054 qpair failed and we were unable to recover it. 
00:28:08.054 [2024-11-28 08:26:49.988054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.054 [2024-11-28 08:26:49.988066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.054 qpair failed and we were unable to recover it. 00:28:08.054 [2024-11-28 08:26:49.988213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.054 [2024-11-28 08:26:49.988225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.054 qpair failed and we were unable to recover it. 00:28:08.054 [2024-11-28 08:26:49.988378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.054 [2024-11-28 08:26:49.988391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.054 qpair failed and we were unable to recover it. 00:28:08.054 [2024-11-28 08:26:49.988477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.054 [2024-11-28 08:26:49.988489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.054 qpair failed and we were unable to recover it. 00:28:08.054 [2024-11-28 08:26:49.988558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.054 [2024-11-28 08:26:49.988570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.054 qpair failed and we were unable to recover it. 
00:28:08.054 [2024-11-28 08:26:49.988647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.054 [2024-11-28 08:26:49.988659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.054 qpair failed and we were unable to recover it. 00:28:08.054 [2024-11-28 08:26:49.988742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.054 [2024-11-28 08:26:49.988754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.054 qpair failed and we were unable to recover it. 00:28:08.054 [2024-11-28 08:26:49.988835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.054 [2024-11-28 08:26:49.988847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.054 qpair failed and we were unable to recover it. 00:28:08.054 [2024-11-28 08:26:49.988924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.055 [2024-11-28 08:26:49.988936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.055 qpair failed and we were unable to recover it. 00:28:08.055 [2024-11-28 08:26:49.989027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.055 [2024-11-28 08:26:49.989048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:08.055 qpair failed and we were unable to recover it. 
00:28:08.055 [2024-11-28 08:26:49.989162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.055 [2024-11-28 08:26:49.989178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:08.055 qpair failed and we were unable to recover it. 00:28:08.055 [2024-11-28 08:26:49.989320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.055 [2024-11-28 08:26:49.989344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:08.055 qpair failed and we were unable to recover it. 00:28:08.055 [2024-11-28 08:26:49.989494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.055 [2024-11-28 08:26:49.989509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:08.055 qpair failed and we were unable to recover it. 00:28:08.055 [2024-11-28 08:26:49.989586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.055 [2024-11-28 08:26:49.989602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:08.055 qpair failed and we were unable to recover it. 00:28:08.055 [2024-11-28 08:26:49.989688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.055 [2024-11-28 08:26:49.989704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:08.055 qpair failed and we were unable to recover it. 
00:28:08.055 [2024-11-28 08:26:49.989845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.055 [2024-11-28 08:26:49.989861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:08.055 qpair failed and we were unable to recover it. 00:28:08.055 [2024-11-28 08:26:49.990006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.055 [2024-11-28 08:26:49.990023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:08.055 qpair failed and we were unable to recover it. 00:28:08.055 [2024-11-28 08:26:49.990120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.055 [2024-11-28 08:26:49.990136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:08.055 qpair failed and we were unable to recover it. 00:28:08.055 [2024-11-28 08:26:49.990219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.055 [2024-11-28 08:26:49.990236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:08.055 qpair failed and we were unable to recover it. 00:28:08.055 [2024-11-28 08:26:49.990390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.055 [2024-11-28 08:26:49.990405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:08.055 qpair failed and we were unable to recover it. 
00:28:08.055 [2024-11-28 08:26:49.990567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.055 [2024-11-28 08:26:49.990583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:08.055 qpair failed and we were unable to recover it. 00:28:08.055 [2024-11-28 08:26:49.990726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.055 [2024-11-28 08:26:49.990742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:08.055 qpair failed and we were unable to recover it. 00:28:08.055 [2024-11-28 08:26:49.990835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.055 [2024-11-28 08:26:49.990850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:08.055 qpair failed and we were unable to recover it. 00:28:08.055 [2024-11-28 08:26:49.990995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.055 [2024-11-28 08:26:49.991012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:08.055 qpair failed and we were unable to recover it. 00:28:08.055 [2024-11-28 08:26:49.991154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.055 [2024-11-28 08:26:49.991169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:08.055 qpair failed and we were unable to recover it. 
00:28:08.055 [2024-11-28 08:26:49.991319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.055 [2024-11-28 08:26:49.991334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:08.055 qpair failed and we were unable to recover it. 00:28:08.055 [2024-11-28 08:26:49.991485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.055 [2024-11-28 08:26:49.991502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:08.055 qpair failed and we were unable to recover it. 00:28:08.055 [2024-11-28 08:26:49.991709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.055 [2024-11-28 08:26:49.991725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:08.055 qpair failed and we were unable to recover it. 00:28:08.055 [2024-11-28 08:26:49.991869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.055 [2024-11-28 08:26:49.991884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:08.055 qpair failed and we were unable to recover it. 00:28:08.055 [2024-11-28 08:26:49.992037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.055 [2024-11-28 08:26:49.992054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:08.055 qpair failed and we were unable to recover it. 
00:28:08.055 [2024-11-28 08:26:49.992154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.055 [2024-11-28 08:26:49.992169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:08.055 qpair failed and we were unable to recover it. 00:28:08.055 [2024-11-28 08:26:49.992264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.055 [2024-11-28 08:26:49.992279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:08.055 qpair failed and we were unable to recover it. 00:28:08.055 [2024-11-28 08:26:49.992456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.055 [2024-11-28 08:26:49.992472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:08.055 qpair failed and we were unable to recover it. 00:28:08.055 [2024-11-28 08:26:49.992725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.055 [2024-11-28 08:26:49.992741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:08.055 qpair failed and we were unable to recover it. 00:28:08.055 [2024-11-28 08:26:49.992889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.055 [2024-11-28 08:26:49.992904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:08.055 qpair failed and we were unable to recover it. 
00:28:08.055 [2024-11-28 08:26:49.993069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.055 [2024-11-28 08:26:49.993085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:08.055 qpair failed and we were unable to recover it. 00:28:08.055 [2024-11-28 08:26:49.993176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.055 [2024-11-28 08:26:49.993191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:08.055 qpair failed and we were unable to recover it. 00:28:08.055 [2024-11-28 08:26:49.993331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.055 [2024-11-28 08:26:49.993346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:08.055 qpair failed and we were unable to recover it. 00:28:08.055 [2024-11-28 08:26:49.993437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.055 [2024-11-28 08:26:49.993455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:08.055 qpair failed and we were unable to recover it. 00:28:08.055 [2024-11-28 08:26:49.993539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.055 [2024-11-28 08:26:49.993555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:08.055 qpair failed and we were unable to recover it. 
00:28:08.055 [2024-11-28 08:26:49.993705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.055 [2024-11-28 08:26:49.993720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:08.055 qpair failed and we were unable to recover it. 00:28:08.055 [2024-11-28 08:26:49.993858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.055 [2024-11-28 08:26:49.993874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:08.055 qpair failed and we were unable to recover it. 00:28:08.055 [2024-11-28 08:26:49.993959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.055 [2024-11-28 08:26:49.993975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:08.055 qpair failed and we were unable to recover it. 00:28:08.055 [2024-11-28 08:26:49.994115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.055 [2024-11-28 08:26:49.994131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:08.055 qpair failed and we were unable to recover it. 00:28:08.055 [2024-11-28 08:26:49.994239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.055 [2024-11-28 08:26:49.994255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:08.055 qpair failed and we were unable to recover it. 
00:28:08.055 [2024-11-28 08:26:49.994433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.055 [2024-11-28 08:26:49.994449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:08.055 qpair failed and we were unable to recover it. 00:28:08.055 [2024-11-28 08:26:49.994527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.055 [2024-11-28 08:26:49.994543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:08.056 qpair failed and we were unable to recover it. 00:28:08.056 [2024-11-28 08:26:49.994637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.056 [2024-11-28 08:26:49.994654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:08.056 qpair failed and we were unable to recover it. 00:28:08.056 [2024-11-28 08:26:49.994737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.056 [2024-11-28 08:26:49.994754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:08.056 qpair failed and we were unable to recover it. 00:28:08.056 [2024-11-28 08:26:49.994849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.056 [2024-11-28 08:26:49.994865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:08.056 qpair failed and we were unable to recover it. 
00:28:08.056 [2024-11-28 08:26:49.995021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.056 [2024-11-28 08:26:49.995039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:08.056 qpair failed and we were unable to recover it. 00:28:08.056 [2024-11-28 08:26:49.995114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.056 [2024-11-28 08:26:49.995131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:08.056 qpair failed and we were unable to recover it. 00:28:08.056 [2024-11-28 08:26:49.995217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.056 [2024-11-28 08:26:49.995235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420 00:28:08.056 qpair failed and we were unable to recover it. 00:28:08.056 [2024-11-28 08:26:49.995381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.056 [2024-11-28 08:26:49.995397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420 00:28:08.056 qpair failed and we were unable to recover it. 00:28:08.056 [2024-11-28 08:26:49.995473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.056 [2024-11-28 08:26:49.995489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420 00:28:08.056 qpair failed and we were unable to recover it. 
00:28:08.056 [2024-11-28 08:26:49.995653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.056 [2024-11-28 08:26:49.995669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420 00:28:08.056 qpair failed and we were unable to recover it. 00:28:08.056 [2024-11-28 08:26:49.995755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.056 [2024-11-28 08:26:49.995771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420 00:28:08.056 qpair failed and we were unable to recover it. 00:28:08.056 [2024-11-28 08:26:49.995924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.056 [2024-11-28 08:26:49.995939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420 00:28:08.056 qpair failed and we were unable to recover it. 00:28:08.056 [2024-11-28 08:26:49.996035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.056 [2024-11-28 08:26:49.996051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420 00:28:08.056 qpair failed and we were unable to recover it. 00:28:08.056 [2024-11-28 08:26:49.996141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.056 [2024-11-28 08:26:49.996157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420 00:28:08.056 qpair failed and we were unable to recover it. 
00:28:08.056 [2024-11-28 08:26:49.996247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.056 [2024-11-28 08:26:49.996263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420 00:28:08.056 qpair failed and we were unable to recover it. 00:28:08.056 [2024-11-28 08:26:49.996417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.056 [2024-11-28 08:26:49.996432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420 00:28:08.056 qpair failed and we were unable to recover it. 00:28:08.056 [2024-11-28 08:26:49.996515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.056 [2024-11-28 08:26:49.996531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420 00:28:08.056 qpair failed and we were unable to recover it. 00:28:08.056 [2024-11-28 08:26:49.996624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.056 [2024-11-28 08:26:49.996638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.056 qpair failed and we were unable to recover it. 00:28:08.056 [2024-11-28 08:26:49.996723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.056 [2024-11-28 08:26:49.996735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.056 qpair failed and we were unable to recover it. 
00:28:08.056 [2024-11-28 08:26:49.996819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.056 [2024-11-28 08:26:49.996833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.056 qpair failed and we were unable to recover it. 00:28:08.056 [2024-11-28 08:26:49.996981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.056 [2024-11-28 08:26:49.996993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.056 qpair failed and we were unable to recover it. 00:28:08.056 [2024-11-28 08:26:49.997194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.056 [2024-11-28 08:26:49.997206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.056 qpair failed and we were unable to recover it. 00:28:08.056 [2024-11-28 08:26:49.997361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.056 [2024-11-28 08:26:49.997373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.056 qpair failed and we were unable to recover it. 00:28:08.056 [2024-11-28 08:26:49.997450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.056 [2024-11-28 08:26:49.997461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.056 qpair failed and we were unable to recover it. 
00:28:08.056 [2024-11-28 08:26:49.997605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.056 [2024-11-28 08:26:49.997617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.056 qpair failed and we were unable to recover it. 00:28:08.056 [2024-11-28 08:26:49.997754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.056 [2024-11-28 08:26:49.997766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.056 qpair failed and we were unable to recover it. 00:28:08.056 [2024-11-28 08:26:49.997855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.056 [2024-11-28 08:26:49.997867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.056 qpair failed and we were unable to recover it. 00:28:08.056 [2024-11-28 08:26:49.998065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.056 [2024-11-28 08:26:49.998078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.056 qpair failed and we were unable to recover it. 00:28:08.056 [2024-11-28 08:26:49.998243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.056 [2024-11-28 08:26:49.998255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.056 qpair failed and we were unable to recover it. 
00:28:08.056 [2024-11-28 08:26:49.998404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.056 [2024-11-28 08:26:49.998416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.056 qpair failed and we were unable to recover it. 00:28:08.056 [2024-11-28 08:26:49.998549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.056 [2024-11-28 08:26:49.998561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.056 qpair failed and we were unable to recover it. 00:28:08.056 [2024-11-28 08:26:49.998713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.056 [2024-11-28 08:26:49.998725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.056 qpair failed and we were unable to recover it. 00:28:08.056 [2024-11-28 08:26:49.998857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.056 [2024-11-28 08:26:49.998869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.056 qpair failed and we were unable to recover it. 00:28:08.056 [2024-11-28 08:26:49.999086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.056 [2024-11-28 08:26:49.999098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.056 qpair failed and we were unable to recover it. 
00:28:08.056 [2024-11-28 08:26:49.999179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.056 [2024-11-28 08:26:49.999191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.056 qpair failed and we were unable to recover it. 00:28:08.056 [2024-11-28 08:26:49.999267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.056 [2024-11-28 08:26:49.999279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.056 qpair failed and we were unable to recover it. 00:28:08.056 [2024-11-28 08:26:49.999492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.056 [2024-11-28 08:26:49.999503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.056 qpair failed and we were unable to recover it. 00:28:08.056 [2024-11-28 08:26:49.999586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.056 [2024-11-28 08:26:49.999598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.056 qpair failed and we were unable to recover it. 00:28:08.056 [2024-11-28 08:26:49.999729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.056 [2024-11-28 08:26:49.999741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.057 qpair failed and we were unable to recover it. 
00:28:08.057 [2024-11-28 08:26:49.999961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.057 [2024-11-28 08:26:49.999973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.057 qpair failed and we were unable to recover it. 00:28:08.057 [2024-11-28 08:26:50.000188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.057 [2024-11-28 08:26:50.000200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.057 qpair failed and we were unable to recover it. 00:28:08.057 [2024-11-28 08:26:50.000266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.057 [2024-11-28 08:26:50.000278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.057 qpair failed and we were unable to recover it. 00:28:08.057 [2024-11-28 08:26:50.000477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.057 [2024-11-28 08:26:50.000489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.057 qpair failed and we were unable to recover it. 00:28:08.057 [2024-11-28 08:26:50.000578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.057 [2024-11-28 08:26:50.000590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.057 qpair failed and we were unable to recover it. 
00:28:08.057 [2024-11-28 08:26:50.000656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.057 [2024-11-28 08:26:50.000668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.057 qpair failed and we were unable to recover it. 00:28:08.057 [2024-11-28 08:26:50.000815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.057 [2024-11-28 08:26:50.000828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.057 qpair failed and we were unable to recover it. 00:28:08.057 [2024-11-28 08:26:50.000944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.057 [2024-11-28 08:26:50.000972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:08.057 qpair failed and we were unable to recover it. 00:28:08.057 [2024-11-28 08:26:50.001070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.057 [2024-11-28 08:26:50.001087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:08.057 qpair failed and we were unable to recover it. 00:28:08.057 [2024-11-28 08:26:50.001295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.057 [2024-11-28 08:26:50.001311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:08.057 qpair failed and we were unable to recover it. 
00:28:08.057 [2024-11-28 08:26:50.001472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.057 [2024-11-28 08:26:50.001489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:08.057 qpair failed and we were unable to recover it. 00:28:08.057 [2024-11-28 08:26:50.001586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.057 [2024-11-28 08:26:50.001602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:08.057 qpair failed and we were unable to recover it. 00:28:08.057 [2024-11-28 08:26:50.001699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.057 [2024-11-28 08:26:50.001715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:08.057 qpair failed and we were unable to recover it. 00:28:08.057 [2024-11-28 08:26:50.001804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.057 [2024-11-28 08:26:50.001820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:08.057 qpair failed and we were unable to recover it. 00:28:08.057 [2024-11-28 08:26:50.001929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.057 [2024-11-28 08:26:50.001951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:08.057 qpair failed and we were unable to recover it. 
00:28:08.057 [2024-11-28 08:26:50.002040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.057 [2024-11-28 08:26:50.002057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:08.057 qpair failed and we were unable to recover it. 00:28:08.057 [2024-11-28 08:26:50.002196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.057 [2024-11-28 08:26:50.002212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:08.057 qpair failed and we were unable to recover it. 00:28:08.057 [2024-11-28 08:26:50.002288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.057 [2024-11-28 08:26:50.002304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:08.057 qpair failed and we were unable to recover it. 00:28:08.057 [2024-11-28 08:26:50.002474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.057 [2024-11-28 08:26:50.002490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:08.057 qpair failed and we were unable to recover it. 00:28:08.057 [2024-11-28 08:26:50.002644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.057 [2024-11-28 08:26:50.002660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:08.057 qpair failed and we were unable to recover it. 
00:28:08.057 [2024-11-28 08:26:50.002889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.057 [2024-11-28 08:26:50.002909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:08.057 qpair failed and we were unable to recover it. 00:28:08.057 [2024-11-28 08:26:50.002984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.057 [2024-11-28 08:26:50.003001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:08.057 qpair failed and we were unable to recover it. 00:28:08.057 [2024-11-28 08:26:50.003097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.057 [2024-11-28 08:26:50.003114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:08.057 qpair failed and we were unable to recover it. 00:28:08.057 [2024-11-28 08:26:50.003346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.057 [2024-11-28 08:26:50.003362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:08.057 qpair failed and we were unable to recover it. 00:28:08.057 [2024-11-28 08:26:50.003518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.057 [2024-11-28 08:26:50.003534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:08.057 qpair failed and we were unable to recover it. 
00:28:08.057 [2024-11-28 08:26:50.003626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.057 [2024-11-28 08:26:50.003643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:08.057 qpair failed and we were unable to recover it. 00:28:08.057 [2024-11-28 08:26:50.003816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.057 [2024-11-28 08:26:50.003833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:08.057 qpair failed and we were unable to recover it. 00:28:08.057 [2024-11-28 08:26:50.003935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.057 [2024-11-28 08:26:50.004024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:08.057 qpair failed and we were unable to recover it. 00:28:08.057 [2024-11-28 08:26:50.004184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.057 [2024-11-28 08:26:50.004202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:08.057 qpair failed and we were unable to recover it. 00:28:08.057 [2024-11-28 08:26:50.004291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.057 [2024-11-28 08:26:50.004307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:08.057 qpair failed and we were unable to recover it. 
00:28:08.057 [2024-11-28 08:26:50.004452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.057 [2024-11-28 08:26:50.004468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:08.057 qpair failed and we were unable to recover it. 00:28:08.057 [2024-11-28 08:26:50.004563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.057 [2024-11-28 08:26:50.004579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.057 qpair failed and we were unable to recover it. 00:28:08.057 [2024-11-28 08:26:50.004685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.057 [2024-11-28 08:26:50.004697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.057 qpair failed and we were unable to recover it. 00:28:08.057 [2024-11-28 08:26:50.004802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.057 [2024-11-28 08:26:50.004814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.057 qpair failed and we were unable to recover it. 00:28:08.057 [2024-11-28 08:26:50.004893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.057 [2024-11-28 08:26:50.004905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.057 qpair failed and we were unable to recover it. 
00:28:08.057 [2024-11-28 08:26:50.005053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.057 [2024-11-28 08:26:50.005067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.057 qpair failed and we were unable to recover it. 00:28:08.057 [2024-11-28 08:26:50.005208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.057 [2024-11-28 08:26:50.005221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.057 qpair failed and we were unable to recover it. 00:28:08.057 [2024-11-28 08:26:50.005366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.057 [2024-11-28 08:26:50.005379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.058 qpair failed and we were unable to recover it. 00:28:08.058 [2024-11-28 08:26:50.005459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.058 [2024-11-28 08:26:50.005472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.058 qpair failed and we were unable to recover it. 00:28:08.058 [2024-11-28 08:26:50.005605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.058 [2024-11-28 08:26:50.005617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.058 qpair failed and we were unable to recover it. 
00:28:08.058 [2024-11-28 08:26:50.005703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.058 [2024-11-28 08:26:50.005716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.058 qpair failed and we were unable to recover it. 00:28:08.058 [2024-11-28 08:26:50.005864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.058 [2024-11-28 08:26:50.005876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.058 qpair failed and we were unable to recover it. 00:28:08.058 [2024-11-28 08:26:50.006085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.058 [2024-11-28 08:26:50.006099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.058 qpair failed and we were unable to recover it. 00:28:08.058 [2024-11-28 08:26:50.006185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.058 [2024-11-28 08:26:50.006197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.058 qpair failed and we were unable to recover it. 00:28:08.058 [2024-11-28 08:26:50.006266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.058 [2024-11-28 08:26:50.006278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.058 qpair failed and we were unable to recover it. 
00:28:08.058 [2024-11-28 08:26:50.006349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.058 [2024-11-28 08:26:50.006361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.058 qpair failed and we were unable to recover it. 00:28:08.058 [2024-11-28 08:26:50.006445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.058 [2024-11-28 08:26:50.006457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.058 qpair failed and we were unable to recover it. 00:28:08.058 [2024-11-28 08:26:50.006626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.058 [2024-11-28 08:26:50.006647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420 00:28:08.058 qpair failed and we were unable to recover it. 00:28:08.058 [2024-11-28 08:26:50.006670] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:08.058 [2024-11-28 08:26:50.006694] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:08.058 [2024-11-28 08:26:50.006702] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:08.058 [2024-11-28 08:26:50.006709] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:08.058 [2024-11-28 08:26:50.006714] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
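The repeated `connect() failed, errno = 111` records above correspond to ECONNREFUSED on Linux: the target at 10.0.0.2:4420 is not accepting TCP connections at that moment, so the initiator's qpair setup fails. A minimal sketch (plain sockets, not SPDK code; the helper name `try_connect` is illustrative) showing how such a failure surfaces as an errno:

```python
import errno
import socket

# errno 111 on Linux is ECONNREFUSED: the peer answered with RST because
# nothing was listening on the target port (10.0.0.2:4420 in the log above).
assert errno.ECONNREFUSED == 111  # Linux-specific value

def try_connect(host: str, port: int, timeout: float = 1.0) -> int:
    """Attempt one TCP connect; return 0 on success, else the errno."""
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.settimeout(timeout)
    try:
        s.connect((host, port))
        return 0
    except OSError as e:
        return e.errno if e.errno is not None else -1
    finally:
        s.close()

# Against a local port with no listener this typically returns 111.
rc = try_connect("127.0.0.1", 4420)
print(rc)
```

This is only an analogue of what `posix_sock_create` is doing internally; the SPDK code path additionally retries and eventually logs "qpair failed and we were unable to recover it" when the connection never comes up.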
00:28:08.058 [2024-11-28 08:26:50.006723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.058 [2024-11-28 08:26:50.006739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420 00:28:08.058 qpair failed and we were unable to recover it. 00:28:08.058 [2024-11-28 08:26:50.006812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.058 [2024-11-28 08:26:50.006827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420 00:28:08.058 qpair failed and we were unable to recover it. 00:28:08.058 [2024-11-28 08:26:50.006935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.058 [2024-11-28 08:26:50.006955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420 00:28:08.058 qpair failed and we were unable to recover it. 00:28:08.058 [2024-11-28 08:26:50.007103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.058 [2024-11-28 08:26:50.007120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420 00:28:08.058 qpair failed and we were unable to recover it. 00:28:08.058 [2024-11-28 08:26:50.007211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.058 [2024-11-28 08:26:50.007227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420 00:28:08.058 qpair failed and we were unable to recover it. 
00:28:08.058 [2024-11-28 08:26:50.007318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.058 [2024-11-28 08:26:50.007334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420 00:28:08.058 qpair failed and we were unable to recover it. 00:28:08.058 [2024-11-28 08:26:50.007415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.058 [2024-11-28 08:26:50.007431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420 00:28:08.058 qpair failed and we were unable to recover it. 00:28:08.058 [2024-11-28 08:26:50.007528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.058 [2024-11-28 08:26:50.007544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420 00:28:08.058 qpair failed and we were unable to recover it. 00:28:08.058 [2024-11-28 08:26:50.007631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.058 [2024-11-28 08:26:50.007645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.058 qpair failed and we were unable to recover it. 00:28:08.058 [2024-11-28 08:26:50.007780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.058 [2024-11-28 08:26:50.007792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.058 qpair failed and we were unable to recover it. 
00:28:08.058 [2024-11-28 08:26:50.007932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.058 [2024-11-28 08:26:50.007943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.058 qpair failed and we were unable to recover it. 00:28:08.058 [2024-11-28 08:26:50.008112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.058 [2024-11-28 08:26:50.008124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.058 qpair failed and we were unable to recover it. 00:28:08.058 [2024-11-28 08:26:50.008216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.058 [2024-11-28 08:26:50.008228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.058 qpair failed and we were unable to recover it. 00:28:08.058 [2024-11-28 08:26:50.008307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.058 [2024-11-28 08:26:50.008319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.058 qpair failed and we were unable to recover it. 00:28:08.058 [2024-11-28 08:26:50.008248] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:28:08.058 [2024-11-28 08:26:50.008394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.058 [2024-11-28 08:26:50.008407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.058 qpair failed and we were unable to recover it. 
00:28:08.058 [2024-11-28 08:26:50.008289] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:28:08.058 [2024-11-28 08:26:50.008395] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:28:08.058 [2024-11-28 08:26:50.008502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.058 [2024-11-28 08:26:50.008514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.058 qpair failed and we were unable to recover it. 00:28:08.058 [2024-11-28 08:26:50.008397] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:28:08.058 [2024-11-28 08:26:50.008603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.058 [2024-11-28 08:26:50.008614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.058 qpair failed and we were unable to recover it. 00:28:08.058 [2024-11-28 08:26:50.008750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.058 [2024-11-28 08:26:50.008762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.058 qpair failed and we were unable to recover it. 00:28:08.058 [2024-11-28 08:26:50.008896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.058 [2024-11-28 08:26:50.008908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.058 qpair failed and we were unable to recover it. 
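The interleaved `Reactor started on core N` NOTICEs show the SPDK event framework bringing up one reactor thread per core in the application's coremask (cores 4-7 here). A loose analogue, assuming Linux and using `os.sched_setaffinity` (this is not SPDK's API, just an illustration of one-pinned-thread-per-core startup):

```python
import os
import threading

def reactor(core: int, started: list) -> None:
    # Pin the calling thread to a single core, then run its event loop.
    # (The append below is a stand-in for the real poll loop.)
    os.sched_setaffinity(0, {core})
    started.append(core)

# Use a subset of the cores this process is already allowed to run on,
# mirroring how SPDK launches one reactor per core in its coremask.
cores = sorted(os.sched_getaffinity(0))[:2]
started: list = []
threads = [threading.Thread(target=reactor, args=(c, started)) for c in cores]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(sorted(started))
```

The out-of-order NOTICE lines in the log (core 5 before cores 6, 4, 7) are expected: each reactor logs independently as its thread starts.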
00:28:08.058 [2024-11-28 08:26:50.009050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.058 [2024-11-28 08:26:50.009063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.058 qpair failed and we were unable to recover it. 00:28:08.059 [2024-11-28 08:26:50.009275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.059 [2024-11-28 08:26:50.009287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.059 qpair failed and we were unable to recover it. 00:28:08.059 [2024-11-28 08:26:50.009437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.059 [2024-11-28 08:26:50.009450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.059 qpair failed and we were unable to recover it. 00:28:08.059 [2024-11-28 08:26:50.009518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.059 [2024-11-28 08:26:50.009530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.059 qpair failed and we were unable to recover it. 00:28:08.059 [2024-11-28 08:26:50.009618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.059 [2024-11-28 08:26:50.009630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.059 qpair failed and we were unable to recover it. 
00:28:08.059 [2024-11-28 08:26:50.009761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.059 [2024-11-28 08:26:50.009774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.059 qpair failed and we were unable to recover it. 00:28:08.059 [2024-11-28 08:26:50.009856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.059 [2024-11-28 08:26:50.009868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.059 qpair failed and we were unable to recover it. 00:28:08.059 [2024-11-28 08:26:50.010001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.059 [2024-11-28 08:26:50.010014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.059 qpair failed and we were unable to recover it. 00:28:08.059 [2024-11-28 08:26:50.010089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.059 [2024-11-28 08:26:50.010102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.059 qpair failed and we were unable to recover it. 00:28:08.059 [2024-11-28 08:26:50.010240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.059 [2024-11-28 08:26:50.010253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.059 qpair failed and we were unable to recover it. 
00:28:08.059 [2024-11-28 08:26:50.010402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.059 [2024-11-28 08:26:50.010414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.059 qpair failed and we were unable to recover it. 00:28:08.059 [2024-11-28 08:26:50.010548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.059 [2024-11-28 08:26:50.010560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.059 qpair failed and we were unable to recover it. 00:28:08.059 [2024-11-28 08:26:50.010630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.059 [2024-11-28 08:26:50.010642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.059 qpair failed and we were unable to recover it. 00:28:08.059 [2024-11-28 08:26:50.010774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.059 [2024-11-28 08:26:50.010786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.059 qpair failed and we were unable to recover it. 00:28:08.059 [2024-11-28 08:26:50.010946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.059 [2024-11-28 08:26:50.010964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.059 qpair failed and we were unable to recover it. 
00:28:08.059 [2024-11-28 08:26:50.011066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.059 [2024-11-28 08:26:50.011079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.059 qpair failed and we were unable to recover it. 00:28:08.059 [2024-11-28 08:26:50.011210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.059 [2024-11-28 08:26:50.011222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.059 qpair failed and we were unable to recover it. 00:28:08.059 [2024-11-28 08:26:50.011357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.059 [2024-11-28 08:26:50.011370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.059 qpair failed and we were unable to recover it. 00:28:08.059 [2024-11-28 08:26:50.011517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.059 [2024-11-28 08:26:50.011530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.059 qpair failed and we were unable to recover it. 00:28:08.059 [2024-11-28 08:26:50.011625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.059 [2024-11-28 08:26:50.011637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.059 qpair failed and we were unable to recover it. 
00:28:08.059 [2024-11-28 08:26:50.011720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.059 [2024-11-28 08:26:50.011733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.059 qpair failed and we were unable to recover it. 00:28:08.059 [2024-11-28 08:26:50.011878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.059 [2024-11-28 08:26:50.011891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.059 qpair failed and we were unable to recover it. 00:28:08.059 [2024-11-28 08:26:50.012058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.059 [2024-11-28 08:26:50.012071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.059 qpair failed and we were unable to recover it. 00:28:08.059 [2024-11-28 08:26:50.012140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.059 [2024-11-28 08:26:50.012152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.059 qpair failed and we were unable to recover it. 00:28:08.059 [2024-11-28 08:26:50.012240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.059 [2024-11-28 08:26:50.012252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.059 qpair failed and we were unable to recover it. 
00:28:08.059 [2024-11-28 08:26:50.012408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.059 [2024-11-28 08:26:50.012421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.059 qpair failed and we were unable to recover it. 00:28:08.059 [2024-11-28 08:26:50.012553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.059 [2024-11-28 08:26:50.012566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.059 qpair failed and we were unable to recover it. 00:28:08.059 [2024-11-28 08:26:50.012704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.059 [2024-11-28 08:26:50.012716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.059 qpair failed and we were unable to recover it. 00:28:08.059 [2024-11-28 08:26:50.012852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.059 [2024-11-28 08:26:50.012865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.059 qpair failed and we were unable to recover it. 00:28:08.059 [2024-11-28 08:26:50.013001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.059 [2024-11-28 08:26:50.013013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.059 qpair failed and we were unable to recover it. 
00:28:08.059 [2024-11-28 08:26:50.013167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.059 [2024-11-28 08:26:50.013185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.059 qpair failed and we were unable to recover it. 00:28:08.059 [2024-11-28 08:26:50.013317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.059 [2024-11-28 08:26:50.013329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.059 qpair failed and we were unable to recover it. 00:28:08.059 [2024-11-28 08:26:50.013394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.059 [2024-11-28 08:26:50.013406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.059 qpair failed and we were unable to recover it. 00:28:08.059 [2024-11-28 08:26:50.013549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.059 [2024-11-28 08:26:50.013562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.059 qpair failed and we were unable to recover it. 00:28:08.059 [2024-11-28 08:26:50.013653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.059 [2024-11-28 08:26:50.013665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.059 qpair failed and we were unable to recover it. 
00:28:08.059 [2024-11-28 08:26:50.013759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.059 [2024-11-28 08:26:50.013771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.059 qpair failed and we were unable to recover it. 00:28:08.059 [2024-11-28 08:26:50.013922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.059 [2024-11-28 08:26:50.013935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.059 qpair failed and we were unable to recover it. 00:28:08.059 [2024-11-28 08:26:50.014038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.059 [2024-11-28 08:26:50.014051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.059 qpair failed and we were unable to recover it. 00:28:08.059 [2024-11-28 08:26:50.014116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.059 [2024-11-28 08:26:50.014128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.060 qpair failed and we were unable to recover it. 00:28:08.060 [2024-11-28 08:26:50.014204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.060 [2024-11-28 08:26:50.014216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.060 qpair failed and we were unable to recover it. 
00:28:08.060 [2024-11-28 08:26:50.014299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.060 [2024-11-28 08:26:50.014312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.060 qpair failed and we were unable to recover it. 00:28:08.060 [2024-11-28 08:26:50.014394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.060 [2024-11-28 08:26:50.014407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.060 qpair failed and we were unable to recover it. 00:28:08.060 [2024-11-28 08:26:50.014571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.060 [2024-11-28 08:26:50.014583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.060 qpair failed and we were unable to recover it. 00:28:08.060 [2024-11-28 08:26:50.014734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.060 [2024-11-28 08:26:50.014746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.060 qpair failed and we were unable to recover it. 00:28:08.060 [2024-11-28 08:26:50.014883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.060 [2024-11-28 08:26:50.014896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.060 qpair failed and we were unable to recover it. 
00:28:08.060 [2024-11-28 08:26:50.015030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.060 [2024-11-28 08:26:50.015042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.060 qpair failed and we were unable to recover it. 00:28:08.060 [2024-11-28 08:26:50.015138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.060 [2024-11-28 08:26:50.015150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.060 qpair failed and we were unable to recover it. 00:28:08.060 [2024-11-28 08:26:50.015299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.060 [2024-11-28 08:26:50.015311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.060 qpair failed and we were unable to recover it. 00:28:08.060 [2024-11-28 08:26:50.015399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.060 [2024-11-28 08:26:50.015410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.060 qpair failed and we were unable to recover it. 00:28:08.060 [2024-11-28 08:26:50.015504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.060 [2024-11-28 08:26:50.015517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.060 qpair failed and we were unable to recover it. 
00:28:08.060 [2024-11-28 08:26:50.015605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:08.060 [2024-11-28 08:26:50.015617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:08.060 qpair failed and we were unable to recover it.
00:28:08.061 [2024-11-28 08:26:50.022516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:08.061 [2024-11-28 08:26:50.022561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420
00:28:08.061 qpair failed and we were unable to recover it.
00:28:08.061 [2024-11-28 08:26:50.022831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:08.061 [2024-11-28 08:26:50.022927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420
00:28:08.061 qpair failed and we were unable to recover it.
00:28:08.063 [2024-11-28 08:26:50.030754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.063 [2024-11-28 08:26:50.030765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.063 qpair failed and we were unable to recover it. 00:28:08.063 [2024-11-28 08:26:50.030844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.063 [2024-11-28 08:26:50.030854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.063 qpair failed and we were unable to recover it. 00:28:08.063 [2024-11-28 08:26:50.031034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.063 [2024-11-28 08:26:50.031046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.063 qpair failed and we were unable to recover it. 00:28:08.063 [2024-11-28 08:26:50.031125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.063 [2024-11-28 08:26:50.031137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.063 qpair failed and we were unable to recover it. 00:28:08.063 [2024-11-28 08:26:50.031232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.063 [2024-11-28 08:26:50.031243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.063 qpair failed and we were unable to recover it. 
00:28:08.063 [2024-11-28 08:26:50.031393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.063 [2024-11-28 08:26:50.031404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.063 qpair failed and we were unable to recover it. 00:28:08.063 [2024-11-28 08:26:50.031490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.063 [2024-11-28 08:26:50.031502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.063 qpair failed and we were unable to recover it. 00:28:08.063 [2024-11-28 08:26:50.031656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.063 [2024-11-28 08:26:50.031667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.063 qpair failed and we were unable to recover it. 00:28:08.063 [2024-11-28 08:26:50.031811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.063 [2024-11-28 08:26:50.031822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.063 qpair failed and we were unable to recover it. 00:28:08.063 [2024-11-28 08:26:50.031922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.063 [2024-11-28 08:26:50.031933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.063 qpair failed and we were unable to recover it. 
00:28:08.063 [2024-11-28 08:26:50.032010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.063 [2024-11-28 08:26:50.032023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.063 qpair failed and we were unable to recover it. 00:28:08.063 [2024-11-28 08:26:50.032119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.063 [2024-11-28 08:26:50.032131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.063 qpair failed and we were unable to recover it. 00:28:08.063 [2024-11-28 08:26:50.032234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.063 [2024-11-28 08:26:50.032263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420 00:28:08.063 qpair failed and we were unable to recover it. 00:28:08.063 [2024-11-28 08:26:50.032431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.063 [2024-11-28 08:26:50.032451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:08.063 qpair failed and we were unable to recover it. 00:28:08.063 [2024-11-28 08:26:50.032537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.063 [2024-11-28 08:26:50.032553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:08.063 qpair failed and we were unable to recover it. 
00:28:08.063 [2024-11-28 08:26:50.032632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.063 [2024-11-28 08:26:50.032646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:08.063 qpair failed and we were unable to recover it. 00:28:08.063 [2024-11-28 08:26:50.032797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.063 [2024-11-28 08:26:50.032812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:08.063 qpair failed and we were unable to recover it. 00:28:08.063 [2024-11-28 08:26:50.032961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.063 [2024-11-28 08:26:50.032976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:08.063 qpair failed and we were unable to recover it. 00:28:08.063 [2024-11-28 08:26:50.033075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.063 [2024-11-28 08:26:50.033090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:08.063 qpair failed and we were unable to recover it. 00:28:08.063 [2024-11-28 08:26:50.033250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.063 [2024-11-28 08:26:50.033265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:08.064 qpair failed and we were unable to recover it. 
00:28:08.064 [2024-11-28 08:26:50.033350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.064 [2024-11-28 08:26:50.033365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:08.064 qpair failed and we were unable to recover it. 00:28:08.064 [2024-11-28 08:26:50.033467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.064 [2024-11-28 08:26:50.033479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.064 qpair failed and we were unable to recover it. 00:28:08.064 [2024-11-28 08:26:50.033553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.064 [2024-11-28 08:26:50.033564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.064 qpair failed and we were unable to recover it. 00:28:08.064 [2024-11-28 08:26:50.033775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.064 [2024-11-28 08:26:50.033786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.064 qpair failed and we were unable to recover it. 00:28:08.064 [2024-11-28 08:26:50.033868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.064 [2024-11-28 08:26:50.033879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.064 qpair failed and we were unable to recover it. 
00:28:08.064 [2024-11-28 08:26:50.033966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.064 [2024-11-28 08:26:50.033980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.064 qpair failed and we were unable to recover it. 00:28:08.064 [2024-11-28 08:26:50.034151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.064 [2024-11-28 08:26:50.034162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.064 qpair failed and we were unable to recover it. 00:28:08.064 [2024-11-28 08:26:50.034237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.064 [2024-11-28 08:26:50.034248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.064 qpair failed and we were unable to recover it. 00:28:08.064 [2024-11-28 08:26:50.034379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.064 [2024-11-28 08:26:50.034389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.064 qpair failed and we were unable to recover it. 00:28:08.064 [2024-11-28 08:26:50.034538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.064 [2024-11-28 08:26:50.034550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.064 qpair failed and we were unable to recover it. 
00:28:08.064 [2024-11-28 08:26:50.034626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.064 [2024-11-28 08:26:50.034637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.064 qpair failed and we were unable to recover it. 00:28:08.064 [2024-11-28 08:26:50.034703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.064 [2024-11-28 08:26:50.034715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.064 qpair failed and we were unable to recover it. 00:28:08.064 [2024-11-28 08:26:50.034872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.064 [2024-11-28 08:26:50.034883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.064 qpair failed and we were unable to recover it. 00:28:08.064 [2024-11-28 08:26:50.035085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.064 [2024-11-28 08:26:50.035097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.064 qpair failed and we were unable to recover it. 00:28:08.064 [2024-11-28 08:26:50.035186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.064 [2024-11-28 08:26:50.035197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.064 qpair failed and we were unable to recover it. 
00:28:08.064 [2024-11-28 08:26:50.035360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.064 [2024-11-28 08:26:50.035371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.064 qpair failed and we were unable to recover it. 00:28:08.064 [2024-11-28 08:26:50.035529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.064 [2024-11-28 08:26:50.035541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.064 qpair failed and we were unable to recover it. 00:28:08.064 [2024-11-28 08:26:50.035620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.064 [2024-11-28 08:26:50.035631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.064 qpair failed and we were unable to recover it. 00:28:08.064 [2024-11-28 08:26:50.035781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.064 [2024-11-28 08:26:50.035792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.064 qpair failed and we were unable to recover it. 00:28:08.064 [2024-11-28 08:26:50.035893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.064 [2024-11-28 08:26:50.035906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.064 qpair failed and we were unable to recover it. 
00:28:08.064 [2024-11-28 08:26:50.036103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.064 [2024-11-28 08:26:50.036115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.064 qpair failed and we were unable to recover it. 00:28:08.064 [2024-11-28 08:26:50.036258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.064 [2024-11-28 08:26:50.036271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.064 qpair failed and we were unable to recover it. 00:28:08.064 [2024-11-28 08:26:50.036366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.064 [2024-11-28 08:26:50.036377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.064 qpair failed and we were unable to recover it. 00:28:08.064 [2024-11-28 08:26:50.036491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.064 [2024-11-28 08:26:50.036502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.064 qpair failed and we were unable to recover it. 00:28:08.064 [2024-11-28 08:26:50.036638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.064 [2024-11-28 08:26:50.036650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.064 qpair failed and we were unable to recover it. 
00:28:08.064 [2024-11-28 08:26:50.036786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.064 [2024-11-28 08:26:50.036798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.064 qpair failed and we were unable to recover it. 00:28:08.064 [2024-11-28 08:26:50.037024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.064 [2024-11-28 08:26:50.037039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.064 qpair failed and we were unable to recover it. 00:28:08.064 [2024-11-28 08:26:50.037191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.064 [2024-11-28 08:26:50.037202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.064 qpair failed and we were unable to recover it. 00:28:08.064 [2024-11-28 08:26:50.037290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.064 [2024-11-28 08:26:50.037301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.064 qpair failed and we were unable to recover it. 00:28:08.064 [2024-11-28 08:26:50.037387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.064 [2024-11-28 08:26:50.037399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.064 qpair failed and we were unable to recover it. 
00:28:08.064 [2024-11-28 08:26:50.037471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.064 [2024-11-28 08:26:50.037483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.064 qpair failed and we were unable to recover it. 00:28:08.064 [2024-11-28 08:26:50.037574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.064 [2024-11-28 08:26:50.037585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.064 qpair failed and we were unable to recover it. 00:28:08.064 [2024-11-28 08:26:50.037749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.064 [2024-11-28 08:26:50.037770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420 00:28:08.064 qpair failed and we were unable to recover it. 00:28:08.064 [2024-11-28 08:26:50.037922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.064 [2024-11-28 08:26:50.037937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420 00:28:08.064 qpair failed and we were unable to recover it. 00:28:08.064 [2024-11-28 08:26:50.038055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.064 [2024-11-28 08:26:50.038072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420 00:28:08.064 qpair failed and we were unable to recover it. 
00:28:08.064 [2024-11-28 08:26:50.038168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.064 [2024-11-28 08:26:50.038185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420 00:28:08.064 qpair failed and we were unable to recover it. 00:28:08.064 [2024-11-28 08:26:50.038278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.064 [2024-11-28 08:26:50.038293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420 00:28:08.064 qpair failed and we were unable to recover it. 00:28:08.064 [2024-11-28 08:26:50.038538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.064 [2024-11-28 08:26:50.038554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420 00:28:08.065 qpair failed and we were unable to recover it. 00:28:08.065 [2024-11-28 08:26:50.038656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.065 [2024-11-28 08:26:50.038675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420 00:28:08.065 qpair failed and we were unable to recover it. 00:28:08.065 [2024-11-28 08:26:50.038837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.065 [2024-11-28 08:26:50.038852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420 00:28:08.065 qpair failed and we were unable to recover it. 
00:28:08.065 [2024-11-28 08:26:50.039027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.065 [2024-11-28 08:26:50.039044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420 00:28:08.065 qpair failed and we were unable to recover it. 00:28:08.065 [2024-11-28 08:26:50.039133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.065 [2024-11-28 08:26:50.039147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420 00:28:08.065 qpair failed and we were unable to recover it. 00:28:08.065 [2024-11-28 08:26:50.039313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.065 [2024-11-28 08:26:50.039328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420 00:28:08.065 qpair failed and we were unable to recover it. 00:28:08.065 [2024-11-28 08:26:50.039418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.065 [2024-11-28 08:26:50.039433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420 00:28:08.065 qpair failed and we were unable to recover it. 00:28:08.065 [2024-11-28 08:26:50.039539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.065 [2024-11-28 08:26:50.039552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.065 qpair failed and we were unable to recover it. 
00:28:08.065 [2024-11-28 08:26:50.039707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.065 [2024-11-28 08:26:50.039723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.065 qpair failed and we were unable to recover it. 00:28:08.065 [2024-11-28 08:26:50.039864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.065 [2024-11-28 08:26:50.039875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.065 qpair failed and we were unable to recover it. 00:28:08.065 [2024-11-28 08:26:50.039951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.065 [2024-11-28 08:26:50.039962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.065 qpair failed and we were unable to recover it. 00:28:08.065 [2024-11-28 08:26:50.040055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.065 [2024-11-28 08:26:50.040066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.065 qpair failed and we were unable to recover it. 00:28:08.065 [2024-11-28 08:26:50.040206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.065 [2024-11-28 08:26:50.040218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.065 qpair failed and we were unable to recover it. 
00:28:08.065 [2024-11-28 08:26:50.040305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.065 [2024-11-28 08:26:50.040316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.065 qpair failed and we were unable to recover it. 00:28:08.065 [2024-11-28 08:26:50.040473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.065 [2024-11-28 08:26:50.040485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.065 qpair failed and we were unable to recover it. 00:28:08.065 [2024-11-28 08:26:50.040620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.065 [2024-11-28 08:26:50.040631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.065 qpair failed and we were unable to recover it. 00:28:08.065 [2024-11-28 08:26:50.040700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.065 [2024-11-28 08:26:50.040711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.065 qpair failed and we were unable to recover it. 00:28:08.065 [2024-11-28 08:26:50.040849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.065 [2024-11-28 08:26:50.040861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.065 qpair failed and we were unable to recover it. 
00:28:08.065 [2024-11-28 08:26:50.041010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.065 [2024-11-28 08:26:50.041022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.065 qpair failed and we were unable to recover it. 00:28:08.065 [2024-11-28 08:26:50.041181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.065 [2024-11-28 08:26:50.041192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.065 qpair failed and we were unable to recover it. 00:28:08.065 [2024-11-28 08:26:50.041329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.065 [2024-11-28 08:26:50.041340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.065 qpair failed and we were unable to recover it. 00:28:08.065 [2024-11-28 08:26:50.041481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.065 [2024-11-28 08:26:50.041492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.065 qpair failed and we were unable to recover it. 00:28:08.065 [2024-11-28 08:26:50.041571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.065 [2024-11-28 08:26:50.041582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.065 qpair failed and we were unable to recover it. 
00:28:08.065 [2024-11-28 08:26:50.043183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.065 [2024-11-28 08:26:50.043194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.065 qpair failed and we were unable to recover it. 00:28:08.065 [2024-11-28 08:26:50.043370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.065 [2024-11-28 08:26:50.043384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.065 qpair failed and we were unable to recover it. 00:28:08.065 [2024-11-28 08:26:50.043500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.065 [2024-11-28 08:26:50.043531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:08.065 qpair failed and we were unable to recover it. 00:28:08.065 [2024-11-28 08:26:50.043633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.065 [2024-11-28 08:26:50.043656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:08.065 qpair failed and we were unable to recover it. 00:28:08.065 [2024-11-28 08:26:50.043741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.065 [2024-11-28 08:26:50.043762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420 00:28:08.065 qpair failed and we were unable to recover it. 
00:28:08.068 [2024-11-28 08:26:50.053828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.068 [2024-11-28 08:26:50.053839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.068 qpair failed and we were unable to recover it. 00:28:08.068 [2024-11-28 08:26:50.053918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.068 [2024-11-28 08:26:50.053929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.068 qpair failed and we were unable to recover it. 00:28:08.068 [2024-11-28 08:26:50.054005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.068 [2024-11-28 08:26:50.054016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.068 qpair failed and we were unable to recover it. 00:28:08.068 [2024-11-28 08:26:50.054089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.068 [2024-11-28 08:26:50.054100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.068 qpair failed and we were unable to recover it. 00:28:08.068 [2024-11-28 08:26:50.054250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.068 [2024-11-28 08:26:50.054261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.068 qpair failed and we were unable to recover it. 
00:28:08.068 [2024-11-28 08:26:50.054396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.068 [2024-11-28 08:26:50.054407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.068 qpair failed and we were unable to recover it. 00:28:08.068 [2024-11-28 08:26:50.054468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.068 [2024-11-28 08:26:50.054484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.068 qpair failed and we were unable to recover it. 00:28:08.068 [2024-11-28 08:26:50.054615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.068 [2024-11-28 08:26:50.054626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.068 qpair failed and we were unable to recover it. 00:28:08.068 [2024-11-28 08:26:50.054700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.068 [2024-11-28 08:26:50.054711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.068 qpair failed and we were unable to recover it. 00:28:08.068 [2024-11-28 08:26:50.054797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.068 [2024-11-28 08:26:50.054808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.068 qpair failed and we were unable to recover it. 
00:28:08.068 [2024-11-28 08:26:50.054877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.068 [2024-11-28 08:26:50.054888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.068 qpair failed and we were unable to recover it. 00:28:08.068 [2024-11-28 08:26:50.054973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.068 [2024-11-28 08:26:50.054985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.068 qpair failed and we were unable to recover it. 00:28:08.068 [2024-11-28 08:26:50.055080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.068 [2024-11-28 08:26:50.055094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.068 qpair failed and we were unable to recover it. 00:28:08.068 [2024-11-28 08:26:50.055172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.068 [2024-11-28 08:26:50.055183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.068 qpair failed and we were unable to recover it. 00:28:08.068 [2024-11-28 08:26:50.055278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.068 [2024-11-28 08:26:50.055289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.068 qpair failed and we were unable to recover it. 
00:28:08.069 [2024-11-28 08:26:50.055361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.069 [2024-11-28 08:26:50.055374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.069 qpair failed and we were unable to recover it. 00:28:08.069 [2024-11-28 08:26:50.055452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.069 [2024-11-28 08:26:50.055463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.069 qpair failed and we were unable to recover it. 00:28:08.069 [2024-11-28 08:26:50.055529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.069 [2024-11-28 08:26:50.055541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.069 qpair failed and we were unable to recover it. 00:28:08.069 [2024-11-28 08:26:50.055615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.069 [2024-11-28 08:26:50.055632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.069 qpair failed and we were unable to recover it. 00:28:08.069 [2024-11-28 08:26:50.055750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.069 [2024-11-28 08:26:50.055789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420 00:28:08.069 qpair failed and we were unable to recover it. 
00:28:08.069 [2024-11-28 08:26:50.055906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.069 [2024-11-28 08:26:50.055933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:08.069 qpair failed and we were unable to recover it. 00:28:08.069 [2024-11-28 08:26:50.056041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.069 [2024-11-28 08:26:50.056057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:08.069 qpair failed and we were unable to recover it. 00:28:08.069 [2024-11-28 08:26:50.056147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.069 [2024-11-28 08:26:50.056167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:08.069 qpair failed and we were unable to recover it. 00:28:08.069 [2024-11-28 08:26:50.056263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.069 [2024-11-28 08:26:50.056278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:08.069 qpair failed and we were unable to recover it. 00:28:08.069 [2024-11-28 08:26:50.056354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.069 [2024-11-28 08:26:50.056369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:08.069 qpair failed and we were unable to recover it. 
00:28:08.069 [2024-11-28 08:26:50.056474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.069 [2024-11-28 08:26:50.056489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:08.069 qpair failed and we were unable to recover it. 00:28:08.069 [2024-11-28 08:26:50.056572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.069 [2024-11-28 08:26:50.056587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:08.069 qpair failed and we were unable to recover it. 00:28:08.069 [2024-11-28 08:26:50.056721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.069 [2024-11-28 08:26:50.056758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:08.069 qpair failed and we were unable to recover it. 00:28:08.069 [2024-11-28 08:26:50.056963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.069 [2024-11-28 08:26:50.057012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.069 qpair failed and we were unable to recover it. 00:28:08.069 [2024-11-28 08:26:50.057129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.069 [2024-11-28 08:26:50.057201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.069 qpair failed and we were unable to recover it. 
00:28:08.069 [2024-11-28 08:26:50.057319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.069 [2024-11-28 08:26:50.057333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.069 qpair failed and we were unable to recover it. 00:28:08.069 [2024-11-28 08:26:50.057501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.069 [2024-11-28 08:26:50.057553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.069 qpair failed and we were unable to recover it. 00:28:08.069 [2024-11-28 08:26:50.057696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.069 [2024-11-28 08:26:50.057711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.069 qpair failed and we were unable to recover it. 00:28:08.069 [2024-11-28 08:26:50.057815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.069 [2024-11-28 08:26:50.057832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.069 qpair failed and we were unable to recover it. 00:28:08.069 [2024-11-28 08:26:50.057920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.069 [2024-11-28 08:26:50.057933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.069 qpair failed and we were unable to recover it. 
00:28:08.069 [2024-11-28 08:26:50.058020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.069 [2024-11-28 08:26:50.058036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.069 qpair failed and we were unable to recover it. 00:28:08.069 [2024-11-28 08:26:50.058142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.069 [2024-11-28 08:26:50.058155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.069 qpair failed and we were unable to recover it. 00:28:08.069 [2024-11-28 08:26:50.058241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.069 [2024-11-28 08:26:50.058253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.069 qpair failed and we were unable to recover it. 00:28:08.069 [2024-11-28 08:26:50.058321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.069 [2024-11-28 08:26:50.058332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.069 qpair failed and we were unable to recover it. 00:28:08.069 [2024-11-28 08:26:50.058400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.069 [2024-11-28 08:26:50.058411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.069 qpair failed and we were unable to recover it. 
00:28:08.069 [2024-11-28 08:26:50.058485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.069 [2024-11-28 08:26:50.058496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.069 qpair failed and we were unable to recover it. 00:28:08.069 [2024-11-28 08:26:50.058570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.069 [2024-11-28 08:26:50.058582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.069 qpair failed and we were unable to recover it. 00:28:08.069 [2024-11-28 08:26:50.058653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.069 [2024-11-28 08:26:50.058664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.069 qpair failed and we were unable to recover it. 00:28:08.069 [2024-11-28 08:26:50.058746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.069 [2024-11-28 08:26:50.058759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.069 qpair failed and we were unable to recover it. 00:28:08.069 [2024-11-28 08:26:50.058838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.069 [2024-11-28 08:26:50.058849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.069 qpair failed and we were unable to recover it. 
00:28:08.069 [2024-11-28 08:26:50.058918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.069 [2024-11-28 08:26:50.058930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.069 qpair failed and we were unable to recover it. 00:28:08.069 [2024-11-28 08:26:50.059016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.069 [2024-11-28 08:26:50.059028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.069 qpair failed and we were unable to recover it. 00:28:08.069 [2024-11-28 08:26:50.059102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.069 [2024-11-28 08:26:50.059114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.069 qpair failed and we were unable to recover it. 00:28:08.069 [2024-11-28 08:26:50.059183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.069 [2024-11-28 08:26:50.059196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.069 qpair failed and we were unable to recover it. 00:28:08.069 [2024-11-28 08:26:50.059287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.069 [2024-11-28 08:26:50.059314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:08.069 qpair failed and we were unable to recover it. 
00:28:08.069 [2024-11-28 08:26:50.059421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.069 [2024-11-28 08:26:50.059440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:08.069 qpair failed and we were unable to recover it. 00:28:08.069 [2024-11-28 08:26:50.059517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.069 [2024-11-28 08:26:50.059532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:08.069 qpair failed and we were unable to recover it. 00:28:08.069 [2024-11-28 08:26:50.059611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.069 [2024-11-28 08:26:50.059626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:08.069 qpair failed and we were unable to recover it. 00:28:08.069 [2024-11-28 08:26:50.059710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.070 [2024-11-28 08:26:50.059726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:08.070 qpair failed and we were unable to recover it. 00:28:08.070 [2024-11-28 08:26:50.059811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.070 [2024-11-28 08:26:50.059827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:08.070 qpair failed and we were unable to recover it. 
00:28:08.070 [2024-11-28 08:26:50.059911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.070 [2024-11-28 08:26:50.059925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:08.070 qpair failed and we were unable to recover it. 00:28:08.070 [2024-11-28 08:26:50.060029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.070 [2024-11-28 08:26:50.060045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:08.070 qpair failed and we were unable to recover it. 00:28:08.070 [2024-11-28 08:26:50.060124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.070 [2024-11-28 08:26:50.060139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:08.070 qpair failed and we were unable to recover it. 00:28:08.070 [2024-11-28 08:26:50.060208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.070 [2024-11-28 08:26:50.060223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:08.070 qpair failed and we were unable to recover it. 00:28:08.070 [2024-11-28 08:26:50.060299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.070 [2024-11-28 08:26:50.060315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:08.070 qpair failed and we were unable to recover it. 
00:28:08.070 [2024-11-28 08:26:50.060398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.070 [2024-11-28 08:26:50.060413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:08.070 qpair failed and we were unable to recover it. 00:28:08.070 [2024-11-28 08:26:50.060492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.070 [2024-11-28 08:26:50.060508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:08.070 qpair failed and we were unable to recover it. 00:28:08.070 [2024-11-28 08:26:50.060649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.070 [2024-11-28 08:26:50.060664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:08.070 qpair failed and we were unable to recover it. 00:28:08.070 [2024-11-28 08:26:50.060817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.070 [2024-11-28 08:26:50.060833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:08.070 qpair failed and we were unable to recover it. 00:28:08.070 [2024-11-28 08:26:50.060909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.070 [2024-11-28 08:26:50.060924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:08.070 qpair failed and we were unable to recover it. 
00:28:08.070 [2024-11-28 08:26:50.061006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.070 [2024-11-28 08:26:50.061021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:08.070 qpair failed and we were unable to recover it. 00:28:08.070 [2024-11-28 08:26:50.061122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.070 [2024-11-28 08:26:50.061137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:08.070 qpair failed and we were unable to recover it. 00:28:08.070 [2024-11-28 08:26:50.061211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.070 [2024-11-28 08:26:50.061226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:08.070 qpair failed and we were unable to recover it. 00:28:08.070 [2024-11-28 08:26:50.061377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.070 [2024-11-28 08:26:50.061392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:08.070 qpair failed and we were unable to recover it. 00:28:08.070 [2024-11-28 08:26:50.061467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.070 [2024-11-28 08:26:50.061483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:08.070 qpair failed and we were unable to recover it. 
00:28:08.070 [2024-11-28 08:26:50.061559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.070 [2024-11-28 08:26:50.061574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:08.070 qpair failed and we were unable to recover it. 00:28:08.070 [2024-11-28 08:26:50.061652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.070 [2024-11-28 08:26:50.061667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:08.070 qpair failed and we were unable to recover it. 00:28:08.070 [2024-11-28 08:26:50.061805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.070 [2024-11-28 08:26:50.061820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:08.070 qpair failed and we were unable to recover it. 00:28:08.070 [2024-11-28 08:26:50.061902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.070 [2024-11-28 08:26:50.061916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.070 qpair failed and we were unable to recover it. 00:28:08.070 [2024-11-28 08:26:50.062007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.070 [2024-11-28 08:26:50.062019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.070 qpair failed and we were unable to recover it. 
00:28:08.070 [2024-11-28 08:26:50.062085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.070 [2024-11-28 08:26:50.062097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.070 qpair failed and we were unable to recover it. 00:28:08.070 [2024-11-28 08:26:50.062162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.070 [2024-11-28 08:26:50.062175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.070 qpair failed and we were unable to recover it. 00:28:08.070 [2024-11-28 08:26:50.062358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.070 [2024-11-28 08:26:50.062370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.070 qpair failed and we were unable to recover it. 00:28:08.070 [2024-11-28 08:26:50.062454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.070 [2024-11-28 08:26:50.062464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.070 qpair failed and we were unable to recover it. 00:28:08.070 [2024-11-28 08:26:50.062610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.070 [2024-11-28 08:26:50.062621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.070 qpair failed and we were unable to recover it. 
00:28:08.073 [2024-11-28 08:26:50.074910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.073 [2024-11-28 08:26:50.074921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.073 qpair failed and we were unable to recover it. 00:28:08.073 [2024-11-28 08:26:50.075066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.073 [2024-11-28 08:26:50.075077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.073 qpair failed and we were unable to recover it. 00:28:08.073 [2024-11-28 08:26:50.075207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.073 [2024-11-28 08:26:50.075218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.073 qpair failed and we were unable to recover it. 00:28:08.073 [2024-11-28 08:26:50.075354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.073 [2024-11-28 08:26:50.075365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.073 qpair failed and we were unable to recover it. 00:28:08.073 [2024-11-28 08:26:50.075565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.073 [2024-11-28 08:26:50.075576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.073 qpair failed and we were unable to recover it. 
00:28:08.073 [2024-11-28 08:26:50.075779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.073 [2024-11-28 08:26:50.075790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.073 qpair failed and we were unable to recover it. 00:28:08.073 [2024-11-28 08:26:50.075866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.073 [2024-11-28 08:26:50.075879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.073 qpair failed and we were unable to recover it. 00:28:08.073 [2024-11-28 08:26:50.075955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.073 [2024-11-28 08:26:50.075967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.073 qpair failed and we were unable to recover it. 00:28:08.073 [2024-11-28 08:26:50.076037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.073 [2024-11-28 08:26:50.076048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.073 qpair failed and we were unable to recover it. 00:28:08.073 [2024-11-28 08:26:50.076180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.073 [2024-11-28 08:26:50.076192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.073 qpair failed and we were unable to recover it. 
00:28:08.073 [2024-11-28 08:26:50.076279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.074 [2024-11-28 08:26:50.076290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.074 qpair failed and we were unable to recover it. 00:28:08.074 [2024-11-28 08:26:50.076365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.074 [2024-11-28 08:26:50.076376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.074 qpair failed and we were unable to recover it. 00:28:08.074 [2024-11-28 08:26:50.076455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.074 [2024-11-28 08:26:50.076467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.074 qpair failed and we were unable to recover it. 00:28:08.074 [2024-11-28 08:26:50.076600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.074 [2024-11-28 08:26:50.076612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.074 qpair failed and we were unable to recover it. 00:28:08.074 [2024-11-28 08:26:50.076744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.074 [2024-11-28 08:26:50.076755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.074 qpair failed and we were unable to recover it. 
00:28:08.074 [2024-11-28 08:26:50.076833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.074 [2024-11-28 08:26:50.076844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.074 qpair failed and we were unable to recover it. 00:28:08.074 [2024-11-28 08:26:50.076910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.074 [2024-11-28 08:26:50.076921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.074 qpair failed and we were unable to recover it. 00:28:08.074 [2024-11-28 08:26:50.076996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.074 [2024-11-28 08:26:50.077008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.074 qpair failed and we were unable to recover it. 00:28:08.074 [2024-11-28 08:26:50.077092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.074 [2024-11-28 08:26:50.077104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.074 qpair failed and we were unable to recover it. 00:28:08.074 [2024-11-28 08:26:50.077195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.074 [2024-11-28 08:26:50.077206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.074 qpair failed and we were unable to recover it. 
00:28:08.074 [2024-11-28 08:26:50.077282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.074 [2024-11-28 08:26:50.077294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.074 qpair failed and we were unable to recover it. 00:28:08.074 [2024-11-28 08:26:50.077427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.074 [2024-11-28 08:26:50.077438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.074 qpair failed and we were unable to recover it. 00:28:08.074 [2024-11-28 08:26:50.077504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.074 [2024-11-28 08:26:50.077515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.074 qpair failed and we were unable to recover it. 00:28:08.074 [2024-11-28 08:26:50.077661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.074 [2024-11-28 08:26:50.077673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.074 qpair failed and we were unable to recover it. 00:28:08.074 [2024-11-28 08:26:50.077760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.074 [2024-11-28 08:26:50.077772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.074 qpair failed and we were unable to recover it. 
00:28:08.074 [2024-11-28 08:26:50.077909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.074 [2024-11-28 08:26:50.077920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.074 qpair failed and we were unable to recover it. 00:28:08.074 [2024-11-28 08:26:50.077989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.074 [2024-11-28 08:26:50.078000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.074 qpair failed and we were unable to recover it. 00:28:08.074 [2024-11-28 08:26:50.078134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.074 [2024-11-28 08:26:50.078147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.074 qpair failed and we were unable to recover it. 00:28:08.074 [2024-11-28 08:26:50.078295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.074 [2024-11-28 08:26:50.078306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.074 qpair failed and we were unable to recover it. 00:28:08.074 [2024-11-28 08:26:50.078394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.074 [2024-11-28 08:26:50.078404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.074 qpair failed and we were unable to recover it. 
00:28:08.074 [2024-11-28 08:26:50.078506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.074 [2024-11-28 08:26:50.078517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.074 qpair failed and we were unable to recover it. 00:28:08.074 [2024-11-28 08:26:50.078676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.074 [2024-11-28 08:26:50.078689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.074 qpair failed and we were unable to recover it. 00:28:08.074 [2024-11-28 08:26:50.078776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.074 [2024-11-28 08:26:50.078787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.074 qpair failed and we were unable to recover it. 00:28:08.074 [2024-11-28 08:26:50.078871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.074 [2024-11-28 08:26:50.078882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.074 qpair failed and we were unable to recover it. 00:28:08.074 [2024-11-28 08:26:50.079015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.074 [2024-11-28 08:26:50.079027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.074 qpair failed and we were unable to recover it. 
00:28:08.074 [2024-11-28 08:26:50.079167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.074 [2024-11-28 08:26:50.079178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.074 qpair failed and we were unable to recover it. 00:28:08.074 [2024-11-28 08:26:50.079312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.074 [2024-11-28 08:26:50.079323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.074 qpair failed and we were unable to recover it. 00:28:08.074 [2024-11-28 08:26:50.079401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.074 [2024-11-28 08:26:50.079412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.074 qpair failed and we were unable to recover it. 00:28:08.074 [2024-11-28 08:26:50.079554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.074 [2024-11-28 08:26:50.079565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.074 qpair failed and we were unable to recover it. 00:28:08.074 [2024-11-28 08:26:50.079640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.074 [2024-11-28 08:26:50.079651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.074 qpair failed and we were unable to recover it. 
00:28:08.074 [2024-11-28 08:26:50.079795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.074 [2024-11-28 08:26:50.079805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.074 qpair failed and we were unable to recover it. 00:28:08.074 [2024-11-28 08:26:50.079894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.074 [2024-11-28 08:26:50.079905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.074 qpair failed and we were unable to recover it. 00:28:08.074 [2024-11-28 08:26:50.079999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.074 [2024-11-28 08:26:50.080010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.074 qpair failed and we were unable to recover it. 00:28:08.074 [2024-11-28 08:26:50.080142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.074 [2024-11-28 08:26:50.080154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.074 qpair failed and we were unable to recover it. 00:28:08.074 [2024-11-28 08:26:50.080294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.074 [2024-11-28 08:26:50.080305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.074 qpair failed and we were unable to recover it. 
00:28:08.074 [2024-11-28 08:26:50.080394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.074 [2024-11-28 08:26:50.080405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.074 qpair failed and we were unable to recover it. 00:28:08.074 [2024-11-28 08:26:50.080469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.074 [2024-11-28 08:26:50.080482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.074 qpair failed and we were unable to recover it. 00:28:08.074 [2024-11-28 08:26:50.080578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.074 [2024-11-28 08:26:50.080589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.074 qpair failed and we were unable to recover it. 00:28:08.074 [2024-11-28 08:26:50.080659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.074 [2024-11-28 08:26:50.080670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.075 qpair failed and we were unable to recover it. 00:28:08.075 [2024-11-28 08:26:50.080815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.075 [2024-11-28 08:26:50.080827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.075 qpair failed and we were unable to recover it. 
00:28:08.075 [2024-11-28 08:26:50.080984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.075 [2024-11-28 08:26:50.080996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.075 qpair failed and we were unable to recover it. 00:28:08.075 [2024-11-28 08:26:50.081073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.075 [2024-11-28 08:26:50.081084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.075 qpair failed and we were unable to recover it. 00:28:08.075 [2024-11-28 08:26:50.081152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.075 [2024-11-28 08:26:50.081164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.075 qpair failed and we were unable to recover it. 00:28:08.075 [2024-11-28 08:26:50.081257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.075 [2024-11-28 08:26:50.081268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.075 qpair failed and we were unable to recover it. 00:28:08.075 [2024-11-28 08:26:50.081337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.075 [2024-11-28 08:26:50.081347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.075 qpair failed and we were unable to recover it. 
00:28:08.075 [2024-11-28 08:26:50.081422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.075 [2024-11-28 08:26:50.081434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.075 qpair failed and we were unable to recover it. 00:28:08.075 [2024-11-28 08:26:50.081637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.075 [2024-11-28 08:26:50.081648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.075 qpair failed and we were unable to recover it. 00:28:08.075 [2024-11-28 08:26:50.081787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.075 [2024-11-28 08:26:50.081797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.075 qpair failed and we were unable to recover it. 00:28:08.075 [2024-11-28 08:26:50.081874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.075 [2024-11-28 08:26:50.081884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.075 qpair failed and we were unable to recover it. 00:28:08.075 [2024-11-28 08:26:50.081955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.075 [2024-11-28 08:26:50.081966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.075 qpair failed and we were unable to recover it. 
00:28:08.075 [2024-11-28 08:26:50.082057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.075 [2024-11-28 08:26:50.082068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.075 qpair failed and we were unable to recover it. 00:28:08.075 [2024-11-28 08:26:50.082145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.075 [2024-11-28 08:26:50.082156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.075 qpair failed and we were unable to recover it. 00:28:08.075 [2024-11-28 08:26:50.082230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.075 [2024-11-28 08:26:50.082241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.075 qpair failed and we were unable to recover it. 00:28:08.075 [2024-11-28 08:26:50.082320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.075 [2024-11-28 08:26:50.082331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.075 qpair failed and we were unable to recover it. 00:28:08.075 [2024-11-28 08:26:50.082465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.075 [2024-11-28 08:26:50.082476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.075 qpair failed and we were unable to recover it. 
00:28:08.075 [2024-11-28 08:26:50.082559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.075 [2024-11-28 08:26:50.082569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.075 qpair failed and we were unable to recover it. 00:28:08.075 [2024-11-28 08:26:50.082650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.075 [2024-11-28 08:26:50.082661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.075 qpair failed and we were unable to recover it. 00:28:08.075 [2024-11-28 08:26:50.082742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.075 [2024-11-28 08:26:50.082753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.075 qpair failed and we were unable to recover it. 00:28:08.075 [2024-11-28 08:26:50.082830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.075 [2024-11-28 08:26:50.082841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.075 qpair failed and we were unable to recover it. 00:28:08.075 [2024-11-28 08:26:50.082910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.075 [2024-11-28 08:26:50.082921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.075 qpair failed and we were unable to recover it. 
00:28:08.075 [2024-11-28 08:26:50.082994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.075 [2024-11-28 08:26:50.083005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.075 qpair failed and we were unable to recover it. 00:28:08.075 [2024-11-28 08:26:50.083073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.075 [2024-11-28 08:26:50.083084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.075 qpair failed and we were unable to recover it. 00:28:08.075 [2024-11-28 08:26:50.083178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.075 [2024-11-28 08:26:50.083190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.075 qpair failed and we were unable to recover it. 00:28:08.075 [2024-11-28 08:26:50.083260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.075 [2024-11-28 08:26:50.083271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.075 qpair failed and we were unable to recover it. 00:28:08.075 [2024-11-28 08:26:50.083338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.075 [2024-11-28 08:26:50.083349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.075 qpair failed and we were unable to recover it. 
00:28:08.075 [2024-11-28 08:26:50.083432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.075 [2024-11-28 08:26:50.083443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.075 qpair failed and we were unable to recover it. 00:28:08.075 [2024-11-28 08:26:50.083505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.075 [2024-11-28 08:26:50.083516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.075 qpair failed and we were unable to recover it. 00:28:08.075 [2024-11-28 08:26:50.083589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.075 [2024-11-28 08:26:50.083600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.075 qpair failed and we were unable to recover it. 00:28:08.075 [2024-11-28 08:26:50.083664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.075 [2024-11-28 08:26:50.083677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.075 qpair failed and we were unable to recover it. 00:28:08.075 [2024-11-28 08:26:50.083806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.075 [2024-11-28 08:26:50.083817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.075 qpair failed and we were unable to recover it. 
00:28:08.078 [2024-11-28 08:26:50.096510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.078 [2024-11-28 08:26:50.096520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.078 qpair failed and we were unable to recover it. 00:28:08.078 [2024-11-28 08:26:50.096662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.078 [2024-11-28 08:26:50.096672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.078 qpair failed and we were unable to recover it. 00:28:08.078 [2024-11-28 08:26:50.096803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.078 [2024-11-28 08:26:50.096813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.078 qpair failed and we were unable to recover it. 00:28:08.078 [2024-11-28 08:26:50.096878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.078 [2024-11-28 08:26:50.096889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.078 qpair failed and we were unable to recover it. 00:28:08.078 [2024-11-28 08:26:50.096982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.078 [2024-11-28 08:26:50.096993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.078 qpair failed and we were unable to recover it. 
00:28:08.078 [2024-11-28 08:26:50.097149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.078 [2024-11-28 08:26:50.097161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.079 qpair failed and we were unable to recover it. 00:28:08.079 [2024-11-28 08:26:50.097301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.079 [2024-11-28 08:26:50.097312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.079 qpair failed and we were unable to recover it. 00:28:08.079 [2024-11-28 08:26:50.097383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.079 [2024-11-28 08:26:50.097395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.079 qpair failed and we were unable to recover it. 00:28:08.079 [2024-11-28 08:26:50.097460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.079 [2024-11-28 08:26:50.097471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.079 qpair failed and we were unable to recover it. 00:28:08.079 [2024-11-28 08:26:50.097534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.079 [2024-11-28 08:26:50.097545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.079 qpair failed and we were unable to recover it. 
00:28:08.079 [2024-11-28 08:26:50.097634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.079 [2024-11-28 08:26:50.097646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.079 qpair failed and we were unable to recover it. 00:28:08.079 [2024-11-28 08:26:50.097718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.079 [2024-11-28 08:26:50.097729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.079 qpair failed and we were unable to recover it. 00:28:08.079 [2024-11-28 08:26:50.097937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.079 [2024-11-28 08:26:50.097952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.079 qpair failed and we were unable to recover it. 00:28:08.079 [2024-11-28 08:26:50.098100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.079 [2024-11-28 08:26:50.098111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.079 qpair failed and we were unable to recover it. 00:28:08.079 [2024-11-28 08:26:50.098255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.079 [2024-11-28 08:26:50.098266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.079 qpair failed and we were unable to recover it. 
00:28:08.079 [2024-11-28 08:26:50.098342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.079 [2024-11-28 08:26:50.098353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.079 qpair failed and we were unable to recover it. 00:28:08.079 [2024-11-28 08:26:50.098431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.079 [2024-11-28 08:26:50.098444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.079 qpair failed and we were unable to recover it. 00:28:08.079 [2024-11-28 08:26:50.098574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.079 [2024-11-28 08:26:50.098585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.079 qpair failed and we were unable to recover it. 00:28:08.079 [2024-11-28 08:26:50.098666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.079 [2024-11-28 08:26:50.098676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.079 qpair failed and we were unable to recover it. 00:28:08.079 [2024-11-28 08:26:50.098752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.079 [2024-11-28 08:26:50.098763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.079 qpair failed and we were unable to recover it. 
00:28:08.079 [2024-11-28 08:26:50.098840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.079 [2024-11-28 08:26:50.098852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.079 qpair failed and we were unable to recover it. 00:28:08.079 [2024-11-28 08:26:50.098991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.079 [2024-11-28 08:26:50.099003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.079 qpair failed and we were unable to recover it. 00:28:08.079 [2024-11-28 08:26:50.099082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.079 [2024-11-28 08:26:50.099093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.079 qpair failed and we were unable to recover it. 00:28:08.079 [2024-11-28 08:26:50.099260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.079 [2024-11-28 08:26:50.099272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.079 qpair failed and we were unable to recover it. 00:28:08.079 [2024-11-28 08:26:50.099429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.079 [2024-11-28 08:26:50.099440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.079 qpair failed and we were unable to recover it. 
00:28:08.079 [2024-11-28 08:26:50.099517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.079 [2024-11-28 08:26:50.099528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.079 qpair failed and we were unable to recover it. 00:28:08.079 [2024-11-28 08:26:50.099607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.079 [2024-11-28 08:26:50.099619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.079 qpair failed and we were unable to recover it. 00:28:08.079 [2024-11-28 08:26:50.099686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.079 [2024-11-28 08:26:50.099697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.079 qpair failed and we were unable to recover it. 00:28:08.079 [2024-11-28 08:26:50.099875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.079 [2024-11-28 08:26:50.099886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.079 qpair failed and we were unable to recover it. 00:28:08.079 [2024-11-28 08:26:50.099980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.079 [2024-11-28 08:26:50.099992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.079 qpair failed and we were unable to recover it. 
00:28:08.079 [2024-11-28 08:26:50.100062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.079 [2024-11-28 08:26:50.100073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.079 qpair failed and we were unable to recover it. 00:28:08.079 [2024-11-28 08:26:50.100221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.079 [2024-11-28 08:26:50.100232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.079 qpair failed and we were unable to recover it. 00:28:08.079 [2024-11-28 08:26:50.100298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.079 [2024-11-28 08:26:50.100310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.079 qpair failed and we were unable to recover it. 00:28:08.079 [2024-11-28 08:26:50.100447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.079 [2024-11-28 08:26:50.100457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.079 qpair failed and we were unable to recover it. 00:28:08.079 [2024-11-28 08:26:50.100597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.079 [2024-11-28 08:26:50.100608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.079 qpair failed and we were unable to recover it. 
00:28:08.079 [2024-11-28 08:26:50.100681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.079 [2024-11-28 08:26:50.100692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.079 qpair failed and we were unable to recover it. 00:28:08.079 [2024-11-28 08:26:50.100823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.079 [2024-11-28 08:26:50.100834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.079 qpair failed and we were unable to recover it. 00:28:08.079 [2024-11-28 08:26:50.100910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.079 [2024-11-28 08:26:50.100921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.079 qpair failed and we were unable to recover it. 00:28:08.079 [2024-11-28 08:26:50.101004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.079 [2024-11-28 08:26:50.101016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.079 qpair failed and we were unable to recover it. 00:28:08.079 [2024-11-28 08:26:50.101084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.079 [2024-11-28 08:26:50.101094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.079 qpair failed and we were unable to recover it. 
00:28:08.079 [2024-11-28 08:26:50.101243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.079 [2024-11-28 08:26:50.101255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.079 qpair failed and we were unable to recover it. 00:28:08.079 [2024-11-28 08:26:50.101391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.079 [2024-11-28 08:26:50.101402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.079 qpair failed and we were unable to recover it. 00:28:08.079 [2024-11-28 08:26:50.101588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.079 [2024-11-28 08:26:50.101599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.079 qpair failed and we were unable to recover it. 00:28:08.079 [2024-11-28 08:26:50.101687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.079 [2024-11-28 08:26:50.101700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.080 qpair failed and we were unable to recover it. 00:28:08.080 [2024-11-28 08:26:50.101782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.080 [2024-11-28 08:26:50.101793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.080 qpair failed and we were unable to recover it. 
00:28:08.080 [2024-11-28 08:26:50.101880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.080 [2024-11-28 08:26:50.101890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.080 qpair failed and we were unable to recover it. 00:28:08.080 [2024-11-28 08:26:50.102022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.080 [2024-11-28 08:26:50.102033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.080 qpair failed and we were unable to recover it. 00:28:08.080 [2024-11-28 08:26:50.102244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.080 [2024-11-28 08:26:50.102256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.080 qpair failed and we were unable to recover it. 00:28:08.080 [2024-11-28 08:26:50.102320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.080 [2024-11-28 08:26:50.102331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.080 qpair failed and we were unable to recover it. 00:28:08.080 [2024-11-28 08:26:50.102402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.080 [2024-11-28 08:26:50.102413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.080 qpair failed and we were unable to recover it. 
00:28:08.080 [2024-11-28 08:26:50.102498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.080 [2024-11-28 08:26:50.102509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.080 qpair failed and we were unable to recover it. 00:28:08.080 [2024-11-28 08:26:50.102578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.080 [2024-11-28 08:26:50.102597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.080 qpair failed and we were unable to recover it. 00:28:08.080 [2024-11-28 08:26:50.102815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.080 [2024-11-28 08:26:50.102828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.080 qpair failed and we were unable to recover it. 00:28:08.080 [2024-11-28 08:26:50.102900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.080 [2024-11-28 08:26:50.102911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.080 qpair failed and we were unable to recover it. 00:28:08.080 [2024-11-28 08:26:50.103055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.080 [2024-11-28 08:26:50.103067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.080 qpair failed and we were unable to recover it. 
00:28:08.080 [2024-11-28 08:26:50.103158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.080 [2024-11-28 08:26:50.103169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.080 qpair failed and we were unable to recover it. 00:28:08.080 [2024-11-28 08:26:50.103242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.080 [2024-11-28 08:26:50.103255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.080 qpair failed and we were unable to recover it. 00:28:08.080 [2024-11-28 08:26:50.103389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.080 [2024-11-28 08:26:50.103401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.080 qpair failed and we were unable to recover it. 00:28:08.080 [2024-11-28 08:26:50.103468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.080 [2024-11-28 08:26:50.103479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.080 qpair failed and we were unable to recover it. 00:28:08.080 [2024-11-28 08:26:50.103638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.080 [2024-11-28 08:26:50.103649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.080 qpair failed and we were unable to recover it. 
00:28:08.080 [2024-11-28 08:26:50.103743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.080 [2024-11-28 08:26:50.103755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.080 qpair failed and we were unable to recover it. 00:28:08.080 [2024-11-28 08:26:50.103853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.080 [2024-11-28 08:26:50.103866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.080 qpair failed and we were unable to recover it. 00:28:08.080 [2024-11-28 08:26:50.103954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.080 [2024-11-28 08:26:50.103966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.080 qpair failed and we were unable to recover it. 00:28:08.080 [2024-11-28 08:26:50.104045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.080 [2024-11-28 08:26:50.104055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.080 qpair failed and we were unable to recover it. 00:28:08.080 [2024-11-28 08:26:50.104197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.080 [2024-11-28 08:26:50.104209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.080 qpair failed and we were unable to recover it. 
00:28:08.080 [2024-11-28 08:26:50.104272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.080 [2024-11-28 08:26:50.104283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.080 qpair failed and we were unable to recover it. 00:28:08.080 [2024-11-28 08:26:50.104444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.080 [2024-11-28 08:26:50.104455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.080 qpair failed and we were unable to recover it. 00:28:08.080 [2024-11-28 08:26:50.104599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.080 [2024-11-28 08:26:50.104611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.080 qpair failed and we were unable to recover it. 00:28:08.080 [2024-11-28 08:26:50.104812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.080 [2024-11-28 08:26:50.104823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.080 qpair failed and we were unable to recover it. 00:28:08.080 [2024-11-28 08:26:50.104903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.080 [2024-11-28 08:26:50.104915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.080 qpair failed and we were unable to recover it. 
00:28:08.080 [2024-11-28 08:26:50.104991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.080 [2024-11-28 08:26:50.105003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.080 qpair failed and we were unable to recover it. 00:28:08.080 [2024-11-28 08:26:50.105085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.080 [2024-11-28 08:26:50.105096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.080 qpair failed and we were unable to recover it. 00:28:08.080 [2024-11-28 08:26:50.105255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.080 [2024-11-28 08:26:50.105267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.080 qpair failed and we were unable to recover it. 00:28:08.080 [2024-11-28 08:26:50.105410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.080 [2024-11-28 08:26:50.105422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.080 qpair failed and we were unable to recover it. 00:28:08.080 [2024-11-28 08:26:50.105521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.080 [2024-11-28 08:26:50.105532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.080 qpair failed and we were unable to recover it. 
00:28:08.080 [2024-11-28 08:26:50.105619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.080 [2024-11-28 08:26:50.105634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.080 qpair failed and we were unable to recover it. 00:28:08.080 [2024-11-28 08:26:50.105772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.080 [2024-11-28 08:26:50.105783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.080 qpair failed and we were unable to recover it. 00:28:08.080 [2024-11-28 08:26:50.105860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.080 [2024-11-28 08:26:50.105871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.080 qpair failed and we were unable to recover it. 00:28:08.080 [2024-11-28 08:26:50.106042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.080 [2024-11-28 08:26:50.106055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.080 qpair failed and we were unable to recover it. 00:28:08.080 [2024-11-28 08:26:50.106135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.080 [2024-11-28 08:26:50.106145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.080 qpair failed and we were unable to recover it. 
00:28:08.080 [2024-11-28 08:26:50.106278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.080 [2024-11-28 08:26:50.106290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.080 qpair failed and we were unable to recover it. 00:28:08.080 [2024-11-28 08:26:50.106375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.081 [2024-11-28 08:26:50.106385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.081 qpair failed and we were unable to recover it. 00:28:08.081 [2024-11-28 08:26:50.106516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.081 [2024-11-28 08:26:50.106526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.081 qpair failed and we were unable to recover it. 00:28:08.081 [2024-11-28 08:26:50.106613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.081 [2024-11-28 08:26:50.106639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:08.081 qpair failed and we were unable to recover it. 00:28:08.081 [2024-11-28 08:26:50.106818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.081 [2024-11-28 08:26:50.106835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:08.081 qpair failed and we were unable to recover it. 
00:28:08.081 [2024-11-28 08:26:50.106908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.081 [2024-11-28 08:26:50.106924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:08.081 qpair failed and we were unable to recover it. 00:28:08.081 [2024-11-28 08:26:50.107071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.081 [2024-11-28 08:26:50.107087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:08.081 qpair failed and we were unable to recover it. 00:28:08.081 [2024-11-28 08:26:50.107153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.081 [2024-11-28 08:26:50.107167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:08.081 qpair failed and we were unable to recover it. 00:28:08.081 [2024-11-28 08:26:50.107323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.081 [2024-11-28 08:26:50.107338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:08.081 qpair failed and we were unable to recover it. 00:28:08.081 [2024-11-28 08:26:50.107415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.081 [2024-11-28 08:26:50.107430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:08.081 qpair failed and we were unable to recover it. 
00:28:08.081 [2024-11-28 08:26:50.107504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.081 [2024-11-28 08:26:50.107519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:08.081 qpair failed and we were unable to recover it. 00:28:08.081 [2024-11-28 08:26:50.107659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.081 [2024-11-28 08:26:50.107674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:08.081 qpair failed and we were unable to recover it. 00:28:08.081 [2024-11-28 08:26:50.107851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.081 [2024-11-28 08:26:50.107864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.081 qpair failed and we were unable to recover it. 00:28:08.081 [2024-11-28 08:26:50.107945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.081 [2024-11-28 08:26:50.107959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.081 qpair failed and we were unable to recover it. 00:28:08.081 [2024-11-28 08:26:50.108184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.081 [2024-11-28 08:26:50.108195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.081 qpair failed and we were unable to recover it. 
00:28:08.081 [2024-11-28 08:26:50.108260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.081 [2024-11-28 08:26:50.108283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.081 qpair failed and we were unable to recover it. 00:28:08.081 [2024-11-28 08:26:50.108413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.081 [2024-11-28 08:26:50.108423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.081 qpair failed and we were unable to recover it. 00:28:08.081 [2024-11-28 08:26:50.108569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.081 [2024-11-28 08:26:50.108580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.081 qpair failed and we were unable to recover it. 00:28:08.081 [2024-11-28 08:26:50.108726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.081 [2024-11-28 08:26:50.108736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.081 qpair failed and we were unable to recover it. 00:28:08.081 [2024-11-28 08:26:50.108805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.081 [2024-11-28 08:26:50.108815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.081 qpair failed and we were unable to recover it. 
00:28:08.081 [2024-11-28 08:26:50.108903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.081 [2024-11-28 08:26:50.108913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.081 qpair failed and we were unable to recover it. 00:28:08.081 [2024-11-28 08:26:50.109010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.081 [2024-11-28 08:26:50.109021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.081 qpair failed and we were unable to recover it. 00:28:08.081 [2024-11-28 08:26:50.109152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.081 [2024-11-28 08:26:50.109163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.081 qpair failed and we were unable to recover it. 00:28:08.081 [2024-11-28 08:26:50.109292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.081 [2024-11-28 08:26:50.109303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.081 qpair failed and we were unable to recover it. 00:28:08.081 [2024-11-28 08:26:50.109442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.081 [2024-11-28 08:26:50.109453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.081 qpair failed and we were unable to recover it. 
00:28:08.081 [2024-11-28 08:26:50.109540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.081 [2024-11-28 08:26:50.109551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.081 qpair failed and we were unable to recover it. 00:28:08.081 [2024-11-28 08:26:50.109624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.081 [2024-11-28 08:26:50.109634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.081 qpair failed and we were unable to recover it. 00:28:08.081 [2024-11-28 08:26:50.109731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.081 [2024-11-28 08:26:50.109741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.081 qpair failed and we were unable to recover it. 00:28:08.081 [2024-11-28 08:26:50.109819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.081 [2024-11-28 08:26:50.109830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.081 qpair failed and we were unable to recover it. 00:28:08.081 [2024-11-28 08:26:50.109910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.081 [2024-11-28 08:26:50.109921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.081 qpair failed and we were unable to recover it. 
00:28:08.081 [2024-11-28 08:26:50.109989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.081 [2024-11-28 08:26:50.110000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.081 qpair failed and we were unable to recover it. 00:28:08.081 [2024-11-28 08:26:50.110063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.081 08:26:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:08.081 [2024-11-28 08:26:50.110074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.081 qpair failed and we were unable to recover it. 00:28:08.081 [2024-11-28 08:26:50.110204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.081 [2024-11-28 08:26:50.110215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.081 qpair failed and we were unable to recover it. 00:28:08.081 [2024-11-28 08:26:50.110342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.081 [2024-11-28 08:26:50.110353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.082 qpair failed and we were unable to recover it. 
00:28:08.082 08:26:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@868 -- # return 0 00:28:08.082 [2024-11-28 08:26:50.110439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.082 [2024-11-28 08:26:50.110450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.082 qpair failed and we were unable to recover it. 00:28:08.082 [2024-11-28 08:26:50.110596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.082 [2024-11-28 08:26:50.110607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.082 qpair failed and we were unable to recover it. 00:28:08.082 [2024-11-28 08:26:50.110697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.082 [2024-11-28 08:26:50.110707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.082 qpair failed and we were unable to recover it. 00:28:08.082 08:26:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:08.082 [2024-11-28 08:26:50.110780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.082 [2024-11-28 08:26:50.110791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.082 qpair failed and we were unable to recover it. 
00:28:08.082 [2024-11-28 08:26:50.110882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.082 [2024-11-28 08:26:50.110893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.082 qpair failed and we were unable to recover it. 00:28:08.082 [2024-11-28 08:26:50.110957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.082 [2024-11-28 08:26:50.110967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.082 qpair failed and we were unable to recover it. 00:28:08.082 08:26:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:08.082 [2024-11-28 08:26:50.111170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.082 [2024-11-28 08:26:50.111182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.082 qpair failed and we were unable to recover it. 00:28:08.082 [2024-11-28 08:26:50.111309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.082 [2024-11-28 08:26:50.111320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.082 qpair failed and we were unable to recover it. 
00:28:08.082 08:26:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:08.082 [2024-11-28 08:26:50.111466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.082 [2024-11-28 08:26:50.111477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.082 qpair failed and we were unable to recover it. 00:28:08.082 [2024-11-28 08:26:50.111545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.082 [2024-11-28 08:26:50.111555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.082 qpair failed and we were unable to recover it. 00:28:08.082 [2024-11-28 08:26:50.111701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.082 [2024-11-28 08:26:50.111713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.082 qpair failed and we were unable to recover it. 00:28:08.082 [2024-11-28 08:26:50.111785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.082 [2024-11-28 08:26:50.111795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.082 qpair failed and we were unable to recover it. 00:28:08.082 [2024-11-28 08:26:50.111855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.082 [2024-11-28 08:26:50.111866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.082 qpair failed and we were unable to recover it. 
00:28:08.082 [2024-11-28 08:26:50.111958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.082 [2024-11-28 08:26:50.111970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.082 qpair failed and we were unable to recover it. 00:28:08.082 [2024-11-28 08:26:50.112099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.082 [2024-11-28 08:26:50.112111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.082 qpair failed and we were unable to recover it. 00:28:08.082 [2024-11-28 08:26:50.112188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.082 [2024-11-28 08:26:50.112198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.082 qpair failed and we were unable to recover it. 00:28:08.082 [2024-11-28 08:26:50.112334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.082 [2024-11-28 08:26:50.112346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.082 qpair failed and we were unable to recover it. 00:28:08.082 [2024-11-28 08:26:50.112475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.082 [2024-11-28 08:26:50.112486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.082 qpair failed and we were unable to recover it. 
00:28:08.082 [2024-11-28 08:26:50.112559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.082 [2024-11-28 08:26:50.112569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.082 qpair failed and we were unable to recover it. 00:28:08.082 [2024-11-28 08:26:50.112737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.082 [2024-11-28 08:26:50.112749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.082 qpair failed and we were unable to recover it. 00:28:08.082 [2024-11-28 08:26:50.112819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.082 [2024-11-28 08:26:50.112833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.082 qpair failed and we were unable to recover it. 00:28:08.082 [2024-11-28 08:26:50.112912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.082 [2024-11-28 08:26:50.112923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.082 qpair failed and we were unable to recover it. 00:28:08.082 [2024-11-28 08:26:50.113089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.082 [2024-11-28 08:26:50.113101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.082 qpair failed and we were unable to recover it. 
00:28:08.082 [2024-11-28 08:26:50.113180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.082 [2024-11-28 08:26:50.113190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.082 qpair failed and we were unable to recover it. 00:28:08.082 [2024-11-28 08:26:50.113332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.082 [2024-11-28 08:26:50.113343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.082 qpair failed and we were unable to recover it. 00:28:08.082 [2024-11-28 08:26:50.113475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.082 [2024-11-28 08:26:50.113487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.082 qpair failed and we were unable to recover it. 00:28:08.082 [2024-11-28 08:26:50.113545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.082 [2024-11-28 08:26:50.113556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.082 qpair failed and we were unable to recover it. 00:28:08.082 [2024-11-28 08:26:50.113689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.082 [2024-11-28 08:26:50.113702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.082 qpair failed and we were unable to recover it. 
00:28:08.082 [2024-11-28 08:26:50.113844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.082 [2024-11-28 08:26:50.113854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.082 qpair failed and we were unable to recover it. 00:28:08.082 [2024-11-28 08:26:50.113935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.082 [2024-11-28 08:26:50.113950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.082 qpair failed and we were unable to recover it. 00:28:08.082 [2024-11-28 08:26:50.114012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.082 [2024-11-28 08:26:50.114023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.082 qpair failed and we were unable to recover it. 00:28:08.082 [2024-11-28 08:26:50.114171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.082 [2024-11-28 08:26:50.114184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.082 qpair failed and we were unable to recover it. 00:28:08.082 [2024-11-28 08:26:50.114261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.082 [2024-11-28 08:26:50.114272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.082 qpair failed and we were unable to recover it. 
00:28:08.082 [2024-11-28 08:26:50.114427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.082 [2024-11-28 08:26:50.114438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.082 qpair failed and we were unable to recover it. 00:28:08.082 [2024-11-28 08:26:50.114519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.082 [2024-11-28 08:26:50.114530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.082 qpair failed and we were unable to recover it. 00:28:08.082 [2024-11-28 08:26:50.114719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.082 [2024-11-28 08:26:50.114731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.082 qpair failed and we were unable to recover it. 00:28:08.083 [2024-11-28 08:26:50.114814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.083 [2024-11-28 08:26:50.114825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.083 qpair failed and we were unable to recover it. 00:28:08.083 [2024-11-28 08:26:50.114959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.083 [2024-11-28 08:26:50.114972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.083 qpair failed and we were unable to recover it. 
00:28:08.083 [2024-11-28 08:26:50.115106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.083 [2024-11-28 08:26:50.115118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.083 qpair failed and we were unable to recover it. 00:28:08.083 [2024-11-28 08:26:50.115191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.083 [2024-11-28 08:26:50.115203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.083 qpair failed and we were unable to recover it. 00:28:08.083 [2024-11-28 08:26:50.115265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.083 [2024-11-28 08:26:50.115277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.083 qpair failed and we were unable to recover it. 00:28:08.083 [2024-11-28 08:26:50.115354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.083 [2024-11-28 08:26:50.115365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.083 qpair failed and we were unable to recover it. 00:28:08.083 [2024-11-28 08:26:50.115432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.083 [2024-11-28 08:26:50.115443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.083 qpair failed and we were unable to recover it. 
00:28:08.083 [2024-11-28 08:26:50.115524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.083 [2024-11-28 08:26:50.115536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.083 qpair failed and we were unable to recover it. 00:28:08.083 [2024-11-28 08:26:50.115618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.083 [2024-11-28 08:26:50.115629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.083 qpair failed and we were unable to recover it. 00:28:08.083 [2024-11-28 08:26:50.115709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.083 [2024-11-28 08:26:50.115720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.083 qpair failed and we were unable to recover it. 00:28:08.083 [2024-11-28 08:26:50.115866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.083 [2024-11-28 08:26:50.115876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.083 qpair failed and we were unable to recover it. 00:28:08.083 [2024-11-28 08:26:50.115954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.083 [2024-11-28 08:26:50.115968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.083 qpair failed and we were unable to recover it. 
00:28:08.083 [2024-11-28 08:26:50.116067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.083 [2024-11-28 08:26:50.116078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.083 qpair failed and we were unable to recover it. 00:28:08.083 [2024-11-28 08:26:50.116156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.083 [2024-11-28 08:26:50.116166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.083 qpair failed and we were unable to recover it. 00:28:08.083 [2024-11-28 08:26:50.116252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.083 [2024-11-28 08:26:50.116264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.083 qpair failed and we were unable to recover it. 00:28:08.083 [2024-11-28 08:26:50.116333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.083 [2024-11-28 08:26:50.116343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.083 qpair failed and we were unable to recover it. 00:28:08.083 [2024-11-28 08:26:50.116409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.083 [2024-11-28 08:26:50.116419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.083 qpair failed and we were unable to recover it. 
00:28:08.083 [2024-11-28 08:26:50.116489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.083 [2024-11-28 08:26:50.116500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.083 qpair failed and we were unable to recover it. 00:28:08.083 [2024-11-28 08:26:50.116558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.083 [2024-11-28 08:26:50.116568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.083 qpair failed and we were unable to recover it. 00:28:08.083 [2024-11-28 08:26:50.116648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.083 [2024-11-28 08:26:50.116659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.083 qpair failed and we were unable to recover it. 00:28:08.083 [2024-11-28 08:26:50.116740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.083 [2024-11-28 08:26:50.116751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.083 qpair failed and we were unable to recover it. 00:28:08.083 [2024-11-28 08:26:50.116837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.083 [2024-11-28 08:26:50.116848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.083 qpair failed and we were unable to recover it. 
00:28:08.086 [2024-11-28 08:26:50.128249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.086 [2024-11-28 08:26:50.128260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.086 qpair failed and we were unable to recover it. 00:28:08.086 [2024-11-28 08:26:50.128390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.086 [2024-11-28 08:26:50.128401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.086 qpair failed and we were unable to recover it. 00:28:08.086 [2024-11-28 08:26:50.128486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.086 [2024-11-28 08:26:50.128497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.086 qpair failed and we were unable to recover it. 00:28:08.086 [2024-11-28 08:26:50.128566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.086 [2024-11-28 08:26:50.128578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.086 qpair failed and we were unable to recover it. 00:28:08.086 [2024-11-28 08:26:50.128642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.086 [2024-11-28 08:26:50.128652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.086 qpair failed and we were unable to recover it. 
00:28:08.086 [2024-11-28 08:26:50.128727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.086 [2024-11-28 08:26:50.128738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.086 qpair failed and we were unable to recover it. 00:28:08.086 [2024-11-28 08:26:50.128818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.086 [2024-11-28 08:26:50.128829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.086 qpair failed and we were unable to recover it. 00:28:08.086 [2024-11-28 08:26:50.128912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.086 [2024-11-28 08:26:50.128923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.086 qpair failed and we were unable to recover it. 00:28:08.086 [2024-11-28 08:26:50.129001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.086 [2024-11-28 08:26:50.129012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.086 qpair failed and we were unable to recover it. 00:28:08.086 [2024-11-28 08:26:50.129083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.086 [2024-11-28 08:26:50.129094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.086 qpair failed and we were unable to recover it. 
00:28:08.086 [2024-11-28 08:26:50.129188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.086 [2024-11-28 08:26:50.129199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.086 qpair failed and we were unable to recover it. 00:28:08.086 [2024-11-28 08:26:50.129268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.086 [2024-11-28 08:26:50.129278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.086 qpair failed and we were unable to recover it. 00:28:08.086 [2024-11-28 08:26:50.129358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.086 [2024-11-28 08:26:50.129371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.086 qpair failed and we were unable to recover it. 00:28:08.086 [2024-11-28 08:26:50.129459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.086 [2024-11-28 08:26:50.129470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.086 qpair failed and we were unable to recover it. 00:28:08.086 [2024-11-28 08:26:50.129545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.086 [2024-11-28 08:26:50.129555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.086 qpair failed and we were unable to recover it. 
00:28:08.086 [2024-11-28 08:26:50.129628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.086 [2024-11-28 08:26:50.129639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.086 qpair failed and we were unable to recover it. 00:28:08.086 [2024-11-28 08:26:50.129709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.086 [2024-11-28 08:26:50.129719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.086 qpair failed and we were unable to recover it. 00:28:08.086 [2024-11-28 08:26:50.129790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.086 [2024-11-28 08:26:50.129801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.086 qpair failed and we were unable to recover it. 00:28:08.086 [2024-11-28 08:26:50.129945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.087 [2024-11-28 08:26:50.129961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.087 qpair failed and we were unable to recover it. 00:28:08.087 [2024-11-28 08:26:50.130021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.087 [2024-11-28 08:26:50.130032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.087 qpair failed and we were unable to recover it. 
00:28:08.087 [2024-11-28 08:26:50.130165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.087 [2024-11-28 08:26:50.130177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.087 qpair failed and we were unable to recover it. 00:28:08.087 [2024-11-28 08:26:50.130248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.087 [2024-11-28 08:26:50.130260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.087 qpair failed and we were unable to recover it. 00:28:08.087 [2024-11-28 08:26:50.130340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.087 [2024-11-28 08:26:50.130352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.087 qpair failed and we were unable to recover it. 00:28:08.087 [2024-11-28 08:26:50.130426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.087 [2024-11-28 08:26:50.130437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.087 qpair failed and we were unable to recover it. 00:28:08.087 [2024-11-28 08:26:50.130513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.087 [2024-11-28 08:26:50.130524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.087 qpair failed and we were unable to recover it. 
00:28:08.087 [2024-11-28 08:26:50.130598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.087 [2024-11-28 08:26:50.130610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.087 qpair failed and we were unable to recover it. 00:28:08.087 [2024-11-28 08:26:50.130693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.087 [2024-11-28 08:26:50.130704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.087 qpair failed and we were unable to recover it. 00:28:08.087 [2024-11-28 08:26:50.130770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.087 [2024-11-28 08:26:50.130781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.087 qpair failed and we were unable to recover it. 00:28:08.087 [2024-11-28 08:26:50.130843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.087 [2024-11-28 08:26:50.130854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.087 qpair failed and we were unable to recover it. 00:28:08.087 [2024-11-28 08:26:50.130932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.087 [2024-11-28 08:26:50.130943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.087 qpair failed and we were unable to recover it. 
00:28:08.087 [2024-11-28 08:26:50.131082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.087 [2024-11-28 08:26:50.131093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.087 qpair failed and we were unable to recover it. 00:28:08.087 [2024-11-28 08:26:50.131156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.087 [2024-11-28 08:26:50.131167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.087 qpair failed and we were unable to recover it. 00:28:08.087 [2024-11-28 08:26:50.131243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.087 [2024-11-28 08:26:50.131255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.087 qpair failed and we were unable to recover it. 00:28:08.087 [2024-11-28 08:26:50.131324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.087 [2024-11-28 08:26:50.131335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.087 qpair failed and we were unable to recover it. 00:28:08.087 [2024-11-28 08:26:50.131492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.087 [2024-11-28 08:26:50.131503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.087 qpair failed and we were unable to recover it. 
00:28:08.087 [2024-11-28 08:26:50.131567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.087 [2024-11-28 08:26:50.131578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.087 qpair failed and we were unable to recover it. 00:28:08.087 [2024-11-28 08:26:50.131645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.087 [2024-11-28 08:26:50.131656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.087 qpair failed and we were unable to recover it. 00:28:08.087 [2024-11-28 08:26:50.131724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.087 [2024-11-28 08:26:50.131735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.087 qpair failed and we were unable to recover it. 00:28:08.087 [2024-11-28 08:26:50.131811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.087 [2024-11-28 08:26:50.131823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.087 qpair failed and we were unable to recover it. 00:28:08.087 [2024-11-28 08:26:50.131903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.087 [2024-11-28 08:26:50.131918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.087 qpair failed and we were unable to recover it. 
00:28:08.087 [2024-11-28 08:26:50.131985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.087 [2024-11-28 08:26:50.131997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.087 qpair failed and we were unable to recover it. 00:28:08.087 [2024-11-28 08:26:50.132062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.087 [2024-11-28 08:26:50.132073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.087 qpair failed and we were unable to recover it. 00:28:08.087 [2024-11-28 08:26:50.132143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.087 [2024-11-28 08:26:50.132153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.087 qpair failed and we were unable to recover it. 00:28:08.087 [2024-11-28 08:26:50.132292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.087 [2024-11-28 08:26:50.132303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.087 qpair failed and we were unable to recover it. 00:28:08.087 [2024-11-28 08:26:50.132378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.087 [2024-11-28 08:26:50.132389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.087 qpair failed and we were unable to recover it. 
00:28:08.087 [2024-11-28 08:26:50.132448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.087 [2024-11-28 08:26:50.132459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.087 qpair failed and we were unable to recover it. 00:28:08.087 [2024-11-28 08:26:50.132538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.087 [2024-11-28 08:26:50.132549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.087 qpair failed and we were unable to recover it. 00:28:08.087 [2024-11-28 08:26:50.132626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.087 [2024-11-28 08:26:50.132636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.087 qpair failed and we were unable to recover it. 00:28:08.087 [2024-11-28 08:26:50.132718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.087 [2024-11-28 08:26:50.132731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.087 qpair failed and we were unable to recover it. 00:28:08.087 [2024-11-28 08:26:50.132798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.087 [2024-11-28 08:26:50.132809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.087 qpair failed and we were unable to recover it. 
00:28:08.087 [2024-11-28 08:26:50.132894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.087 [2024-11-28 08:26:50.132905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.087 qpair failed and we were unable to recover it. 00:28:08.087 [2024-11-28 08:26:50.133037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.087 [2024-11-28 08:26:50.133049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.087 qpair failed and we were unable to recover it. 00:28:08.087 [2024-11-28 08:26:50.133118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.087 [2024-11-28 08:26:50.133128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.087 qpair failed and we were unable to recover it. 00:28:08.087 [2024-11-28 08:26:50.133201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.087 [2024-11-28 08:26:50.133214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.087 qpair failed and we were unable to recover it. 00:28:08.087 [2024-11-28 08:26:50.133294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.087 [2024-11-28 08:26:50.133305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.087 qpair failed and we were unable to recover it. 
00:28:08.087 [2024-11-28 08:26:50.133373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.087 [2024-11-28 08:26:50.133384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.087 qpair failed and we were unable to recover it. 00:28:08.087 [2024-11-28 08:26:50.133451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.087 [2024-11-28 08:26:50.133462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.087 qpair failed and we were unable to recover it. 00:28:08.087 [2024-11-28 08:26:50.133536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.087 [2024-11-28 08:26:50.133547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.087 qpair failed and we were unable to recover it. 00:28:08.087 [2024-11-28 08:26:50.133685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.087 [2024-11-28 08:26:50.133696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.087 qpair failed and we were unable to recover it. 00:28:08.087 [2024-11-28 08:26:50.133784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.088 [2024-11-28 08:26:50.133795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.088 qpair failed and we were unable to recover it. 
00:28:08.088 [2024-11-28 08:26:50.133870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.088 [2024-11-28 08:26:50.133883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.088 qpair failed and we were unable to recover it. 00:28:08.088 [2024-11-28 08:26:50.134003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.088 [2024-11-28 08:26:50.134015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.088 qpair failed and we were unable to recover it. 00:28:08.088 [2024-11-28 08:26:50.134106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.088 [2024-11-28 08:26:50.134118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.088 qpair failed and we were unable to recover it. 00:28:08.088 [2024-11-28 08:26:50.134189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.088 [2024-11-28 08:26:50.134200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.088 qpair failed and we were unable to recover it. 00:28:08.088 [2024-11-28 08:26:50.134278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.088 [2024-11-28 08:26:50.134290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.088 qpair failed and we were unable to recover it. 
00:28:08.088 A controller has encountered a failure and is being reset. 00:28:08.088 [2024-11-28 08:26:50.134489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.088 [2024-11-28 08:26:50.134516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:08.088 qpair failed and we were unable to recover it. 
00:28:08.088 (previous connect()/qpair-failure message pair repeated for tqpair=0x7f6c3c000b90 through [2024-11-28 08:26:50.136389]; every attempt failed with errno = 111, ECONNREFUSED) 
00:28:08.088 [2024-11-28 08:26:50.136466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:08.088 [2024-11-28 08:26:50.136479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:08.088 qpair failed and we were unable to recover it.
[the three lines above repeat 104 more times for tqpair=0x7f6c34000b90, with timestamps 08:26:50.136624 through 08:26:50.146137 and console timestamps 00:28:08.088 through 00:28:08.091]
00:28:08.091 [2024-11-28 08:26:50.146213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.091 [2024-11-28 08:26:50.146224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.091 qpair failed and we were unable to recover it. 00:28:08.091 [2024-11-28 08:26:50.146285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.091 [2024-11-28 08:26:50.146296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.091 qpair failed and we were unable to recover it. 00:28:08.091 [2024-11-28 08:26:50.146366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.091 [2024-11-28 08:26:50.146376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.091 qpair failed and we were unable to recover it. 00:28:08.091 [2024-11-28 08:26:50.146554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.091 [2024-11-28 08:26:50.146566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.091 qpair failed and we were unable to recover it. 00:28:08.091 08:26:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:08.091 [2024-11-28 08:26:50.146629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.091 [2024-11-28 08:26:50.146642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.091 qpair failed and we were unable to recover it. 
00:28:08.091 [2024-11-28 08:26:50.146701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.091 [2024-11-28 08:26:50.146712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.091 qpair failed and we were unable to recover it. 00:28:08.091 [2024-11-28 08:26:50.146775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.091 [2024-11-28 08:26:50.146786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.091 qpair failed and we were unable to recover it. 00:28:08.091 [2024-11-28 08:26:50.146866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.091 [2024-11-28 08:26:50.146877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.091 qpair failed and we were unable to recover it. 00:28:08.091 [2024-11-28 08:26:50.147024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.091 08:26:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:28:08.091 [2024-11-28 08:26:50.147036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.091 qpair failed and we were unable to recover it. 00:28:08.091 [2024-11-28 08:26:50.147141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.091 [2024-11-28 08:26:50.147151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.091 qpair failed and we were unable to recover it. 
00:28:08.091 [2024-11-28 08:26:50.147215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.091 [2024-11-28 08:26:50.147226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.091 qpair failed and we were unable to recover it. 00:28:08.091 [2024-11-28 08:26:50.147291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.091 [2024-11-28 08:26:50.147303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.091 qpair failed and we were unable to recover it. 00:28:08.091 [2024-11-28 08:26:50.147383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.091 08:26:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:08.091 [2024-11-28 08:26:50.147396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.091 qpair failed and we were unable to recover it. 00:28:08.091 [2024-11-28 08:26:50.147469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.091 [2024-11-28 08:26:50.147480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.091 qpair failed and we were unable to recover it. 00:28:08.091 [2024-11-28 08:26:50.147555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.091 [2024-11-28 08:26:50.147565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.091 qpair failed and we were unable to recover it. 
00:28:08.091 [2024-11-28 08:26:50.147641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.091 [2024-11-28 08:26:50.147654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.091 qpair failed and we were unable to recover it. 00:28:08.091 08:26:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:08.091 [2024-11-28 08:26:50.147725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.091 [2024-11-28 08:26:50.147737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.092 qpair failed and we were unable to recover it. 00:28:08.092 [2024-11-28 08:26:50.147801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.092 [2024-11-28 08:26:50.147812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.092 qpair failed and we were unable to recover it. 00:28:08.092 [2024-11-28 08:26:50.147951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.092 [2024-11-28 08:26:50.147962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.092 qpair failed and we were unable to recover it. 00:28:08.092 [2024-11-28 08:26:50.148117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.092 [2024-11-28 08:26:50.148127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.092 qpair failed and we were unable to recover it. 
00:28:08.092 [2024-11-28 08:26:50.148209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.092 [2024-11-28 08:26:50.148220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.092 qpair failed and we were unable to recover it. 00:28:08.092 [2024-11-28 08:26:50.148291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.092 [2024-11-28 08:26:50.148302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.092 qpair failed and we were unable to recover it. 00:28:08.092 [2024-11-28 08:26:50.148370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.092 [2024-11-28 08:26:50.148381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.092 qpair failed and we were unable to recover it. 00:28:08.092 [2024-11-28 08:26:50.148453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.092 [2024-11-28 08:26:50.148464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.092 qpair failed and we were unable to recover it. 00:28:08.092 [2024-11-28 08:26:50.148552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.092 [2024-11-28 08:26:50.148564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.092 qpair failed and we were unable to recover it. 
00:28:08.092 [2024-11-28 08:26:50.148642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.092 [2024-11-28 08:26:50.148652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.092 qpair failed and we were unable to recover it. 00:28:08.092 [2024-11-28 08:26:50.148729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.092 [2024-11-28 08:26:50.148740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.092 qpair failed and we were unable to recover it. 00:28:08.092 [2024-11-28 08:26:50.148806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.092 [2024-11-28 08:26:50.148817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.092 qpair failed and we were unable to recover it. 00:28:08.092 [2024-11-28 08:26:50.148886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.092 [2024-11-28 08:26:50.148896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.092 qpair failed and we were unable to recover it. 00:28:08.092 [2024-11-28 08:26:50.148972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.092 [2024-11-28 08:26:50.148983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.092 qpair failed and we were unable to recover it. 
00:28:08.092 [2024-11-28 08:26:50.149045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.092 [2024-11-28 08:26:50.149056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.092 qpair failed and we were unable to recover it. 00:28:08.092 [2024-11-28 08:26:50.149119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.092 [2024-11-28 08:26:50.149129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.092 qpair failed and we were unable to recover it. 00:28:08.092 [2024-11-28 08:26:50.149270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.092 [2024-11-28 08:26:50.149280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.092 qpair failed and we were unable to recover it. 00:28:08.092 [2024-11-28 08:26:50.149354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.092 [2024-11-28 08:26:50.149364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.092 qpair failed and we were unable to recover it. 00:28:08.092 [2024-11-28 08:26:50.149442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.092 [2024-11-28 08:26:50.149453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.092 qpair failed and we were unable to recover it. 
00:28:08.092 [2024-11-28 08:26:50.149520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.092 [2024-11-28 08:26:50.149530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.092 qpair failed and we were unable to recover it. 00:28:08.092 [2024-11-28 08:26:50.149601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.092 [2024-11-28 08:26:50.149611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.092 qpair failed and we were unable to recover it. 00:28:08.092 [2024-11-28 08:26:50.149686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.092 [2024-11-28 08:26:50.149698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.092 qpair failed and we were unable to recover it. 00:28:08.092 [2024-11-28 08:26:50.149763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.092 [2024-11-28 08:26:50.149774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.092 qpair failed and we were unable to recover it. 00:28:08.092 [2024-11-28 08:26:50.149840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.092 [2024-11-28 08:26:50.149851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.092 qpair failed and we were unable to recover it. 
00:28:08.092 [2024-11-28 08:26:50.149979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.092 [2024-11-28 08:26:50.149991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.092 qpair failed and we were unable to recover it. 00:28:08.092 [2024-11-28 08:26:50.150064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.092 [2024-11-28 08:26:50.150074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.092 qpair failed and we were unable to recover it. 00:28:08.092 [2024-11-28 08:26:50.150153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.092 [2024-11-28 08:26:50.150163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.092 qpair failed and we were unable to recover it. 00:28:08.092 [2024-11-28 08:26:50.150226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.092 [2024-11-28 08:26:50.150237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.092 qpair failed and we were unable to recover it. 00:28:08.092 [2024-11-28 08:26:50.150309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.092 [2024-11-28 08:26:50.150320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.092 qpair failed and we were unable to recover it. 
00:28:08.092 [2024-11-28 08:26:50.150388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.092 [2024-11-28 08:26:50.150398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.092 qpair failed and we were unable to recover it. 00:28:08.092 [2024-11-28 08:26:50.150477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.092 [2024-11-28 08:26:50.150488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.092 qpair failed and we were unable to recover it. 00:28:08.092 [2024-11-28 08:26:50.150624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.092 [2024-11-28 08:26:50.150638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.092 qpair failed and we were unable to recover it. 00:28:08.092 [2024-11-28 08:26:50.150718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.092 [2024-11-28 08:26:50.150729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.092 qpair failed and we were unable to recover it. 00:28:08.092 [2024-11-28 08:26:50.150811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.092 [2024-11-28 08:26:50.150823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.092 qpair failed and we were unable to recover it. 
00:28:08.092 [2024-11-28 08:26:50.150884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.092 [2024-11-28 08:26:50.150895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.092 qpair failed and we were unable to recover it. 00:28:08.092 [2024-11-28 08:26:50.150959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.092 [2024-11-28 08:26:50.150970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.092 qpair failed and we were unable to recover it. 00:28:08.092 [2024-11-28 08:26:50.151043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.092 [2024-11-28 08:26:50.151054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.092 qpair failed and we were unable to recover it. 00:28:08.092 [2024-11-28 08:26:50.151137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.092 [2024-11-28 08:26:50.151148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.092 qpair failed and we were unable to recover it. 00:28:08.092 [2024-11-28 08:26:50.151213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.093 [2024-11-28 08:26:50.151224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.093 qpair failed and we were unable to recover it. 
00:28:08.093 [2024-11-28 08:26:50.151289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.093 [2024-11-28 08:26:50.151299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.093 qpair failed and we were unable to recover it. 00:28:08.093 [2024-11-28 08:26:50.151375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.093 [2024-11-28 08:26:50.151386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.093 qpair failed and we were unable to recover it. 00:28:08.093 [2024-11-28 08:26:50.151455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.093 [2024-11-28 08:26:50.151466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.093 qpair failed and we were unable to recover it. 00:28:08.093 [2024-11-28 08:26:50.151530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.093 [2024-11-28 08:26:50.151541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.093 qpair failed and we were unable to recover it. 00:28:08.093 [2024-11-28 08:26:50.151603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.093 [2024-11-28 08:26:50.151614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.093 qpair failed and we were unable to recover it. 
00:28:08.093 [2024-11-28 08:26:50.151677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.093 [2024-11-28 08:26:50.151688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.093 qpair failed and we were unable to recover it. 00:28:08.093 [2024-11-28 08:26:50.151773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.093 [2024-11-28 08:26:50.151784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.093 qpair failed and we were unable to recover it. 00:28:08.093 [2024-11-28 08:26:50.151855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.093 [2024-11-28 08:26:50.151866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.093 qpair failed and we were unable to recover it. 00:28:08.093 [2024-11-28 08:26:50.151946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.093 [2024-11-28 08:26:50.151961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.093 qpair failed and we were unable to recover it. 00:28:08.093 [2024-11-28 08:26:50.152026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.093 [2024-11-28 08:26:50.152037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.093 qpair failed and we were unable to recover it. 
00:28:08.093 [2024-11-28 08:26:50.152107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.093 [2024-11-28 08:26:50.152118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.093 qpair failed and we were unable to recover it. 00:28:08.093 [2024-11-28 08:26:50.152185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.093 [2024-11-28 08:26:50.152196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.093 qpair failed and we were unable to recover it. 00:28:08.093 [2024-11-28 08:26:50.152326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.093 [2024-11-28 08:26:50.152337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.093 qpair failed and we were unable to recover it. 00:28:08.093 [2024-11-28 08:26:50.152402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.093 [2024-11-28 08:26:50.152412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.093 qpair failed and we were unable to recover it. 00:28:08.093 [2024-11-28 08:26:50.152494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.093 [2024-11-28 08:26:50.152505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.093 qpair failed and we were unable to recover it. 
00:28:08.093 [2024-11-28 08:26:50.152572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.093 [2024-11-28 08:26:50.152583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.093 qpair failed and we were unable to recover it. 00:28:08.093 [2024-11-28 08:26:50.152645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.093 [2024-11-28 08:26:50.152656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.093 qpair failed and we were unable to recover it. 00:28:08.093 [2024-11-28 08:26:50.152723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.093 [2024-11-28 08:26:50.152734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.093 qpair failed and we were unable to recover it. 00:28:08.093 [2024-11-28 08:26:50.152796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.093 [2024-11-28 08:26:50.152807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.093 qpair failed and we were unable to recover it. 00:28:08.093 [2024-11-28 08:26:50.152869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.093 [2024-11-28 08:26:50.152880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.093 qpair failed and we were unable to recover it. 
00:28:08.093 [2024-11-28 08:26:50.152972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.093 [2024-11-28 08:26:50.152984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.093 qpair failed and we were unable to recover it.
[identical connect() failed, errno = 111 / sock connection error / qpair failed messages repeated for tqpair=0x7f6c34000b90 and tqpair=0x991be0, addr=10.0.0.2, port=4420, from 08:26:50.153059 through 08:26:50.164904]
00:28:08.096 [2024-11-28 08:26:50.165047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.096 [2024-11-28 08:26:50.165058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.096 qpair failed and we were unable to recover it. 00:28:08.096 [2024-11-28 08:26:50.165193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.096 [2024-11-28 08:26:50.165204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.096 qpair failed and we were unable to recover it. 00:28:08.096 [2024-11-28 08:26:50.165270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.096 [2024-11-28 08:26:50.165280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.096 qpair failed and we were unable to recover it. 00:28:08.096 [2024-11-28 08:26:50.165352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.096 [2024-11-28 08:26:50.165363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.096 qpair failed and we were unable to recover it. 00:28:08.096 [2024-11-28 08:26:50.165508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.096 [2024-11-28 08:26:50.165519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.096 qpair failed and we were unable to recover it. 
00:28:08.096 [2024-11-28 08:26:50.165584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.096 [2024-11-28 08:26:50.165597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.096 qpair failed and we were unable to recover it. 00:28:08.096 [2024-11-28 08:26:50.165664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.096 [2024-11-28 08:26:50.165675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.097 qpair failed and we were unable to recover it. 00:28:08.097 [2024-11-28 08:26:50.165738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.097 [2024-11-28 08:26:50.165749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.097 qpair failed and we were unable to recover it. 00:28:08.097 [2024-11-28 08:26:50.165803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.097 [2024-11-28 08:26:50.165814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.097 qpair failed and we were unable to recover it. 00:28:08.097 [2024-11-28 08:26:50.165902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.097 [2024-11-28 08:26:50.165913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.097 qpair failed and we were unable to recover it. 
00:28:08.097 [2024-11-28 08:26:50.165986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.097 [2024-11-28 08:26:50.165998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.097 qpair failed and we were unable to recover it. 00:28:08.097 [2024-11-28 08:26:50.166065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.097 [2024-11-28 08:26:50.166076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.097 qpair failed and we were unable to recover it. 00:28:08.097 [2024-11-28 08:26:50.166158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.097 [2024-11-28 08:26:50.166169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.097 qpair failed and we were unable to recover it. 00:28:08.097 [2024-11-28 08:26:50.166299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.097 [2024-11-28 08:26:50.166310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.097 qpair failed and we were unable to recover it. 00:28:08.097 [2024-11-28 08:26:50.166406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.097 [2024-11-28 08:26:50.166417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.097 qpair failed and we were unable to recover it. 
00:28:08.097 [2024-11-28 08:26:50.166489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.097 [2024-11-28 08:26:50.166500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.097 qpair failed and we were unable to recover it. 00:28:08.097 [2024-11-28 08:26:50.166631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.097 [2024-11-28 08:26:50.166641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.097 qpair failed and we were unable to recover it. 00:28:08.097 [2024-11-28 08:26:50.166767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.097 [2024-11-28 08:26:50.166778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.097 qpair failed and we were unable to recover it. 00:28:08.097 [2024-11-28 08:26:50.166844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.097 [2024-11-28 08:26:50.166855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.097 qpair failed and we were unable to recover it. 00:28:08.097 [2024-11-28 08:26:50.166923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.097 [2024-11-28 08:26:50.166933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.097 qpair failed and we were unable to recover it. 
00:28:08.097 [2024-11-28 08:26:50.167034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.097 [2024-11-28 08:26:50.167045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.097 qpair failed and we were unable to recover it. 00:28:08.097 [2024-11-28 08:26:50.167111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.097 [2024-11-28 08:26:50.167122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.097 qpair failed and we were unable to recover it. 00:28:08.097 [2024-11-28 08:26:50.167199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.097 [2024-11-28 08:26:50.167210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.097 qpair failed and we were unable to recover it. 00:28:08.097 [2024-11-28 08:26:50.167294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.097 [2024-11-28 08:26:50.167305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.097 qpair failed and we were unable to recover it. 00:28:08.097 [2024-11-28 08:26:50.167379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.097 [2024-11-28 08:26:50.167389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.097 qpair failed and we were unable to recover it. 
00:28:08.097 [2024-11-28 08:26:50.167462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.097 [2024-11-28 08:26:50.167473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.097 qpair failed and we were unable to recover it. 00:28:08.097 [2024-11-28 08:26:50.167564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.097 [2024-11-28 08:26:50.167575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.097 qpair failed and we were unable to recover it. 00:28:08.097 [2024-11-28 08:26:50.167713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.097 [2024-11-28 08:26:50.167724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.097 qpair failed and we were unable to recover it. 00:28:08.097 [2024-11-28 08:26:50.167793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.097 [2024-11-28 08:26:50.167804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.097 qpair failed and we were unable to recover it. 00:28:08.097 [2024-11-28 08:26:50.167875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.097 [2024-11-28 08:26:50.167886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.097 qpair failed and we were unable to recover it. 
00:28:08.097 [2024-11-28 08:26:50.167952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.097 [2024-11-28 08:26:50.167963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.097 qpair failed and we were unable to recover it. 00:28:08.097 [2024-11-28 08:26:50.168028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.097 [2024-11-28 08:26:50.168039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.097 qpair failed and we were unable to recover it. 00:28:08.097 [2024-11-28 08:26:50.168125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.097 [2024-11-28 08:26:50.168142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:08.097 qpair failed and we were unable to recover it. 00:28:08.097 [2024-11-28 08:26:50.168215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.097 [2024-11-28 08:26:50.168229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:08.097 qpair failed and we were unable to recover it. 00:28:08.097 [2024-11-28 08:26:50.168331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.097 [2024-11-28 08:26:50.168346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:08.097 qpair failed and we were unable to recover it. 
00:28:08.097 [2024-11-28 08:26:50.168422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.097 [2024-11-28 08:26:50.168436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:08.097 qpair failed and we were unable to recover it. 00:28:08.097 [2024-11-28 08:26:50.168517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.097 [2024-11-28 08:26:50.168531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:08.097 qpair failed and we were unable to recover it. 00:28:08.097 [2024-11-28 08:26:50.168604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.097 [2024-11-28 08:26:50.168619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:08.097 qpair failed and we were unable to recover it. 00:28:08.097 [2024-11-28 08:26:50.168697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.097 [2024-11-28 08:26:50.168711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:08.097 qpair failed and we were unable to recover it. 00:28:08.097 [2024-11-28 08:26:50.168792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.097 [2024-11-28 08:26:50.168807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:08.097 qpair failed and we were unable to recover it. 
00:28:08.097 [2024-11-28 08:26:50.168895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.097 [2024-11-28 08:26:50.168907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.097 qpair failed and we were unable to recover it. 00:28:08.097 [2024-11-28 08:26:50.168980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.097 [2024-11-28 08:26:50.168991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.097 qpair failed and we were unable to recover it. 00:28:08.097 [2024-11-28 08:26:50.169078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.097 [2024-11-28 08:26:50.169089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.097 qpair failed and we were unable to recover it. 00:28:08.097 [2024-11-28 08:26:50.169151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.097 [2024-11-28 08:26:50.169162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.097 qpair failed and we were unable to recover it. 00:28:08.097 [2024-11-28 08:26:50.169235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.097 [2024-11-28 08:26:50.169246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.097 qpair failed and we were unable to recover it. 
00:28:08.098 [2024-11-28 08:26:50.169380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.098 [2024-11-28 08:26:50.169391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.098 qpair failed and we were unable to recover it. 00:28:08.098 [2024-11-28 08:26:50.169463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.098 [2024-11-28 08:26:50.169474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.098 qpair failed and we were unable to recover it. 00:28:08.098 [2024-11-28 08:26:50.169606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.098 [2024-11-28 08:26:50.169617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.098 qpair failed and we were unable to recover it. 00:28:08.098 [2024-11-28 08:26:50.169683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.098 [2024-11-28 08:26:50.169694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.098 qpair failed and we were unable to recover it. 00:28:08.098 [2024-11-28 08:26:50.169783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.098 [2024-11-28 08:26:50.169794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.098 qpair failed and we were unable to recover it. 
00:28:08.098 [2024-11-28 08:26:50.169856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.098 [2024-11-28 08:26:50.169867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.098 qpair failed and we were unable to recover it. 00:28:08.098 [2024-11-28 08:26:50.169956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.098 [2024-11-28 08:26:50.169967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.098 qpair failed and we were unable to recover it. 00:28:08.098 [2024-11-28 08:26:50.170043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.098 [2024-11-28 08:26:50.170054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.098 qpair failed and we were unable to recover it. 00:28:08.098 [2024-11-28 08:26:50.170123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.098 [2024-11-28 08:26:50.170134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.098 qpair failed and we were unable to recover it. 00:28:08.098 [2024-11-28 08:26:50.170198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.098 [2024-11-28 08:26:50.170209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.098 qpair failed and we were unable to recover it. 
00:28:08.098 [2024-11-28 08:26:50.170285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.098 [2024-11-28 08:26:50.170295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.098 qpair failed and we were unable to recover it. 00:28:08.098 [2024-11-28 08:26:50.170377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.098 [2024-11-28 08:26:50.170387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.098 qpair failed and we were unable to recover it. 00:28:08.098 [2024-11-28 08:26:50.170457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.098 [2024-11-28 08:26:50.170468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.098 qpair failed and we were unable to recover it. 00:28:08.098 [2024-11-28 08:26:50.170536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.098 [2024-11-28 08:26:50.170547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.098 qpair failed and we were unable to recover it. 00:28:08.098 [2024-11-28 08:26:50.170614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.098 [2024-11-28 08:26:50.170625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.098 qpair failed and we were unable to recover it. 
00:28:08.098 [2024-11-28 08:26:50.170694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.098 [2024-11-28 08:26:50.170705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.098 qpair failed and we were unable to recover it. 00:28:08.098 [2024-11-28 08:26:50.170775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.098 [2024-11-28 08:26:50.170785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.098 qpair failed and we were unable to recover it. 00:28:08.098 [2024-11-28 08:26:50.170923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.098 [2024-11-28 08:26:50.170934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.098 qpair failed and we were unable to recover it. 00:28:08.098 [2024-11-28 08:26:50.171038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.098 [2024-11-28 08:26:50.171050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.098 qpair failed and we were unable to recover it. 00:28:08.098 [2024-11-28 08:26:50.171130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.098 [2024-11-28 08:26:50.171142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.098 qpair failed and we were unable to recover it. 
00:28:08.098 [2024-11-28 08:26:50.171208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.098 [2024-11-28 08:26:50.171218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.098 qpair failed and we were unable to recover it. 00:28:08.098 [2024-11-28 08:26:50.171304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.098 [2024-11-28 08:26:50.171314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.098 qpair failed and we were unable to recover it. 00:28:08.098 [2024-11-28 08:26:50.171446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.098 [2024-11-28 08:26:50.171457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.098 qpair failed and we were unable to recover it. 00:28:08.098 [2024-11-28 08:26:50.171530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.098 [2024-11-28 08:26:50.171542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.098 qpair failed and we were unable to recover it. 00:28:08.098 [2024-11-28 08:26:50.171608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.098 [2024-11-28 08:26:50.171619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.098 qpair failed and we were unable to recover it. 
00:28:08.098 [2024-11-28 08:26:50.171681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.098 [2024-11-28 08:26:50.171692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.098 qpair failed and we were unable to recover it. 00:28:08.098 [2024-11-28 08:26:50.171775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.098 [2024-11-28 08:26:50.171786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.098 qpair failed and we were unable to recover it. 00:28:08.098 [2024-11-28 08:26:50.171925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.098 [2024-11-28 08:26:50.171939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.098 qpair failed and we were unable to recover it. 00:28:08.098 [2024-11-28 08:26:50.172009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.098 [2024-11-28 08:26:50.172020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.098 qpair failed and we were unable to recover it. 00:28:08.098 [2024-11-28 08:26:50.172170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.098 [2024-11-28 08:26:50.172181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.098 qpair failed and we were unable to recover it. 
00:28:08.098 [2024-11-28 08:26:50.172263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.098 [2024-11-28 08:26:50.172274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.098 qpair failed and we were unable to recover it. 00:28:08.098 [2024-11-28 08:26:50.172406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.098 [2024-11-28 08:26:50.172417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.098 qpair failed and we were unable to recover it. 00:28:08.098 [2024-11-28 08:26:50.172490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.098 [2024-11-28 08:26:50.172501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.098 qpair failed and we were unable to recover it. 00:28:08.098 [2024-11-28 08:26:50.172559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.098 [2024-11-28 08:26:50.172570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.098 qpair failed and we were unable to recover it. 00:28:08.098 [2024-11-28 08:26:50.172639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.098 [2024-11-28 08:26:50.172649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.098 qpair failed and we were unable to recover it. 
00:28:08.098 [2024-11-28 08:26:50.172724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.098 [2024-11-28 08:26:50.172735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.098 qpair failed and we were unable to recover it. 00:28:08.098 [2024-11-28 08:26:50.172897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.098 [2024-11-28 08:26:50.172909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.098 qpair failed and we were unable to recover it. 00:28:08.098 [2024-11-28 08:26:50.172977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.098 [2024-11-28 08:26:50.172989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.098 qpair failed and we were unable to recover it. 00:28:08.099 [2024-11-28 08:26:50.173191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.099 [2024-11-28 08:26:50.173202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.099 qpair failed and we were unable to recover it. 00:28:08.099 [2024-11-28 08:26:50.173283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.099 [2024-11-28 08:26:50.173294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.099 qpair failed and we were unable to recover it. 
00:28:08.099 [2024-11-28 08:26:50.173363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.099 [2024-11-28 08:26:50.173374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.099 qpair failed and we were unable to recover it. 00:28:08.099 [2024-11-28 08:26:50.173454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.099 [2024-11-28 08:26:50.173465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.099 qpair failed and we were unable to recover it. 00:28:08.099 [2024-11-28 08:26:50.173536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.099 [2024-11-28 08:26:50.173548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.099 qpair failed and we were unable to recover it. 00:28:08.099 [2024-11-28 08:26:50.173613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.099 [2024-11-28 08:26:50.173624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.099 qpair failed and we were unable to recover it. 00:28:08.099 [2024-11-28 08:26:50.173702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.099 [2024-11-28 08:26:50.173713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.099 qpair failed and we were unable to recover it. 
00:28:08.099 [2024-11-28 08:26:50.173878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.099 [2024-11-28 08:26:50.173889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.099 qpair failed and we were unable to recover it. 00:28:08.099 [2024-11-28 08:26:50.173964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.099 [2024-11-28 08:26:50.173975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.099 qpair failed and we were unable to recover it. 00:28:08.099 [2024-11-28 08:26:50.174109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.099 [2024-11-28 08:26:50.174120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.099 qpair failed and we were unable to recover it. 00:28:08.099 [2024-11-28 08:26:50.174251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.099 [2024-11-28 08:26:50.174262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.099 qpair failed and we were unable to recover it. 00:28:08.099 [2024-11-28 08:26:50.174354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.099 [2024-11-28 08:26:50.174365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.099 qpair failed and we were unable to recover it. 
00:28:08.099 [2024-11-28 08:26:50.174447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.099 [2024-11-28 08:26:50.174458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.099 qpair failed and we were unable to recover it. 00:28:08.099 [2024-11-28 08:26:50.174546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.099 [2024-11-28 08:26:50.174557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.099 qpair failed and we were unable to recover it. 00:28:08.099 [2024-11-28 08:26:50.174627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.099 [2024-11-28 08:26:50.174638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.099 qpair failed and we were unable to recover it. 00:28:08.099 [2024-11-28 08:26:50.174798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.099 [2024-11-28 08:26:50.174809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.099 qpair failed and we were unable to recover it. 00:28:08.099 [2024-11-28 08:26:50.174891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.099 [2024-11-28 08:26:50.174903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.099 qpair failed and we were unable to recover it. 
00:28:08.099 [2024-11-28 08:26:50.174977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.099 [2024-11-28 08:26:50.174988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.099 qpair failed and we were unable to recover it. 00:28:08.099 [2024-11-28 08:26:50.175069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.099 [2024-11-28 08:26:50.175081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.099 qpair failed and we were unable to recover it. 00:28:08.099 [2024-11-28 08:26:50.175151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.099 [2024-11-28 08:26:50.175162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.099 qpair failed and we were unable to recover it. 00:28:08.099 [2024-11-28 08:26:50.175241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.099 [2024-11-28 08:26:50.175253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.099 qpair failed and we were unable to recover it. 00:28:08.099 [2024-11-28 08:26:50.175387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.099 [2024-11-28 08:26:50.175399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.099 qpair failed and we were unable to recover it. 
00:28:08.099 [2024-11-28 08:26:50.175483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.099 [2024-11-28 08:26:50.175493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.099 qpair failed and we were unable to recover it. 00:28:08.099 [2024-11-28 08:26:50.175561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.099 [2024-11-28 08:26:50.175571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.099 qpair failed and we were unable to recover it. 00:28:08.099 [2024-11-28 08:26:50.175653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.099 [2024-11-28 08:26:50.175663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.099 qpair failed and we were unable to recover it. 00:28:08.099 [2024-11-28 08:26:50.175822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.099 [2024-11-28 08:26:50.175834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.099 qpair failed and we were unable to recover it. 00:28:08.099 [2024-11-28 08:26:50.175896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.099 [2024-11-28 08:26:50.175907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.099 qpair failed and we were unable to recover it. 
00:28:08.099 [2024-11-28 08:26:50.175976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.099 [2024-11-28 08:26:50.175987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.099 qpair failed and we were unable to recover it. 00:28:08.099 [2024-11-28 08:26:50.176140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.099 [2024-11-28 08:26:50.176151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.099 qpair failed and we were unable to recover it. 00:28:08.099 [2024-11-28 08:26:50.176312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.099 [2024-11-28 08:26:50.176325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.099 qpair failed and we were unable to recover it. 00:28:08.099 [2024-11-28 08:26:50.176393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.099 [2024-11-28 08:26:50.176403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.099 qpair failed and we were unable to recover it. 00:28:08.099 [2024-11-28 08:26:50.176473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.099 [2024-11-28 08:26:50.176484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.099 qpair failed and we were unable to recover it. 
00:28:08.099 [2024-11-28 08:26:50.176565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.099 [2024-11-28 08:26:50.176576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.099 qpair failed and we were unable to recover it. 00:28:08.099 [2024-11-28 08:26:50.176638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.100 [2024-11-28 08:26:50.176649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.100 qpair failed and we were unable to recover it. 00:28:08.100 [2024-11-28 08:26:50.176747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.100 [2024-11-28 08:26:50.176758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.100 qpair failed and we were unable to recover it. 00:28:08.100 [2024-11-28 08:26:50.176836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.100 [2024-11-28 08:26:50.176847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.100 qpair failed and we were unable to recover it. 00:28:08.100 [2024-11-28 08:26:50.176925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.100 [2024-11-28 08:26:50.176936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.100 qpair failed and we were unable to recover it. 
00:28:08.100 [2024-11-28 08:26:50.177011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.100 [2024-11-28 08:26:50.177022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.100 qpair failed and we were unable to recover it. 00:28:08.100 [2024-11-28 08:26:50.177220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.100 [2024-11-28 08:26:50.177232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.100 qpair failed and we were unable to recover it. 00:28:08.100 [2024-11-28 08:26:50.177309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.100 [2024-11-28 08:26:50.177319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.100 qpair failed and we were unable to recover it. 00:28:08.100 [2024-11-28 08:26:50.177395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.100 [2024-11-28 08:26:50.177406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.100 qpair failed and we were unable to recover it. 00:28:08.100 [2024-11-28 08:26:50.177617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.100 [2024-11-28 08:26:50.177629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.100 qpair failed and we were unable to recover it. 
00:28:08.100 [2024-11-28 08:26:50.177793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.100 [2024-11-28 08:26:50.177805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.100 qpair failed and we were unable to recover it. 00:28:08.100 [2024-11-28 08:26:50.177881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.100 [2024-11-28 08:26:50.177892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.100 qpair failed and we were unable to recover it. 00:28:08.100 [2024-11-28 08:26:50.177981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.100 [2024-11-28 08:26:50.177992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.100 qpair failed and we were unable to recover it. 00:28:08.100 [2024-11-28 08:26:50.178074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.100 [2024-11-28 08:26:50.178085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.100 qpair failed and we were unable to recover it. 00:28:08.100 [2024-11-28 08:26:50.178175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.100 [2024-11-28 08:26:50.178186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.100 qpair failed and we were unable to recover it. 
00:28:08.100 [2024-11-28 08:26:50.178260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.100 [2024-11-28 08:26:50.178271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.100 qpair failed and we were unable to recover it. 00:28:08.100 [2024-11-28 08:26:50.178328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.100 [2024-11-28 08:26:50.178338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.100 qpair failed and we were unable to recover it. 00:28:08.100 [2024-11-28 08:26:50.178484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.100 [2024-11-28 08:26:50.178495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.100 qpair failed and we were unable to recover it. 00:28:08.100 [2024-11-28 08:26:50.178573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.100 [2024-11-28 08:26:50.178583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.100 qpair failed and we were unable to recover it. 00:28:08.100 [2024-11-28 08:26:50.178639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.100 [2024-11-28 08:26:50.178651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.100 qpair failed and we were unable to recover it. 
00:28:08.100 [2024-11-28 08:26:50.178785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.100 [2024-11-28 08:26:50.178796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.100 qpair failed and we were unable to recover it. 00:28:08.100 [2024-11-28 08:26:50.178858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.100 [2024-11-28 08:26:50.178879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.100 qpair failed and we were unable to recover it. 00:28:08.100 [2024-11-28 08:26:50.178968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.100 [2024-11-28 08:26:50.178979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.100 qpair failed and we were unable to recover it. 00:28:08.100 [2024-11-28 08:26:50.179058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.100 [2024-11-28 08:26:50.179068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.100 qpair failed and we were unable to recover it. 00:28:08.100 [2024-11-28 08:26:50.179149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.100 [2024-11-28 08:26:50.179160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.100 qpair failed and we were unable to recover it. 
00:28:08.100 [2024-11-28 08:26:50.179293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.100 [2024-11-28 08:26:50.179304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.100 qpair failed and we were unable to recover it. 00:28:08.100 [2024-11-28 08:26:50.179385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.100 [2024-11-28 08:26:50.179396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.100 qpair failed and we were unable to recover it. 00:28:08.100 [2024-11-28 08:26:50.179472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.100 [2024-11-28 08:26:50.179483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.100 qpair failed and we were unable to recover it. 00:28:08.100 [2024-11-28 08:26:50.179550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.100 [2024-11-28 08:26:50.179562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.100 qpair failed and we were unable to recover it. 00:28:08.100 [2024-11-28 08:26:50.179634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.100 [2024-11-28 08:26:50.179645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.100 qpair failed and we were unable to recover it. 
00:28:08.100 [2024-11-28 08:26:50.179709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.100 [2024-11-28 08:26:50.179720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.100 qpair failed and we were unable to recover it. 00:28:08.100 [2024-11-28 08:26:50.179787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.100 [2024-11-28 08:26:50.179798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.100 qpair failed and we were unable to recover it. 00:28:08.100 [2024-11-28 08:26:50.179932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.100 [2024-11-28 08:26:50.179944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.100 qpair failed and we were unable to recover it. 00:28:08.100 [2024-11-28 08:26:50.180013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.100 [2024-11-28 08:26:50.180024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.100 qpair failed and we were unable to recover it. 00:28:08.100 [2024-11-28 08:26:50.180097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.100 [2024-11-28 08:26:50.180108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.100 qpair failed and we were unable to recover it. 
00:28:08.100 [2024-11-28 08:26:50.180270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.100 [2024-11-28 08:26:50.180282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.100 qpair failed and we were unable to recover it. 00:28:08.100 [2024-11-28 08:26:50.180353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.100 [2024-11-28 08:26:50.180363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.100 qpair failed and we were unable to recover it. 00:28:08.100 [2024-11-28 08:26:50.180442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.100 [2024-11-28 08:26:50.180458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.100 qpair failed and we were unable to recover it. 00:28:08.100 [2024-11-28 08:26:50.180526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.101 [2024-11-28 08:26:50.180536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.101 qpair failed and we were unable to recover it. 00:28:08.101 [2024-11-28 08:26:50.180674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.101 [2024-11-28 08:26:50.180685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.101 qpair failed and we were unable to recover it. 
00:28:08.101 [2024-11-28 08:26:50.180771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.101 [2024-11-28 08:26:50.180781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.101 qpair failed and we were unable to recover it. 00:28:08.101 Malloc0 00:28:08.101 [2024-11-28 08:26:50.180918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.101 [2024-11-28 08:26:50.180928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.101 qpair failed and we were unable to recover it. 00:28:08.101 [2024-11-28 08:26:50.180996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.101 [2024-11-28 08:26:50.181007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.101 qpair failed and we were unable to recover it. 00:28:08.101 [2024-11-28 08:26:50.181076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.101 [2024-11-28 08:26:50.181087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.101 qpair failed and we were unable to recover it. 00:28:08.101 [2024-11-28 08:26:50.181173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.101 [2024-11-28 08:26:50.181183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.101 qpair failed and we were unable to recover it. 
00:28:08.101 [2024-11-28 08:26:50.181245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.101 [2024-11-28 08:26:50.181256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.101 qpair failed and we were unable to recover it. 00:28:08.101 [2024-11-28 08:26:50.181336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.101 [2024-11-28 08:26:50.181346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.101 qpair failed and we were unable to recover it. 00:28:08.101 [2024-11-28 08:26:50.181413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.101 [2024-11-28 08:26:50.181423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.101 qpair failed and we were unable to recover it. 00:28:08.101 08:26:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:08.101 [2024-11-28 08:26:50.181575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.101 [2024-11-28 08:26:50.181591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.101 qpair failed and we were unable to recover it. 00:28:08.101 [2024-11-28 08:26:50.181736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.101 [2024-11-28 08:26:50.181746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.101 qpair failed and we were unable to recover it. 
00:28:08.101 [2024-11-28 08:26:50.181898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.101 [2024-11-28 08:26:50.181926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:08.101 qpair failed and we were unable to recover it. 00:28:08.101 08:26:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:28:08.101 [2024-11-28 08:26:50.182019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.101 [2024-11-28 08:26:50.182035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:08.101 qpair failed and we were unable to recover it. 00:28:08.101 [2024-11-28 08:26:50.182183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.101 [2024-11-28 08:26:50.182198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:08.101 qpair failed and we were unable to recover it. 00:28:08.101 08:26:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:08.101 [2024-11-28 08:26:50.182338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.101 [2024-11-28 08:26:50.182353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:08.101 qpair failed and we were unable to recover it. 
00:28:08.101 [2024-11-28 08:26:50.182430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.101 [2024-11-28 08:26:50.182444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:08.101 qpair failed and we were unable to recover it. 00:28:08.101 [2024-11-28 08:26:50.182596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.101 08:26:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:08.101 [2024-11-28 08:26:50.182610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:08.101 qpair failed and we were unable to recover it. 00:28:08.101 [2024-11-28 08:26:50.182686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.101 [2024-11-28 08:26:50.182701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:08.101 qpair failed and we were unable to recover it. 00:28:08.101 [2024-11-28 08:26:50.182834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.101 [2024-11-28 08:26:50.182849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:08.101 qpair failed and we were unable to recover it. 00:28:08.101 [2024-11-28 08:26:50.182997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.101 [2024-11-28 08:26:50.183013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:08.101 qpair failed and we were unable to recover it. 
00:28:08.101 [2024-11-28 08:26:50.183170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.101 [2024-11-28 08:26:50.183185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:08.101 qpair failed and we were unable to recover it. 00:28:08.101 [2024-11-28 08:26:50.183259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.101 [2024-11-28 08:26:50.183274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:08.101 qpair failed and we were unable to recover it. 00:28:08.101 [2024-11-28 08:26:50.183368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.101 [2024-11-28 08:26:50.183382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c3c000b90 with addr=10.0.0.2, port=4420 00:28:08.101 qpair failed and we were unable to recover it. 00:28:08.101 [2024-11-28 08:26:50.183528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.101 [2024-11-28 08:26:50.183540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.101 qpair failed and we were unable to recover it. 00:28:08.101 [2024-11-28 08:26:50.183618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.101 [2024-11-28 08:26:50.183628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.101 qpair failed and we were unable to recover it. 
00:28:08.101 [2024-11-28 08:26:50.183696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.101 [2024-11-28 08:26:50.183706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.101 qpair failed and we were unable to recover it. 00:28:08.101 [2024-11-28 08:26:50.183769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.101 [2024-11-28 08:26:50.183780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.101 qpair failed and we were unable to recover it. 00:28:08.101 [2024-11-28 08:26:50.183874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.101 [2024-11-28 08:26:50.183885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.101 qpair failed and we were unable to recover it. 00:28:08.101 [2024-11-28 08:26:50.183953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.101 [2024-11-28 08:26:50.183964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.101 qpair failed and we were unable to recover it. 00:28:08.101 [2024-11-28 08:26:50.184032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.101 [2024-11-28 08:26:50.184042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.101 qpair failed and we were unable to recover it. 
00:28:08.101 [2024-11-28 08:26:50.184174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.101 [2024-11-28 08:26:50.184185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.101 qpair failed and we were unable to recover it. 00:28:08.101 [2024-11-28 08:26:50.184253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.101 [2024-11-28 08:26:50.184263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.101 qpair failed and we were unable to recover it. 00:28:08.101 [2024-11-28 08:26:50.184332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.101 [2024-11-28 08:26:50.184341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.101 qpair failed and we were unable to recover it. 00:28:08.101 [2024-11-28 08:26:50.184419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.101 [2024-11-28 08:26:50.184429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.101 qpair failed and we were unable to recover it. 00:28:08.101 [2024-11-28 08:26:50.184508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.101 [2024-11-28 08:26:50.184519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.101 qpair failed and we were unable to recover it. 
00:28:08.101 [2024-11-28 08:26:50.184593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.102 [2024-11-28 08:26:50.184603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.102 qpair failed and we were unable to recover it. 00:28:08.102 [2024-11-28 08:26:50.184668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.102 [2024-11-28 08:26:50.184678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.102 qpair failed and we were unable to recover it. 00:28:08.102 [2024-11-28 08:26:50.184749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.102 [2024-11-28 08:26:50.184759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.102 qpair failed and we were unable to recover it. 00:28:08.102 [2024-11-28 08:26:50.184859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.102 [2024-11-28 08:26:50.184870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.102 qpair failed and we were unable to recover it. 00:28:08.102 [2024-11-28 08:26:50.184945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.102 [2024-11-28 08:26:50.184959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.102 qpair failed and we were unable to recover it. 
00:28:08.102 [2024-11-28 08:26:50.185024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.102 [2024-11-28 08:26:50.185034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.102 qpair failed and we were unable to recover it. 00:28:08.102 [2024-11-28 08:26:50.185104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.102 [2024-11-28 08:26:50.185114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.102 qpair failed and we were unable to recover it. 00:28:08.102 [2024-11-28 08:26:50.185193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.102 [2024-11-28 08:26:50.185203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.102 qpair failed and we were unable to recover it. 00:28:08.102 [2024-11-28 08:26:50.185268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.102 [2024-11-28 08:26:50.185279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.102 qpair failed and we were unable to recover it. 00:28:08.102 [2024-11-28 08:26:50.185352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.102 [2024-11-28 08:26:50.185363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.102 qpair failed and we were unable to recover it. 
00:28:08.102 [2024-11-28 08:26:50.185432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.102 [2024-11-28 08:26:50.185443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.102 qpair failed and we were unable to recover it. 00:28:08.102 [2024-11-28 08:26:50.185583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.102 [2024-11-28 08:26:50.185593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.102 qpair failed and we were unable to recover it. 00:28:08.102 [2024-11-28 08:26:50.185658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.102 [2024-11-28 08:26:50.185668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.102 qpair failed and we were unable to recover it. 00:28:08.102 [2024-11-28 08:26:50.185808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.102 [2024-11-28 08:26:50.185818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.102 qpair failed and we were unable to recover it. 00:28:08.102 [2024-11-28 08:26:50.185896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.102 [2024-11-28 08:26:50.185907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.102 qpair failed and we were unable to recover it. 
00:28:08.102 [2024-11-28 08:26:50.185975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.102 [2024-11-28 08:26:50.185986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.102 qpair failed and we were unable to recover it. 00:28:08.102 [2024-11-28 08:26:50.186124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.102 [2024-11-28 08:26:50.186135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.102 qpair failed and we were unable to recover it. 00:28:08.102 [2024-11-28 08:26:50.186200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.102 [2024-11-28 08:26:50.186210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.102 qpair failed and we were unable to recover it. 00:28:08.102 [2024-11-28 08:26:50.186369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.102 [2024-11-28 08:26:50.186379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.102 qpair failed and we were unable to recover it. 00:28:08.102 [2024-11-28 08:26:50.186522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.102 [2024-11-28 08:26:50.186532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.102 qpair failed and we were unable to recover it. 
00:28:08.102 [2024-11-28 08:26:50.186628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.102 [2024-11-28 08:26:50.186639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.102 qpair failed and we were unable to recover it. 00:28:08.102 [2024-11-28 08:26:50.186703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.102 [2024-11-28 08:26:50.186713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.102 qpair failed and we were unable to recover it. 00:28:08.102 [2024-11-28 08:26:50.186783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.102 [2024-11-28 08:26:50.186793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.102 qpair failed and we were unable to recover it. 00:28:08.102 [2024-11-28 08:26:50.186874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.102 [2024-11-28 08:26:50.186885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.102 qpair failed and we were unable to recover it. 00:28:08.102 [2024-11-28 08:26:50.186959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.102 [2024-11-28 08:26:50.186970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.102 qpair failed and we were unable to recover it. 
00:28:08.102 [2024-11-28 08:26:50.187051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.102 [2024-11-28 08:26:50.187061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.102 qpair failed and we were unable to recover it. 00:28:08.102 [2024-11-28 08:26:50.187139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.102 [2024-11-28 08:26:50.187149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.102 qpair failed and we were unable to recover it. 00:28:08.102 [2024-11-28 08:26:50.187229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.102 [2024-11-28 08:26:50.187240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.102 qpair failed and we were unable to recover it. 00:28:08.102 [2024-11-28 08:26:50.187308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.102 [2024-11-28 08:26:50.187321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.102 qpair failed and we were unable to recover it. 00:28:08.102 [2024-11-28 08:26:50.187402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.102 [2024-11-28 08:26:50.187412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.102 qpair failed and we were unable to recover it. 
00:28:08.102 [2024-11-28 08:26:50.187483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.102 [2024-11-28 08:26:50.187494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.102 qpair failed and we were unable to recover it. 00:28:08.102 [2024-11-28 08:26:50.187557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.102 [2024-11-28 08:26:50.187568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.102 qpair failed and we were unable to recover it. 00:28:08.102 [2024-11-28 08:26:50.187631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.102 [2024-11-28 08:26:50.187641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.102 qpair failed and we were unable to recover it. 00:28:08.102 [2024-11-28 08:26:50.187706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.102 [2024-11-28 08:26:50.187716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.102 qpair failed and we were unable to recover it. 00:28:08.102 [2024-11-28 08:26:50.187807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.102 [2024-11-28 08:26:50.187818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.102 qpair failed and we were unable to recover it. 
00:28:08.102 [2024-11-28 08:26:50.187882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.102 [2024-11-28 08:26:50.187893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.102 qpair failed and we were unable to recover it. 00:28:08.102 [2024-11-28 08:26:50.187981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.102 [2024-11-28 08:26:50.187993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.102 qpair failed and we were unable to recover it. 00:28:08.102 [2024-11-28 08:26:50.188068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.102 [2024-11-28 08:26:50.188079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.102 qpair failed and we were unable to recover it. 00:28:08.102 [2024-11-28 08:26:50.188154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.103 [2024-11-28 08:26:50.188164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.103 qpair failed and we were unable to recover it. 00:28:08.103 [2024-11-28 08:26:50.188292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.103 [2024-11-28 08:26:50.188303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.103 qpair failed and we were unable to recover it. 
00:28:08.103 [2024-11-28 08:26:50.188436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.103 [2024-11-28 08:26:50.188446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.103 qpair failed and we were unable to recover it. 00:28:08.103 [2024-11-28 08:26:50.188500] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:08.103 [2024-11-28 08:26:50.188597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.103 [2024-11-28 08:26:50.188610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.103 qpair failed and we were unable to recover it. 00:28:08.103 [2024-11-28 08:26:50.188751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.103 [2024-11-28 08:26:50.188761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.103 qpair failed and we were unable to recover it. 00:28:08.103 [2024-11-28 08:26:50.188894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.103 [2024-11-28 08:26:50.188904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.103 qpair failed and we were unable to recover it. 00:28:08.103 [2024-11-28 08:26:50.188993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.103 [2024-11-28 08:26:50.189004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.103 qpair failed and we were unable to recover it. 
00:28:08.103 [2024-11-28 08:26:50.189151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.103 [2024-11-28 08:26:50.189162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.103 qpair failed and we were unable to recover it. 00:28:08.103 [2024-11-28 08:26:50.189257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.103 [2024-11-28 08:26:50.189267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.103 qpair failed and we were unable to recover it. 00:28:08.103 [2024-11-28 08:26:50.189416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.103 [2024-11-28 08:26:50.189426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.103 qpair failed and we were unable to recover it. 00:28:08.103 [2024-11-28 08:26:50.189504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.103 [2024-11-28 08:26:50.189515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.103 qpair failed and we were unable to recover it. 00:28:08.103 [2024-11-28 08:26:50.189598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.103 [2024-11-28 08:26:50.189608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.103 qpair failed and we were unable to recover it. 
00:28:08.103 [2024-11-28 08:26:50.189761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.103 [2024-11-28 08:26:50.189772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.103 qpair failed and we were unable to recover it. 00:28:08.103 [2024-11-28 08:26:50.189844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.103 [2024-11-28 08:26:50.189854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.103 qpair failed and we were unable to recover it. 00:28:08.103 [2024-11-28 08:26:50.189998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.103 [2024-11-28 08:26:50.190009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.103 qpair failed and we were unable to recover it. 00:28:08.103 [2024-11-28 08:26:50.190071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.103 [2024-11-28 08:26:50.190082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.103 qpair failed and we were unable to recover it. 00:28:08.103 [2024-11-28 08:26:50.190162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.103 [2024-11-28 08:26:50.190173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.103 qpair failed and we were unable to recover it. 
00:28:08.103 [2024-11-28 08:26:50.190264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.103 [2024-11-28 08:26:50.190274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.103 qpair failed and we were unable to recover it. 00:28:08.103 [2024-11-28 08:26:50.190364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.103 [2024-11-28 08:26:50.190374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.103 qpair failed and we were unable to recover it. 00:28:08.103 [2024-11-28 08:26:50.190451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.103 [2024-11-28 08:26:50.190462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.103 qpair failed and we were unable to recover it. 00:28:08.103 [2024-11-28 08:26:50.190666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.103 [2024-11-28 08:26:50.190676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.103 qpair failed and we were unable to recover it. 00:28:08.103 [2024-11-28 08:26:50.190824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.103 [2024-11-28 08:26:50.190835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.103 qpair failed and we were unable to recover it. 
00:28:08.103 [2024-11-28 08:26:50.190904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.103 [2024-11-28 08:26:50.190915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.103 qpair failed and we were unable to recover it. 00:28:08.103 [2024-11-28 08:26:50.190992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.103 [2024-11-28 08:26:50.191003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.103 qpair failed and we were unable to recover it. 00:28:08.103 [2024-11-28 08:26:50.191085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.103 [2024-11-28 08:26:50.191096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.103 qpair failed and we were unable to recover it. 00:28:08.103 [2024-11-28 08:26:50.191162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.103 [2024-11-28 08:26:50.191172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.103 qpair failed and we were unable to recover it. 00:28:08.103 [2024-11-28 08:26:50.191250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.103 [2024-11-28 08:26:50.191261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.103 qpair failed and we were unable to recover it. 
00:28:08.103 [2024-11-28 08:26:50.191320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.103 [2024-11-28 08:26:50.191331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.103 qpair failed and we were unable to recover it. 00:28:08.103 [2024-11-28 08:26:50.191407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.103 [2024-11-28 08:26:50.191417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.103 qpair failed and we were unable to recover it. 00:28:08.103 [2024-11-28 08:26:50.191558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.103 [2024-11-28 08:26:50.191568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.103 qpair failed and we were unable to recover it. 00:28:08.103 [2024-11-28 08:26:50.191770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.103 [2024-11-28 08:26:50.191781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.103 qpair failed and we were unable to recover it. 00:28:08.103 [2024-11-28 08:26:50.191862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.103 [2024-11-28 08:26:50.191873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.103 qpair failed and we were unable to recover it. 
00:28:08.103 [2024-11-28 08:26:50.191943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.103 [2024-11-28 08:26:50.191958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.103 qpair failed and we were unable to recover it. 00:28:08.103 [2024-11-28 08:26:50.192097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.103 [2024-11-28 08:26:50.192107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.103 qpair failed and we were unable to recover it. 00:28:08.103 [2024-11-28 08:26:50.192257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.103 [2024-11-28 08:26:50.192267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.103 qpair failed and we were unable to recover it. 00:28:08.103 [2024-11-28 08:26:50.192349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.103 [2024-11-28 08:26:50.192359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.103 qpair failed and we were unable to recover it. 00:28:08.103 [2024-11-28 08:26:50.192439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.103 [2024-11-28 08:26:50.192450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420 00:28:08.104 qpair failed and we were unable to recover it. 
00:28:08.104 [2024-11-28 08:26:50.192591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:08.104 [2024-11-28 08:26:50.192602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c34000b90 with addr=10.0.0.2, port=4420
00:28:08.104 qpair failed and we were unable to recover it.
00:28:08.105 08:26:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:08.105 08:26:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:28:08.105 08:26:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:08.105 08:26:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:28:08.107 [2024-11-28 08:26:50.204800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:08.107 [2024-11-28 08:26:50.204828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6c30000b90 with addr=10.0.0.2, port=4420
00:28:08.107 qpair failed and we were unable to recover it.
00:28:08.107 [2024-11-28 08:26:50.204932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:08.107 [2024-11-28 08:26:50.204965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420
00:28:08.107 qpair failed and we were unable to recover it.
[... repeated "connect() failed, errno = 111" / "qpair failed" messages for tqpair=0x991be0 elided ...]
00:28:08.107 08:26:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
[... repeated "connect() failed, errno = 111" / "qpair failed" messages for tqpair=0x7f6c30000b90 elided ...]
00:28:08.107 08:26:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
[... repeated "connect() failed, errno = 111" / "qpair failed" messages for tqpair=0x7f6c30000b90 elided ...]
00:28:08.107 08:26:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
[... repeated "connect() failed, errno = 111" / "qpair failed" messages for tqpair=0x7f6c30000b90 elided ...]
00:28:08.107 08:26:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
[... repeated "connect() failed, errno = 111" / "qpair failed" messages for tqpair=0x7f6c34000b90 elided ...]
[... repeated "connect() failed, errno = 111" / "qpair failed" messages for tqpair=0x7f6c34000b90 elided ...]
00:28:08.108 [2024-11-28 08:26:50.211399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:08.108 [2024-11-28 08:26:50.211416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420
00:28:08.108 qpair failed and we were unable to recover it.
[... repeated "connect() failed, errno = 111" / "qpair failed" messages for tqpair=0x991be0 elided ...]
00:28:08.109 08:26:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
[... repeated "connect() failed, errno = 111" / "qpair failed" messages for tqpair=0x991be0 elided ...]
00:28:08.109 08:26:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
[... repeated "connect() failed, errno = 111" / "qpair failed" messages for tqpair=0x991be0 elided ...]
00:28:08.109 08:26:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
[... repeated "connect() failed, errno = 111" / "qpair failed" messages for tqpair=0x991be0 elided ...]
00:28:08.109 08:26:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
[... repeated "connect() failed, errno = 111" / "qpair failed" messages for tqpair=0x991be0 elided ...]
00:28:08.109 [2024-11-28 08:26:50.214863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.109 [2024-11-28 08:26:50.214878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:08.109 qpair failed and we were unable to recover it. 00:28:08.109 [2024-11-28 08:26:50.214975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.109 [2024-11-28 08:26:50.214990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:08.109 qpair failed and we were unable to recover it. 00:28:08.109 [2024-11-28 08:26:50.215148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.109 [2024-11-28 08:26:50.215162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:08.109 qpair failed and we were unable to recover it. 00:28:08.109 [2024-11-28 08:26:50.215248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.109 [2024-11-28 08:26:50.215262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:08.109 qpair failed and we were unable to recover it. 00:28:08.109 [2024-11-28 08:26:50.215340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.109 [2024-11-28 08:26:50.215355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:08.109 qpair failed and we were unable to recover it. 
00:28:08.109 [2024-11-28 08:26:50.215498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.109 [2024-11-28 08:26:50.215512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:08.109 qpair failed and we were unable to recover it. 00:28:08.109 [2024-11-28 08:26:50.215586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.109 [2024-11-28 08:26:50.215600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:08.109 qpair failed and we were unable to recover it. 00:28:08.109 [2024-11-28 08:26:50.215684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.109 [2024-11-28 08:26:50.215699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:08.109 qpair failed and we were unable to recover it. 00:28:08.109 [2024-11-28 08:26:50.215771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.109 [2024-11-28 08:26:50.215785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:08.109 qpair failed and we were unable to recover it. 00:28:08.110 [2024-11-28 08:26:50.215866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.110 [2024-11-28 08:26:50.215880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:08.110 qpair failed and we were unable to recover it. 
00:28:08.110 [2024-11-28 08:26:50.215982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.110 [2024-11-28 08:26:50.215997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:08.110 qpair failed and we were unable to recover it. 00:28:08.110 [2024-11-28 08:26:50.216142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.110 [2024-11-28 08:26:50.216156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:08.110 qpair failed and we were unable to recover it. 00:28:08.110 [2024-11-28 08:26:50.216253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.110 [2024-11-28 08:26:50.216267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:08.110 qpair failed and we were unable to recover it. 00:28:08.110 [2024-11-28 08:26:50.216347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.110 [2024-11-28 08:26:50.216360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:08.110 qpair failed and we were unable to recover it. 00:28:08.110 [2024-11-28 08:26:50.216440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.110 [2024-11-28 08:26:50.216454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:08.110 qpair failed and we were unable to recover it. 
00:28:08.110 [2024-11-28 08:26:50.216528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.110 [2024-11-28 08:26:50.216543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x991be0 with addr=10.0.0.2, port=4420 00:28:08.110 qpair failed and we were unable to recover it. 00:28:08.110 [2024-11-28 08:26:50.216736] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:08.110 [2024-11-28 08:26:50.219156] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:08.110 [2024-11-28 08:26:50.219256] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:08.110 [2024-11-28 08:26:50.219279] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:08.110 [2024-11-28 08:26:50.219292] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:08.110 [2024-11-28 08:26:50.219302] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0 00:28:08.110 [2024-11-28 08:26:50.219329] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:08.110 qpair failed and we were unable to recover it. 
00:28:08.110 08:26:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:08.110 08:26:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:28:08.110 08:26:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:08.110 08:26:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:08.110 [2024-11-28 08:26:50.229019] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:08.110 [2024-11-28 08:26:50.229109] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:08.110 [2024-11-28 08:26:50.229136] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:08.110 [2024-11-28 08:26:50.229146] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:08.110 [2024-11-28 08:26:50.229156] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0 00:28:08.110 [2024-11-28 08:26:50.229179] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:08.110 qpair failed and we were unable to recover it. 
00:28:08.110 08:26:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:08.110 08:26:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 1519639 00:28:08.110 [2024-11-28 08:26:50.239098] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:08.110 [2024-11-28 08:26:50.239158] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:08.110 [2024-11-28 08:26:50.239174] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:08.110 [2024-11-28 08:26:50.239181] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:08.110 [2024-11-28 08:26:50.239187] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0 00:28:08.110 [2024-11-28 08:26:50.239202] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:08.110 qpair failed and we were unable to recover it. 
00:28:08.110 [2024-11-28 08:26:50.249076] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:08.110 [2024-11-28 08:26:50.249172] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:08.110 [2024-11-28 08:26:50.249187] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:08.110 [2024-11-28 08:26:50.249194] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:08.110 [2024-11-28 08:26:50.249201] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0 00:28:08.110 [2024-11-28 08:26:50.249217] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:08.110 qpair failed and we were unable to recover it. 
00:28:08.110 [2024-11-28 08:26:50.259050] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:08.110 [2024-11-28 08:26:50.259113] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:08.110 [2024-11-28 08:26:50.259127] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:08.110 [2024-11-28 08:26:50.259134] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:08.110 [2024-11-28 08:26:50.259140] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0 00:28:08.110 [2024-11-28 08:26:50.259154] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:08.110 qpair failed and we were unable to recover it. 
00:28:08.110 [2024-11-28 08:26:50.269011] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:08.110 [2024-11-28 08:26:50.269070] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:08.110 [2024-11-28 08:26:50.269085] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:08.110 [2024-11-28 08:26:50.269095] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:08.110 [2024-11-28 08:26:50.269101] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0 00:28:08.110 [2024-11-28 08:26:50.269116] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:08.110 qpair failed and we were unable to recover it. 
00:28:08.372 [2024-11-28 08:26:50.279121] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:08.372 [2024-11-28 08:26:50.279194] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:08.372 [2024-11-28 08:26:50.279212] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:08.372 [2024-11-28 08:26:50.279220] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:08.372 [2024-11-28 08:26:50.279226] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0 00:28:08.372 [2024-11-28 08:26:50.279241] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:08.372 qpair failed and we were unable to recover it. 
00:28:08.372 [2024-11-28 08:26:50.289072] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:08.372 [2024-11-28 08:26:50.289132] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:08.373 [2024-11-28 08:26:50.289147] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:08.373 [2024-11-28 08:26:50.289154] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:08.373 [2024-11-28 08:26:50.289160] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0 00:28:08.373 [2024-11-28 08:26:50.289175] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:08.373 qpair failed and we were unable to recover it. 
00:28:08.373 [2024-11-28 08:26:50.299139] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:08.373 [2024-11-28 08:26:50.299201] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:08.373 [2024-11-28 08:26:50.299216] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:08.373 [2024-11-28 08:26:50.299222] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:08.373 [2024-11-28 08:26:50.299228] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0 00:28:08.373 [2024-11-28 08:26:50.299243] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:08.373 qpair failed and we were unable to recover it. 
00:28:08.373 [2024-11-28 08:26:50.309214] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:08.373 [2024-11-28 08:26:50.309270] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:08.373 [2024-11-28 08:26:50.309284] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:08.373 [2024-11-28 08:26:50.309291] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:08.373 [2024-11-28 08:26:50.309297] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0 00:28:08.373 [2024-11-28 08:26:50.309318] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:08.373 qpair failed and we were unable to recover it. 
00:28:08.373 [2024-11-28 08:26:50.319225] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:08.373 [2024-11-28 08:26:50.319281] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:08.373 [2024-11-28 08:26:50.319296] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:08.373 [2024-11-28 08:26:50.319302] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:08.373 [2024-11-28 08:26:50.319308] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0 00:28:08.373 [2024-11-28 08:26:50.319323] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:08.373 qpair failed and we were unable to recover it. 
00:28:08.373 [2024-11-28 08:26:50.329178] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:08.373 [2024-11-28 08:26:50.329239] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:08.373 [2024-11-28 08:26:50.329255] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:08.373 [2024-11-28 08:26:50.329261] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:08.373 [2024-11-28 08:26:50.329268] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0 00:28:08.373 [2024-11-28 08:26:50.329283] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:08.373 qpair failed and we were unable to recover it. 
00:28:08.373 [2024-11-28 08:26:50.339262] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:08.373 [2024-11-28 08:26:50.339318] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:08.373 [2024-11-28 08:26:50.339333] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:08.373 [2024-11-28 08:26:50.339340] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:08.373 [2024-11-28 08:26:50.339346] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0 00:28:08.373 [2024-11-28 08:26:50.339360] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:08.373 qpair failed and we were unable to recover it. 
00:28:08.373 [2024-11-28 08:26:50.349257] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:08.373 [2024-11-28 08:26:50.349317] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:08.373 [2024-11-28 08:26:50.349332] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:08.373 [2024-11-28 08:26:50.349339] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:08.373 [2024-11-28 08:26:50.349345] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0 00:28:08.373 [2024-11-28 08:26:50.349359] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:08.373 qpair failed and we were unable to recover it. 
00:28:08.373 [2024-11-28 08:26:50.359315] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:08.373 [2024-11-28 08:26:50.359375] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:08.373 [2024-11-28 08:26:50.359390] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:08.373 [2024-11-28 08:26:50.359396] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:08.373 [2024-11-28 08:26:50.359402] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0 00:28:08.373 [2024-11-28 08:26:50.359417] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:08.373 qpair failed and we were unable to recover it. 
00:28:08.373 [2024-11-28 08:26:50.369275] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:08.373 [2024-11-28 08:26:50.369335] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:08.373 [2024-11-28 08:26:50.369351] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:08.373 [2024-11-28 08:26:50.369358] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:08.373 [2024-11-28 08:26:50.369364] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0 00:28:08.373 [2024-11-28 08:26:50.369380] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:08.373 qpair failed and we were unable to recover it. 
00:28:08.373 [2024-11-28 08:26:50.379370] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:08.373 [2024-11-28 08:26:50.379427] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:08.373 [2024-11-28 08:26:50.379442] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:08.373 [2024-11-28 08:26:50.379449] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:08.373 [2024-11-28 08:26:50.379455] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0 00:28:08.373 [2024-11-28 08:26:50.379470] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:08.373 qpair failed and we were unable to recover it. 
00:28:08.373 [2024-11-28 08:26:50.389403] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:08.373 [2024-11-28 08:26:50.389471] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:08.373 [2024-11-28 08:26:50.389486] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:08.373 [2024-11-28 08:26:50.389493] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:08.373 [2024-11-28 08:26:50.389499] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0 00:28:08.373 [2024-11-28 08:26:50.389514] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:08.373 qpair failed and we were unable to recover it. 
00:28:08.373 [2024-11-28 08:26:50.399351] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:08.373 [2024-11-28 08:26:50.399409] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:08.373 [2024-11-28 08:26:50.399424] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:08.373 [2024-11-28 08:26:50.399434] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:08.373 [2024-11-28 08:26:50.399440] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0 00:28:08.373 [2024-11-28 08:26:50.399456] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:08.373 qpair failed and we were unable to recover it. 
00:28:08.373 [2024-11-28 08:26:50.409400] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:08.373 [2024-11-28 08:26:50.409479] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:08.373 [2024-11-28 08:26:50.409495] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:08.373 [2024-11-28 08:26:50.409502] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:08.373 [2024-11-28 08:26:50.409508] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0 00:28:08.373 [2024-11-28 08:26:50.409523] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:08.373 qpair failed and we were unable to recover it. 
00:28:08.373 [2024-11-28 08:26:50.419423] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:08.373 [2024-11-28 08:26:50.419482] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:08.373 [2024-11-28 08:26:50.419497] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:08.374 [2024-11-28 08:26:50.419503] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:08.374 [2024-11-28 08:26:50.419510] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0
00:28:08.374 [2024-11-28 08:26:50.419524] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:08.374 qpair failed and we were unable to recover it.
00:28:08.374 [2024-11-28 08:26:50.429545] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:08.374 [2024-11-28 08:26:50.429601] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:08.374 [2024-11-28 08:26:50.429617] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:08.374 [2024-11-28 08:26:50.429624] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:08.374 [2024-11-28 08:26:50.429630] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0
00:28:08.374 [2024-11-28 08:26:50.429645] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:08.374 qpair failed and we were unable to recover it.
00:28:08.374 [2024-11-28 08:26:50.439496] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:08.374 [2024-11-28 08:26:50.439556] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:08.374 [2024-11-28 08:26:50.439571] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:08.374 [2024-11-28 08:26:50.439578] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:08.374 [2024-11-28 08:26:50.439584] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0
00:28:08.374 [2024-11-28 08:26:50.439602] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:08.374 qpair failed and we were unable to recover it.
00:28:08.374 [2024-11-28 08:26:50.449579] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:08.374 [2024-11-28 08:26:50.449637] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:08.374 [2024-11-28 08:26:50.449652] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:08.374 [2024-11-28 08:26:50.449659] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:08.374 [2024-11-28 08:26:50.449665] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0
00:28:08.374 [2024-11-28 08:26:50.449680] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:08.374 qpair failed and we were unable to recover it.
00:28:08.374 [2024-11-28 08:26:50.459615] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:08.374 [2024-11-28 08:26:50.459671] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:08.374 [2024-11-28 08:26:50.459686] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:08.374 [2024-11-28 08:26:50.459692] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:08.374 [2024-11-28 08:26:50.459698] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0
00:28:08.374 [2024-11-28 08:26:50.459712] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:08.374 qpair failed and we were unable to recover it.
00:28:08.374 [2024-11-28 08:26:50.469629] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:08.374 [2024-11-28 08:26:50.469708] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:08.374 [2024-11-28 08:26:50.469723] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:08.374 [2024-11-28 08:26:50.469730] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:08.374 [2024-11-28 08:26:50.469737] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0
00:28:08.374 [2024-11-28 08:26:50.469751] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:08.374 qpair failed and we were unable to recover it.
00:28:08.374 [2024-11-28 08:26:50.479586] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:08.374 [2024-11-28 08:26:50.479643] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:08.374 [2024-11-28 08:26:50.479658] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:08.374 [2024-11-28 08:26:50.479664] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:08.374 [2024-11-28 08:26:50.479670] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0
00:28:08.374 [2024-11-28 08:26:50.479685] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:08.374 qpair failed and we were unable to recover it.
00:28:08.374 [2024-11-28 08:26:50.489637] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:08.374 [2024-11-28 08:26:50.489703] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:08.374 [2024-11-28 08:26:50.489717] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:08.374 [2024-11-28 08:26:50.489724] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:08.374 [2024-11-28 08:26:50.489730] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0
00:28:08.374 [2024-11-28 08:26:50.489745] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:08.374 qpair failed and we were unable to recover it.
00:28:08.374 [2024-11-28 08:26:50.499723] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:08.374 [2024-11-28 08:26:50.499785] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:08.374 [2024-11-28 08:26:50.499799] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:08.374 [2024-11-28 08:26:50.499806] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:08.374 [2024-11-28 08:26:50.499812] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0
00:28:08.374 [2024-11-28 08:26:50.499827] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:08.374 qpair failed and we were unable to recover it.
00:28:08.374 [2024-11-28 08:26:50.509737] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:08.374 [2024-11-28 08:26:50.509797] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:08.374 [2024-11-28 08:26:50.509811] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:08.374 [2024-11-28 08:26:50.509818] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:08.374 [2024-11-28 08:26:50.509824] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0
00:28:08.374 [2024-11-28 08:26:50.509839] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:08.374 qpair failed and we were unable to recover it.
00:28:08.374 [2024-11-28 08:26:50.519777] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:08.374 [2024-11-28 08:26:50.519832] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:08.374 [2024-11-28 08:26:50.519846] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:08.374 [2024-11-28 08:26:50.519854] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:08.374 [2024-11-28 08:26:50.519860] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0
00:28:08.374 [2024-11-28 08:26:50.519875] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:08.374 qpair failed and we were unable to recover it.
00:28:08.374 [2024-11-28 08:26:50.529731] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:08.374 [2024-11-28 08:26:50.529795] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:08.374 [2024-11-28 08:26:50.529810] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:08.374 [2024-11-28 08:26:50.529820] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:08.374 [2024-11-28 08:26:50.529826] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0
00:28:08.374 [2024-11-28 08:26:50.529841] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:08.374 qpair failed and we were unable to recover it.
00:28:08.374 [2024-11-28 08:26:50.539827] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:08.374 [2024-11-28 08:26:50.539885] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:08.374 [2024-11-28 08:26:50.539900] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:08.374 [2024-11-28 08:26:50.539906] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:08.374 [2024-11-28 08:26:50.539912] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0
00:28:08.374 [2024-11-28 08:26:50.539927] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:08.374 qpair failed and we were unable to recover it.
00:28:08.374 [2024-11-28 08:26:50.549853] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:08.375 [2024-11-28 08:26:50.549911] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:08.375 [2024-11-28 08:26:50.549925] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:08.375 [2024-11-28 08:26:50.549932] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:08.375 [2024-11-28 08:26:50.549938] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0
00:28:08.375 [2024-11-28 08:26:50.549956] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:08.375 qpair failed and we were unable to recover it.
00:28:08.375 [2024-11-28 08:26:50.559901] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:08.375 [2024-11-28 08:26:50.559957] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:08.375 [2024-11-28 08:26:50.559971] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:08.375 [2024-11-28 08:26:50.559978] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:08.375 [2024-11-28 08:26:50.559984] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0
00:28:08.375 [2024-11-28 08:26:50.559999] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:08.375 qpair failed and we were unable to recover it.
00:28:08.375 [2024-11-28 08:26:50.569940] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:08.375 [2024-11-28 08:26:50.570005] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:08.375 [2024-11-28 08:26:50.570019] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:08.375 [2024-11-28 08:26:50.570025] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:08.375 [2024-11-28 08:26:50.570031] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0
00:28:08.375 [2024-11-28 08:26:50.570049] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:08.375 qpair failed and we were unable to recover it.
00:28:08.375 [2024-11-28 08:26:50.579963] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:08.375 [2024-11-28 08:26:50.580023] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:08.375 [2024-11-28 08:26:50.580037] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:08.375 [2024-11-28 08:26:50.580043] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:08.375 [2024-11-28 08:26:50.580049] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0
00:28:08.375 [2024-11-28 08:26:50.580063] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:08.375 qpair failed and we were unable to recover it.
00:28:08.375 [2024-11-28 08:26:50.590026] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:08.375 [2024-11-28 08:26:50.590087] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:08.375 [2024-11-28 08:26:50.590101] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:08.375 [2024-11-28 08:26:50.590108] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:08.375 [2024-11-28 08:26:50.590115] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0
00:28:08.375 [2024-11-28 08:26:50.590129] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:08.375 qpair failed and we were unable to recover it.
00:28:08.375 [2024-11-28 08:26:50.599995] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:08.375 [2024-11-28 08:26:50.600046] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:08.375 [2024-11-28 08:26:50.600060] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:08.375 [2024-11-28 08:26:50.600067] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:08.375 [2024-11-28 08:26:50.600073] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0
00:28:08.375 [2024-11-28 08:26:50.600087] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:08.375 qpair failed and we were unable to recover it.
00:28:08.375 [2024-11-28 08:26:50.610034] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:08.375 [2024-11-28 08:26:50.610100] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:08.375 [2024-11-28 08:26:50.610114] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:08.375 [2024-11-28 08:26:50.610121] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:08.375 [2024-11-28 08:26:50.610127] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0
00:28:08.375 [2024-11-28 08:26:50.610141] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:08.375 qpair failed and we were unable to recover it.
00:28:08.375 [2024-11-28 08:26:50.620052] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:08.375 [2024-11-28 08:26:50.620131] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:08.375 [2024-11-28 08:26:50.620147] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:08.375 [2024-11-28 08:26:50.620154] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:08.375 [2024-11-28 08:26:50.620160] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0
00:28:08.375 [2024-11-28 08:26:50.620174] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:08.375 qpair failed and we were unable to recover it.
00:28:08.375 [2024-11-28 08:26:50.630015] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:08.375 [2024-11-28 08:26:50.630073] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:08.375 [2024-11-28 08:26:50.630087] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:08.375 [2024-11-28 08:26:50.630094] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:08.375 [2024-11-28 08:26:50.630101] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0
00:28:08.375 [2024-11-28 08:26:50.630115] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:08.375 qpair failed and we were unable to recover it.
00:28:08.637 [2024-11-28 08:26:50.640097] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:08.637 [2024-11-28 08:26:50.640154] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:08.637 [2024-11-28 08:26:50.640168] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:08.637 [2024-11-28 08:26:50.640174] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:08.637 [2024-11-28 08:26:50.640180] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0
00:28:08.637 [2024-11-28 08:26:50.640194] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:08.637 qpair failed and we were unable to recover it.
00:28:08.637 [2024-11-28 08:26:50.650143] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:08.637 [2024-11-28 08:26:50.650212] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:08.637 [2024-11-28 08:26:50.650227] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:08.637 [2024-11-28 08:26:50.650234] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:08.637 [2024-11-28 08:26:50.650240] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0
00:28:08.637 [2024-11-28 08:26:50.650254] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:08.637 qpair failed and we were unable to recover it.
00:28:08.637 [2024-11-28 08:26:50.660220] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:08.637 [2024-11-28 08:26:50.660312] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:08.637 [2024-11-28 08:26:50.660327] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:08.637 [2024-11-28 08:26:50.660337] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:08.637 [2024-11-28 08:26:50.660344] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0
00:28:08.637 [2024-11-28 08:26:50.660359] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:08.637 qpair failed and we were unable to recover it.
00:28:08.637 [2024-11-28 08:26:50.670140] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:08.637 [2024-11-28 08:26:50.670208] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:08.637 [2024-11-28 08:26:50.670222] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:08.637 [2024-11-28 08:26:50.670229] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:08.637 [2024-11-28 08:26:50.670235] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0
00:28:08.637 [2024-11-28 08:26:50.670250] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:08.637 qpair failed and we were unable to recover it.
00:28:08.637 [2024-11-28 08:26:50.680167] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:08.637 [2024-11-28 08:26:50.680222] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:08.637 [2024-11-28 08:26:50.680237] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:08.637 [2024-11-28 08:26:50.680244] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:08.637 [2024-11-28 08:26:50.680250] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0
00:28:08.637 [2024-11-28 08:26:50.680265] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:08.637 qpair failed and we were unable to recover it.
00:28:08.637 [2024-11-28 08:26:50.690274] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:08.637 [2024-11-28 08:26:50.690337] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:08.637 [2024-11-28 08:26:50.690351] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:08.637 [2024-11-28 08:26:50.690358] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:08.637 [2024-11-28 08:26:50.690364] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0
00:28:08.637 [2024-11-28 08:26:50.690378] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:08.637 qpair failed and we were unable to recover it.
00:28:08.637 [2024-11-28 08:26:50.700295] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:08.637 [2024-11-28 08:26:50.700352] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:08.637 [2024-11-28 08:26:50.700367] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:08.637 [2024-11-28 08:26:50.700373] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:08.637 [2024-11-28 08:26:50.700380] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0
00:28:08.637 [2024-11-28 08:26:50.700398] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:08.637 qpair failed and we were unable to recover it.
00:28:08.637 [2024-11-28 08:26:50.710314] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:08.637 [2024-11-28 08:26:50.710374] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:08.637 [2024-11-28 08:26:50.710388] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:08.637 [2024-11-28 08:26:50.710395] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:08.637 [2024-11-28 08:26:50.710401] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0
00:28:08.637 [2024-11-28 08:26:50.710416] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:08.637 qpair failed and we were unable to recover it.
00:28:08.637 [2024-11-28 08:26:50.720331] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:08.637 [2024-11-28 08:26:50.720391] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:08.637 [2024-11-28 08:26:50.720405] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:08.637 [2024-11-28 08:26:50.720412] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:08.637 [2024-11-28 08:26:50.720418] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0
00:28:08.637 [2024-11-28 08:26:50.720431] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:08.637 qpair failed and we were unable to recover it.
00:28:08.637 [2024-11-28 08:26:50.730418] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:08.637 [2024-11-28 08:26:50.730517] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:08.637 [2024-11-28 08:26:50.730533] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:08.637 [2024-11-28 08:26:50.730540] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:08.637 [2024-11-28 08:26:50.730546] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0
00:28:08.637 [2024-11-28 08:26:50.730560] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:08.637 qpair failed and we were unable to recover it.
00:28:08.637 [2024-11-28 08:26:50.740407] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:08.637 [2024-11-28 08:26:50.740469] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:08.637 [2024-11-28 08:26:50.740485] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:08.637 [2024-11-28 08:26:50.740492] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:08.637 [2024-11-28 08:26:50.740498] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0
00:28:08.637 [2024-11-28 08:26:50.740514] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:08.637 qpair failed and we were unable to recover it.
00:28:08.637 [2024-11-28 08:26:50.750482] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:08.637 [2024-11-28 08:26:50.750587] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:08.637 [2024-11-28 08:26:50.750602] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:08.637 [2024-11-28 08:26:50.750609] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:08.637 [2024-11-28 08:26:50.750615] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0
00:28:08.637 [2024-11-28 08:26:50.750630] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:08.637 qpair failed and we were unable to recover it.
00:28:08.637 [2024-11-28 08:26:50.760482] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:08.637 [2024-11-28 08:26:50.760550] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:08.637 [2024-11-28 08:26:50.760568] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:08.637 [2024-11-28 08:26:50.760575] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:08.637 [2024-11-28 08:26:50.760582] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0
00:28:08.638 [2024-11-28 08:26:50.760597] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:08.638 qpair failed and we were unable to recover it.
00:28:08.638 [2024-11-28 08:26:50.770494] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:08.638 [2024-11-28 08:26:50.770554] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:08.638 [2024-11-28 08:26:50.770568] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:08.638 [2024-11-28 08:26:50.770575] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:08.638 [2024-11-28 08:26:50.770581] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0 00:28:08.638 [2024-11-28 08:26:50.770596] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:08.638 qpair failed and we were unable to recover it. 
00:28:08.638 [2024-11-28 08:26:50.780509] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:08.638 [2024-11-28 08:26:50.780564] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:08.638 [2024-11-28 08:26:50.780578] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:08.638 [2024-11-28 08:26:50.780585] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:08.638 [2024-11-28 08:26:50.780591] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0 00:28:08.638 [2024-11-28 08:26:50.780605] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:08.638 qpair failed and we were unable to recover it. 
00:28:08.638 [2024-11-28 08:26:50.790533] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:08.638 [2024-11-28 08:26:50.790592] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:08.638 [2024-11-28 08:26:50.790606] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:08.638 [2024-11-28 08:26:50.790616] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:08.638 [2024-11-28 08:26:50.790623] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0 00:28:08.638 [2024-11-28 08:26:50.790637] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:08.638 qpair failed and we were unable to recover it. 
00:28:08.638 [2024-11-28 08:26:50.800603] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:08.638 [2024-11-28 08:26:50.800659] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:08.638 [2024-11-28 08:26:50.800673] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:08.638 [2024-11-28 08:26:50.800681] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:08.638 [2024-11-28 08:26:50.800686] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0 00:28:08.638 [2024-11-28 08:26:50.800701] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:08.638 qpair failed and we were unable to recover it. 
00:28:08.638 [2024-11-28 08:26:50.810599] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:08.638 [2024-11-28 08:26:50.810659] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:08.638 [2024-11-28 08:26:50.810673] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:08.638 [2024-11-28 08:26:50.810680] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:08.638 [2024-11-28 08:26:50.810685] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0 00:28:08.638 [2024-11-28 08:26:50.810700] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:08.638 qpair failed and we were unable to recover it. 
00:28:08.638 [2024-11-28 08:26:50.820626] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:08.638 [2024-11-28 08:26:50.820684] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:08.638 [2024-11-28 08:26:50.820698] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:08.638 [2024-11-28 08:26:50.820704] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:08.638 [2024-11-28 08:26:50.820710] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0 00:28:08.638 [2024-11-28 08:26:50.820724] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:08.638 qpair failed and we were unable to recover it. 
00:28:08.638 [2024-11-28 08:26:50.830689] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:08.638 [2024-11-28 08:26:50.830744] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:08.638 [2024-11-28 08:26:50.830760] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:08.638 [2024-11-28 08:26:50.830767] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:08.638 [2024-11-28 08:26:50.830773] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0 00:28:08.638 [2024-11-28 08:26:50.830792] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:08.638 qpair failed and we were unable to recover it. 
00:28:08.638 [2024-11-28 08:26:50.840694] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:08.638 [2024-11-28 08:26:50.840751] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:08.638 [2024-11-28 08:26:50.840767] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:08.638 [2024-11-28 08:26:50.840774] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:08.638 [2024-11-28 08:26:50.840780] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0 00:28:08.638 [2024-11-28 08:26:50.840794] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:08.638 qpair failed and we were unable to recover it. 
00:28:08.638 [2024-11-28 08:26:50.850712] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:08.638 [2024-11-28 08:26:50.850774] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:08.638 [2024-11-28 08:26:50.850789] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:08.638 [2024-11-28 08:26:50.850796] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:08.638 [2024-11-28 08:26:50.850802] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0 00:28:08.638 [2024-11-28 08:26:50.850816] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:08.638 qpair failed and we were unable to recover it. 
00:28:08.638 [2024-11-28 08:26:50.860756] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:08.638 [2024-11-28 08:26:50.860815] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:08.638 [2024-11-28 08:26:50.860829] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:08.638 [2024-11-28 08:26:50.860836] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:08.638 [2024-11-28 08:26:50.860842] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0 00:28:08.638 [2024-11-28 08:26:50.860856] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:08.638 qpair failed and we were unable to recover it. 
00:28:08.638 [2024-11-28 08:26:50.870797] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:08.638 [2024-11-28 08:26:50.870907] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:08.638 [2024-11-28 08:26:50.870922] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:08.638 [2024-11-28 08:26:50.870929] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:08.638 [2024-11-28 08:26:50.870936] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0 00:28:08.638 [2024-11-28 08:26:50.870956] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:08.638 qpair failed and we were unable to recover it. 
00:28:08.638 [2024-11-28 08:26:50.880830] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:08.638 [2024-11-28 08:26:50.880889] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:08.638 [2024-11-28 08:26:50.880904] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:08.638 [2024-11-28 08:26:50.880911] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:08.638 [2024-11-28 08:26:50.880917] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0 00:28:08.638 [2024-11-28 08:26:50.880932] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:08.638 qpair failed and we were unable to recover it. 
00:28:08.638 [2024-11-28 08:26:50.890832] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:08.638 [2024-11-28 08:26:50.890894] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:08.638 [2024-11-28 08:26:50.890907] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:08.638 [2024-11-28 08:26:50.890914] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:08.638 [2024-11-28 08:26:50.890920] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0 00:28:08.639 [2024-11-28 08:26:50.890934] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:08.639 qpair failed and we were unable to recover it. 
00:28:08.639 [2024-11-28 08:26:50.900850] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:08.639 [2024-11-28 08:26:50.900908] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:08.639 [2024-11-28 08:26:50.900924] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:08.639 [2024-11-28 08:26:50.900931] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:08.639 [2024-11-28 08:26:50.900937] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0 00:28:08.639 [2024-11-28 08:26:50.900955] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:08.639 qpair failed and we were unable to recover it. 
00:28:08.900 [2024-11-28 08:26:50.910924] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:08.900 [2024-11-28 08:26:50.911017] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:08.900 [2024-11-28 08:26:50.911033] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:08.900 [2024-11-28 08:26:50.911040] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:08.900 [2024-11-28 08:26:50.911046] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0 00:28:08.900 [2024-11-28 08:26:50.911061] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:08.900 qpair failed and we were unable to recover it. 
00:28:08.900 [2024-11-28 08:26:50.920904] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:08.900 [2024-11-28 08:26:50.920966] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:08.900 [2024-11-28 08:26:50.920980] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:08.900 [2024-11-28 08:26:50.920991] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:08.900 [2024-11-28 08:26:50.920997] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0 00:28:08.900 [2024-11-28 08:26:50.921012] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:08.900 qpair failed and we were unable to recover it. 
00:28:08.900 [2024-11-28 08:26:50.930955] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:08.900 [2024-11-28 08:26:50.931012] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:08.900 [2024-11-28 08:26:50.931028] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:08.900 [2024-11-28 08:26:50.931035] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:08.900 [2024-11-28 08:26:50.931040] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0 00:28:08.900 [2024-11-28 08:26:50.931055] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:08.900 qpair failed and we were unable to recover it. 
00:28:08.900 [2024-11-28 08:26:50.940995] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:08.900 [2024-11-28 08:26:50.941097] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:08.900 [2024-11-28 08:26:50.941112] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:08.900 [2024-11-28 08:26:50.941119] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:08.900 [2024-11-28 08:26:50.941124] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0 00:28:08.900 [2024-11-28 08:26:50.941139] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:08.900 qpair failed and we were unable to recover it. 
00:28:08.900 [2024-11-28 08:26:50.951059] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:08.900 [2024-11-28 08:26:50.951135] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:08.901 [2024-11-28 08:26:50.951150] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:08.901 [2024-11-28 08:26:50.951157] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:08.901 [2024-11-28 08:26:50.951163] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0 00:28:08.901 [2024-11-28 08:26:50.951179] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:08.901 qpair failed and we were unable to recover it. 
00:28:08.901 [2024-11-28 08:26:50.961026] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:08.901 [2024-11-28 08:26:50.961085] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:08.901 [2024-11-28 08:26:50.961099] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:08.901 [2024-11-28 08:26:50.961105] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:08.901 [2024-11-28 08:26:50.961112] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0 00:28:08.901 [2024-11-28 08:26:50.961130] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:08.901 qpair failed and we were unable to recover it. 
00:28:08.901 [2024-11-28 08:26:50.971071] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:08.901 [2024-11-28 08:26:50.971183] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:08.901 [2024-11-28 08:26:50.971198] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:08.901 [2024-11-28 08:26:50.971205] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:08.901 [2024-11-28 08:26:50.971211] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0 00:28:08.901 [2024-11-28 08:26:50.971225] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:08.901 qpair failed and we were unable to recover it. 
00:28:08.901 [2024-11-28 08:26:50.981038] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:08.901 [2024-11-28 08:26:50.981098] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:08.901 [2024-11-28 08:26:50.981113] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:08.901 [2024-11-28 08:26:50.981120] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:08.901 [2024-11-28 08:26:50.981126] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0 00:28:08.901 [2024-11-28 08:26:50.981140] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:08.901 qpair failed and we were unable to recover it. 
00:28:08.901 [2024-11-28 08:26:50.991147] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:08.901 [2024-11-28 08:26:50.991203] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:08.901 [2024-11-28 08:26:50.991218] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:08.901 [2024-11-28 08:26:50.991225] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:08.901 [2024-11-28 08:26:50.991231] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0 00:28:08.901 [2024-11-28 08:26:50.991246] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:08.901 qpair failed and we were unable to recover it. 
00:28:08.901 [2024-11-28 08:26:51.001053] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:08.901 [2024-11-28 08:26:51.001111] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:08.901 [2024-11-28 08:26:51.001125] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:08.901 [2024-11-28 08:26:51.001132] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:08.901 [2024-11-28 08:26:51.001139] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0 00:28:08.901 [2024-11-28 08:26:51.001153] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:08.901 qpair failed and we were unable to recover it. 
00:28:08.901 [2024-11-28 08:26:51.011207] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:08.901 [2024-11-28 08:26:51.011271] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:08.901 [2024-11-28 08:26:51.011285] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:08.901 [2024-11-28 08:26:51.011292] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:08.901 [2024-11-28 08:26:51.011298] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0 00:28:08.901 [2024-11-28 08:26:51.011312] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:08.901 qpair failed and we were unable to recover it. 
00:28:08.901 [2024-11-28 08:26:51.021247] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:08.901 [2024-11-28 08:26:51.021321] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:08.901 [2024-11-28 08:26:51.021335] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:08.901 [2024-11-28 08:26:51.021341] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:08.901 [2024-11-28 08:26:51.021348] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0 00:28:08.901 [2024-11-28 08:26:51.021362] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:08.901 qpair failed and we were unable to recover it. 
00:28:08.901 [2024-11-28 08:26:51.031222] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:08.901 [2024-11-28 08:26:51.031284] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:08.901 [2024-11-28 08:26:51.031299] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:08.901 [2024-11-28 08:26:51.031306] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:08.901 [2024-11-28 08:26:51.031312] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0 00:28:08.901 [2024-11-28 08:26:51.031327] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:08.901 qpair failed and we were unable to recover it. 
00:28:08.901 [2024-11-28 08:26:51.041231] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:08.901 [2024-11-28 08:26:51.041316] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:08.901 [2024-11-28 08:26:51.041331] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:08.901 [2024-11-28 08:26:51.041338] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:08.901 [2024-11-28 08:26:51.041344] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0 00:28:08.901 [2024-11-28 08:26:51.041359] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:08.901 qpair failed and we were unable to recover it. 
00:28:08.901 [2024-11-28 08:26:51.051382] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:08.901 [2024-11-28 08:26:51.051461] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:08.901 [2024-11-28 08:26:51.051478] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:08.901 [2024-11-28 08:26:51.051490] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:08.901 [2024-11-28 08:26:51.051496] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0 00:28:08.901 [2024-11-28 08:26:51.051513] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:08.901 qpair failed and we were unable to recover it. 
00:28:08.901 [2024-11-28 08:26:51.061315] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:08.901 [2024-11-28 08:26:51.061378] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:08.901 [2024-11-28 08:26:51.061392] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:08.901 [2024-11-28 08:26:51.061399] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:08.901 [2024-11-28 08:26:51.061405] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0 00:28:08.901 [2024-11-28 08:26:51.061419] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:08.901 qpair failed and we were unable to recover it. 
00:28:08.901 [2024-11-28 08:26:51.071342] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:08.901 [2024-11-28 08:26:51.071404] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:08.902 [2024-11-28 08:26:51.071419] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:08.902 [2024-11-28 08:26:51.071426] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:08.902 [2024-11-28 08:26:51.071432] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0 00:28:08.902 [2024-11-28 08:26:51.071447] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:08.902 qpair failed and we were unable to recover it. 
00:28:08.902 [2024-11-28 08:26:51.081351] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:08.902 [2024-11-28 08:26:51.081411] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:08.902 [2024-11-28 08:26:51.081426] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:08.902 [2024-11-28 08:26:51.081433] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:08.902 [2024-11-28 08:26:51.081439] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0 00:28:08.902 [2024-11-28 08:26:51.081454] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:08.902 qpair failed and we were unable to recover it. 
00:28:08.902 [2024-11-28 08:26:51.091404] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:08.902 [2024-11-28 08:26:51.091465] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:08.902 [2024-11-28 08:26:51.091479] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:08.902 [2024-11-28 08:26:51.091486] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:08.902 [2024-11-28 08:26:51.091492] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0 00:28:08.902 [2024-11-28 08:26:51.091510] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:08.902 qpair failed and we were unable to recover it. 
00:28:08.902 [2024-11-28 08:26:51.101426] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:08.902 [2024-11-28 08:26:51.101484] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:08.902 [2024-11-28 08:26:51.101499] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:08.902 [2024-11-28 08:26:51.101505] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:08.902 [2024-11-28 08:26:51.101513] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0 00:28:08.902 [2024-11-28 08:26:51.101527] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:08.902 qpair failed and we were unable to recover it. 
00:28:08.902 [2024-11-28 08:26:51.111451] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:08.902 [2024-11-28 08:26:51.111508] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:08.902 [2024-11-28 08:26:51.111523] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:08.902 [2024-11-28 08:26:51.111530] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:08.902 [2024-11-28 08:26:51.111536] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0 00:28:08.902 [2024-11-28 08:26:51.111551] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:08.902 qpair failed and we were unable to recover it. 
00:28:08.902 [2024-11-28 08:26:51.121488] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:08.902 [2024-11-28 08:26:51.121546] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:08.902 [2024-11-28 08:26:51.121560] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:08.902 [2024-11-28 08:26:51.121567] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:08.902 [2024-11-28 08:26:51.121574] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0 00:28:08.902 [2024-11-28 08:26:51.121589] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:08.902 qpair failed and we were unable to recover it. 
00:28:08.902 [2024-11-28 08:26:51.131521] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:08.902 [2024-11-28 08:26:51.131581] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:08.902 [2024-11-28 08:26:51.131596] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:08.902 [2024-11-28 08:26:51.131603] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:08.902 [2024-11-28 08:26:51.131609] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0 00:28:08.902 [2024-11-28 08:26:51.131624] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:08.902 qpair failed and we were unable to recover it. 
00:28:08.902 [2024-11-28 08:26:51.141547] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:08.902 [2024-11-28 08:26:51.141606] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:08.902 [2024-11-28 08:26:51.141621] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:08.902 [2024-11-28 08:26:51.141628] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:08.902 [2024-11-28 08:26:51.141634] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0 00:28:08.902 [2024-11-28 08:26:51.141648] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:08.902 qpair failed and we were unable to recover it. 
00:28:08.902 [2024-11-28 08:26:51.151618] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:08.902 [2024-11-28 08:26:51.151680] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:08.902 [2024-11-28 08:26:51.151694] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:08.902 [2024-11-28 08:26:51.151701] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:08.902 [2024-11-28 08:26:51.151707] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0 00:28:08.902 [2024-11-28 08:26:51.151722] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:08.902 qpair failed and we were unable to recover it. 
00:28:08.902 [2024-11-28 08:26:51.161602] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:08.902 [2024-11-28 08:26:51.161664] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:08.902 [2024-11-28 08:26:51.161679] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:08.902 [2024-11-28 08:26:51.161686] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:08.902 [2024-11-28 08:26:51.161692] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0 00:28:08.902 [2024-11-28 08:26:51.161707] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:08.902 qpair failed and we were unable to recover it. 
00:28:09.163 [2024-11-28 08:26:51.171615] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:09.163 [2024-11-28 08:26:51.171674] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:09.163 [2024-11-28 08:26:51.171688] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:09.163 [2024-11-28 08:26:51.171695] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:09.163 [2024-11-28 08:26:51.171701] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0 00:28:09.163 [2024-11-28 08:26:51.171715] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:09.163 qpair failed and we were unable to recover it. 
00:28:09.163 [2024-11-28 08:26:51.181653] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:09.163 [2024-11-28 08:26:51.181758] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:09.163 [2024-11-28 08:26:51.181773] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:09.163 [2024-11-28 08:26:51.181783] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:09.163 [2024-11-28 08:26:51.181789] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0 00:28:09.163 [2024-11-28 08:26:51.181804] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:09.163 qpair failed and we were unable to recover it. 
00:28:09.163 [2024-11-28 08:26:51.191718] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:09.163 [2024-11-28 08:26:51.191773] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:09.163 [2024-11-28 08:26:51.191787] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:09.163 [2024-11-28 08:26:51.191794] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:09.163 [2024-11-28 08:26:51.191800] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0 00:28:09.163 [2024-11-28 08:26:51.191815] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:09.163 qpair failed and we were unable to recover it. 
00:28:09.163 [2024-11-28 08:26:51.201729] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:09.163 [2024-11-28 08:26:51.201789] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:09.163 [2024-11-28 08:26:51.201803] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:09.163 [2024-11-28 08:26:51.201810] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:09.163 [2024-11-28 08:26:51.201816] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0 00:28:09.163 [2024-11-28 08:26:51.201831] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:09.163 qpair failed and we were unable to recover it. 
00:28:09.163 [2024-11-28 08:26:51.211757] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:09.163 [2024-11-28 08:26:51.211818] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:09.163 [2024-11-28 08:26:51.211832] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:09.163 [2024-11-28 08:26:51.211839] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:09.163 [2024-11-28 08:26:51.211846] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0 00:28:09.163 [2024-11-28 08:26:51.211860] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:09.163 qpair failed and we were unable to recover it. 
00:28:09.163 [2024-11-28 08:26:51.221782] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:09.163 [2024-11-28 08:26:51.221842] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:09.163 [2024-11-28 08:26:51.221856] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:09.163 [2024-11-28 08:26:51.221863] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:09.163 [2024-11-28 08:26:51.221869] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0 00:28:09.163 [2024-11-28 08:26:51.221887] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:09.163 qpair failed and we were unable to recover it. 
00:28:09.163 [2024-11-28 08:26:51.231852] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:09.163 [2024-11-28 08:26:51.231909] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:09.163 [2024-11-28 08:26:51.231924] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:09.163 [2024-11-28 08:26:51.231931] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:09.163 [2024-11-28 08:26:51.231937] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0 00:28:09.163 [2024-11-28 08:26:51.231955] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:09.163 qpair failed and we were unable to recover it. 
00:28:09.163 [2024-11-28 08:26:51.241830] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:09.163 [2024-11-28 08:26:51.241885] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:09.164 [2024-11-28 08:26:51.241901] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:09.164 [2024-11-28 08:26:51.241908] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:09.164 [2024-11-28 08:26:51.241913] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0 00:28:09.164 [2024-11-28 08:26:51.241928] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:09.164 qpair failed and we were unable to recover it. 
00:28:09.164 [2024-11-28 08:26:51.251888] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:09.164 [2024-11-28 08:26:51.251953] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:09.164 [2024-11-28 08:26:51.251968] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:09.164 [2024-11-28 08:26:51.251974] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:09.164 [2024-11-28 08:26:51.251980] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0 00:28:09.164 [2024-11-28 08:26:51.251995] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:09.164 qpair failed and we were unable to recover it. 
00:28:09.164 [2024-11-28 08:26:51.261897] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:09.164 [2024-11-28 08:26:51.261962] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:09.164 [2024-11-28 08:26:51.261977] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:09.164 [2024-11-28 08:26:51.261984] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:09.164 [2024-11-28 08:26:51.261989] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0 00:28:09.164 [2024-11-28 08:26:51.262004] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:09.164 qpair failed and we were unable to recover it. 
00:28:09.164 [2024-11-28 08:26:51.271956] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:09.164 [2024-11-28 08:26:51.272020] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:09.164 [2024-11-28 08:26:51.272035] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:09.164 [2024-11-28 08:26:51.272041] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:09.164 [2024-11-28 08:26:51.272048] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0 00:28:09.164 [2024-11-28 08:26:51.272062] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:09.164 qpair failed and we were unable to recover it. 
00:28:09.164 [2024-11-28 08:26:51.281941] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:09.164 [2024-11-28 08:26:51.282000] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:09.164 [2024-11-28 08:26:51.282014] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:09.164 [2024-11-28 08:26:51.282021] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:09.164 [2024-11-28 08:26:51.282027] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0 00:28:09.164 [2024-11-28 08:26:51.282041] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:09.164 qpair failed and we were unable to recover it. 
00:28:09.164 [2024-11-28 08:26:51.291989] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:09.164 [2024-11-28 08:26:51.292079] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:09.164 [2024-11-28 08:26:51.292094] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:09.164 [2024-11-28 08:26:51.292101] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:09.164 [2024-11-28 08:26:51.292107] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0 00:28:09.164 [2024-11-28 08:26:51.292122] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:09.164 qpair failed and we were unable to recover it. 
00:28:09.164 [2024-11-28 08:26:51.302004] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:09.164 [2024-11-28 08:26:51.302065] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:09.164 [2024-11-28 08:26:51.302079] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:09.164 [2024-11-28 08:26:51.302086] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:09.164 [2024-11-28 08:26:51.302092] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0 00:28:09.164 [2024-11-28 08:26:51.302106] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:09.164 qpair failed and we were unable to recover it. 
00:28:09.164 [2024-11-28 08:26:51.312039] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:09.164 [2024-11-28 08:26:51.312096] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:09.164 [2024-11-28 08:26:51.312110] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:09.164 [2024-11-28 08:26:51.312123] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:09.164 [2024-11-28 08:26:51.312129] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0 00:28:09.164 [2024-11-28 08:26:51.312143] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:09.164 qpair failed and we were unable to recover it. 
00:28:09.164 [2024-11-28 08:26:51.322050] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:09.164 [2024-11-28 08:26:51.322106] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:09.164 [2024-11-28 08:26:51.322121] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:09.164 [2024-11-28 08:26:51.322128] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:09.164 [2024-11-28 08:26:51.322134] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0 00:28:09.164 [2024-11-28 08:26:51.322149] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:09.164 qpair failed and we were unable to recover it. 
00:28:09.164 [2024-11-28 08:26:51.332086] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:09.164 [2024-11-28 08:26:51.332144] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:09.164 [2024-11-28 08:26:51.332159] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:09.164 [2024-11-28 08:26:51.332166] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:09.164 [2024-11-28 08:26:51.332172] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0 00:28:09.164 [2024-11-28 08:26:51.332187] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:09.164 qpair failed and we were unable to recover it. 
00:28:09.164 [2024-11-28 08:26:51.342190] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:09.164 [2024-11-28 08:26:51.342278] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:09.164 [2024-11-28 08:26:51.342293] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:09.164 [2024-11-28 08:26:51.342299] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:09.165 [2024-11-28 08:26:51.342306] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0 00:28:09.165 [2024-11-28 08:26:51.342320] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:09.165 qpair failed and we were unable to recover it. 
00:28:09.165 [2024-11-28 08:26:51.352150] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:09.165 [2024-11-28 08:26:51.352251] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:09.165 [2024-11-28 08:26:51.352267] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:09.165 [2024-11-28 08:26:51.352274] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:09.165 [2024-11-28 08:26:51.352280] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0 00:28:09.165 [2024-11-28 08:26:51.352298] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:09.165 qpair failed and we were unable to recover it. 
[... the identical CONNECT failure sequence (Unknown controller ID 0x1 -> Connect command failed, rc -5 -> sct 1, sc 130 -> Failed to poll NVMe-oF Fabric CONNECT command -> Failed to connect tqpair=0x991be0 -> CQ transport error -6 (No such device or address) on qpair id 3 -> qpair failed and we were unable to recover it) repeats at ~10 ms intervals from [2024-11-28 08:26:51.362225] through [2024-11-28 08:26:51.693188]; 34 further occurrences elided ...]
00:28:09.691 [2024-11-28 08:26:51.703090] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:09.691 [2024-11-28 08:26:51.703151] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:09.691 [2024-11-28 08:26:51.703165] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:09.691 [2024-11-28 08:26:51.703176] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:09.691 [2024-11-28 08:26:51.703182] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0 00:28:09.691 [2024-11-28 08:26:51.703196] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:09.691 qpair failed and we were unable to recover it. 
00:28:09.691 [2024-11-28 08:26:51.713186] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:09.691 [2024-11-28 08:26:51.713242] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:09.691 [2024-11-28 08:26:51.713257] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:09.691 [2024-11-28 08:26:51.713264] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:09.691 [2024-11-28 08:26:51.713270] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0 00:28:09.692 [2024-11-28 08:26:51.713284] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:09.692 qpair failed and we were unable to recover it. 
00:28:09.692 [2024-11-28 08:26:51.723215] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:09.692 [2024-11-28 08:26:51.723272] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:09.692 [2024-11-28 08:26:51.723287] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:09.692 [2024-11-28 08:26:51.723295] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:09.692 [2024-11-28 08:26:51.723300] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0 00:28:09.692 [2024-11-28 08:26:51.723316] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:09.692 qpair failed and we were unable to recover it. 
00:28:09.692 [2024-11-28 08:26:51.733300] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:09.692 [2024-11-28 08:26:51.733356] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:09.692 [2024-11-28 08:26:51.733371] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:09.692 [2024-11-28 08:26:51.733378] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:09.692 [2024-11-28 08:26:51.733383] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0 00:28:09.692 [2024-11-28 08:26:51.733398] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:09.692 qpair failed and we were unable to recover it. 
00:28:09.692 [2024-11-28 08:26:51.743279] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:09.692 [2024-11-28 08:26:51.743335] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:09.692 [2024-11-28 08:26:51.743350] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:09.692 [2024-11-28 08:26:51.743357] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:09.692 [2024-11-28 08:26:51.743363] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0 00:28:09.692 [2024-11-28 08:26:51.743381] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:09.692 qpair failed and we were unable to recover it. 
00:28:09.692 [2024-11-28 08:26:51.753249] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:09.692 [2024-11-28 08:26:51.753307] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:09.692 [2024-11-28 08:26:51.753321] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:09.692 [2024-11-28 08:26:51.753328] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:09.692 [2024-11-28 08:26:51.753334] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0 00:28:09.692 [2024-11-28 08:26:51.753349] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:09.692 qpair failed and we were unable to recover it. 
00:28:09.692 [2024-11-28 08:26:51.763326] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:09.692 [2024-11-28 08:26:51.763381] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:09.692 [2024-11-28 08:26:51.763396] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:09.692 [2024-11-28 08:26:51.763403] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:09.692 [2024-11-28 08:26:51.763409] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0 00:28:09.692 [2024-11-28 08:26:51.763423] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:09.692 qpair failed and we were unable to recover it. 
00:28:09.692 [2024-11-28 08:26:51.773369] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:09.692 [2024-11-28 08:26:51.773424] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:09.692 [2024-11-28 08:26:51.773439] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:09.692 [2024-11-28 08:26:51.773446] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:09.692 [2024-11-28 08:26:51.773453] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0 00:28:09.692 [2024-11-28 08:26:51.773467] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:09.692 qpair failed and we were unable to recover it. 
00:28:09.692 [2024-11-28 08:26:51.783387] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:09.692 [2024-11-28 08:26:51.783467] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:09.692 [2024-11-28 08:26:51.783482] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:09.692 [2024-11-28 08:26:51.783490] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:09.692 [2024-11-28 08:26:51.783496] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0 00:28:09.692 [2024-11-28 08:26:51.783511] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:09.692 qpair failed and we were unable to recover it. 
00:28:09.692 [2024-11-28 08:26:51.793371] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:09.692 [2024-11-28 08:26:51.793433] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:09.692 [2024-11-28 08:26:51.793448] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:09.692 [2024-11-28 08:26:51.793455] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:09.692 [2024-11-28 08:26:51.793460] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0 00:28:09.692 [2024-11-28 08:26:51.793475] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:09.692 qpair failed and we were unable to recover it. 
00:28:09.692 [2024-11-28 08:26:51.803380] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:09.692 [2024-11-28 08:26:51.803436] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:09.692 [2024-11-28 08:26:51.803451] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:09.692 [2024-11-28 08:26:51.803457] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:09.692 [2024-11-28 08:26:51.803464] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0 00:28:09.692 [2024-11-28 08:26:51.803479] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:09.692 qpair failed and we were unable to recover it. 
00:28:09.692 [2024-11-28 08:26:51.813425] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:09.692 [2024-11-28 08:26:51.813484] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:09.692 [2024-11-28 08:26:51.813498] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:09.692 [2024-11-28 08:26:51.813505] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:09.692 [2024-11-28 08:26:51.813512] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0 00:28:09.692 [2024-11-28 08:26:51.813526] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:09.692 qpair failed and we were unable to recover it. 
00:28:09.692 [2024-11-28 08:26:51.823450] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:09.692 [2024-11-28 08:26:51.823541] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:09.692 [2024-11-28 08:26:51.823557] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:09.692 [2024-11-28 08:26:51.823564] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:09.692 [2024-11-28 08:26:51.823570] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0 00:28:09.692 [2024-11-28 08:26:51.823585] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:09.692 qpair failed and we were unable to recover it. 
00:28:09.692 [2024-11-28 08:26:51.833527] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:09.692 [2024-11-28 08:26:51.833583] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:09.692 [2024-11-28 08:26:51.833598] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:09.692 [2024-11-28 08:26:51.833608] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:09.692 [2024-11-28 08:26:51.833614] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0 00:28:09.692 [2024-11-28 08:26:51.833628] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:09.692 qpair failed and we were unable to recover it. 
00:28:09.692 [2024-11-28 08:26:51.843578] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:09.692 [2024-11-28 08:26:51.843650] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:09.692 [2024-11-28 08:26:51.843665] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:09.692 [2024-11-28 08:26:51.843672] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:09.692 [2024-11-28 08:26:51.843678] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0 00:28:09.692 [2024-11-28 08:26:51.843692] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:09.692 qpair failed and we were unable to recover it. 
00:28:09.693 [2024-11-28 08:26:51.853638] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:09.693 [2024-11-28 08:26:51.853698] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:09.693 [2024-11-28 08:26:51.853712] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:09.693 [2024-11-28 08:26:51.853719] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:09.693 [2024-11-28 08:26:51.853725] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0 00:28:09.693 [2024-11-28 08:26:51.853739] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:09.693 qpair failed and we were unable to recover it. 
00:28:09.693 [2024-11-28 08:26:51.863637] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:09.693 [2024-11-28 08:26:51.863692] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:09.693 [2024-11-28 08:26:51.863706] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:09.693 [2024-11-28 08:26:51.863713] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:09.693 [2024-11-28 08:26:51.863719] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0 00:28:09.693 [2024-11-28 08:26:51.863733] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:09.693 qpair failed and we were unable to recover it. 
00:28:09.693 [2024-11-28 08:26:51.873643] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:09.693 [2024-11-28 08:26:51.873694] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:09.693 [2024-11-28 08:26:51.873708] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:09.693 [2024-11-28 08:26:51.873715] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:09.693 [2024-11-28 08:26:51.873721] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0 00:28:09.693 [2024-11-28 08:26:51.873739] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:09.693 qpair failed and we were unable to recover it. 
00:28:09.693 [2024-11-28 08:26:51.883600] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:09.693 [2024-11-28 08:26:51.883659] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:09.693 [2024-11-28 08:26:51.883673] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:09.693 [2024-11-28 08:26:51.883680] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:09.693 [2024-11-28 08:26:51.883686] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0 00:28:09.693 [2024-11-28 08:26:51.883700] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:09.693 qpair failed and we were unable to recover it. 
00:28:09.693 [2024-11-28 08:26:51.893762] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:09.693 [2024-11-28 08:26:51.893819] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:09.693 [2024-11-28 08:26:51.893834] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:09.693 [2024-11-28 08:26:51.893841] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:09.693 [2024-11-28 08:26:51.893847] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0 00:28:09.693 [2024-11-28 08:26:51.893861] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:09.693 qpair failed and we were unable to recover it. 
00:28:09.693 [2024-11-28 08:26:51.903737] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:09.693 [2024-11-28 08:26:51.903794] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:09.693 [2024-11-28 08:26:51.903810] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:09.693 [2024-11-28 08:26:51.903817] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:09.693 [2024-11-28 08:26:51.903824] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0 00:28:09.693 [2024-11-28 08:26:51.903839] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:09.693 qpair failed and we were unable to recover it. 
00:28:09.693 [2024-11-28 08:26:51.913790] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:09.693 [2024-11-28 08:26:51.913845] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:09.693 [2024-11-28 08:26:51.913860] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:09.693 [2024-11-28 08:26:51.913867] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:09.693 [2024-11-28 08:26:51.913873] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0 00:28:09.693 [2024-11-28 08:26:51.913887] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:09.693 qpair failed and we were unable to recover it. 
00:28:09.693 [2024-11-28 08:26:51.923800] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:09.693 [2024-11-28 08:26:51.923864] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:09.693 [2024-11-28 08:26:51.923880] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:09.693 [2024-11-28 08:26:51.923887] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:09.693 [2024-11-28 08:26:51.923893] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0 00:28:09.693 [2024-11-28 08:26:51.923908] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:09.693 qpair failed and we were unable to recover it. 
00:28:09.693 [2024-11-28 08:26:51.933890] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:09.693 [2024-11-28 08:26:51.933995] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:09.693 [2024-11-28 08:26:51.934011] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:09.693 [2024-11-28 08:26:51.934018] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:09.693 [2024-11-28 08:26:51.934025] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0 00:28:09.693 [2024-11-28 08:26:51.934040] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:09.693 qpair failed and we were unable to recover it. 
00:28:09.693 [2024-11-28 08:26:51.943918] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:09.693 [2024-11-28 08:26:51.944030] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:09.693 [2024-11-28 08:26:51.944045] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:09.693 [2024-11-28 08:26:51.944052] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:09.693 [2024-11-28 08:26:51.944058] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0 00:28:09.693 [2024-11-28 08:26:51.944073] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:09.693 qpair failed and we were unable to recover it. 
00:28:09.693 [2024-11-28 08:26:51.953887] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:09.693 [2024-11-28 08:26:51.953940] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:09.693 [2024-11-28 08:26:51.953958] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:09.693 [2024-11-28 08:26:51.953965] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:09.693 [2024-11-28 08:26:51.953971] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0 00:28:09.693 [2024-11-28 08:26:51.953986] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:09.693 qpair failed and we were unable to recover it. 
00:28:09.955 [2024-11-28 08:26:51.963902] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:09.955 [2024-11-28 08:26:51.963963] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:09.955 [2024-11-28 08:26:51.963978] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:09.955 [2024-11-28 08:26:51.963988] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:09.955 [2024-11-28 08:26:51.963994] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0 00:28:09.955 [2024-11-28 08:26:51.964009] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:09.955 qpair failed and we were unable to recover it. 
00:28:09.955 [2024-11-28 08:26:51.973984] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:09.955 [2024-11-28 08:26:51.974082] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:09.955 [2024-11-28 08:26:51.974098] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:09.955 [2024-11-28 08:26:51.974105] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:09.955 [2024-11-28 08:26:51.974111] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0 00:28:09.955 [2024-11-28 08:26:51.974125] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:09.955 qpair failed and we were unable to recover it. 
00:28:09.955 [2024-11-28 08:26:51.983908] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:09.955 [2024-11-28 08:26:51.983989] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:09.955 [2024-11-28 08:26:51.984004] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:09.955 [2024-11-28 08:26:51.984011] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:09.955 [2024-11-28 08:26:51.984018] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0 00:28:09.955 [2024-11-28 08:26:51.984033] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:09.955 qpair failed and we were unable to recover it. 
00:28:09.955 [2024-11-28 08:26:51.993999] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:09.955 [2024-11-28 08:26:51.994051] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:09.955 [2024-11-28 08:26:51.994066] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:09.955 [2024-11-28 08:26:51.994073] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:09.955 [2024-11-28 08:26:51.994079] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0 00:28:09.955 [2024-11-28 08:26:51.994094] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:09.955 qpair failed and we were unable to recover it. 
00:28:09.955 [2024-11-28 08:26:52.004023] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:09.955 [2024-11-28 08:26:52.004076] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:09.955 [2024-11-28 08:26:52.004090] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:09.955 [2024-11-28 08:26:52.004098] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:09.955 [2024-11-28 08:26:52.004104] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0 00:28:09.955 [2024-11-28 08:26:52.004123] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:09.955 qpair failed and we were unable to recover it. 
00:28:09.955 [2024-11-28 08:26:52.014068] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:09.955 [2024-11-28 08:26:52.014128] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:09.955 [2024-11-28 08:26:52.014142] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:09.955 [2024-11-28 08:26:52.014148] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:09.955 [2024-11-28 08:26:52.014154] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0 00:28:09.955 [2024-11-28 08:26:52.014168] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:09.955 qpair failed and we were unable to recover it. 
00:28:09.955 [2024-11-28 08:26:52.024134] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:09.955 [2024-11-28 08:26:52.024194] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:09.955 [2024-11-28 08:26:52.024210] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:09.955 [2024-11-28 08:26:52.024217] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:09.955 [2024-11-28 08:26:52.024224] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0 00:28:09.955 [2024-11-28 08:26:52.024239] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:09.955 qpair failed and we were unable to recover it. 
00:28:09.955 [2024-11-28 08:26:52.034045] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:09.955 [2024-11-28 08:26:52.034102] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:09.955 [2024-11-28 08:26:52.034117] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:09.955 [2024-11-28 08:26:52.034124] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:09.955 [2024-11-28 08:26:52.034129] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0 00:28:09.955 [2024-11-28 08:26:52.034144] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:09.955 qpair failed and we were unable to recover it. 
00:28:09.955 [2024-11-28 08:26:52.044140] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:09.955 [2024-11-28 08:26:52.044196] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:09.955 [2024-11-28 08:26:52.044211] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:09.955 [2024-11-28 08:26:52.044218] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:09.955 [2024-11-28 08:26:52.044224] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0 00:28:09.955 [2024-11-28 08:26:52.044239] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:09.955 qpair failed and we were unable to recover it. 
00:28:09.955 [2024-11-28 08:26:52.054179] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:09.955 [2024-11-28 08:26:52.054242] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:09.955 [2024-11-28 08:26:52.054259] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:09.955 [2024-11-28 08:26:52.054266] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:09.955 [2024-11-28 08:26:52.054272] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0 00:28:09.955 [2024-11-28 08:26:52.054288] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:09.955 qpair failed and we were unable to recover it. 
00:28:09.955 [2024-11-28 08:26:52.064173] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:09.955 [2024-11-28 08:26:52.064236] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:09.955 [2024-11-28 08:26:52.064251] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:09.955 [2024-11-28 08:26:52.064258] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:09.955 [2024-11-28 08:26:52.064265] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0 00:28:09.955 [2024-11-28 08:26:52.064279] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:09.956 qpair failed and we were unable to recover it. 
00:28:09.956 [2024-11-28 08:26:52.074249] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:09.956 [2024-11-28 08:26:52.074309] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:09.956 [2024-11-28 08:26:52.074324] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:09.956 [2024-11-28 08:26:52.074331] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:09.956 [2024-11-28 08:26:52.074338] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0 00:28:09.956 [2024-11-28 08:26:52.074352] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:09.956 qpair failed and we were unable to recover it. 
00:28:09.956 [2024-11-28 08:26:52.084243] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:09.956 [2024-11-28 08:26:52.084294] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:09.956 [2024-11-28 08:26:52.084309] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:09.956 [2024-11-28 08:26:52.084315] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:09.956 [2024-11-28 08:26:52.084321] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0 00:28:09.956 [2024-11-28 08:26:52.084336] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:09.956 qpair failed and we were unable to recover it. 
00:28:09.956 [2024-11-28 08:26:52.094279] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:09.956 [2024-11-28 08:26:52.094340] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:09.956 [2024-11-28 08:26:52.094354] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:09.956 [2024-11-28 08:26:52.094365] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:09.956 [2024-11-28 08:26:52.094371] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0 00:28:09.956 [2024-11-28 08:26:52.094386] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:09.956 qpair failed and we were unable to recover it. 
00:28:09.956 [2024-11-28 08:26:52.104317] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:09.956 [2024-11-28 08:26:52.104378] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:09.956 [2024-11-28 08:26:52.104393] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:09.956 [2024-11-28 08:26:52.104399] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:09.956 [2024-11-28 08:26:52.104406] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0 00:28:09.956 [2024-11-28 08:26:52.104421] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:09.956 qpair failed and we were unable to recover it. 
00:28:09.956 [2024-11-28 08:26:52.114257] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:09.956 [2024-11-28 08:26:52.114308] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:09.956 [2024-11-28 08:26:52.114324] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:09.956 [2024-11-28 08:26:52.114331] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:09.956 [2024-11-28 08:26:52.114338] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0 00:28:09.956 [2024-11-28 08:26:52.114353] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:09.956 qpair failed and we were unable to recover it. 
00:28:09.956 [2024-11-28 08:26:52.124311] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:09.956 [2024-11-28 08:26:52.124366] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:09.956 [2024-11-28 08:26:52.124381] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:09.956 [2024-11-28 08:26:52.124388] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:09.956 [2024-11-28 08:26:52.124394] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0 00:28:09.956 [2024-11-28 08:26:52.124408] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:09.956 qpair failed and we were unable to recover it. 
00:28:09.956 [2024-11-28 08:26:52.134422] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:09.956 [2024-11-28 08:26:52.134525] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:09.956 [2024-11-28 08:26:52.134541] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:09.956 [2024-11-28 08:26:52.134548] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:09.956 [2024-11-28 08:26:52.134554] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0 00:28:09.956 [2024-11-28 08:26:52.134573] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:09.956 qpair failed and we were unable to recover it. 
00:28:09.956 [2024-11-28 08:26:52.144433] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:09.956 [2024-11-28 08:26:52.144490] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:09.956 [2024-11-28 08:26:52.144505] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:09.956 [2024-11-28 08:26:52.144512] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:09.956 [2024-11-28 08:26:52.144518] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0 00:28:09.956 [2024-11-28 08:26:52.144533] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:09.956 qpair failed and we were unable to recover it. 
00:28:09.956 [2024-11-28 08:26:52.154477] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:09.956 [2024-11-28 08:26:52.154534] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:09.956 [2024-11-28 08:26:52.154548] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:09.956 [2024-11-28 08:26:52.154556] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:09.956 [2024-11-28 08:26:52.154561] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0 00:28:09.956 [2024-11-28 08:26:52.154576] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:09.956 qpair failed and we were unable to recover it. 
00:28:09.956 [2024-11-28 08:26:52.164483] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:09.956 [2024-11-28 08:26:52.164536] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:09.956 [2024-11-28 08:26:52.164551] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:09.956 [2024-11-28 08:26:52.164558] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:09.956 [2024-11-28 08:26:52.164564] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0 00:28:09.956 [2024-11-28 08:26:52.164578] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:09.956 qpair failed and we were unable to recover it. 
00:28:09.956 [2024-11-28 08:26:52.174511] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:09.956 [2024-11-28 08:26:52.174565] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:09.956 [2024-11-28 08:26:52.174580] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:09.956 [2024-11-28 08:26:52.174586] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:09.956 [2024-11-28 08:26:52.174592] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0 00:28:09.956 [2024-11-28 08:26:52.174606] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:09.956 qpair failed and we were unable to recover it. 
00:28:09.956 [2024-11-28 08:26:52.184532] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:09.956 [2024-11-28 08:26:52.184597] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:09.956 [2024-11-28 08:26:52.184611] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:09.956 [2024-11-28 08:26:52.184618] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:09.956 [2024-11-28 08:26:52.184624] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0 00:28:09.956 [2024-11-28 08:26:52.184639] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:09.956 qpair failed and we were unable to recover it. 
00:28:09.956 [2024-11-28 08:26:52.194599] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:09.956 [2024-11-28 08:26:52.194661] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:09.956 [2024-11-28 08:26:52.194676] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:09.956 [2024-11-28 08:26:52.194683] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:09.956 [2024-11-28 08:26:52.194690] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0 00:28:09.956 [2024-11-28 08:26:52.194704] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:09.956 qpair failed and we were unable to recover it. 
00:28:09.956 [2024-11-28 08:26:52.204585] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:09.957 [2024-11-28 08:26:52.204640] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:09.957 [2024-11-28 08:26:52.204654] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:09.957 [2024-11-28 08:26:52.204661] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:09.957 [2024-11-28 08:26:52.204667] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0 00:28:09.957 [2024-11-28 08:26:52.204682] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:09.957 qpair failed and we were unable to recover it. 
00:28:09.957 [2024-11-28 08:26:52.214626] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:09.957 [2024-11-28 08:26:52.214682] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:09.957 [2024-11-28 08:26:52.214696] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:09.957 [2024-11-28 08:26:52.214703] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:09.957 [2024-11-28 08:26:52.214709] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0 00:28:09.957 [2024-11-28 08:26:52.214723] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:09.957 qpair failed and we were unable to recover it. 
00:28:10.218 [2024-11-28 08:26:52.224655] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:10.218 [2024-11-28 08:26:52.224714] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:10.218 [2024-11-28 08:26:52.224729] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:10.218 [2024-11-28 08:26:52.224739] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:10.218 [2024-11-28 08:26:52.224745] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0 00:28:10.218 [2024-11-28 08:26:52.224760] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:10.218 qpair failed and we were unable to recover it. 
00:28:10.218 [2024-11-28 08:26:52.234665] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:10.218 [2024-11-28 08:26:52.234717] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:10.218 [2024-11-28 08:26:52.234731] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:10.218 [2024-11-28 08:26:52.234738] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:10.218 [2024-11-28 08:26:52.234744] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0 00:28:10.218 [2024-11-28 08:26:52.234758] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:10.218 qpair failed and we were unable to recover it. 
00:28:10.218 [2024-11-28 08:26:52.244689] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:10.218 [2024-11-28 08:26:52.244747] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:10.218 [2024-11-28 08:26:52.244762] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:10.218 [2024-11-28 08:26:52.244768] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:10.218 [2024-11-28 08:26:52.244774] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0 00:28:10.218 [2024-11-28 08:26:52.244788] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:10.218 qpair failed and we were unable to recover it. 
00:28:10.218 [2024-11-28 08:26:52.254732] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:10.218 [2024-11-28 08:26:52.254790] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:10.218 [2024-11-28 08:26:52.254803] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:10.218 [2024-11-28 08:26:52.254810] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:10.218 [2024-11-28 08:26:52.254816] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0 00:28:10.218 [2024-11-28 08:26:52.254830] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:10.218 qpair failed and we were unable to recover it. 
00:28:10.218 [2024-11-28 08:26:52.264761] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:10.218 [2024-11-28 08:26:52.264816] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:10.218 [2024-11-28 08:26:52.264831] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:10.218 [2024-11-28 08:26:52.264838] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:10.218 [2024-11-28 08:26:52.264844] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0 00:28:10.218 [2024-11-28 08:26:52.264864] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:10.218 qpair failed and we were unable to recover it. 
00:28:10.218 [2024-11-28 08:26:52.274777] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:10.218 [2024-11-28 08:26:52.274831] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:10.218 [2024-11-28 08:26:52.274845] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:10.218 [2024-11-28 08:26:52.274852] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:10.218 [2024-11-28 08:26:52.274858] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0 00:28:10.218 [2024-11-28 08:26:52.274872] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:10.218 qpair failed and we were unable to recover it. 
00:28:10.218 [2024-11-28 08:26:52.284773] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:10.218 [2024-11-28 08:26:52.284836] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:10.218 [2024-11-28 08:26:52.284850] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:10.218 [2024-11-28 08:26:52.284857] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:10.218 [2024-11-28 08:26:52.284863] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0 00:28:10.219 [2024-11-28 08:26:52.284879] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:10.219 qpair failed and we were unable to recover it. 
00:28:10.219 [2024-11-28 08:26:52.294842] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:10.219 [2024-11-28 08:26:52.294900] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:10.219 [2024-11-28 08:26:52.294914] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:10.219 [2024-11-28 08:26:52.294921] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:10.219 [2024-11-28 08:26:52.294927] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0 00:28:10.219 [2024-11-28 08:26:52.294941] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:10.219 qpair failed and we were unable to recover it. 
00:28:10.219 [2024-11-28 08:26:52.304868] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:10.219 [2024-11-28 08:26:52.304936] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:10.219 [2024-11-28 08:26:52.304953] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:10.219 [2024-11-28 08:26:52.304961] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:10.219 [2024-11-28 08:26:52.304967] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0
00:28:10.219 [2024-11-28 08:26:52.304982] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:10.219 qpair failed and we were unable to recover it.
00:28:10.219 [2024-11-28 08:26:52.314936] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:10.219 [2024-11-28 08:26:52.314999] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:10.219 [2024-11-28 08:26:52.315014] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:10.219 [2024-11-28 08:26:52.315021] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:10.219 [2024-11-28 08:26:52.315026] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0
00:28:10.219 [2024-11-28 08:26:52.315041] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:10.219 qpair failed and we were unable to recover it.
00:28:10.219 [2024-11-28 08:26:52.324906] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:10.219 [2024-11-28 08:26:52.324967] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:10.219 [2024-11-28 08:26:52.324982] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:10.219 [2024-11-28 08:26:52.324988] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:10.219 [2024-11-28 08:26:52.324995] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0
00:28:10.219 [2024-11-28 08:26:52.325009] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:10.219 qpair failed and we were unable to recover it.
00:28:10.219 [2024-11-28 08:26:52.334963] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:10.219 [2024-11-28 08:26:52.335022] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:10.219 [2024-11-28 08:26:52.335036] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:10.219 [2024-11-28 08:26:52.335043] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:10.219 [2024-11-28 08:26:52.335049] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0
00:28:10.219 [2024-11-28 08:26:52.335063] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:10.219 qpair failed and we were unable to recover it.
00:28:10.219 [2024-11-28 08:26:52.344913] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:10.219 [2024-11-28 08:26:52.344968] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:10.219 [2024-11-28 08:26:52.344982] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:10.219 [2024-11-28 08:26:52.344989] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:10.219 [2024-11-28 08:26:52.344994] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0
00:28:10.219 [2024-11-28 08:26:52.345008] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:10.219 qpair failed and we were unable to recover it.
00:28:10.219 [2024-11-28 08:26:52.355007] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:10.219 [2024-11-28 08:26:52.355063] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:10.219 [2024-11-28 08:26:52.355077] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:10.219 [2024-11-28 08:26:52.355087] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:10.219 [2024-11-28 08:26:52.355093] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0
00:28:10.219 [2024-11-28 08:26:52.355107] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:10.219 qpair failed and we were unable to recover it.
00:28:10.219 [2024-11-28 08:26:52.365070] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:10.219 [2024-11-28 08:26:52.365131] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:10.219 [2024-11-28 08:26:52.365146] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:10.219 [2024-11-28 08:26:52.365153] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:10.219 [2024-11-28 08:26:52.365159] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0
00:28:10.219 [2024-11-28 08:26:52.365174] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:10.219 qpair failed and we were unable to recover it.
00:28:10.219 [2024-11-28 08:26:52.375065] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:10.219 [2024-11-28 08:26:52.375127] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:10.219 [2024-11-28 08:26:52.375142] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:10.219 [2024-11-28 08:26:52.375150] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:10.219 [2024-11-28 08:26:52.375156] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0
00:28:10.219 [2024-11-28 08:26:52.375172] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:10.219 qpair failed and we were unable to recover it.
00:28:10.219 [2024-11-28 08:26:52.385101] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:10.219 [2024-11-28 08:26:52.385168] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:10.219 [2024-11-28 08:26:52.385182] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:10.219 [2024-11-28 08:26:52.385190] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:10.219 [2024-11-28 08:26:52.385196] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0
00:28:10.219 [2024-11-28 08:26:52.385211] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:10.219 qpair failed and we were unable to recover it.
00:28:10.219 [2024-11-28 08:26:52.395120] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:10.219 [2024-11-28 08:26:52.395176] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:10.219 [2024-11-28 08:26:52.395191] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:10.219 [2024-11-28 08:26:52.395198] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:10.219 [2024-11-28 08:26:52.395204] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0
00:28:10.219 [2024-11-28 08:26:52.395221] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:10.219 qpair failed and we were unable to recover it.
00:28:10.219 [2024-11-28 08:26:52.405209] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:10.219 [2024-11-28 08:26:52.405315] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:10.219 [2024-11-28 08:26:52.405331] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:10.219 [2024-11-28 08:26:52.405338] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:10.219 [2024-11-28 08:26:52.405344] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0
00:28:10.219 [2024-11-28 08:26:52.405359] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:10.219 qpair failed and we were unable to recover it.
00:28:10.219 [2024-11-28 08:26:52.415179] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:10.219 [2024-11-28 08:26:52.415239] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:10.219 [2024-11-28 08:26:52.415253] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:10.219 [2024-11-28 08:26:52.415260] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:10.219 [2024-11-28 08:26:52.415266] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0
00:28:10.220 [2024-11-28 08:26:52.415281] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:10.220 qpair failed and we were unable to recover it.
00:28:10.220 [2024-11-28 08:26:52.425202] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:10.220 [2024-11-28 08:26:52.425261] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:10.220 [2024-11-28 08:26:52.425276] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:10.220 [2024-11-28 08:26:52.425283] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:10.220 [2024-11-28 08:26:52.425289] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0
00:28:10.220 [2024-11-28 08:26:52.425304] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:10.220 qpair failed and we were unable to recover it.
00:28:10.220 [2024-11-28 08:26:52.435228] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:10.220 [2024-11-28 08:26:52.435300] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:10.220 [2024-11-28 08:26:52.435318] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:10.220 [2024-11-28 08:26:52.435325] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:10.220 [2024-11-28 08:26:52.435331] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0
00:28:10.220 [2024-11-28 08:26:52.435346] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:10.220 qpair failed and we were unable to recover it.
00:28:10.220 [2024-11-28 08:26:52.445257] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:10.220 [2024-11-28 08:26:52.445314] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:10.220 [2024-11-28 08:26:52.445328] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:10.220 [2024-11-28 08:26:52.445335] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:10.220 [2024-11-28 08:26:52.445341] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0
00:28:10.220 [2024-11-28 08:26:52.445355] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:10.220 qpair failed and we were unable to recover it.
00:28:10.220 [2024-11-28 08:26:52.455340] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:10.220 [2024-11-28 08:26:52.455445] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:10.220 [2024-11-28 08:26:52.455460] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:10.220 [2024-11-28 08:26:52.455467] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:10.220 [2024-11-28 08:26:52.455473] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0
00:28:10.220 [2024-11-28 08:26:52.455488] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:10.220 qpair failed and we were unable to recover it.
00:28:10.220 [2024-11-28 08:26:52.465290] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:10.220 [2024-11-28 08:26:52.465361] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:10.220 [2024-11-28 08:26:52.465375] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:10.220 [2024-11-28 08:26:52.465382] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:10.220 [2024-11-28 08:26:52.465388] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0
00:28:10.220 [2024-11-28 08:26:52.465404] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:10.220 qpair failed and we were unable to recover it.
00:28:10.220 [2024-11-28 08:26:52.475338] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:10.220 [2024-11-28 08:26:52.475398] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:10.220 [2024-11-28 08:26:52.475411] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:10.220 [2024-11-28 08:26:52.475418] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:10.220 [2024-11-28 08:26:52.475424] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0
00:28:10.220 [2024-11-28 08:26:52.475439] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:10.220 qpair failed and we were unable to recover it.
00:28:10.482 [2024-11-28 08:26:52.485362] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:10.482 [2024-11-28 08:26:52.485418] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:10.482 [2024-11-28 08:26:52.485432] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:10.482 [2024-11-28 08:26:52.485442] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:10.482 [2024-11-28 08:26:52.485448] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0
00:28:10.482 [2024-11-28 08:26:52.485463] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:10.482 qpair failed and we were unable to recover it.
00:28:10.482 [2024-11-28 08:26:52.495390] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:10.482 [2024-11-28 08:26:52.495446] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:10.482 [2024-11-28 08:26:52.495459] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:10.482 [2024-11-28 08:26:52.495466] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:10.482 [2024-11-28 08:26:52.495472] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0
00:28:10.482 [2024-11-28 08:26:52.495485] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:10.482 qpair failed and we were unable to recover it.
00:28:10.482 [2024-11-28 08:26:52.505435] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:10.482 [2024-11-28 08:26:52.505495] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:10.482 [2024-11-28 08:26:52.505509] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:10.482 [2024-11-28 08:26:52.505515] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:10.482 [2024-11-28 08:26:52.505521] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0
00:28:10.482 [2024-11-28 08:26:52.505536] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:10.482 qpair failed and we were unable to recover it.
00:28:10.482 [2024-11-28 08:26:52.515480] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:10.482 [2024-11-28 08:26:52.515542] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:10.482 [2024-11-28 08:26:52.515557] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:10.482 [2024-11-28 08:26:52.515564] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:10.482 [2024-11-28 08:26:52.515570] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0
00:28:10.482 [2024-11-28 08:26:52.515584] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:10.482 qpair failed and we were unable to recover it.
00:28:10.482 [2024-11-28 08:26:52.525498] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:10.482 [2024-11-28 08:26:52.525547] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:10.482 [2024-11-28 08:26:52.525562] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:10.482 [2024-11-28 08:26:52.525570] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:10.482 [2024-11-28 08:26:52.525575] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0
00:28:10.482 [2024-11-28 08:26:52.525593] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:10.482 qpair failed and we were unable to recover it.
00:28:10.482 [2024-11-28 08:26:52.535530] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:10.482 [2024-11-28 08:26:52.535607] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:10.482 [2024-11-28 08:26:52.535622] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:10.482 [2024-11-28 08:26:52.535629] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:10.482 [2024-11-28 08:26:52.535636] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0
00:28:10.482 [2024-11-28 08:26:52.535651] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:10.482 qpair failed and we were unable to recover it.
00:28:10.482 [2024-11-28 08:26:52.545555] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:10.482 [2024-11-28 08:26:52.545607] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:10.482 [2024-11-28 08:26:52.545621] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:10.482 [2024-11-28 08:26:52.545628] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:10.482 [2024-11-28 08:26:52.545634] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0
00:28:10.482 [2024-11-28 08:26:52.545648] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:10.482 qpair failed and we were unable to recover it.
00:28:10.482 [2024-11-28 08:26:52.555613] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:10.483 [2024-11-28 08:26:52.555680] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:10.483 [2024-11-28 08:26:52.555695] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:10.483 [2024-11-28 08:26:52.555702] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:10.483 [2024-11-28 08:26:52.555709] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0
00:28:10.483 [2024-11-28 08:26:52.555724] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:10.483 qpair failed and we were unable to recover it.
00:28:10.483 [2024-11-28 08:26:52.565632] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:10.483 [2024-11-28 08:26:52.565689] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:10.483 [2024-11-28 08:26:52.565702] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:10.483 [2024-11-28 08:26:52.565709] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:10.483 [2024-11-28 08:26:52.565715] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0
00:28:10.483 [2024-11-28 08:26:52.565730] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:10.483 qpair failed and we were unable to recover it.
00:28:10.483 [2024-11-28 08:26:52.575648] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:10.483 [2024-11-28 08:26:52.575712] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:10.483 [2024-11-28 08:26:52.575727] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:10.483 [2024-11-28 08:26:52.575734] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:10.483 [2024-11-28 08:26:52.575740] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0
00:28:10.483 [2024-11-28 08:26:52.575755] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:10.483 qpair failed and we were unable to recover it.
00:28:10.483 [2024-11-28 08:26:52.585667] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:10.483 [2024-11-28 08:26:52.585725] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:10.483 [2024-11-28 08:26:52.585740] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:10.483 [2024-11-28 08:26:52.585746] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:10.483 [2024-11-28 08:26:52.585752] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0
00:28:10.483 [2024-11-28 08:26:52.585766] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:10.483 qpair failed and we were unable to recover it.
00:28:10.483 [2024-11-28 08:26:52.595685] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:10.483 [2024-11-28 08:26:52.595740] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:10.483 [2024-11-28 08:26:52.595755] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:10.483 [2024-11-28 08:26:52.595761] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:10.483 [2024-11-28 08:26:52.595767] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0
00:28:10.483 [2024-11-28 08:26:52.595781] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:10.483 qpair failed and we were unable to recover it.
00:28:10.483 [2024-11-28 08:26:52.605710] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:10.483 [2024-11-28 08:26:52.605778] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:10.483 [2024-11-28 08:26:52.605793] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:10.483 [2024-11-28 08:26:52.605800] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:10.483 [2024-11-28 08:26:52.605806] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0
00:28:10.483 [2024-11-28 08:26:52.605821] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:10.483 qpair failed and we were unable to recover it.
00:28:10.483 [2024-11-28 08:26:52.615753] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:10.483 [2024-11-28 08:26:52.615813] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:10.483 [2024-11-28 08:26:52.615827] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:10.483 [2024-11-28 08:26:52.615837] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:10.483 [2024-11-28 08:26:52.615843] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0 00:28:10.483 [2024-11-28 08:26:52.615858] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:10.483 qpair failed and we were unable to recover it. 
00:28:10.483 [2024-11-28 08:26:52.625774] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:10.483 [2024-11-28 08:26:52.625829] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:10.483 [2024-11-28 08:26:52.625844] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:10.483 [2024-11-28 08:26:52.625851] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:10.483 [2024-11-28 08:26:52.625858] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0 00:28:10.483 [2024-11-28 08:26:52.625873] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:10.483 qpair failed and we were unable to recover it. 
00:28:10.483 [2024-11-28 08:26:52.635803] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:10.483 [2024-11-28 08:26:52.635858] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:10.483 [2024-11-28 08:26:52.635873] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:10.483 [2024-11-28 08:26:52.635880] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:10.483 [2024-11-28 08:26:52.635886] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0 00:28:10.483 [2024-11-28 08:26:52.635900] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:10.483 qpair failed and we were unable to recover it. 
00:28:10.483 [2024-11-28 08:26:52.645844] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:10.483 [2024-11-28 08:26:52.645903] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:10.483 [2024-11-28 08:26:52.645917] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:10.483 [2024-11-28 08:26:52.645924] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:10.483 [2024-11-28 08:26:52.645929] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0 00:28:10.483 [2024-11-28 08:26:52.645944] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:10.483 qpair failed and we were unable to recover it. 
00:28:10.483 [2024-11-28 08:26:52.655869] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:10.483 [2024-11-28 08:26:52.655970] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:10.483 [2024-11-28 08:26:52.655985] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:10.483 [2024-11-28 08:26:52.655992] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:10.483 [2024-11-28 08:26:52.655999] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0 00:28:10.483 [2024-11-28 08:26:52.656017] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:10.483 qpair failed and we were unable to recover it. 
00:28:10.483 [2024-11-28 08:26:52.665940] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:10.483 [2024-11-28 08:26:52.666004] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:10.483 [2024-11-28 08:26:52.666018] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:10.483 [2024-11-28 08:26:52.666025] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:10.483 [2024-11-28 08:26:52.666031] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0 00:28:10.483 [2024-11-28 08:26:52.666046] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:10.483 qpair failed and we were unable to recover it. 
00:28:10.483 [2024-11-28 08:26:52.675922] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:10.483 [2024-11-28 08:26:52.675981] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:10.483 [2024-11-28 08:26:52.675995] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:10.483 [2024-11-28 08:26:52.676002] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:10.483 [2024-11-28 08:26:52.676009] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0 00:28:10.483 [2024-11-28 08:26:52.676023] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:10.483 qpair failed and we were unable to recover it. 
00:28:10.483 [2024-11-28 08:26:52.685954] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:10.484 [2024-11-28 08:26:52.686061] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:10.484 [2024-11-28 08:26:52.686076] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:10.484 [2024-11-28 08:26:52.686083] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:10.484 [2024-11-28 08:26:52.686089] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0 00:28:10.484 [2024-11-28 08:26:52.686104] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:10.484 qpair failed and we were unable to recover it. 
00:28:10.484 [2024-11-28 08:26:52.696006] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:10.484 [2024-11-28 08:26:52.696062] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:10.484 [2024-11-28 08:26:52.696076] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:10.484 [2024-11-28 08:26:52.696083] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:10.484 [2024-11-28 08:26:52.696089] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0 00:28:10.484 [2024-11-28 08:26:52.696104] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:10.484 qpair failed and we were unable to recover it. 
00:28:10.484 [2024-11-28 08:26:52.706012] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:10.484 [2024-11-28 08:26:52.706079] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:10.484 [2024-11-28 08:26:52.706093] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:10.484 [2024-11-28 08:26:52.706100] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:10.484 [2024-11-28 08:26:52.706106] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0 00:28:10.484 [2024-11-28 08:26:52.706121] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:10.484 qpair failed and we were unable to recover it. 
00:28:10.484 [2024-11-28 08:26:52.716072] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:10.484 [2024-11-28 08:26:52.716134] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:10.484 [2024-11-28 08:26:52.716148] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:10.484 [2024-11-28 08:26:52.716155] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:10.484 [2024-11-28 08:26:52.716161] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0 00:28:10.484 [2024-11-28 08:26:52.716176] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:10.484 qpair failed and we were unable to recover it. 
00:28:10.484 [2024-11-28 08:26:52.726052] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:10.484 [2024-11-28 08:26:52.726113] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:10.484 [2024-11-28 08:26:52.726129] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:10.484 [2024-11-28 08:26:52.726136] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:10.484 [2024-11-28 08:26:52.726142] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0 00:28:10.484 [2024-11-28 08:26:52.726157] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:10.484 qpair failed and we were unable to recover it. 
00:28:10.484 [2024-11-28 08:26:52.736109] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:10.484 [2024-11-28 08:26:52.736183] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:10.484 [2024-11-28 08:26:52.736200] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:10.484 [2024-11-28 08:26:52.736208] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:10.484 [2024-11-28 08:26:52.736214] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0 00:28:10.484 [2024-11-28 08:26:52.736228] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:10.484 qpair failed and we were unable to recover it. 
00:28:10.484 [2024-11-28 08:26:52.746152] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:10.484 [2024-11-28 08:26:52.746213] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:10.484 [2024-11-28 08:26:52.746227] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:10.484 [2024-11-28 08:26:52.746236] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:10.484 [2024-11-28 08:26:52.746242] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0 00:28:10.484 [2024-11-28 08:26:52.746256] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:10.484 qpair failed and we were unable to recover it. 
00:28:10.745 [2024-11-28 08:26:52.756198] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:10.745 [2024-11-28 08:26:52.756275] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:10.745 [2024-11-28 08:26:52.756290] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:10.745 [2024-11-28 08:26:52.756297] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:10.745 [2024-11-28 08:26:52.756303] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0 00:28:10.745 [2024-11-28 08:26:52.756318] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:10.745 qpair failed and we were unable to recover it. 
00:28:10.745 [2024-11-28 08:26:52.766215] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:10.745 [2024-11-28 08:26:52.766275] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:10.745 [2024-11-28 08:26:52.766290] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:10.745 [2024-11-28 08:26:52.766296] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:10.745 [2024-11-28 08:26:52.766303] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0 00:28:10.745 [2024-11-28 08:26:52.766317] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:10.745 qpair failed and we were unable to recover it. 
00:28:10.745 [2024-11-28 08:26:52.776200] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:10.745 [2024-11-28 08:26:52.776258] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:10.745 [2024-11-28 08:26:52.776273] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:10.745 [2024-11-28 08:26:52.776280] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:10.745 [2024-11-28 08:26:52.776286] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0 00:28:10.745 [2024-11-28 08:26:52.776301] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:10.745 qpair failed and we were unable to recover it. 
00:28:10.745 [2024-11-28 08:26:52.786306] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:10.745 [2024-11-28 08:26:52.786383] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:10.745 [2024-11-28 08:26:52.786397] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:10.745 [2024-11-28 08:26:52.786403] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:10.745 [2024-11-28 08:26:52.786409] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0 00:28:10.745 [2024-11-28 08:26:52.786428] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:10.745 qpair failed and we were unable to recover it. 
00:28:10.745 [2024-11-28 08:26:52.796281] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:10.745 [2024-11-28 08:26:52.796343] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:10.745 [2024-11-28 08:26:52.796358] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:10.745 [2024-11-28 08:26:52.796365] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:10.745 [2024-11-28 08:26:52.796371] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0 00:28:10.745 [2024-11-28 08:26:52.796385] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:10.745 qpair failed and we were unable to recover it. 
00:28:10.745 [2024-11-28 08:26:52.806294] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:10.745 [2024-11-28 08:26:52.806353] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:10.745 [2024-11-28 08:26:52.806367] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:10.746 [2024-11-28 08:26:52.806374] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:10.746 [2024-11-28 08:26:52.806380] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0 00:28:10.746 [2024-11-28 08:26:52.806395] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:10.746 qpair failed and we were unable to recover it. 
00:28:10.746 [2024-11-28 08:26:52.816299] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:10.746 [2024-11-28 08:26:52.816405] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:10.746 [2024-11-28 08:26:52.816420] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:10.746 [2024-11-28 08:26:52.816427] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:10.746 [2024-11-28 08:26:52.816433] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0 00:28:10.746 [2024-11-28 08:26:52.816447] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:10.746 qpair failed and we were unable to recover it. 
00:28:10.746 [2024-11-28 08:26:52.826334] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:10.746 [2024-11-28 08:26:52.826392] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:10.746 [2024-11-28 08:26:52.826407] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:10.746 [2024-11-28 08:26:52.826414] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:10.746 [2024-11-28 08:26:52.826420] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0 00:28:10.746 [2024-11-28 08:26:52.826434] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:10.746 qpair failed and we were unable to recover it. 
00:28:10.746 [2024-11-28 08:26:52.836428] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:10.746 [2024-11-28 08:26:52.836510] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:10.746 [2024-11-28 08:26:52.836525] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:10.746 [2024-11-28 08:26:52.836532] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:10.746 [2024-11-28 08:26:52.836538] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0 00:28:10.746 [2024-11-28 08:26:52.836552] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:10.746 qpair failed and we were unable to recover it. 
00:28:10.746 [2024-11-28 08:26:52.846425] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:10.746 [2024-11-28 08:26:52.846480] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:10.746 [2024-11-28 08:26:52.846495] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:10.746 [2024-11-28 08:26:52.846502] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:10.746 [2024-11-28 08:26:52.846508] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0 00:28:10.746 [2024-11-28 08:26:52.846523] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:10.746 qpair failed and we were unable to recover it. 
00:28:10.746 [2024-11-28 08:26:52.856441] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:10.746 [2024-11-28 08:26:52.856542] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:10.746 [2024-11-28 08:26:52.856557] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:10.746 [2024-11-28 08:26:52.856564] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:10.746 [2024-11-28 08:26:52.856570] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0 00:28:10.746 [2024-11-28 08:26:52.856585] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:10.746 qpair failed and we were unable to recover it. 
00:28:10.746 [2024-11-28 08:26:52.866473] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:10.746 [2024-11-28 08:26:52.866533] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:10.746 [2024-11-28 08:26:52.866547] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:10.746 [2024-11-28 08:26:52.866554] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:10.746 [2024-11-28 08:26:52.866560] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0 00:28:10.746 [2024-11-28 08:26:52.866575] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:10.746 qpair failed and we were unable to recover it. 
00:28:10.746 [2024-11-28 08:26:52.876433] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:10.746 [2024-11-28 08:26:52.876487] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:10.746 [2024-11-28 08:26:52.876502] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:10.746 [2024-11-28 08:26:52.876512] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:10.746 [2024-11-28 08:26:52.876518] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0 00:28:10.746 [2024-11-28 08:26:52.876532] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:10.746 qpair failed and we were unable to recover it. 
00:28:10.746 [2024-11-28 08:26:52.886601] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:10.746 [2024-11-28 08:26:52.886691] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:10.746 [2024-11-28 08:26:52.886706] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:10.746 [2024-11-28 08:26:52.886713] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:10.746 [2024-11-28 08:26:52.886719] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0 00:28:10.746 [2024-11-28 08:26:52.886734] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:10.746 qpair failed and we were unable to recover it. 
00:28:10.746 [2024-11-28 08:26:52.896572] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:10.746 [2024-11-28 08:26:52.896636] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:10.746 [2024-11-28 08:26:52.896649] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:10.746 [2024-11-28 08:26:52.896656] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:10.746 [2024-11-28 08:26:52.896662] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0 00:28:10.746 [2024-11-28 08:26:52.896676] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:10.746 qpair failed and we were unable to recover it. 
00:28:10.746 [2024-11-28 08:26:52.906591] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:10.746 [2024-11-28 08:26:52.906649] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:10.746 [2024-11-28 08:26:52.906664] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:10.746 [2024-11-28 08:26:52.906672] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:10.746 [2024-11-28 08:26:52.906678] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0 00:28:10.746 [2024-11-28 08:26:52.906693] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:10.746 qpair failed and we were unable to recover it. 
00:28:10.746 [2024-11-28 08:26:52.916617] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:10.746 [2024-11-28 08:26:52.916690] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:10.746 [2024-11-28 08:26:52.916705] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:10.746 [2024-11-28 08:26:52.916712] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:10.746 [2024-11-28 08:26:52.916718] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0 00:28:10.746 [2024-11-28 08:26:52.916735] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:10.746 qpair failed and we were unable to recover it. 
00:28:10.746 [2024-11-28 08:26:52.926631] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:10.746 [2024-11-28 08:26:52.926690] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:10.746 [2024-11-28 08:26:52.926704] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:10.746 [2024-11-28 08:26:52.926711] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:10.746 [2024-11-28 08:26:52.926717] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0 00:28:10.746 [2024-11-28 08:26:52.926732] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:10.746 qpair failed and we were unable to recover it. 
00:28:10.746 [2024-11-28 08:26:52.936721] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:10.746 [2024-11-28 08:26:52.936781] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:10.746 [2024-11-28 08:26:52.936795] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:10.747 [2024-11-28 08:26:52.936802] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:10.747 [2024-11-28 08:26:52.936807] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0 00:28:10.747 [2024-11-28 08:26:52.936822] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:10.747 qpair failed and we were unable to recover it. 
00:28:10.747 [2024-11-28 08:26:52.946694] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:10.747 [2024-11-28 08:26:52.946864] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:10.747 [2024-11-28 08:26:52.946881] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:10.747 [2024-11-28 08:26:52.946888] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:10.747 [2024-11-28 08:26:52.946895] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0 00:28:10.747 [2024-11-28 08:26:52.946910] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:10.747 qpair failed and we were unable to recover it. 
00:28:10.747 [2024-11-28 08:26:52.956649] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:10.747 [2024-11-28 08:26:52.956704] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:10.747 [2024-11-28 08:26:52.956719] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:10.747 [2024-11-28 08:26:52.956726] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:10.747 [2024-11-28 08:26:52.956731] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0 00:28:10.747 [2024-11-28 08:26:52.956745] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:10.747 qpair failed and we were unable to recover it. 
00:28:10.747 [2024-11-28 08:26:52.966746] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:10.747 [2024-11-28 08:26:52.966810] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:10.747 [2024-11-28 08:26:52.966824] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:10.747 [2024-11-28 08:26:52.966831] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:10.747 [2024-11-28 08:26:52.966836] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0 00:28:10.747 [2024-11-28 08:26:52.966850] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:10.747 qpair failed and we were unable to recover it. 
00:28:10.747 [2024-11-28 08:26:52.976753] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:10.747 [2024-11-28 08:26:52.976811] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:10.747 [2024-11-28 08:26:52.976826] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:10.747 [2024-11-28 08:26:52.976833] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:10.747 [2024-11-28 08:26:52.976839] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0 00:28:10.747 [2024-11-28 08:26:52.976854] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:10.747 qpair failed and we were unable to recover it. 
00:28:10.747 [2024-11-28 08:26:52.986864] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:10.747 [2024-11-28 08:26:52.986934] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:10.747 [2024-11-28 08:26:52.986953] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:10.747 [2024-11-28 08:26:52.986960] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:10.747 [2024-11-28 08:26:52.986966] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0 00:28:10.747 [2024-11-28 08:26:52.986981] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:10.747 qpair failed and we were unable to recover it. 
00:28:10.747 [2024-11-28 08:26:52.996895] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:10.747 [2024-11-28 08:26:52.996958] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:10.747 [2024-11-28 08:26:52.996973] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:10.747 [2024-11-28 08:26:52.996980] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:10.747 [2024-11-28 08:26:52.996986] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0 00:28:10.747 [2024-11-28 08:26:52.997001] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:10.747 qpair failed and we were unable to recover it. 
00:28:10.747 [2024-11-28 08:26:53.006902] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:10.747 [2024-11-28 08:26:53.006969] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:10.747 [2024-11-28 08:26:53.006984] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:10.747 [2024-11-28 08:26:53.006994] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:10.747 [2024-11-28 08:26:53.007001] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0 00:28:10.747 [2024-11-28 08:26:53.007020] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:10.747 qpair failed and we were unable to recover it. 
00:28:11.009 [2024-11-28 08:26:53.016915] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:11.009 [2024-11-28 08:26:53.016989] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:11.009 [2024-11-28 08:26:53.017004] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:11.009 [2024-11-28 08:26:53.017011] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:11.009 [2024-11-28 08:26:53.017017] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0 00:28:11.009 [2024-11-28 08:26:53.017032] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:11.009 qpair failed and we were unable to recover it. 
00:28:11.009 [2024-11-28 08:26:53.027020] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:11.009 [2024-11-28 08:26:53.027086] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:11.009 [2024-11-28 08:26:53.027101] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:11.009 [2024-11-28 08:26:53.027107] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:11.009 [2024-11-28 08:26:53.027113] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0 00:28:11.009 [2024-11-28 08:26:53.027129] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:11.009 qpair failed and we were unable to recover it. 
00:28:11.009 [2024-11-28 08:26:53.036942] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:11.009 [2024-11-28 08:26:53.037010] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:11.009 [2024-11-28 08:26:53.037024] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:11.009 [2024-11-28 08:26:53.037031] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:11.009 [2024-11-28 08:26:53.037038] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0 00:28:11.009 [2024-11-28 08:26:53.037053] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:11.009 qpair failed and we were unable to recover it. 
00:28:11.009 [2024-11-28 08:26:53.046965] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:11.009 [2024-11-28 08:26:53.047062] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:11.009 [2024-11-28 08:26:53.047193] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:11.009 [2024-11-28 08:26:53.047202] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:11.009 [2024-11-28 08:26:53.047209] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0 00:28:11.009 [2024-11-28 08:26:53.047269] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:11.009 qpair failed and we were unable to recover it. 
00:28:11.009 [2024-11-28 08:26:53.057022] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:11.009 [2024-11-28 08:26:53.057105] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:11.009 [2024-11-28 08:26:53.057120] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:11.009 [2024-11-28 08:26:53.057128] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:11.009 [2024-11-28 08:26:53.057134] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0 00:28:11.009 [2024-11-28 08:26:53.057150] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:11.009 qpair failed and we were unable to recover it. 
00:28:11.009 [2024-11-28 08:26:53.067079] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:11.009 [2024-11-28 08:26:53.067138] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:11.009 [2024-11-28 08:26:53.067152] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:11.009 [2024-11-28 08:26:53.067159] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:11.009 [2024-11-28 08:26:53.067165] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0 00:28:11.009 [2024-11-28 08:26:53.067180] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:11.009 qpair failed and we were unable to recover it. 
00:28:11.009 [2024-11-28 08:26:53.077087] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:11.009 [2024-11-28 08:26:53.077145] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:11.009 [2024-11-28 08:26:53.077160] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:11.009 [2024-11-28 08:26:53.077167] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:11.009 [2024-11-28 08:26:53.077173] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0 00:28:11.009 [2024-11-28 08:26:53.077187] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:11.009 qpair failed and we were unable to recover it. 
00:28:11.009 [2024-11-28 08:26:53.087035] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:11.009 [2024-11-28 08:26:53.087095] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:11.009 [2024-11-28 08:26:53.087109] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:11.009 [2024-11-28 08:26:53.087117] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:11.009 [2024-11-28 08:26:53.087123] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0 00:28:11.009 [2024-11-28 08:26:53.087137] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:11.009 qpair failed and we were unable to recover it. 
00:28:11.009 [2024-11-28 08:26:53.097123] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:11.009 [2024-11-28 08:26:53.097189] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:11.009 [2024-11-28 08:26:53.097203] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:11.009 [2024-11-28 08:26:53.097209] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:11.009 [2024-11-28 08:26:53.097215] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0 00:28:11.009 [2024-11-28 08:26:53.097230] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:11.009 qpair failed and we were unable to recover it. 
00:28:11.009 [2024-11-28 08:26:53.107173] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:11.009 [2024-11-28 08:26:53.107232] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:11.009 [2024-11-28 08:26:53.107247] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:11.009 [2024-11-28 08:26:53.107254] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:11.009 [2024-11-28 08:26:53.107260] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0 00:28:11.009 [2024-11-28 08:26:53.107274] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:11.009 qpair failed and we were unable to recover it. 
00:28:11.009 [2024-11-28 08:26:53.117204] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:11.009 [2024-11-28 08:26:53.117258] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:11.009 [2024-11-28 08:26:53.117273] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:11.009 [2024-11-28 08:26:53.117280] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:11.009 [2024-11-28 08:26:53.117285] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0 00:28:11.009 [2024-11-28 08:26:53.117300] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:11.009 qpair failed and we were unable to recover it. 
00:28:11.009 [2024-11-28 08:26:53.127215] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:11.009 [2024-11-28 08:26:53.127274] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:11.009 [2024-11-28 08:26:53.127289] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:11.009 [2024-11-28 08:26:53.127296] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:11.010 [2024-11-28 08:26:53.127303] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0 00:28:11.010 [2024-11-28 08:26:53.127318] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:11.010 qpair failed and we were unable to recover it. 
00:28:11.010 [2024-11-28 08:26:53.137249] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:11.010 [2024-11-28 08:26:53.137308] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:11.010 [2024-11-28 08:26:53.137323] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:11.010 [2024-11-28 08:26:53.137334] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:11.010 [2024-11-28 08:26:53.137340] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0 00:28:11.010 [2024-11-28 08:26:53.137355] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:11.010 qpair failed and we were unable to recover it. 
00:28:11.010 [2024-11-28 08:26:53.147254] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:11.010 [2024-11-28 08:26:53.147308] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:11.010 [2024-11-28 08:26:53.147322] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:11.010 [2024-11-28 08:26:53.147329] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:11.010 [2024-11-28 08:26:53.147335] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0 00:28:11.010 [2024-11-28 08:26:53.147349] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:11.010 qpair failed and we were unable to recover it. 
00:28:11.010 [2024-11-28 08:26:53.157301] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:11.010 [2024-11-28 08:26:53.157370] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:11.010 [2024-11-28 08:26:53.157385] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:11.010 [2024-11-28 08:26:53.157393] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:11.010 [2024-11-28 08:26:53.157399] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0 00:28:11.010 [2024-11-28 08:26:53.157418] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:11.010 qpair failed and we were unable to recover it. 
00:28:11.010 [2024-11-28 08:26:53.167332] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:11.010 [2024-11-28 08:26:53.167387] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:11.010 [2024-11-28 08:26:53.167401] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:11.010 [2024-11-28 08:26:53.167408] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:11.010 [2024-11-28 08:26:53.167414] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0 00:28:11.010 [2024-11-28 08:26:53.167428] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:11.010 qpair failed and we were unable to recover it. 
00:28:11.010 [2024-11-28 08:26:53.177318] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:11.010 [2024-11-28 08:26:53.177375] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:11.010 [2024-11-28 08:26:53.177389] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:11.010 [2024-11-28 08:26:53.177396] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:11.010 [2024-11-28 08:26:53.177402] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0 00:28:11.010 [2024-11-28 08:26:53.177420] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:11.010 qpair failed and we were unable to recover it. 
00:28:11.010 [2024-11-28 08:26:53.187401] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:11.010 [2024-11-28 08:26:53.187513] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:11.010 [2024-11-28 08:26:53.187528] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:11.010 [2024-11-28 08:26:53.187535] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:11.010 [2024-11-28 08:26:53.187541] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0 00:28:11.010 [2024-11-28 08:26:53.187555] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:11.010 qpair failed and we were unable to recover it. 
00:28:11.010 [2024-11-28 08:26:53.197425] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:11.010 [2024-11-28 08:26:53.197483] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:11.010 [2024-11-28 08:26:53.197499] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:11.010 [2024-11-28 08:26:53.197507] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:11.010 [2024-11-28 08:26:53.197513] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0 00:28:11.010 [2024-11-28 08:26:53.197528] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:11.010 qpair failed and we were unable to recover it. 
00:28:11.010 [2024-11-28 08:26:53.207505] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:11.010 [2024-11-28 08:26:53.207568] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:11.010 [2024-11-28 08:26:53.207583] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:11.010 [2024-11-28 08:26:53.207590] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:11.010 [2024-11-28 08:26:53.207597] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0 00:28:11.010 [2024-11-28 08:26:53.207611] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:11.010 qpair failed and we were unable to recover it. 
00:28:11.010 [2024-11-28 08:26:53.217502] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:11.010 [2024-11-28 08:26:53.217571] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:11.010 [2024-11-28 08:26:53.217585] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:11.010 [2024-11-28 08:26:53.217592] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:11.010 [2024-11-28 08:26:53.217599] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0 00:28:11.010 [2024-11-28 08:26:53.217613] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:11.010 qpair failed and we were unable to recover it. 
00:28:11.010 - 00:28:11.536 [34 further retries of the same CONNECT attempt, 2024-11-28 08:26:53.227 through 08:26:53.558 at ~10 ms intervals, each failing with the identical error sequence: Unknown controller ID 0x1; Connect command failed, rc -5; sct 1, sc 130; Failed to connect tqpair=0x991be0; CQ transport error -6 (No such device or address) on qpair id 3; qpair failed and we were unable to recover it. Repeated entries omitted.]
00:28:11.536 [2024-11-28 08:26:53.568518] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:11.536 [2024-11-28 08:26:53.568584] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:11.536 [2024-11-28 08:26:53.568598] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:11.536 [2024-11-28 08:26:53.568605] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:11.536 [2024-11-28 08:26:53.568611] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0 00:28:11.536 [2024-11-28 08:26:53.568632] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:11.537 qpair failed and we were unable to recover it. 
00:28:11.537 [2024-11-28 08:26:53.578516] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:11.537 [2024-11-28 08:26:53.578574] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:11.537 [2024-11-28 08:26:53.578588] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:11.537 [2024-11-28 08:26:53.578595] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:11.537 [2024-11-28 08:26:53.578601] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0 00:28:11.537 [2024-11-28 08:26:53.578615] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:11.537 qpair failed and we were unable to recover it. 
00:28:11.537 [2024-11-28 08:26:53.588560] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:11.537 [2024-11-28 08:26:53.588622] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:11.537 [2024-11-28 08:26:53.588636] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:11.537 [2024-11-28 08:26:53.588643] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:11.537 [2024-11-28 08:26:53.588649] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0 00:28:11.537 [2024-11-28 08:26:53.588663] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:11.537 qpair failed and we were unable to recover it. 
00:28:11.537 [2024-11-28 08:26:53.598567] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:11.537 [2024-11-28 08:26:53.598628] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:11.537 [2024-11-28 08:26:53.598642] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:11.537 [2024-11-28 08:26:53.598649] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:11.537 [2024-11-28 08:26:53.598655] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0 00:28:11.537 [2024-11-28 08:26:53.598669] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:11.537 qpair failed and we were unable to recover it. 
00:28:11.537 [2024-11-28 08:26:53.608576] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:11.537 [2024-11-28 08:26:53.608631] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:11.537 [2024-11-28 08:26:53.608645] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:11.537 [2024-11-28 08:26:53.608652] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:11.537 [2024-11-28 08:26:53.608657] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0 00:28:11.537 [2024-11-28 08:26:53.608672] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:11.537 qpair failed and we were unable to recover it. 
00:28:11.537 [2024-11-28 08:26:53.618617] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:11.537 [2024-11-28 08:26:53.618681] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:11.537 [2024-11-28 08:26:53.618696] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:11.537 [2024-11-28 08:26:53.618702] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:11.537 [2024-11-28 08:26:53.618708] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0 00:28:11.537 [2024-11-28 08:26:53.618723] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:11.537 qpair failed and we were unable to recover it. 
00:28:11.537 [2024-11-28 08:26:53.628691] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:11.537 [2024-11-28 08:26:53.628751] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:11.537 [2024-11-28 08:26:53.628766] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:11.537 [2024-11-28 08:26:53.628773] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:11.537 [2024-11-28 08:26:53.628780] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0 00:28:11.537 [2024-11-28 08:26:53.628794] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:11.537 qpair failed and we were unable to recover it. 
00:28:11.537 [2024-11-28 08:26:53.638661] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:11.537 [2024-11-28 08:26:53.638717] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:11.537 [2024-11-28 08:26:53.638731] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:11.537 [2024-11-28 08:26:53.638738] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:11.537 [2024-11-28 08:26:53.638745] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0 00:28:11.537 [2024-11-28 08:26:53.638760] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:11.537 qpair failed and we were unable to recover it. 
00:28:11.537 [2024-11-28 08:26:53.648680] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:11.537 [2024-11-28 08:26:53.648738] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:11.537 [2024-11-28 08:26:53.648753] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:11.537 [2024-11-28 08:26:53.648760] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:11.537 [2024-11-28 08:26:53.648766] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0 00:28:11.537 [2024-11-28 08:26:53.648780] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:11.537 qpair failed and we were unable to recover it. 
00:28:11.537 [2024-11-28 08:26:53.658751] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:11.537 [2024-11-28 08:26:53.658819] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:11.537 [2024-11-28 08:26:53.658834] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:11.537 [2024-11-28 08:26:53.658844] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:11.537 [2024-11-28 08:26:53.658850] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0 00:28:11.537 [2024-11-28 08:26:53.658865] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:11.537 qpair failed and we were unable to recover it. 
00:28:11.537 [2024-11-28 08:26:53.668752] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:11.537 [2024-11-28 08:26:53.668835] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:11.537 [2024-11-28 08:26:53.668850] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:11.537 [2024-11-28 08:26:53.668857] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:11.537 [2024-11-28 08:26:53.668863] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0 00:28:11.537 [2024-11-28 08:26:53.668878] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:11.537 qpair failed and we were unable to recover it. 
00:28:11.537 [2024-11-28 08:26:53.678802] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:11.537 [2024-11-28 08:26:53.678863] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:11.537 [2024-11-28 08:26:53.678878] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:11.537 [2024-11-28 08:26:53.678885] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:11.537 [2024-11-28 08:26:53.678891] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0 00:28:11.537 [2024-11-28 08:26:53.678906] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:11.537 qpair failed and we were unable to recover it. 
00:28:11.537 [2024-11-28 08:26:53.688869] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:11.537 [2024-11-28 08:26:53.688954] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:11.537 [2024-11-28 08:26:53.688972] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:11.537 [2024-11-28 08:26:53.688979] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:11.537 [2024-11-28 08:26:53.688985] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0 00:28:11.537 [2024-11-28 08:26:53.689000] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:11.537 qpair failed and we were unable to recover it. 
00:28:11.537 [2024-11-28 08:26:53.698818] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:11.537 [2024-11-28 08:26:53.698875] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:11.537 [2024-11-28 08:26:53.698890] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:11.537 [2024-11-28 08:26:53.698897] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:11.537 [2024-11-28 08:26:53.698903] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0 00:28:11.537 [2024-11-28 08:26:53.698917] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:11.537 qpair failed and we were unable to recover it. 
00:28:11.537 [2024-11-28 08:26:53.708857] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:11.538 [2024-11-28 08:26:53.708919] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:11.538 [2024-11-28 08:26:53.708935] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:11.538 [2024-11-28 08:26:53.708941] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:11.538 [2024-11-28 08:26:53.708951] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0 00:28:11.538 [2024-11-28 08:26:53.708967] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:11.538 qpair failed and we were unable to recover it. 
00:28:11.538 [2024-11-28 08:26:53.718858] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:11.538 [2024-11-28 08:26:53.718910] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:11.538 [2024-11-28 08:26:53.718924] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:11.538 [2024-11-28 08:26:53.718931] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:11.538 [2024-11-28 08:26:53.718937] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0 00:28:11.538 [2024-11-28 08:26:53.718955] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:11.538 qpair failed and we were unable to recover it. 
00:28:11.538 [2024-11-28 08:26:53.728889] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:11.538 [2024-11-28 08:26:53.728991] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:11.538 [2024-11-28 08:26:53.729007] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:11.538 [2024-11-28 08:26:53.729014] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:11.538 [2024-11-28 08:26:53.729020] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0 00:28:11.538 [2024-11-28 08:26:53.729035] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:11.538 qpair failed and we were unable to recover it. 
00:28:11.538 [2024-11-28 08:26:53.738942] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:11.538 [2024-11-28 08:26:53.739009] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:11.538 [2024-11-28 08:26:53.739023] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:11.538 [2024-11-28 08:26:53.739030] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:11.538 [2024-11-28 08:26:53.739036] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0 00:28:11.538 [2024-11-28 08:26:53.739050] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:11.538 qpair failed and we were unable to recover it. 
00:28:11.538 [2024-11-28 08:26:53.748988] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:11.538 [2024-11-28 08:26:53.749074] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:11.538 [2024-11-28 08:26:53.749090] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:11.538 [2024-11-28 08:26:53.749097] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:11.538 [2024-11-28 08:26:53.749103] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0 00:28:11.538 [2024-11-28 08:26:53.749119] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:11.538 qpair failed and we were unable to recover it. 
00:28:11.538 [2024-11-28 08:26:53.758990] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:11.538 [2024-11-28 08:26:53.759045] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:11.538 [2024-11-28 08:26:53.759059] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:11.538 [2024-11-28 08:26:53.759066] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:11.538 [2024-11-28 08:26:53.759072] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0 00:28:11.538 [2024-11-28 08:26:53.759087] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:11.538 qpair failed and we were unable to recover it. 
00:28:11.538 [2024-11-28 08:26:53.769049] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:11.538 [2024-11-28 08:26:53.769107] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:11.538 [2024-11-28 08:26:53.769121] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:11.538 [2024-11-28 08:26:53.769128] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:11.538 [2024-11-28 08:26:53.769135] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0 00:28:11.538 [2024-11-28 08:26:53.769149] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:11.538 qpair failed and we were unable to recover it. 
00:28:11.538 [2024-11-28 08:26:53.779129] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:11.538 [2024-11-28 08:26:53.779195] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:11.538 [2024-11-28 08:26:53.779209] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:11.538 [2024-11-28 08:26:53.779216] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:11.538 [2024-11-28 08:26:53.779222] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0 00:28:11.538 [2024-11-28 08:26:53.779237] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:11.538 qpair failed and we were unable to recover it. 
00:28:11.538 [2024-11-28 08:26:53.789088] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:11.538 [2024-11-28 08:26:53.789146] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:11.538 [2024-11-28 08:26:53.789160] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:11.538 [2024-11-28 08:26:53.789170] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:11.538 [2024-11-28 08:26:53.789176] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0 00:28:11.538 [2024-11-28 08:26:53.789190] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:11.538 qpair failed and we were unable to recover it. 
00:28:11.538 [2024-11-28 08:26:53.799108] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:11.538 [2024-11-28 08:26:53.799167] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:11.538 [2024-11-28 08:26:53.799182] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:11.538 [2024-11-28 08:26:53.799189] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:11.538 [2024-11-28 08:26:53.799195] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0 00:28:11.538 [2024-11-28 08:26:53.799209] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:11.538 qpair failed and we were unable to recover it. 
00:28:11.798 [2024-11-28 08:26:53.809143] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:11.798 [2024-11-28 08:26:53.809194] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:11.798 [2024-11-28 08:26:53.809209] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:11.799 [2024-11-28 08:26:53.809215] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:11.799 [2024-11-28 08:26:53.809221] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0 00:28:11.799 [2024-11-28 08:26:53.809236] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:11.799 qpair failed and we were unable to recover it. 
00:28:11.799 [2024-11-28 08:26:53.819114] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:11.799 [2024-11-28 08:26:53.819169] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:11.799 [2024-11-28 08:26:53.819184] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:11.799 [2024-11-28 08:26:53.819191] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:11.799 [2024-11-28 08:26:53.819196] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0 00:28:11.799 [2024-11-28 08:26:53.819211] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:11.799 qpair failed and we were unable to recover it. 
00:28:11.799 [2024-11-28 08:26:53.829211] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:11.799 [2024-11-28 08:26:53.829270] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:11.799 [2024-11-28 08:26:53.829285] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:11.799 [2024-11-28 08:26:53.829292] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:11.799 [2024-11-28 08:26:53.829298] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0 00:28:11.799 [2024-11-28 08:26:53.829313] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:11.799 qpair failed and we were unable to recover it. 
00:28:11.799 [2024-11-28 08:26:53.839249] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:11.799 [2024-11-28 08:26:53.839330] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:11.799 [2024-11-28 08:26:53.839345] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:11.799 [2024-11-28 08:26:53.839353] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:11.799 [2024-11-28 08:26:53.839359] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0
00:28:11.799 [2024-11-28 08:26:53.839374] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:11.799 qpair failed and we were unable to recover it.
00:28:11.799 [2024-11-28 08:26:53.849308] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:11.799 [2024-11-28 08:26:53.849368] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:11.799 [2024-11-28 08:26:53.849384] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:11.799 [2024-11-28 08:26:53.849391] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:11.799 [2024-11-28 08:26:53.849397] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0
00:28:11.799 [2024-11-28 08:26:53.849412] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:11.799 qpair failed and we were unable to recover it.
00:28:11.799 [2024-11-28 08:26:53.859289] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:11.799 [2024-11-28 08:26:53.859346] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:11.799 [2024-11-28 08:26:53.859361] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:11.799 [2024-11-28 08:26:53.859368] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:11.799 [2024-11-28 08:26:53.859374] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0
00:28:11.799 [2024-11-28 08:26:53.859388] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:11.799 qpair failed and we were unable to recover it.
00:28:11.799 [2024-11-28 08:26:53.869319] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:11.799 [2024-11-28 08:26:53.869374] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:11.799 [2024-11-28 08:26:53.869389] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:11.799 [2024-11-28 08:26:53.869395] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:11.799 [2024-11-28 08:26:53.869402] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0
00:28:11.799 [2024-11-28 08:26:53.869417] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:11.799 qpair failed and we were unable to recover it.
00:28:11.799 [2024-11-28 08:26:53.879372] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:11.799 [2024-11-28 08:26:53.879432] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:11.799 [2024-11-28 08:26:53.879446] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:11.799 [2024-11-28 08:26:53.879453] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:11.799 [2024-11-28 08:26:53.879459] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0
00:28:11.799 [2024-11-28 08:26:53.879474] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:11.799 qpair failed and we were unable to recover it.
00:28:11.799 [2024-11-28 08:26:53.889398] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:11.799 [2024-11-28 08:26:53.889490] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:11.799 [2024-11-28 08:26:53.889505] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:11.799 [2024-11-28 08:26:53.889512] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:11.799 [2024-11-28 08:26:53.889518] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0
00:28:11.799 [2024-11-28 08:26:53.889533] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:11.799 qpair failed and we were unable to recover it.
00:28:11.799 [2024-11-28 08:26:53.899399] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:11.799 [2024-11-28 08:26:53.899476] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:11.799 [2024-11-28 08:26:53.899491] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:11.799 [2024-11-28 08:26:53.899497] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:11.799 [2024-11-28 08:26:53.899504] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0
00:28:11.799 [2024-11-28 08:26:53.899518] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:11.799 qpair failed and we were unable to recover it.
00:28:11.799 [2024-11-28 08:26:53.909427] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:11.799 [2024-11-28 08:26:53.909484] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:11.799 [2024-11-28 08:26:53.909500] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:11.799 [2024-11-28 08:26:53.909507] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:11.799 [2024-11-28 08:26:53.909513] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0
00:28:11.799 [2024-11-28 08:26:53.909528] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:11.799 qpair failed and we were unable to recover it.
00:28:11.799 [2024-11-28 08:26:53.919499] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:11.799 [2024-11-28 08:26:53.919557] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:11.799 [2024-11-28 08:26:53.919571] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:11.799 [2024-11-28 08:26:53.919580] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:11.799 [2024-11-28 08:26:53.919587] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0
00:28:11.799 [2024-11-28 08:26:53.919601] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:11.799 qpair failed and we were unable to recover it.
00:28:11.799 [2024-11-28 08:26:53.929489] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:11.799 [2024-11-28 08:26:53.929544] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:11.799 [2024-11-28 08:26:53.929560] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:11.799 [2024-11-28 08:26:53.929567] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:11.799 [2024-11-28 08:26:53.929573] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0
00:28:11.799 [2024-11-28 08:26:53.929587] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:11.799 qpair failed and we were unable to recover it.
00:28:11.799 [2024-11-28 08:26:53.939545] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:11.799 [2024-11-28 08:26:53.939622] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:11.800 [2024-11-28 08:26:53.939638] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:11.800 [2024-11-28 08:26:53.939645] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:11.800 [2024-11-28 08:26:53.939651] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0
00:28:11.800 [2024-11-28 08:26:53.939666] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:11.800 qpair failed and we were unable to recover it.
00:28:11.800 [2024-11-28 08:26:53.949540] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:11.800 [2024-11-28 08:26:53.949596] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:11.800 [2024-11-28 08:26:53.949611] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:11.800 [2024-11-28 08:26:53.949618] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:11.800 [2024-11-28 08:26:53.949623] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0
00:28:11.800 [2024-11-28 08:26:53.949638] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:11.800 qpair failed and we were unable to recover it.
00:28:11.800 [2024-11-28 08:26:53.959565] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:11.800 [2024-11-28 08:26:53.959625] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:11.800 [2024-11-28 08:26:53.959639] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:11.800 [2024-11-28 08:26:53.959646] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:11.800 [2024-11-28 08:26:53.959652] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0
00:28:11.800 [2024-11-28 08:26:53.959666] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:11.800 qpair failed and we were unable to recover it.
00:28:11.800 [2024-11-28 08:26:53.969622] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:11.800 [2024-11-28 08:26:53.969680] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:11.800 [2024-11-28 08:26:53.969694] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:11.800 [2024-11-28 08:26:53.969701] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:11.800 [2024-11-28 08:26:53.969707] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0
00:28:11.800 [2024-11-28 08:26:53.969721] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:11.800 qpair failed and we were unable to recover it.
00:28:11.800 [2024-11-28 08:26:53.979611] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:11.800 [2024-11-28 08:26:53.979672] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:11.800 [2024-11-28 08:26:53.979686] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:11.800 [2024-11-28 08:26:53.979692] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:11.800 [2024-11-28 08:26:53.979698] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0
00:28:11.800 [2024-11-28 08:26:53.979713] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:11.800 qpair failed and we were unable to recover it.
00:28:11.800 [2024-11-28 08:26:53.989696] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:11.800 [2024-11-28 08:26:53.989759] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:11.800 [2024-11-28 08:26:53.989772] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:11.800 [2024-11-28 08:26:53.989779] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:11.800 [2024-11-28 08:26:53.989785] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0
00:28:11.800 [2024-11-28 08:26:53.989799] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:11.800 qpair failed and we were unable to recover it.
00:28:11.800 [2024-11-28 08:26:53.999713] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:11.800 [2024-11-28 08:26:53.999775] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:11.800 [2024-11-28 08:26:53.999789] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:11.800 [2024-11-28 08:26:53.999796] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:11.800 [2024-11-28 08:26:53.999802] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0
00:28:11.800 [2024-11-28 08:26:53.999817] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:11.800 qpair failed and we were unable to recover it.
00:28:11.800 [2024-11-28 08:26:54.009718] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:11.800 [2024-11-28 08:26:54.009774] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:11.800 [2024-11-28 08:26:54.009789] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:11.800 [2024-11-28 08:26:54.009796] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:11.800 [2024-11-28 08:26:54.009802] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0
00:28:11.800 [2024-11-28 08:26:54.009817] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:11.800 qpair failed and we were unable to recover it.
00:28:11.800 [2024-11-28 08:26:54.019755] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:11.800 [2024-11-28 08:26:54.019855] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:11.800 [2024-11-28 08:26:54.019870] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:11.800 [2024-11-28 08:26:54.019877] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:11.800 [2024-11-28 08:26:54.019883] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0
00:28:11.800 [2024-11-28 08:26:54.019898] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:11.800 qpair failed and we were unable to recover it.
00:28:11.800 [2024-11-28 08:26:54.029781] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:11.800 [2024-11-28 08:26:54.029844] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:11.800 [2024-11-28 08:26:54.029858] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:11.800 [2024-11-28 08:26:54.029865] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:11.800 [2024-11-28 08:26:54.029871] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0
00:28:11.800 [2024-11-28 08:26:54.029886] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:11.800 qpair failed and we were unable to recover it.
00:28:11.800 [2024-11-28 08:26:54.039804] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:11.800 [2024-11-28 08:26:54.039860] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:11.800 [2024-11-28 08:26:54.039874] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:11.800 [2024-11-28 08:26:54.039881] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:11.800 [2024-11-28 08:26:54.039887] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0
00:28:11.800 [2024-11-28 08:26:54.039901] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:11.800 qpair failed and we were unable to recover it.
00:28:11.800 [2024-11-28 08:26:54.049874] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:11.800 [2024-11-28 08:26:54.049939] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:11.800 [2024-11-28 08:26:54.049960] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:11.800 [2024-11-28 08:26:54.049971] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:11.800 [2024-11-28 08:26:54.049977] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0
00:28:11.800 [2024-11-28 08:26:54.049994] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:11.800 qpair failed and we were unable to recover it.
00:28:11.800 [2024-11-28 08:26:54.059912] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:11.800 [2024-11-28 08:26:54.060016] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:11.800 [2024-11-28 08:26:54.060032] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:11.800 [2024-11-28 08:26:54.060039] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:11.800 [2024-11-28 08:26:54.060046] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0
00:28:11.800 [2024-11-28 08:26:54.060061] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:11.800 qpair failed and we were unable to recover it.
00:28:12.092 [2024-11-28 08:26:54.069933] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:12.092 [2024-11-28 08:26:54.069995] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:12.092 [2024-11-28 08:26:54.070011] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:12.092 [2024-11-28 08:26:54.070018] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:12.092 [2024-11-28 08:26:54.070024] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0
00:28:12.092 [2024-11-28 08:26:54.070039] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:12.092 qpair failed and we were unable to recover it.
00:28:12.092 [2024-11-28 08:26:54.079943] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:12.092 [2024-11-28 08:26:54.080055] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:12.092 [2024-11-28 08:26:54.080069] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:12.092 [2024-11-28 08:26:54.080077] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:12.092 [2024-11-28 08:26:54.080084] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0
00:28:12.092 [2024-11-28 08:26:54.080098] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:12.092 qpair failed and we were unable to recover it.
00:28:12.092 [2024-11-28 08:26:54.089972] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:12.092 [2024-11-28 08:26:54.090033] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:12.092 [2024-11-28 08:26:54.090047] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:12.092 [2024-11-28 08:26:54.090054] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:12.092 [2024-11-28 08:26:54.090060] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0
00:28:12.092 [2024-11-28 08:26:54.090075] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:12.092 qpair failed and we were unable to recover it.
00:28:12.092 [2024-11-28 08:26:54.099989] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:12.092 [2024-11-28 08:26:54.100049] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:12.092 [2024-11-28 08:26:54.100063] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:12.092 [2024-11-28 08:26:54.100070] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:12.092 [2024-11-28 08:26:54.100076] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0
00:28:12.092 [2024-11-28 08:26:54.100090] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:12.092 qpair failed and we were unable to recover it.
00:28:12.092 [2024-11-28 08:26:54.109998] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:12.092 [2024-11-28 08:26:54.110056] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:12.092 [2024-11-28 08:26:54.110070] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:12.092 [2024-11-28 08:26:54.110077] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:12.092 [2024-11-28 08:26:54.110083] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0
00:28:12.092 [2024-11-28 08:26:54.110097] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:12.092 qpair failed and we were unable to recover it.
00:28:12.092 [2024-11-28 08:26:54.120039] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:12.092 [2024-11-28 08:26:54.120095] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:12.092 [2024-11-28 08:26:54.120110] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:12.092 [2024-11-28 08:26:54.120117] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:12.092 [2024-11-28 08:26:54.120123] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0
00:28:12.092 [2024-11-28 08:26:54.120137] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:12.092 qpair failed and we were unable to recover it.
00:28:12.092 [2024-11-28 08:26:54.130068] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:12.092 [2024-11-28 08:26:54.130127] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:12.092 [2024-11-28 08:26:54.130143] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:12.092 [2024-11-28 08:26:54.130150] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:12.092 [2024-11-28 08:26:54.130156] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0
00:28:12.092 [2024-11-28 08:26:54.130170] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:12.092 qpair failed and we were unable to recover it.
00:28:12.092 [2024-11-28 08:26:54.140130] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:12.092 [2024-11-28 08:26:54.140210] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:12.092 [2024-11-28 08:26:54.140225] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:12.092 [2024-11-28 08:26:54.140232] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:12.092 [2024-11-28 08:26:54.140238] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0
00:28:12.092 [2024-11-28 08:26:54.140253] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:12.092 qpair failed and we were unable to recover it.
00:28:12.092 [2024-11-28 08:26:54.150138] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:12.092 [2024-11-28 08:26:54.150193] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:12.092 [2024-11-28 08:26:54.150208] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:12.092 [2024-11-28 08:26:54.150215] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:12.092 [2024-11-28 08:26:54.150221] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0
00:28:12.092 [2024-11-28 08:26:54.150236] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:12.092 qpair failed and we were unable to recover it.
00:28:12.092 [2024-11-28 08:26:54.160155] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:12.092 [2024-11-28 08:26:54.160255] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:12.092 [2024-11-28 08:26:54.160271] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:12.092 [2024-11-28 08:26:54.160278] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:12.092 [2024-11-28 08:26:54.160284] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0
00:28:12.092 [2024-11-28 08:26:54.160299] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:12.092 qpair failed and we were unable to recover it.
00:28:12.092 [2024-11-28 08:26:54.170281] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:12.092 [2024-11-28 08:26:54.170363] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:12.092 [2024-11-28 08:26:54.170377] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:12.092 [2024-11-28 08:26:54.170385] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:12.092 [2024-11-28 08:26:54.170391] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0
00:28:12.092 [2024-11-28 08:26:54.170405] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:12.092 qpair failed and we were unable to recover it.
00:28:12.093 [2024-11-28 08:26:54.180175] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:12.093 [2024-11-28 08:26:54.180260] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:12.093 [2024-11-28 08:26:54.180275] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:12.093 [2024-11-28 08:26:54.180287] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:12.093 [2024-11-28 08:26:54.180295] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0
00:28:12.093 [2024-11-28 08:26:54.180310] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:12.093 qpair failed and we were unable to recover it.
00:28:12.093 [2024-11-28 08:26:54.190258] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:12.093 [2024-11-28 08:26:54.190315] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:12.093 [2024-11-28 08:26:54.190329] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:12.093 [2024-11-28 08:26:54.190335] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:12.093 [2024-11-28 08:26:54.190342] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0 00:28:12.093 [2024-11-28 08:26:54.190355] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:12.093 qpair failed and we were unable to recover it. 
00:28:12.093 [2024-11-28 08:26:54.200277] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:12.093 [2024-11-28 08:26:54.200381] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:12.093 [2024-11-28 08:26:54.200397] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:12.093 [2024-11-28 08:26:54.200405] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:12.093 [2024-11-28 08:26:54.200411] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0 00:28:12.093 [2024-11-28 08:26:54.200426] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:12.093 qpair failed and we were unable to recover it. 
00:28:12.093 [2024-11-28 08:26:54.210294] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:12.093 [2024-11-28 08:26:54.210352] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:12.093 [2024-11-28 08:26:54.210367] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:12.093 [2024-11-28 08:26:54.210374] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:12.093 [2024-11-28 08:26:54.210380] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0 00:28:12.093 [2024-11-28 08:26:54.210394] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:12.093 qpair failed and we were unable to recover it. 
00:28:12.093 [2024-11-28 08:26:54.220286] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:12.093 [2024-11-28 08:26:54.220346] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:12.093 [2024-11-28 08:26:54.220361] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:12.093 [2024-11-28 08:26:54.220368] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:12.093 [2024-11-28 08:26:54.220374] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0 00:28:12.093 [2024-11-28 08:26:54.220388] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:12.093 qpair failed and we were unable to recover it. 
00:28:12.093 [2024-11-28 08:26:54.230405] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:12.093 [2024-11-28 08:26:54.230464] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:12.093 [2024-11-28 08:26:54.230479] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:12.093 [2024-11-28 08:26:54.230486] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:12.093 [2024-11-28 08:26:54.230493] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0 00:28:12.093 [2024-11-28 08:26:54.230508] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:12.093 qpair failed and we were unable to recover it. 
00:28:12.093 [2024-11-28 08:26:54.240385] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:12.093 [2024-11-28 08:26:54.240441] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:12.093 [2024-11-28 08:26:54.240455] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:12.093 [2024-11-28 08:26:54.240462] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:12.093 [2024-11-28 08:26:54.240468] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0 00:28:12.093 [2024-11-28 08:26:54.240483] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:12.093 qpair failed and we were unable to recover it. 
00:28:12.093 [2024-11-28 08:26:54.250429] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:12.093 [2024-11-28 08:26:54.250484] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:12.093 [2024-11-28 08:26:54.250498] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:12.093 [2024-11-28 08:26:54.250505] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:12.093 [2024-11-28 08:26:54.250511] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0 00:28:12.093 [2024-11-28 08:26:54.250525] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:12.093 qpair failed and we were unable to recover it. 
00:28:12.093 [2024-11-28 08:26:54.260487] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:12.093 [2024-11-28 08:26:54.260592] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:12.093 [2024-11-28 08:26:54.260607] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:12.093 [2024-11-28 08:26:54.260614] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:12.093 [2024-11-28 08:26:54.260620] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0 00:28:12.093 [2024-11-28 08:26:54.260635] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:12.093 qpair failed and we were unable to recover it. 
00:28:12.093 [2024-11-28 08:26:54.270411] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:12.093 [2024-11-28 08:26:54.270472] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:12.093 [2024-11-28 08:26:54.270487] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:12.093 [2024-11-28 08:26:54.270494] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:12.093 [2024-11-28 08:26:54.270500] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0 00:28:12.093 [2024-11-28 08:26:54.270515] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:12.093 qpair failed and we were unable to recover it. 
00:28:12.093 [2024-11-28 08:26:54.280539] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:12.093 [2024-11-28 08:26:54.280597] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:12.093 [2024-11-28 08:26:54.280611] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:12.093 [2024-11-28 08:26:54.280618] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:12.093 [2024-11-28 08:26:54.280624] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0 00:28:12.093 [2024-11-28 08:26:54.280639] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:12.093 qpair failed and we were unable to recover it. 
00:28:12.093 [2024-11-28 08:26:54.290530] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:12.094 [2024-11-28 08:26:54.290585] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:12.094 [2024-11-28 08:26:54.290600] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:12.094 [2024-11-28 08:26:54.290607] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:12.094 [2024-11-28 08:26:54.290613] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0 00:28:12.094 [2024-11-28 08:26:54.290627] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:12.094 qpair failed and we were unable to recover it. 
00:28:12.094 [2024-11-28 08:26:54.300579] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:12.094 [2024-11-28 08:26:54.300638] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:12.094 [2024-11-28 08:26:54.300652] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:12.094 [2024-11-28 08:26:54.300660] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:12.094 [2024-11-28 08:26:54.300665] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0 00:28:12.094 [2024-11-28 08:26:54.300680] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:12.094 qpair failed and we were unable to recover it. 
00:28:12.094 [2024-11-28 08:26:54.310608] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:12.094 [2024-11-28 08:26:54.310705] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:12.094 [2024-11-28 08:26:54.310720] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:12.094 [2024-11-28 08:26:54.310730] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:12.094 [2024-11-28 08:26:54.310737] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0 00:28:12.094 [2024-11-28 08:26:54.310751] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:12.094 qpair failed and we were unable to recover it. 
00:28:12.094 [2024-11-28 08:26:54.320632] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:12.094 [2024-11-28 08:26:54.320689] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:12.094 [2024-11-28 08:26:54.320703] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:12.094 [2024-11-28 08:26:54.320710] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:12.094 [2024-11-28 08:26:54.320716] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0 00:28:12.094 [2024-11-28 08:26:54.320731] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:12.094 qpair failed and we were unable to recover it. 
00:28:12.094 [2024-11-28 08:26:54.330663] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:12.094 [2024-11-28 08:26:54.330720] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:12.094 [2024-11-28 08:26:54.330735] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:12.094 [2024-11-28 08:26:54.330742] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:12.094 [2024-11-28 08:26:54.330748] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0 00:28:12.094 [2024-11-28 08:26:54.330763] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:12.094 qpair failed and we were unable to recover it. 
00:28:12.094 [2024-11-28 08:26:54.340730] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:12.094 [2024-11-28 08:26:54.340831] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:12.094 [2024-11-28 08:26:54.340847] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:12.094 [2024-11-28 08:26:54.340853] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:12.094 [2024-11-28 08:26:54.340860] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0 00:28:12.094 [2024-11-28 08:26:54.340875] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:12.094 qpair failed and we were unable to recover it. 
00:28:12.094 [2024-11-28 08:26:54.350711] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:12.094 [2024-11-28 08:26:54.350792] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:12.094 [2024-11-28 08:26:54.350807] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:12.094 [2024-11-28 08:26:54.350814] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:12.094 [2024-11-28 08:26:54.350820] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0 00:28:12.094 [2024-11-28 08:26:54.350835] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:12.094 qpair failed and we were unable to recover it. 
00:28:12.360 [2024-11-28 08:26:54.360742] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:12.361 [2024-11-28 08:26:54.360797] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:12.361 [2024-11-28 08:26:54.360812] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:12.361 [2024-11-28 08:26:54.360819] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:12.361 [2024-11-28 08:26:54.360824] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0 00:28:12.361 [2024-11-28 08:26:54.360838] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:12.361 qpair failed and we were unable to recover it. 
00:28:12.361 [2024-11-28 08:26:54.370755] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:12.361 [2024-11-28 08:26:54.370817] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:12.361 [2024-11-28 08:26:54.370832] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:12.361 [2024-11-28 08:26:54.370839] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:12.361 [2024-11-28 08:26:54.370845] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0 00:28:12.361 [2024-11-28 08:26:54.370859] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:12.361 qpair failed and we were unable to recover it. 
00:28:12.361 [2024-11-28 08:26:54.380870] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:12.361 [2024-11-28 08:26:54.380961] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:12.361 [2024-11-28 08:26:54.380976] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:12.361 [2024-11-28 08:26:54.380984] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:12.361 [2024-11-28 08:26:54.380990] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0 00:28:12.361 [2024-11-28 08:26:54.381005] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:12.361 qpair failed and we were unable to recover it. 
00:28:12.361 [2024-11-28 08:26:54.390830] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:12.361 [2024-11-28 08:26:54.390888] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:12.361 [2024-11-28 08:26:54.390902] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:12.362 [2024-11-28 08:26:54.390909] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:12.362 [2024-11-28 08:26:54.390915] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0 00:28:12.362 [2024-11-28 08:26:54.390929] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:12.362 qpair failed and we were unable to recover it. 
00:28:12.362 [2024-11-28 08:26:54.400793] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:12.362 [2024-11-28 08:26:54.400888] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:12.362 [2024-11-28 08:26:54.400903] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:12.362 [2024-11-28 08:26:54.400911] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:12.362 [2024-11-28 08:26:54.400917] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0 00:28:12.362 [2024-11-28 08:26:54.400931] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:12.362 qpair failed and we were unable to recover it. 
00:28:12.362 [2024-11-28 08:26:54.410941] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:12.363 [2024-11-28 08:26:54.411006] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:12.363 [2024-11-28 08:26:54.411021] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:12.363 [2024-11-28 08:26:54.411027] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:12.363 [2024-11-28 08:26:54.411034] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0 00:28:12.363 [2024-11-28 08:26:54.411049] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:12.363 qpair failed and we were unable to recover it. 
00:28:12.363 [2024-11-28 08:26:54.420853] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:12.363 [2024-11-28 08:26:54.420913] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:12.363 [2024-11-28 08:26:54.420927] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:12.363 [2024-11-28 08:26:54.420934] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:12.363 [2024-11-28 08:26:54.420940] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0 00:28:12.363 [2024-11-28 08:26:54.420959] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:12.363 qpair failed and we were unable to recover it. 
00:28:12.363 [2024-11-28 08:26:54.430875] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:12.363 [2024-11-28 08:26:54.430933] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:12.363 [2024-11-28 08:26:54.430952] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:12.363 [2024-11-28 08:26:54.430959] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:12.364 [2024-11-28 08:26:54.430965] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0 00:28:12.364 [2024-11-28 08:26:54.430981] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:12.364 qpair failed and we were unable to recover it. 
00:28:12.364 [2024-11-28 08:26:54.440994] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:12.364 [2024-11-28 08:26:54.441057] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:12.364 [2024-11-28 08:26:54.441072] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:12.364 [2024-11-28 08:26:54.441083] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:12.364 [2024-11-28 08:26:54.441089] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0 00:28:12.364 [2024-11-28 08:26:54.441105] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:12.364 qpair failed and we were unable to recover it. 
00:28:12.364 [2024-11-28 08:26:54.450992] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:12.364 [2024-11-28 08:26:54.451084] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:12.364 [2024-11-28 08:26:54.451100] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:12.364 [2024-11-28 08:26:54.451107] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:12.364 [2024-11-28 08:26:54.451113] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0 00:28:12.364 [2024-11-28 08:26:54.451129] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:12.364 qpair failed and we were unable to recover it. 
00:28:12.364 [2024-11-28 08:26:54.460984] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:12.364 [2024-11-28 08:26:54.461045] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:12.364 [2024-11-28 08:26:54.461060] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:12.364 [2024-11-28 08:26:54.461067] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:12.364 [2024-11-28 08:26:54.461073] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0 00:28:12.364 [2024-11-28 08:26:54.461088] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:12.364 qpair failed and we were unable to recover it. 
00:28:12.364 [2024-11-28 08:26:54.471019] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:12.365 [2024-11-28 08:26:54.471089] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:12.365 [2024-11-28 08:26:54.471104] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:12.365 [2024-11-28 08:26:54.471111] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:12.365 [2024-11-28 08:26:54.471117] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0 00:28:12.365 [2024-11-28 08:26:54.471132] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:12.365 qpair failed and we were unable to recover it. 
00:28:12.365 [2024-11-28 08:26:54.481063] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:12.365 [2024-11-28 08:26:54.481115] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:12.365 [2024-11-28 08:26:54.481129] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:12.365 [2024-11-28 08:26:54.481136] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:12.365 [2024-11-28 08:26:54.481144] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0 00:28:12.365 [2024-11-28 08:26:54.481158] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:12.365 qpair failed and we were unable to recover it. 
00:28:12.365 [2024-11-28 08:26:54.491091] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:12.365 [2024-11-28 08:26:54.491151] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:12.365 [2024-11-28 08:26:54.491166] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:12.366 [2024-11-28 08:26:54.491173] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:12.366 [2024-11-28 08:26:54.491179] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0 00:28:12.366 [2024-11-28 08:26:54.491194] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:12.366 qpair failed and we were unable to recover it. 
00:28:12.366 [2024-11-28 08:26:54.501146] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:12.366 [2024-11-28 08:26:54.501206] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:12.366 [2024-11-28 08:26:54.501220] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:12.366 [2024-11-28 08:26:54.501227] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:12.366 [2024-11-28 08:26:54.501233] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0 00:28:12.366 [2024-11-28 08:26:54.501248] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:12.366 qpair failed and we were unable to recover it. 
00:28:12.366 [2024-11-28 08:26:54.511118] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:12.366 [2024-11-28 08:26:54.511172] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:12.366 [2024-11-28 08:26:54.511187] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:12.366 [2024-11-28 08:26:54.511193] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:12.366 [2024-11-28 08:26:54.511199] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0 00:28:12.366 [2024-11-28 08:26:54.511214] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:12.366 qpair failed and we were unable to recover it. 
00:28:12.366 [2024-11-28 08:26:54.521190] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:12.366 [2024-11-28 08:26:54.521247] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:12.366 [2024-11-28 08:26:54.521261] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:12.366 [2024-11-28 08:26:54.521268] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:12.366 [2024-11-28 08:26:54.521274] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0 00:28:12.366 [2024-11-28 08:26:54.521289] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:12.366 qpair failed and we were unable to recover it. 
00:28:12.366 [2024-11-28 08:26:54.531146] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:12.366 [2024-11-28 08:26:54.531241] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:12.366 [2024-11-28 08:26:54.531258] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:12.366 [2024-11-28 08:26:54.531265] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:12.367 [2024-11-28 08:26:54.531271] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0 00:28:12.367 [2024-11-28 08:26:54.531285] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:12.367 qpair failed and we were unable to recover it. 
00:28:12.367 [2024-11-28 08:26:54.541191] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:12.367 [2024-11-28 08:26:54.541250] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:12.367 [2024-11-28 08:26:54.541265] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:12.367 [2024-11-28 08:26:54.541272] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:12.367 [2024-11-28 08:26:54.541278] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0 00:28:12.367 [2024-11-28 08:26:54.541292] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:12.367 qpair failed and we were unable to recover it. 
00:28:12.367 [2024-11-28 08:26:54.551292] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:12.367 [2024-11-28 08:26:54.551350] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:12.367 [2024-11-28 08:26:54.551364] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:12.367 [2024-11-28 08:26:54.551371] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:12.367 [2024-11-28 08:26:54.551376] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0 00:28:12.367 [2024-11-28 08:26:54.551391] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:12.367 qpair failed and we were unable to recover it. 
00:28:12.367 [2024-11-28 08:26:54.561288] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:12.367 [2024-11-28 08:26:54.561345] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:12.367 [2024-11-28 08:26:54.561360] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:12.367 [2024-11-28 08:26:54.561367] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:12.367 [2024-11-28 08:26:54.561373] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0 00:28:12.367 [2024-11-28 08:26:54.561387] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:12.367 qpair failed and we were unable to recover it. 
00:28:12.367 [2024-11-28 08:26:54.571328] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:12.368 [2024-11-28 08:26:54.571410] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:12.368 [2024-11-28 08:26:54.571425] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:12.368 [2024-11-28 08:26:54.571435] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:12.368 [2024-11-28 08:26:54.571441] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0 00:28:12.368 [2024-11-28 08:26:54.571456] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:12.368 qpair failed and we were unable to recover it. 
00:28:12.368 [2024-11-28 08:26:54.581304] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:12.368 [2024-11-28 08:26:54.581363] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:12.368 [2024-11-28 08:26:54.581377] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:12.368 [2024-11-28 08:26:54.581384] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:12.368 [2024-11-28 08:26:54.581390] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0 00:28:12.368 [2024-11-28 08:26:54.581404] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:12.368 qpair failed and we were unable to recover it. 
00:28:12.368 [2024-11-28 08:26:54.591397] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:12.368 [2024-11-28 08:26:54.591455] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:12.368 [2024-11-28 08:26:54.591470] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:12.368 [2024-11-28 08:26:54.591477] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:12.368 [2024-11-28 08:26:54.591483] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0 00:28:12.368 [2024-11-28 08:26:54.591497] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:12.368 qpair failed and we were unable to recover it. 
00:28:12.368 [2024-11-28 08:26:54.601417] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:12.368 [2024-11-28 08:26:54.601477] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:12.368 [2024-11-28 08:26:54.601492] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:12.368 [2024-11-28 08:26:54.601498] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:12.368 [2024-11-28 08:26:54.601505] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0 00:28:12.368 [2024-11-28 08:26:54.601519] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:12.368 qpair failed and we were unable to recover it. 
00:28:12.368 [2024-11-28 08:26:54.611450] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:12.369 [2024-11-28 08:26:54.611513] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:12.369 [2024-11-28 08:26:54.611527] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:12.369 [2024-11-28 08:26:54.611533] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:12.369 [2024-11-28 08:26:54.611540] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0 00:28:12.369 [2024-11-28 08:26:54.611554] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:12.369 qpair failed and we were unable to recover it. 
00:28:12.369 [2024-11-28 08:26:54.621494] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:12.369 [2024-11-28 08:26:54.621563] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:12.369 [2024-11-28 08:26:54.621578] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:12.369 [2024-11-28 08:26:54.621585] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:12.369 [2024-11-28 08:26:54.621591] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0 00:28:12.369 [2024-11-28 08:26:54.621606] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:12.369 qpair failed and we were unable to recover it. 
00:28:12.635 [2024-11-28 08:26:54.631546] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:12.635 [2024-11-28 08:26:54.631603] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:12.635 [2024-11-28 08:26:54.631618] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:12.635 [2024-11-28 08:26:54.631625] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:12.635 [2024-11-28 08:26:54.631631] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0 00:28:12.635 [2024-11-28 08:26:54.631646] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:12.635 qpair failed and we were unable to recover it. 
00:28:12.635 [2024-11-28 08:26:54.641577] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:12.635 [2024-11-28 08:26:54.641640] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:12.635 [2024-11-28 08:26:54.641655] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:12.635 [2024-11-28 08:26:54.641662] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:12.635 [2024-11-28 08:26:54.641669] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0 00:28:12.635 [2024-11-28 08:26:54.641683] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:12.635 qpair failed and we were unable to recover it. 
00:28:12.635 [2024-11-28 08:26:54.651555] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:12.635 [2024-11-28 08:26:54.651614] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:12.635 [2024-11-28 08:26:54.651628] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:12.635 [2024-11-28 08:26:54.651635] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:12.635 [2024-11-28 08:26:54.651641] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0 00:28:12.635 [2024-11-28 08:26:54.651656] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:12.635 qpair failed and we were unable to recover it. 
00:28:12.635 [2024-11-28 08:26:54.661539] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:12.635 [2024-11-28 08:26:54.661625] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:12.635 [2024-11-28 08:26:54.661644] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:12.635 [2024-11-28 08:26:54.661651] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:12.635 [2024-11-28 08:26:54.661657] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0 00:28:12.636 [2024-11-28 08:26:54.661672] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:12.636 qpair failed and we were unable to recover it. 
00:28:12.636 [2024-11-28 08:26:54.671598] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:12.636 [2024-11-28 08:26:54.671658] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:12.636 [2024-11-28 08:26:54.671672] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:12.636 [2024-11-28 08:26:54.671679] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:12.636 [2024-11-28 08:26:54.671685] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0 00:28:12.636 [2024-11-28 08:26:54.671699] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:12.636 qpair failed and we were unable to recover it. 
00:28:12.636 [2024-11-28 08:26:54.681671] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:12.636 [2024-11-28 08:26:54.681733] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:12.636 [2024-11-28 08:26:54.681747] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:12.636 [2024-11-28 08:26:54.681754] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:12.636 [2024-11-28 08:26:54.681759] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0 00:28:12.636 [2024-11-28 08:26:54.681773] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:12.636 qpair failed and we were unable to recover it. 
00:28:12.636 [2024-11-28 08:26:54.691612] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:12.636 [2024-11-28 08:26:54.691670] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:12.636 [2024-11-28 08:26:54.691684] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:12.636 [2024-11-28 08:26:54.691691] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:12.636 [2024-11-28 08:26:54.691696] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0 00:28:12.636 [2024-11-28 08:26:54.691711] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:12.636 qpair failed and we were unable to recover it. 
00:28:12.636 [2024-11-28 08:26:54.701701] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:12.636 [2024-11-28 08:26:54.701773] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:12.636 [2024-11-28 08:26:54.701788] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:12.636 [2024-11-28 08:26:54.701802] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:12.636 [2024-11-28 08:26:54.701808] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0 00:28:12.636 [2024-11-28 08:26:54.701823] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:12.636 qpair failed and we were unable to recover it. 
00:28:12.636 [2024-11-28 08:26:54.711666] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:12.636 [2024-11-28 08:26:54.711723] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:12.636 [2024-11-28 08:26:54.711738] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:12.636 [2024-11-28 08:26:54.711744] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:12.636 [2024-11-28 08:26:54.711750] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0 00:28:12.636 [2024-11-28 08:26:54.711764] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:12.636 qpair failed and we were unable to recover it. 
00:28:12.636 [2024-11-28 08:26:54.721707] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:12.636 [2024-11-28 08:26:54.721761] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:12.636 [2024-11-28 08:26:54.721776] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:12.636 [2024-11-28 08:26:54.721783] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:12.636 [2024-11-28 08:26:54.721789] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0 00:28:12.636 [2024-11-28 08:26:54.721803] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:12.636 qpair failed and we were unable to recover it. 
00:28:12.636 [2024-11-28 08:26:54.731794] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:12.636 [2024-11-28 08:26:54.731849] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:12.636 [2024-11-28 08:26:54.731864] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:12.636 [2024-11-28 08:26:54.731870] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:12.636 [2024-11-28 08:26:54.731876] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0 00:28:12.636 [2024-11-28 08:26:54.731890] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:12.636 qpair failed and we were unable to recover it. 
00:28:12.636 [2024-11-28 08:26:54.741860] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:12.636 [2024-11-28 08:26:54.741921] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:12.636 [2024-11-28 08:26:54.741935] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:12.636 [2024-11-28 08:26:54.741942] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:12.636 [2024-11-28 08:26:54.741953] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0 00:28:12.636 [2024-11-28 08:26:54.741968] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:12.636 qpair failed and we were unable to recover it. 
00:28:12.636 [2024-11-28 08:26:54.751892] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:12.636 [2024-11-28 08:26:54.751958] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:12.636 [2024-11-28 08:26:54.751973] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:12.636 [2024-11-28 08:26:54.751980] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:12.636 [2024-11-28 08:26:54.751986] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0 00:28:12.636 [2024-11-28 08:26:54.752001] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:12.636 qpair failed and we were unable to recover it. 
00:28:12.636 [2024-11-28 08:26:54.761873] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:12.636 [2024-11-28 08:26:54.761928] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:12.636 [2024-11-28 08:26:54.761942] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:12.636 [2024-11-28 08:26:54.761954] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:12.636 [2024-11-28 08:26:54.761960] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0 00:28:12.636 [2024-11-28 08:26:54.761974] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:12.636 qpair failed and we were unable to recover it. 
00:28:12.636 [2024-11-28 08:26:54.771848] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:12.636 [2024-11-28 08:26:54.771906] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:12.636 [2024-11-28 08:26:54.771919] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:12.636 [2024-11-28 08:26:54.771926] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:12.636 [2024-11-28 08:26:54.771932] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0
00:28:12.636 [2024-11-28 08:26:54.771951] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:12.636 qpair failed and we were unable to recover it.
00:28:12.636 [2024-11-28 08:26:54.781955] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:12.636 [2024-11-28 08:26:54.782014] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:12.636 [2024-11-28 08:26:54.782028] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:12.636 [2024-11-28 08:26:54.782036] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:12.636 [2024-11-28 08:26:54.782041] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0
00:28:12.636 [2024-11-28 08:26:54.782056] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:12.636 qpair failed and we were unable to recover it.
00:28:12.636 [2024-11-28 08:26:54.791968] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:12.636 [2024-11-28 08:26:54.792074] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:12.636 [2024-11-28 08:26:54.792092] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:12.636 [2024-11-28 08:26:54.792099] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:12.636 [2024-11-28 08:26:54.792105] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0
00:28:12.636 [2024-11-28 08:26:54.792120] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:12.636 qpair failed and we were unable to recover it.
00:28:12.636 [2024-11-28 08:26:54.801917] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:12.637 [2024-11-28 08:26:54.802000] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:12.637 [2024-11-28 08:26:54.802016] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:12.637 [2024-11-28 08:26:54.802023] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:12.637 [2024-11-28 08:26:54.802029] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0
00:28:12.637 [2024-11-28 08:26:54.802044] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:12.637 qpair failed and we were unable to recover it.
00:28:12.637 [2024-11-28 08:26:54.812000] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:12.637 [2024-11-28 08:26:54.812093] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:12.637 [2024-11-28 08:26:54.812108] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:12.637 [2024-11-28 08:26:54.812115] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:12.637 [2024-11-28 08:26:54.812122] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0
00:28:12.637 [2024-11-28 08:26:54.812137] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:12.637 qpair failed and we were unable to recover it.
00:28:12.637 [2024-11-28 08:26:54.822056] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:12.637 [2024-11-28 08:26:54.822124] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:12.637 [2024-11-28 08:26:54.822138] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:12.637 [2024-11-28 08:26:54.822145] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:12.637 [2024-11-28 08:26:54.822152] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0
00:28:12.637 [2024-11-28 08:26:54.822166] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:12.637 qpair failed and we were unable to recover it.
00:28:12.637 [2024-11-28 08:26:54.832119] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:12.637 [2024-11-28 08:26:54.832179] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:12.637 [2024-11-28 08:26:54.832194] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:12.637 [2024-11-28 08:26:54.832204] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:12.637 [2024-11-28 08:26:54.832211] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0
00:28:12.637 [2024-11-28 08:26:54.832226] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:12.637 qpair failed and we were unable to recover it.
00:28:12.637 [2024-11-28 08:26:54.842112] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:12.637 [2024-11-28 08:26:54.842177] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:12.637 [2024-11-28 08:26:54.842192] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:12.637 [2024-11-28 08:26:54.842199] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:12.637 [2024-11-28 08:26:54.842205] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0
00:28:12.637 [2024-11-28 08:26:54.842219] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:12.637 qpair failed and we were unable to recover it.
00:28:12.637 [2024-11-28 08:26:54.852160] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:12.637 [2024-11-28 08:26:54.852234] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:12.637 [2024-11-28 08:26:54.852250] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:12.637 [2024-11-28 08:26:54.852257] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:12.637 [2024-11-28 08:26:54.852263] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0
00:28:12.637 [2024-11-28 08:26:54.852278] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:12.637 qpair failed and we were unable to recover it.
00:28:12.637 [2024-11-28 08:26:54.862178] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:12.637 [2024-11-28 08:26:54.862237] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:12.637 [2024-11-28 08:26:54.862251] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:12.637 [2024-11-28 08:26:54.862258] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:12.637 [2024-11-28 08:26:54.862265] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0
00:28:12.637 [2024-11-28 08:26:54.862279] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:12.637 qpair failed and we were unable to recover it.
00:28:12.637 [2024-11-28 08:26:54.872208] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:12.637 [2024-11-28 08:26:54.872265] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:12.637 [2024-11-28 08:26:54.872279] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:12.637 [2024-11-28 08:26:54.872287] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:12.637 [2024-11-28 08:26:54.872293] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0
00:28:12.637 [2024-11-28 08:26:54.872307] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:12.637 qpair failed and we were unable to recover it.
00:28:12.637 [2024-11-28 08:26:54.882232] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:12.637 [2024-11-28 08:26:54.882287] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:12.637 [2024-11-28 08:26:54.882302] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:12.637 [2024-11-28 08:26:54.882309] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:12.637 [2024-11-28 08:26:54.882315] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0
00:28:12.637 [2024-11-28 08:26:54.882330] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:12.637 qpair failed and we were unable to recover it.
00:28:12.637 [2024-11-28 08:26:54.892255] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:12.637 [2024-11-28 08:26:54.892310] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:12.637 [2024-11-28 08:26:54.892324] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:12.637 [2024-11-28 08:26:54.892331] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:12.637 [2024-11-28 08:26:54.892336] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0
00:28:12.637 [2024-11-28 08:26:54.892350] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:12.637 qpair failed and we were unable to recover it.
00:28:12.898 [2024-11-28 08:26:54.902296] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:12.898 [2024-11-28 08:26:54.902354] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:12.898 [2024-11-28 08:26:54.902370] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:12.898 [2024-11-28 08:26:54.902376] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:12.898 [2024-11-28 08:26:54.902383] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0
00:28:12.898 [2024-11-28 08:26:54.902397] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:12.898 qpair failed and we were unable to recover it.
00:28:12.898 [2024-11-28 08:26:54.912331] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:12.898 [2024-11-28 08:26:54.912388] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:12.898 [2024-11-28 08:26:54.912404] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:12.898 [2024-11-28 08:26:54.912412] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:12.898 [2024-11-28 08:26:54.912418] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0
00:28:12.898 [2024-11-28 08:26:54.912434] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:12.898 qpair failed and we were unable to recover it.
00:28:12.898 [2024-11-28 08:26:54.922318] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:12.898 [2024-11-28 08:26:54.922378] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:12.898 [2024-11-28 08:26:54.922396] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:12.898 [2024-11-28 08:26:54.922402] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:12.898 [2024-11-28 08:26:54.922409] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0
00:28:12.898 [2024-11-28 08:26:54.922423] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:12.898 qpair failed and we were unable to recover it.
00:28:12.898 [2024-11-28 08:26:54.932305] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:12.898 [2024-11-28 08:26:54.932362] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:12.898 [2024-11-28 08:26:54.932377] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:12.898 [2024-11-28 08:26:54.932383] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:12.898 [2024-11-28 08:26:54.932389] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0
00:28:12.898 [2024-11-28 08:26:54.932404] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:12.898 qpair failed and we were unable to recover it.
00:28:12.898 [2024-11-28 08:26:54.942417] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:12.898 [2024-11-28 08:26:54.942475] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:12.898 [2024-11-28 08:26:54.942490] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:12.898 [2024-11-28 08:26:54.942496] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:12.899 [2024-11-28 08:26:54.942503] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0
00:28:12.899 [2024-11-28 08:26:54.942517] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:12.899 qpair failed and we were unable to recover it.
00:28:12.899 [2024-11-28 08:26:54.952492] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:12.899 [2024-11-28 08:26:54.952554] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:12.899 [2024-11-28 08:26:54.952568] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:12.899 [2024-11-28 08:26:54.952574] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:12.899 [2024-11-28 08:26:54.952580] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0
00:28:12.899 [2024-11-28 08:26:54.952595] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:12.899 qpair failed and we were unable to recover it.
00:28:12.899 [2024-11-28 08:26:54.962486] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:12.899 [2024-11-28 08:26:54.962542] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:12.899 [2024-11-28 08:26:54.962557] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:12.899 [2024-11-28 08:26:54.962568] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:12.899 [2024-11-28 08:26:54.962574] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0
00:28:12.899 [2024-11-28 08:26:54.962588] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:12.899 qpair failed and we were unable to recover it.
00:28:12.899 [2024-11-28 08:26:54.972488] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:12.899 [2024-11-28 08:26:54.972554] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:12.899 [2024-11-28 08:26:54.972568] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:12.899 [2024-11-28 08:26:54.972575] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:12.899 [2024-11-28 08:26:54.972582] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0
00:28:12.899 [2024-11-28 08:26:54.972596] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:12.899 qpair failed and we were unable to recover it.
00:28:12.899 [2024-11-28 08:26:54.982458] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:12.899 [2024-11-28 08:26:54.982515] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:12.899 [2024-11-28 08:26:54.982530] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:12.899 [2024-11-28 08:26:54.982536] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:12.899 [2024-11-28 08:26:54.982543] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0
00:28:12.899 [2024-11-28 08:26:54.982557] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:12.899 qpair failed and we were unable to recover it.
00:28:12.899 [2024-11-28 08:26:54.992557] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:12.899 [2024-11-28 08:26:54.992614] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:12.899 [2024-11-28 08:26:54.992628] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:12.899 [2024-11-28 08:26:54.992635] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:12.899 [2024-11-28 08:26:54.992641] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0
00:28:12.899 [2024-11-28 08:26:54.992655] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:12.899 qpair failed and we were unable to recover it.
00:28:12.899 [2024-11-28 08:26:55.002584] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:12.899 [2024-11-28 08:26:55.002647] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:12.899 [2024-11-28 08:26:55.002662] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:12.899 [2024-11-28 08:26:55.002669] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:12.899 [2024-11-28 08:26:55.002676] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0
00:28:12.899 [2024-11-28 08:26:55.002691] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:12.899 qpair failed and we were unable to recover it.
00:28:12.899 [2024-11-28 08:26:55.012594] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:12.899 [2024-11-28 08:26:55.012669] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:12.899 [2024-11-28 08:26:55.012684] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:12.899 [2024-11-28 08:26:55.012691] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:12.899 [2024-11-28 08:26:55.012697] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0
00:28:12.899 [2024-11-28 08:26:55.012712] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:12.899 qpair failed and we were unable to recover it.
00:28:12.899 [2024-11-28 08:26:55.022659] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:12.899 [2024-11-28 08:26:55.022716] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:12.899 [2024-11-28 08:26:55.022730] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:12.899 [2024-11-28 08:26:55.022737] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:12.899 [2024-11-28 08:26:55.022743] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0
00:28:12.899 [2024-11-28 08:26:55.022757] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:12.899 qpair failed and we were unable to recover it.
00:28:12.899 [2024-11-28 08:26:55.032737] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:12.899 [2024-11-28 08:26:55.032796] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:12.899 [2024-11-28 08:26:55.032811] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:12.899 [2024-11-28 08:26:55.032818] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:12.899 [2024-11-28 08:26:55.032824] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0
00:28:12.899 [2024-11-28 08:26:55.032838] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:12.899 qpair failed and we were unable to recover it.
00:28:12.899 [2024-11-28 08:26:55.042700] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:12.899 [2024-11-28 08:26:55.042754] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:12.899 [2024-11-28 08:26:55.042769] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:12.899 [2024-11-28 08:26:55.042776] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:12.899 [2024-11-28 08:26:55.042782] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0
00:28:12.899 [2024-11-28 08:26:55.042796] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:12.899 qpair failed and we were unable to recover it.
00:28:12.899 [2024-11-28 08:26:55.052748] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:12.899 [2024-11-28 08:26:55.052806] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:12.899 [2024-11-28 08:26:55.052827] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:12.899 [2024-11-28 08:26:55.052834] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:12.899 [2024-11-28 08:26:55.052841] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0
00:28:12.899 [2024-11-28 08:26:55.052857] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:12.899 qpair failed and we were unable to recover it.
00:28:12.899 [2024-11-28 08:26:55.062766] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:12.899 [2024-11-28 08:26:55.062826] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:12.899 [2024-11-28 08:26:55.062841] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:12.899 [2024-11-28 08:26:55.062848] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:12.899 [2024-11-28 08:26:55.062854] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0
00:28:12.899 [2024-11-28 08:26:55.062869] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:12.899 qpair failed and we were unable to recover it.
00:28:12.899 [2024-11-28 08:26:55.072795] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:12.899 [2024-11-28 08:26:55.072856] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:12.899 [2024-11-28 08:26:55.072871] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:12.899 [2024-11-28 08:26:55.072877] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:12.899 [2024-11-28 08:26:55.072883] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0
00:28:12.900 [2024-11-28 08:26:55.072898] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:12.900 qpair failed and we were unable to recover it.
00:28:12.900 [2024-11-28 08:26:55.082817] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:12.900 [2024-11-28 08:26:55.082914] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:12.900 [2024-11-28 08:26:55.082930] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:12.900 [2024-11-28 08:26:55.082936] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:12.900 [2024-11-28 08:26:55.082942] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0
00:28:12.900 [2024-11-28 08:26:55.082961] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:12.900 qpair failed and we were unable to recover it.
00:28:12.900 [2024-11-28 08:26:55.092844] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:12.900 [2024-11-28 08:26:55.092903] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:12.900 [2024-11-28 08:26:55.092919] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:12.900 [2024-11-28 08:26:55.092929] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:12.900 [2024-11-28 08:26:55.092935] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0
00:28:12.900 [2024-11-28 08:26:55.092954] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:12.900 qpair failed and we were unable to recover it.
00:28:12.900 [2024-11-28 08:26:55.102884] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:12.900 [2024-11-28 08:26:55.102942] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:12.900 [2024-11-28 08:26:55.102960] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:12.900 [2024-11-28 08:26:55.102967] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:12.900 [2024-11-28 08:26:55.102973] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0
00:28:12.900 [2024-11-28 08:26:55.102988] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:12.900 qpair failed and we were unable to recover it.
00:28:12.900 [2024-11-28 08:26:55.112903] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:12.900 [2024-11-28 08:26:55.112973] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:12.900 [2024-11-28 08:26:55.112988] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:12.900 [2024-11-28 08:26:55.112995] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:12.900 [2024-11-28 08:26:55.113001] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0
00:28:12.900 [2024-11-28 08:26:55.113016] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:12.900 qpair failed and we were unable to recover it.
00:28:12.900 [2024-11-28 08:26:55.122939] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:12.900 [2024-11-28 08:26:55.122996] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:12.900 [2024-11-28 08:26:55.123011] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:12.900 [2024-11-28 08:26:55.123018] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:12.900 [2024-11-28 08:26:55.123024] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0 00:28:12.900 [2024-11-28 08:26:55.123038] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:12.900 qpair failed and we were unable to recover it. 
00:28:12.900 [2024-11-28 08:26:55.132974] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:12.900 [2024-11-28 08:26:55.133031] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:12.900 [2024-11-28 08:26:55.133045] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:12.900 [2024-11-28 08:26:55.133052] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:12.900 [2024-11-28 08:26:55.133059] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0 00:28:12.900 [2024-11-28 08:26:55.133074] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:12.900 qpair failed and we were unable to recover it. 
00:28:12.900 [2024-11-28 08:26:55.143010] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:12.900 [2024-11-28 08:26:55.143071] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:12.900 [2024-11-28 08:26:55.143086] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:12.900 [2024-11-28 08:26:55.143092] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:12.900 [2024-11-28 08:26:55.143098] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0 00:28:12.900 [2024-11-28 08:26:55.143113] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:12.900 qpair failed and we were unable to recover it. 
00:28:12.900 [2024-11-28 08:26:55.153031] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:12.900 [2024-11-28 08:26:55.153091] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:12.900 [2024-11-28 08:26:55.153107] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:12.900 [2024-11-28 08:26:55.153113] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:12.900 [2024-11-28 08:26:55.153120] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0 00:28:12.900 [2024-11-28 08:26:55.153134] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:12.900 qpair failed and we were unable to recover it. 
00:28:12.900 [2024-11-28 08:26:55.163049] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:12.900 [2024-11-28 08:26:55.163109] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:12.900 [2024-11-28 08:26:55.163124] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:12.900 [2024-11-28 08:26:55.163131] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:12.900 [2024-11-28 08:26:55.163137] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0 00:28:12.900 [2024-11-28 08:26:55.163152] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:12.900 qpair failed and we were unable to recover it. 
00:28:13.161 [2024-11-28 08:26:55.173109] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:13.161 [2024-11-28 08:26:55.173218] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:13.161 [2024-11-28 08:26:55.173234] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:13.161 [2024-11-28 08:26:55.173241] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:13.161 [2024-11-28 08:26:55.173247] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0 00:28:13.161 [2024-11-28 08:26:55.173261] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:13.161 qpair failed and we were unable to recover it. 
00:28:13.161 [2024-11-28 08:26:55.183069] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:13.161 [2024-11-28 08:26:55.183128] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:13.161 [2024-11-28 08:26:55.183146] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:13.161 [2024-11-28 08:26:55.183153] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:13.161 [2024-11-28 08:26:55.183159] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0 00:28:13.161 [2024-11-28 08:26:55.183173] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:13.161 qpair failed and we were unable to recover it. 
00:28:13.161 [2024-11-28 08:26:55.193141] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:13.161 [2024-11-28 08:26:55.193199] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:13.161 [2024-11-28 08:26:55.193214] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:13.161 [2024-11-28 08:26:55.193220] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:13.161 [2024-11-28 08:26:55.193226] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0 00:28:13.161 [2024-11-28 08:26:55.193241] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:13.161 qpair failed and we were unable to recover it. 
00:28:13.161 [2024-11-28 08:26:55.203162] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:13.161 [2024-11-28 08:26:55.203221] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:13.161 [2024-11-28 08:26:55.203236] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:13.161 [2024-11-28 08:26:55.203243] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:13.161 [2024-11-28 08:26:55.203250] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0 00:28:13.161 [2024-11-28 08:26:55.203265] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:13.162 qpair failed and we were unable to recover it. 
00:28:13.162 [2024-11-28 08:26:55.213214] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:13.162 [2024-11-28 08:26:55.213279] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:13.162 [2024-11-28 08:26:55.213294] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:13.162 [2024-11-28 08:26:55.213301] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:13.162 [2024-11-28 08:26:55.213307] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0 00:28:13.162 [2024-11-28 08:26:55.213322] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:13.162 qpair failed and we were unable to recover it. 
00:28:13.162 [2024-11-28 08:26:55.223174] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:13.162 [2024-11-28 08:26:55.223252] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:13.162 [2024-11-28 08:26:55.223268] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:13.162 [2024-11-28 08:26:55.223278] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:13.162 [2024-11-28 08:26:55.223285] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0 00:28:13.162 [2024-11-28 08:26:55.223300] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:13.162 qpair failed and we were unable to recover it. 
00:28:13.162 [2024-11-28 08:26:55.233289] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:13.162 [2024-11-28 08:26:55.233360] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:13.162 [2024-11-28 08:26:55.233376] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:13.162 [2024-11-28 08:26:55.233383] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:13.162 [2024-11-28 08:26:55.233389] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0 00:28:13.162 [2024-11-28 08:26:55.233410] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:13.162 qpair failed and we were unable to recover it. 
00:28:13.162 [2024-11-28 08:26:55.243336] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:13.162 [2024-11-28 08:26:55.243395] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:13.162 [2024-11-28 08:26:55.243409] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:13.162 [2024-11-28 08:26:55.243416] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:13.162 [2024-11-28 08:26:55.243422] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0 00:28:13.162 [2024-11-28 08:26:55.243436] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:13.162 qpair failed and we were unable to recover it. 
00:28:13.162 [2024-11-28 08:26:55.253314] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:13.162 [2024-11-28 08:26:55.253375] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:13.162 [2024-11-28 08:26:55.253390] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:13.162 [2024-11-28 08:26:55.253397] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:13.162 [2024-11-28 08:26:55.253403] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0 00:28:13.162 [2024-11-28 08:26:55.253417] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:13.162 qpair failed and we were unable to recover it. 
00:28:13.162 [2024-11-28 08:26:55.263401] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:13.162 [2024-11-28 08:26:55.263502] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:13.162 [2024-11-28 08:26:55.263517] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:13.162 [2024-11-28 08:26:55.263524] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:13.162 [2024-11-28 08:26:55.263530] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0 00:28:13.162 [2024-11-28 08:26:55.263544] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:13.162 qpair failed and we were unable to recover it. 
00:28:13.162 [2024-11-28 08:26:55.273417] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:13.162 [2024-11-28 08:26:55.273484] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:13.162 [2024-11-28 08:26:55.273499] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:13.162 [2024-11-28 08:26:55.273506] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:13.162 [2024-11-28 08:26:55.273512] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0 00:28:13.162 [2024-11-28 08:26:55.273526] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:13.162 qpair failed and we were unable to recover it. 
00:28:13.162 [2024-11-28 08:26:55.283404] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:13.162 [2024-11-28 08:26:55.283495] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:13.162 [2024-11-28 08:26:55.283510] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:13.162 [2024-11-28 08:26:55.283517] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:13.162 [2024-11-28 08:26:55.283524] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0 00:28:13.162 [2024-11-28 08:26:55.283539] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:13.162 qpair failed and we were unable to recover it. 
00:28:13.162 [2024-11-28 08:26:55.293472] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:13.162 [2024-11-28 08:26:55.293537] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:13.162 [2024-11-28 08:26:55.293552] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:13.162 [2024-11-28 08:26:55.293560] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:13.162 [2024-11-28 08:26:55.293566] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0 00:28:13.162 [2024-11-28 08:26:55.293581] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:13.162 qpair failed and we were unable to recover it. 
00:28:13.162 [2024-11-28 08:26:55.303503] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:13.162 [2024-11-28 08:26:55.303577] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:13.162 [2024-11-28 08:26:55.303595] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:13.162 [2024-11-28 08:26:55.303602] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:13.162 [2024-11-28 08:26:55.303608] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0 00:28:13.162 [2024-11-28 08:26:55.303624] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:13.162 qpair failed and we were unable to recover it. 
00:28:13.162 [2024-11-28 08:26:55.313475] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:13.162 [2024-11-28 08:26:55.313535] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:13.162 [2024-11-28 08:26:55.313553] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:13.162 [2024-11-28 08:26:55.313560] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:13.162 [2024-11-28 08:26:55.313567] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0 00:28:13.162 [2024-11-28 08:26:55.313582] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:13.162 qpair failed and we were unable to recover it. 
00:28:13.162 [2024-11-28 08:26:55.323514] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:13.162 [2024-11-28 08:26:55.323568] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:13.162 [2024-11-28 08:26:55.323582] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:13.162 [2024-11-28 08:26:55.323589] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:13.162 [2024-11-28 08:26:55.323595] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0 00:28:13.162 [2024-11-28 08:26:55.323609] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:13.162 qpair failed and we were unable to recover it. 
00:28:13.162 [2024-11-28 08:26:55.333477] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:13.162 [2024-11-28 08:26:55.333534] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:13.162 [2024-11-28 08:26:55.333550] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:13.162 [2024-11-28 08:26:55.333557] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:13.162 [2024-11-28 08:26:55.333562] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0 00:28:13.162 [2024-11-28 08:26:55.333577] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:13.162 qpair failed and we were unable to recover it. 
00:28:13.162 [2024-11-28 08:26:55.343582] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:13.163 [2024-11-28 08:26:55.343640] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:13.163 [2024-11-28 08:26:55.343654] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:13.163 [2024-11-28 08:26:55.343662] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:13.163 [2024-11-28 08:26:55.343667] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0 00:28:13.163 [2024-11-28 08:26:55.343682] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:13.163 qpair failed and we were unable to recover it. 
00:28:13.163 [2024-11-28 08:26:55.353613] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:13.163 [2024-11-28 08:26:55.353673] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:13.163 [2024-11-28 08:26:55.353687] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:13.163 [2024-11-28 08:26:55.353697] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:13.163 [2024-11-28 08:26:55.353703] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0 00:28:13.163 [2024-11-28 08:26:55.353717] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:13.163 qpair failed and we were unable to recover it. 
00:28:13.163 [2024-11-28 08:26:55.363638] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:13.163 [2024-11-28 08:26:55.363692] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:13.163 [2024-11-28 08:26:55.363706] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:13.163 [2024-11-28 08:26:55.363713] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:13.163 [2024-11-28 08:26:55.363719] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0 00:28:13.163 [2024-11-28 08:26:55.363733] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:13.163 qpair failed and we were unable to recover it. 
00:28:13.163 [2024-11-28 08:26:55.373687] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:13.163 [2024-11-28 08:26:55.373759] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:13.163 [2024-11-28 08:26:55.373774] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:13.163 [2024-11-28 08:26:55.373781] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:13.163 [2024-11-28 08:26:55.373787] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0 00:28:13.163 [2024-11-28 08:26:55.373802] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:13.163 qpair failed and we were unable to recover it. 
00:28:13.163 [2024-11-28 08:26:55.383717] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:13.163 [2024-11-28 08:26:55.383784] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:13.163 [2024-11-28 08:26:55.383798] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:13.163 [2024-11-28 08:26:55.383805] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:13.163 [2024-11-28 08:26:55.383811] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0 00:28:13.163 [2024-11-28 08:26:55.383826] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:13.163 qpair failed and we were unable to recover it. 
00:28:13.163 [2024-11-28 08:26:55.393724] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:13.163 [2024-11-28 08:26:55.393814] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:13.163 [2024-11-28 08:26:55.393830] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:13.163 [2024-11-28 08:26:55.393837] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:13.163 [2024-11-28 08:26:55.393843] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0 00:28:13.163 [2024-11-28 08:26:55.393858] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:13.163 qpair failed and we were unable to recover it. 
00:28:13.163 [2024-11-28 08:26:55.403747] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:13.163 [2024-11-28 08:26:55.403803] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:13.163 [2024-11-28 08:26:55.403819] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:13.163 [2024-11-28 08:26:55.403826] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:13.163 [2024-11-28 08:26:55.403832] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0 00:28:13.163 [2024-11-28 08:26:55.403846] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:13.163 qpair failed and we were unable to recover it. 
00:28:13.163 [2024-11-28 08:26:55.413769] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:13.163 [2024-11-28 08:26:55.413838] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:13.163 [2024-11-28 08:26:55.413853] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:13.163 [2024-11-28 08:26:55.413860] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:13.163 [2024-11-28 08:26:55.413866] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0 00:28:13.163 [2024-11-28 08:26:55.413880] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:13.163 qpair failed and we were unable to recover it. 
00:28:13.163 [2024-11-28 08:26:55.423840] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:13.163 [2024-11-28 08:26:55.423945] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:13.163 [2024-11-28 08:26:55.423964] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:13.163 [2024-11-28 08:26:55.423971] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:13.163 [2024-11-28 08:26:55.423978] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0 00:28:13.163 [2024-11-28 08:26:55.423992] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:13.163 qpair failed and we were unable to recover it. 
00:28:13.424 [2024-11-28 08:26:55.433802] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:13.424 [2024-11-28 08:26:55.433860] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:13.424 [2024-11-28 08:26:55.433875] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:13.424 [2024-11-28 08:26:55.433882] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:13.424 [2024-11-28 08:26:55.433888] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0 00:28:13.424 [2024-11-28 08:26:55.433902] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:13.424 qpair failed and we were unable to recover it. 
00:28:13.424 [2024-11-28 08:26:55.443917] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:13.424 [2024-11-28 08:26:55.444024] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:13.424 [2024-11-28 08:26:55.444043] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:13.424 [2024-11-28 08:26:55.444050] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:13.424 [2024-11-28 08:26:55.444056] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0 00:28:13.424 [2024-11-28 08:26:55.444071] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:13.424 qpair failed and we were unable to recover it. 
00:28:13.424 [2024-11-28 08:26:55.453884] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:13.424 [2024-11-28 08:26:55.453944] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:13.424 [2024-11-28 08:26:55.453963] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:13.424 [2024-11-28 08:26:55.453970] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:13.424 [2024-11-28 08:26:55.453976] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0 00:28:13.424 [2024-11-28 08:26:55.453992] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:13.424 qpair failed and we were unable to recover it. 
00:28:13.424 [2024-11-28 08:26:55.463966] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:13.424 [2024-11-28 08:26:55.464063] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:13.424 [2024-11-28 08:26:55.464078] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:13.424 [2024-11-28 08:26:55.464085] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:13.424 [2024-11-28 08:26:55.464092] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0 00:28:13.424 [2024-11-28 08:26:55.464106] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:13.424 qpair failed and we were unable to recover it. 
00:28:13.424 [2024-11-28 08:26:55.473937] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:13.424 [2024-11-28 08:26:55.473999] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:13.424 [2024-11-28 08:26:55.474013] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:13.424 [2024-11-28 08:26:55.474020] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:13.424 [2024-11-28 08:26:55.474026] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0 00:28:13.424 [2024-11-28 08:26:55.474040] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:13.424 qpair failed and we were unable to recover it. 
00:28:13.424 [2024-11-28 08:26:55.483974] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:13.424 [2024-11-28 08:26:55.484033] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:13.424 [2024-11-28 08:26:55.484047] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:13.424 [2024-11-28 08:26:55.484057] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:13.424 [2024-11-28 08:26:55.484063] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0 00:28:13.424 [2024-11-28 08:26:55.484077] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:13.424 qpair failed and we were unable to recover it. 
00:28:13.424 [2024-11-28 08:26:55.494032] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:13.424 [2024-11-28 08:26:55.494087] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:13.424 [2024-11-28 08:26:55.494102] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:13.424 [2024-11-28 08:26:55.494109] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:13.424 [2024-11-28 08:26:55.494114] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0 00:28:13.424 [2024-11-28 08:26:55.494129] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:13.424 qpair failed and we were unable to recover it. 
00:28:13.424 [2024-11-28 08:26:55.504033] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:13.424 [2024-11-28 08:26:55.504091] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:13.424 [2024-11-28 08:26:55.504106] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:13.424 [2024-11-28 08:26:55.504112] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:13.424 [2024-11-28 08:26:55.504118] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0 00:28:13.424 [2024-11-28 08:26:55.504132] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:13.424 qpair failed and we were unable to recover it. 
00:28:13.424 [2024-11-28 08:26:55.514065] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:13.424 [2024-11-28 08:26:55.514121] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:13.424 [2024-11-28 08:26:55.514135] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:13.424 [2024-11-28 08:26:55.514142] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:13.424 [2024-11-28 08:26:55.514148] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0 00:28:13.424 [2024-11-28 08:26:55.514162] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:13.424 qpair failed and we were unable to recover it. 
00:28:13.424 [2024-11-28 08:26:55.524078] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:13.424 [2024-11-28 08:26:55.524131] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:13.424 [2024-11-28 08:26:55.524145] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:13.424 [2024-11-28 08:26:55.524151] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:13.424 [2024-11-28 08:26:55.524157] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0 00:28:13.424 [2024-11-28 08:26:55.524171] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:13.424 qpair failed and we were unable to recover it. 
00:28:13.424 [2024-11-28 08:26:55.534098] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:13.424 [2024-11-28 08:26:55.534170] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:13.424 [2024-11-28 08:26:55.534185] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:13.424 [2024-11-28 08:26:55.534192] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:13.424 [2024-11-28 08:26:55.534199] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0 00:28:13.424 [2024-11-28 08:26:55.534214] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:13.424 qpair failed and we were unable to recover it. 
00:28:13.424 [2024-11-28 08:26:55.544202] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:13.425 [2024-11-28 08:26:55.544306] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:13.425 [2024-11-28 08:26:55.544322] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:13.425 [2024-11-28 08:26:55.544329] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:13.425 [2024-11-28 08:26:55.544335] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0 00:28:13.425 [2024-11-28 08:26:55.544350] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:13.425 qpair failed and we were unable to recover it. 
00:28:13.425 [2024-11-28 08:26:55.554174] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:13.425 [2024-11-28 08:26:55.554234] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:13.425 [2024-11-28 08:26:55.554248] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:13.425 [2024-11-28 08:26:55.554255] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:13.425 [2024-11-28 08:26:55.554262] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0 00:28:13.425 [2024-11-28 08:26:55.554276] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:13.425 qpair failed and we were unable to recover it. 
00:28:13.425 [2024-11-28 08:26:55.564183] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:13.425 [2024-11-28 08:26:55.564243] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:13.425 [2024-11-28 08:26:55.564257] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:13.425 [2024-11-28 08:26:55.564264] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:13.425 [2024-11-28 08:26:55.564270] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0 00:28:13.425 [2024-11-28 08:26:55.564285] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:13.425 qpair failed and we were unable to recover it. 
00:28:13.425 [2024-11-28 08:26:55.574147] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:13.425 [2024-11-28 08:26:55.574204] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:13.425 [2024-11-28 08:26:55.574221] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:13.425 [2024-11-28 08:26:55.574228] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:13.425 [2024-11-28 08:26:55.574235] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0 00:28:13.425 [2024-11-28 08:26:55.574250] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:13.425 qpair failed and we were unable to recover it. 
00:28:13.425 [2024-11-28 08:26:55.584261] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:13.425 [2024-11-28 08:26:55.584318] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:13.425 [2024-11-28 08:26:55.584332] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:13.425 [2024-11-28 08:26:55.584339] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:13.425 [2024-11-28 08:26:55.584345] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0 00:28:13.425 [2024-11-28 08:26:55.584358] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:13.425 qpair failed and we were unable to recover it. 
00:28:13.425 [2024-11-28 08:26:55.594332] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:13.425 [2024-11-28 08:26:55.594396] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:13.425 [2024-11-28 08:26:55.594410] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:13.425 [2024-11-28 08:26:55.594416] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:13.425 [2024-11-28 08:26:55.594422] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0 00:28:13.425 [2024-11-28 08:26:55.594436] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:13.425 qpair failed and we were unable to recover it. 
00:28:13.425 [2024-11-28 08:26:55.604225] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:13.425 [2024-11-28 08:26:55.604281] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:13.425 [2024-11-28 08:26:55.604295] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:13.425 [2024-11-28 08:26:55.604302] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:13.425 [2024-11-28 08:26:55.604308] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0 00:28:13.425 [2024-11-28 08:26:55.604323] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:13.425 qpair failed and we were unable to recover it. 
00:28:13.425 [2024-11-28 08:26:55.614315] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:13.425 [2024-11-28 08:26:55.614372] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:13.425 [2024-11-28 08:26:55.614386] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:13.425 [2024-11-28 08:26:55.614396] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:13.425 [2024-11-28 08:26:55.614402] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0 00:28:13.425 [2024-11-28 08:26:55.614417] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:13.425 qpair failed and we were unable to recover it. 
00:28:13.425 [2024-11-28 08:26:55.624429] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:13.425 [2024-11-28 08:26:55.624507] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:13.425 [2024-11-28 08:26:55.624522] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:13.425 [2024-11-28 08:26:55.624529] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:13.425 [2024-11-28 08:26:55.624535] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0 00:28:13.425 [2024-11-28 08:26:55.624549] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:13.425 qpair failed and we were unable to recover it. 
00:28:13.425 [2024-11-28 08:26:55.634462] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:13.425 [2024-11-28 08:26:55.634536] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:13.425 [2024-11-28 08:26:55.634550] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:13.425 [2024-11-28 08:26:55.634558] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:13.425 [2024-11-28 08:26:55.634564] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0 00:28:13.425 [2024-11-28 08:26:55.634579] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:13.425 qpair failed and we were unable to recover it. 
00:28:13.425 [2024-11-28 08:26:55.644338] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:13.425 [2024-11-28 08:26:55.644395] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:13.425 [2024-11-28 08:26:55.644410] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:13.425 [2024-11-28 08:26:55.644417] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:13.425 [2024-11-28 08:26:55.644423] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0 00:28:13.425 [2024-11-28 08:26:55.644438] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:13.425 qpair failed and we were unable to recover it. 
00:28:13.425 [2024-11-28 08:26:55.654486] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:13.425 [2024-11-28 08:26:55.654545] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:13.425 [2024-11-28 08:26:55.654559] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:13.425 [2024-11-28 08:26:55.654566] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:13.425 [2024-11-28 08:26:55.654572] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0 00:28:13.425 [2024-11-28 08:26:55.654587] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:13.425 qpair failed and we were unable to recover it. 
00:28:13.425 [2024-11-28 08:26:55.664516] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:13.425 [2024-11-28 08:26:55.664596] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:13.425 [2024-11-28 08:26:55.664612] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:13.425 [2024-11-28 08:26:55.664619] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:13.425 [2024-11-28 08:26:55.664625] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0 00:28:13.425 [2024-11-28 08:26:55.664640] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:13.425 qpair failed and we were unable to recover it. 
00:28:13.426 [2024-11-28 08:26:55.674487] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:13.426 [2024-11-28 08:26:55.674543] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:13.426 [2024-11-28 08:26:55.674557] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:13.426 [2024-11-28 08:26:55.674564] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:13.426 [2024-11-28 08:26:55.674570] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0 00:28:13.426 [2024-11-28 08:26:55.674585] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:13.426 qpair failed and we were unable to recover it. 
00:28:13.426 [2024-11-28 08:26:55.684513] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:13.426 [2024-11-28 08:26:55.684568] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:13.426 [2024-11-28 08:26:55.684581] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:13.426 [2024-11-28 08:26:55.684589] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:13.426 [2024-11-28 08:26:55.684595] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0 00:28:13.426 [2024-11-28 08:26:55.684609] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:13.426 qpair failed and we were unable to recover it. 
00:28:13.687 [2024-11-28 08:26:55.694547] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:13.687 [2024-11-28 08:26:55.694600] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:13.688 [2024-11-28 08:26:55.694614] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:13.688 [2024-11-28 08:26:55.694621] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:13.688 [2024-11-28 08:26:55.694627] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0 00:28:13.688 [2024-11-28 08:26:55.694641] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:13.688 qpair failed and we were unable to recover it. 
00:28:13.688 [2024-11-28 08:26:55.704620] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:13.688 [2024-11-28 08:26:55.704680] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:13.688 [2024-11-28 08:26:55.704698] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:13.688 [2024-11-28 08:26:55.704704] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:13.688 [2024-11-28 08:26:55.704710] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0 00:28:13.688 [2024-11-28 08:26:55.704725] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:13.688 qpair failed and we were unable to recover it. 
00:28:13.688 [2024-11-28 08:26:55.714584] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:13.688 [2024-11-28 08:26:55.714644] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:13.688 [2024-11-28 08:26:55.714658] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:13.688 [2024-11-28 08:26:55.714666] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:13.688 [2024-11-28 08:26:55.714671] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0 00:28:13.688 [2024-11-28 08:26:55.714686] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:13.688 qpair failed and we were unable to recover it. 
00:28:13.688 [2024-11-28 08:26:55.724649] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:13.688 [2024-11-28 08:26:55.724707] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:13.688 [2024-11-28 08:26:55.724721] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:13.688 [2024-11-28 08:26:55.724728] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:13.688 [2024-11-28 08:26:55.724734] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0 00:28:13.688 [2024-11-28 08:26:55.724749] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:13.688 qpair failed and we were unable to recover it. 
00:28:13.688 [2024-11-28 08:26:55.734658] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:13.688 [2024-11-28 08:26:55.734717] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:13.688 [2024-11-28 08:26:55.734733] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:13.688 [2024-11-28 08:26:55.734740] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:13.688 [2024-11-28 08:26:55.734746] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0 00:28:13.688 [2024-11-28 08:26:55.734761] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:13.688 qpair failed and we were unable to recover it. 
00:28:13.688 [2024-11-28 08:26:55.744750] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:13.688 [2024-11-28 08:26:55.744856] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:13.688 [2024-11-28 08:26:55.744872] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:13.688 [2024-11-28 08:26:55.744881] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:13.688 [2024-11-28 08:26:55.744888] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0 00:28:13.688 [2024-11-28 08:26:55.744903] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:13.688 qpair failed and we were unable to recover it. 
00:28:13.688 [2024-11-28 08:26:55.754729] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:13.688 [2024-11-28 08:26:55.754790] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:13.688 [2024-11-28 08:26:55.754806] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:13.688 [2024-11-28 08:26:55.754812] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:13.688 [2024-11-28 08:26:55.754818] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0 00:28:13.688 [2024-11-28 08:26:55.754834] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:13.688 qpair failed and we were unable to recover it. 
00:28:13.688 [2024-11-28 08:26:55.764793] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:13.688 [2024-11-28 08:26:55.764850] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:13.688 [2024-11-28 08:26:55.764865] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:13.688 [2024-11-28 08:26:55.764872] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:13.688 [2024-11-28 08:26:55.764878] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0 00:28:13.688 [2024-11-28 08:26:55.764893] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:13.688 qpair failed and we were unable to recover it. 
00:28:13.688 [2024-11-28 08:26:55.774762] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:13.688 [2024-11-28 08:26:55.774822] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:13.688 [2024-11-28 08:26:55.774836] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:13.688 [2024-11-28 08:26:55.774843] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:13.688 [2024-11-28 08:26:55.774849] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0 00:28:13.688 [2024-11-28 08:26:55.774864] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:13.688 qpair failed and we were unable to recover it. 
00:28:13.688 [2024-11-28 08:26:55.784837] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:13.688 [2024-11-28 08:26:55.784909] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:13.688 [2024-11-28 08:26:55.784922] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:13.688 [2024-11-28 08:26:55.784929] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:13.688 [2024-11-28 08:26:55.784936] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0 00:28:13.688 [2024-11-28 08:26:55.784954] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:13.688 qpair failed and we were unable to recover it. 
00:28:13.688 [2024-11-28 08:26:55.794827] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:13.688 [2024-11-28 08:26:55.794886] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:13.688 [2024-11-28 08:26:55.794901] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:13.688 [2024-11-28 08:26:55.794908] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:13.688 [2024-11-28 08:26:55.794914] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0 00:28:13.688 [2024-11-28 08:26:55.794928] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:13.688 qpair failed and we were unable to recover it. 
00:28:13.688 [2024-11-28 08:26:55.804848] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:13.688 [2024-11-28 08:26:55.804904] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:13.688 [2024-11-28 08:26:55.804918] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:13.688 [2024-11-28 08:26:55.804925] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:13.688 [2024-11-28 08:26:55.804931] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0 00:28:13.688 [2024-11-28 08:26:55.804945] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:13.688 qpair failed and we were unable to recover it. 
00:28:13.688 [2024-11-28 08:26:55.814867] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:13.688 [2024-11-28 08:26:55.814923] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:13.688 [2024-11-28 08:26:55.814938] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:13.688 [2024-11-28 08:26:55.814945] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:13.688 [2024-11-28 08:26:55.814955] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0 00:28:13.688 [2024-11-28 08:26:55.814969] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:13.688 qpair failed and we were unable to recover it. 
00:28:13.689 [2024-11-28 08:26:55.824958] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:13.689 [2024-11-28 08:26:55.825061] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:13.689 [2024-11-28 08:26:55.825076] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:13.689 [2024-11-28 08:26:55.825083] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:13.689 [2024-11-28 08:26:55.825089] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0 00:28:13.689 [2024-11-28 08:26:55.825104] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:13.689 qpair failed and we were unable to recover it. 
00:28:13.689 [2024-11-28 08:26:55.834954] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:13.689 [2024-11-28 08:26:55.835042] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:13.689 [2024-11-28 08:26:55.835062] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:13.689 [2024-11-28 08:26:55.835069] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:13.689 [2024-11-28 08:26:55.835078] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0 00:28:13.689 [2024-11-28 08:26:55.835093] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:13.689 qpair failed and we were unable to recover it. 
00:28:13.689 [2024-11-28 08:26:55.844992] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:13.689 [2024-11-28 08:26:55.845053] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:13.689 [2024-11-28 08:26:55.845069] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:13.689 [2024-11-28 08:26:55.845075] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:13.689 [2024-11-28 08:26:55.845081] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0 00:28:13.689 [2024-11-28 08:26:55.845097] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:13.689 qpair failed and we were unable to recover it. 
00:28:13.689 [2024-11-28 08:26:55.855019] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:13.689 [2024-11-28 08:26:55.855116] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:13.689 [2024-11-28 08:26:55.855131] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:13.689 [2024-11-28 08:26:55.855139] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:13.689 [2024-11-28 08:26:55.855145] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0 00:28:13.689 [2024-11-28 08:26:55.855161] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:13.689 qpair failed and we were unable to recover it. 
00:28:13.689 [2024-11-28 08:26:55.865053] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:13.689 [2024-11-28 08:26:55.865123] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:13.689 [2024-11-28 08:26:55.865137] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:13.689 [2024-11-28 08:26:55.865145] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:13.689 [2024-11-28 08:26:55.865151] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0 00:28:13.689 [2024-11-28 08:26:55.865166] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:13.689 qpair failed and we were unable to recover it. 
00:28:13.689 [2024-11-28 08:26:55.875065] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:13.689 [2024-11-28 08:26:55.875140] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:13.689 [2024-11-28 08:26:55.875155] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:13.689 [2024-11-28 08:26:55.875166] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:13.689 [2024-11-28 08:26:55.875173] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0 00:28:13.689 [2024-11-28 08:26:55.875187] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:13.689 qpair failed and we were unable to recover it. 
00:28:13.689 [2024-11-28 08:26:55.885099] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:13.689 [2024-11-28 08:26:55.885155] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:13.689 [2024-11-28 08:26:55.885169] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:13.689 [2024-11-28 08:26:55.885176] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:13.689 [2024-11-28 08:26:55.885182] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0 00:28:13.689 [2024-11-28 08:26:55.885196] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:13.689 qpair failed and we were unable to recover it. 
00:28:13.689 [2024-11-28 08:26:55.895122] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:13.689 [2024-11-28 08:26:55.895212] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:13.689 [2024-11-28 08:26:55.895227] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:13.689 [2024-11-28 08:26:55.895234] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:13.689 [2024-11-28 08:26:55.895240] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0 00:28:13.689 [2024-11-28 08:26:55.895255] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:13.689 qpair failed and we were unable to recover it. 
00:28:13.689 [2024-11-28 08:26:55.905138] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:13.689 [2024-11-28 08:26:55.905195] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:13.689 [2024-11-28 08:26:55.905209] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:13.689 [2024-11-28 08:26:55.905216] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:13.689 [2024-11-28 08:26:55.905222] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0 00:28:13.689 [2024-11-28 08:26:55.905237] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:13.689 qpair failed and we were unable to recover it. 
00:28:13.689 [2024-11-28 08:26:55.915113] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:13.689 [2024-11-28 08:26:55.915172] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:13.689 [2024-11-28 08:26:55.915187] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:13.689 [2024-11-28 08:26:55.915195] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:13.689 [2024-11-28 08:26:55.915201] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0 00:28:13.689 [2024-11-28 08:26:55.915215] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:13.689 qpair failed and we were unable to recover it. 
00:28:13.689 [2024-11-28 08:26:55.925137] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:13.689 [2024-11-28 08:26:55.925197] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:13.689 [2024-11-28 08:26:55.925212] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:13.689 [2024-11-28 08:26:55.925219] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:13.689 [2024-11-28 08:26:55.925226] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0 00:28:13.689 [2024-11-28 08:26:55.925240] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:13.689 qpair failed and we were unable to recover it. 
00:28:13.689 [2024-11-28 08:26:55.935165] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:13.689 [2024-11-28 08:26:55.935215] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:13.689 [2024-11-28 08:26:55.935230] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:13.689 [2024-11-28 08:26:55.935237] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:13.689 [2024-11-28 08:26:55.935243] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0 00:28:13.689 [2024-11-28 08:26:55.935257] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:13.689 qpair failed and we were unable to recover it. 
00:28:13.689 [2024-11-28 08:26:55.945263] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:13.689 [2024-11-28 08:26:55.945325] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:13.689 [2024-11-28 08:26:55.945340] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:13.689 [2024-11-28 08:26:55.945347] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:13.689 [2024-11-28 08:26:55.945352] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0 00:28:13.689 [2024-11-28 08:26:55.945367] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:13.689 qpair failed and we were unable to recover it. 
00:28:13.948 [2024-11-28 08:26:55.955278] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:13.948 [2024-11-28 08:26:55.955334] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:13.948 [2024-11-28 08:26:55.955348] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:13.948 [2024-11-28 08:26:55.955355] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:13.948 [2024-11-28 08:26:55.955361] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0 00:28:13.948 [2024-11-28 08:26:55.955375] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:13.948 qpair failed and we were unable to recover it. 
00:28:13.948 [2024-11-28 08:26:55.965263] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:13.948 [2024-11-28 08:26:55.965347] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:13.948 [2024-11-28 08:26:55.965366] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:13.948 [2024-11-28 08:26:55.965373] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:13.948 [2024-11-28 08:26:55.965379] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0 00:28:13.948 [2024-11-28 08:26:55.965393] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:13.948 qpair failed and we were unable to recover it. 
00:28:13.948 [2024-11-28 08:26:55.975285] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:13.948 [2024-11-28 08:26:55.975342] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:13.948 [2024-11-28 08:26:55.975356] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:13.948 [2024-11-28 08:26:55.975362] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:13.948 [2024-11-28 08:26:55.975368] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0 00:28:13.948 [2024-11-28 08:26:55.975382] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:13.948 qpair failed and we were unable to recover it. 
00:28:13.948 [2024-11-28 08:26:55.985361] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:13.949 [2024-11-28 08:26:55.985416] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:13.949 [2024-11-28 08:26:55.985430] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:13.949 [2024-11-28 08:26:55.985437] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:13.949 [2024-11-28 08:26:55.985443] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0 00:28:13.949 [2024-11-28 08:26:55.985457] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:13.949 qpair failed and we were unable to recover it. 
00:28:13.949 [2024-11-28 08:26:55.995383] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:13.949 [2024-11-28 08:26:55.995441] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:13.949 [2024-11-28 08:26:55.995455] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:13.949 [2024-11-28 08:26:55.995462] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:13.949 [2024-11-28 08:26:55.995468] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0 00:28:13.949 [2024-11-28 08:26:55.995482] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:13.949 qpair failed and we were unable to recover it. 
00:28:13.949 [2024-11-28 08:26:56.005437] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:13.949 [2024-11-28 08:26:56.005495] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:13.949 [2024-11-28 08:26:56.005510] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:13.949 [2024-11-28 08:26:56.005520] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:13.949 [2024-11-28 08:26:56.005526] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0 00:28:13.949 [2024-11-28 08:26:56.005542] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:13.949 qpair failed and we were unable to recover it. 
00:28:13.949 [2024-11-28 08:26:56.015396] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:13.949 [2024-11-28 08:26:56.015450] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:13.949 [2024-11-28 08:26:56.015465] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:13.949 [2024-11-28 08:26:56.015472] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:13.949 [2024-11-28 08:26:56.015478] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0 00:28:13.949 [2024-11-28 08:26:56.015492] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:13.949 qpair failed and we were unable to recover it. 
00:28:13.949 [2024-11-28 08:26:56.025491] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:13.949 [2024-11-28 08:26:56.025552] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:13.949 [2024-11-28 08:26:56.025567] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:13.949 [2024-11-28 08:26:56.025573] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:13.949 [2024-11-28 08:26:56.025579] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x991be0 00:28:13.949 [2024-11-28 08:26:56.025593] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:13.949 qpair failed and we were unable to recover it. 
00:28:13.949 [2024-11-28 08:26:56.035513] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:13.949 [2024-11-28 08:26:56.035579] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:13.949 [2024-11-28 08:26:56.035603] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:13.949 [2024-11-28 08:26:56.035614] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:13.949 [2024-11-28 08:26:56.035622] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6c30000b90 00:28:13.949 [2024-11-28 08:26:56.035644] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:28:13.949 qpair failed and we were unable to recover it. 
00:28:13.949 [2024-11-28 08:26:56.045478] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:13.949 [2024-11-28 08:26:56.045536] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:13.949 [2024-11-28 08:26:56.045551] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:13.949 [2024-11-28 08:26:56.045559] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:13.949 [2024-11-28 08:26:56.045564] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6c30000b90 00:28:13.949 [2024-11-28 08:26:56.045581] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:28:13.949 qpair failed and we were unable to recover it. 00:28:13.949 Controller properly reset. 00:28:13.949 Initializing NVMe Controllers 00:28:13.949 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:13.949 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:13.949 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:28:13.949 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:28:13.949 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:28:13.949 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:28:13.949 Initialization complete. Launching workers. 
00:28:13.949 Starting thread on core 1 00:28:13.949 Starting thread on core 2 00:28:13.949 Starting thread on core 3 00:28:13.949 Starting thread on core 0 00:28:13.949 08:26:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:28:13.949 00:28:13.949 real 0m11.337s 00:28:13.949 user 0m22.034s 00:28:13.949 sys 0m4.561s 00:28:13.949 08:26:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:13.949 08:26:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:13.949 ************************************ 00:28:13.949 END TEST nvmf_target_disconnect_tc2 00:28:13.949 ************************************ 00:28:13.949 08:26:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']' 00:28:13.949 08:26:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:28:13.949 08:26:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:28:13.949 08:26:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:13.949 08:26:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@121 -- # sync 00:28:13.949 08:26:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:13.949 08:26:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set +e 00:28:13.949 08:26:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:13.949 08:26:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:13.949 rmmod nvme_tcp 00:28:13.949 rmmod nvme_fabrics 00:28:13.949 rmmod nvme_keyring 00:28:14.208 08:26:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r 
nvme-fabrics 00:28:14.208 08:26:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@128 -- # set -e 00:28:14.208 08:26:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@129 -- # return 0 00:28:14.208 08:26:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@517 -- # '[' -n 1520311 ']' 00:28:14.208 08:26:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@518 -- # killprocess 1520311 00:28:14.208 08:26:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # '[' -z 1520311 ']' 00:28:14.208 08:26:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@958 -- # kill -0 1520311 00:28:14.208 08:26:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@959 -- # uname 00:28:14.208 08:26:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:14.208 08:26:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1520311 00:28:14.208 08:26:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # process_name=reactor_4 00:28:14.208 08:26:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@964 -- # '[' reactor_4 = sudo ']' 00:28:14.208 08:26:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1520311' 00:28:14.208 killing process with pid 1520311 00:28:14.208 08:26:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@973 -- # kill 1520311 00:28:14.208 08:26:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@978 -- # wait 1520311 00:28:14.208 08:26:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:14.208 08:26:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:14.208 08:26:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:14.208 08:26:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@297 -- # iptr 00:28:14.467 08:26:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # iptables-save 00:28:14.467 08:26:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:14.467 08:26:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # iptables-restore 00:28:14.468 08:26:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:14.468 08:26:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:14.468 08:26:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:14.468 08:26:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:14.468 08:26:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:16.485 08:26:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:16.485 00:28:16.485 real 0m19.984s 00:28:16.485 user 0m49.316s 00:28:16.485 sys 0m9.460s 00:28:16.485 08:26:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:16.485 08:26:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:28:16.485 ************************************ 00:28:16.485 END TEST nvmf_target_disconnect 00:28:16.485 ************************************ 00:28:16.485 08:26:58 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:28:16.486 00:28:16.486 real 5m38.316s 00:28:16.486 user 10m20.888s 00:28:16.486 sys 1m51.032s 00:28:16.486 08:26:58 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:16.486 08:26:58 
nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:28:16.486 ************************************ 00:28:16.486 END TEST nvmf_host 00:28:16.486 ************************************ 00:28:16.486 08:26:58 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ tcp = \t\c\p ]] 00:28:16.486 08:26:58 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ 0 -eq 0 ]] 00:28:16.486 08:26:58 nvmf_tcp -- nvmf/nvmf.sh@20 -- # run_test nvmf_target_core_interrupt_mode /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:28:16.486 08:26:58 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:28:16.486 08:26:58 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:16.486 08:26:58 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:16.486 ************************************ 00:28:16.486 START TEST nvmf_target_core_interrupt_mode 00:28:16.486 ************************************ 00:28:16.486 08:26:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:28:16.486 * Looking for test storage... 
00:28:16.486 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:28:16.486 08:26:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:28:16.486 08:26:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1693 -- # lcov --version 00:28:16.486 08:26:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:28:16.769 08:26:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:28:16.769 08:26:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:16.769 08:26:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:16.769 08:26:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:16.769 08:26:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # IFS=.-: 00:28:16.769 08:26:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # read -ra ver1 00:28:16.769 08:26:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # IFS=.-: 00:28:16.769 08:26:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # read -ra ver2 00:28:16.769 08:26:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@338 -- # local 'op=<' 00:28:16.769 08:26:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@340 -- # ver1_l=2 00:28:16.769 08:26:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@341 -- # ver2_l=1 00:28:16.769 08:26:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:16.769 08:26:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@344 -- # case "$op" in 00:28:16.769 08:26:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@345 -- # : 1 00:28:16.769 08:26:58 
nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:16.769 08:26:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:28:16.769 08:26:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # decimal 1 00:28:16.769 08:26:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=1 00:28:16.769 08:26:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:16.769 08:26:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 1 00:28:16.769 08:26:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # ver1[v]=1 00:28:16.769 08:26:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # decimal 2 00:28:16.769 08:26:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=2 00:28:16.769 08:26:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:16.769 08:26:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 2 00:28:16.769 08:26:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # ver2[v]=2 00:28:16.769 08:26:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:16.769 08:26:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:16.769 08:26:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # return 0 00:28:16.769 08:26:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:16.769 08:26:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:28:16.769 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:16.769 --rc 
genhtml_branch_coverage=1 00:28:16.769 --rc genhtml_function_coverage=1 00:28:16.769 --rc genhtml_legend=1 00:28:16.769 --rc geninfo_all_blocks=1 00:28:16.769 --rc geninfo_unexecuted_blocks=1 00:28:16.769 00:28:16.769 ' 00:28:16.769 08:26:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:28:16.769 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:16.769 --rc genhtml_branch_coverage=1 00:28:16.769 --rc genhtml_function_coverage=1 00:28:16.769 --rc genhtml_legend=1 00:28:16.769 --rc geninfo_all_blocks=1 00:28:16.769 --rc geninfo_unexecuted_blocks=1 00:28:16.769 00:28:16.769 ' 00:28:16.769 08:26:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:28:16.769 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:16.769 --rc genhtml_branch_coverage=1 00:28:16.769 --rc genhtml_function_coverage=1 00:28:16.769 --rc genhtml_legend=1 00:28:16.769 --rc geninfo_all_blocks=1 00:28:16.769 --rc geninfo_unexecuted_blocks=1 00:28:16.769 00:28:16.769 ' 00:28:16.769 08:26:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:28:16.769 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:16.769 --rc genhtml_branch_coverage=1 00:28:16.769 --rc genhtml_function_coverage=1 00:28:16.769 --rc genhtml_legend=1 00:28:16.769 --rc geninfo_all_blocks=1 00:28:16.769 --rc geninfo_unexecuted_blocks=1 00:28:16.769 00:28:16.769 ' 00:28:16.769 08:26:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:28:16.769 08:26:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:28:16.769 08:26:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:16.769 08:26:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # uname -s 00:28:16.769 08:26:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:16.769 08:26:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:16.769 08:26:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:16.769 08:26:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:16.769 08:26:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:16.769 08:26:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:16.769 08:26:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:16.769 08:26:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:16.769 08:26:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:16.769 08:26:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:16.769 08:26:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:28:16.769 08:26:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:28:16.769 08:26:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:16.769 08:26:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:16.769 
08:26:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:16.770 08:26:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:16.770 08:26:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:16.770 08:26:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@15 -- # shopt -s extglob 00:28:16.770 08:26:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:16.770 08:26:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:16.770 08:26:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:16.770 08:26:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:16.770 08:26:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:16.770 08:26:58 
nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:16.770 08:26:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@5 -- # export PATH 00:28:16.770 08:26:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:16.770 08:26:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@51 -- # : 0 00:28:16.770 08:26:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:16.770 08:26:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:16.770 08:26:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:16.770 08:26:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:16.770 08:26:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:16.770 08:26:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:28:16.770 
08:26:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:28:16.770 08:26:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:16.770 08:26:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:16.770 08:26:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:16.770 08:26:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:28:16.770 08:26:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:28:16.770 08:26:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:28:16.770 08:26:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:28:16.770 08:26:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:28:16.770 08:26:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:16.770 08:26:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:28:16.770 ************************************ 00:28:16.770 START TEST nvmf_abort 00:28:16.770 ************************************ 00:28:16.770 08:26:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:28:16.770 * Looking for test storage... 
00:28:16.770 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:28:16.770 08:26:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:28:16.770 08:26:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1693 -- # lcov --version 00:28:16.770 08:26:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:28:16.770 08:26:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:28:16.770 08:26:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:16.770 08:26:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:16.770 08:26:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:16.770 08:26:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:28:16.770 08:26:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:28:16.770 08:26:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:28:16.770 08:26:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:28:16.770 08:26:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:28:16.770 08:26:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:28:16.770 08:26:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:28:16.770 08:26:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:16.770 08:26:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
scripts/common.sh@344 -- # case "$op" in 00:28:16.770 08:26:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:28:16.770 08:26:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:16.770 08:26:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:28:16.770 08:26:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:28:16.770 08:26:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:28:16.770 08:26:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:16.770 08:26:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:28:16.770 08:26:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:28:16.770 08:26:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:28:16.770 08:26:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:28:16.770 08:26:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:16.770 08:26:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:28:16.770 08:26:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:28:16.770 08:26:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:16.770 08:26:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:16.770 08:26:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:28:16.770 08:26:59 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:16.770 08:26:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:28:16.770 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:16.770 --rc genhtml_branch_coverage=1 00:28:16.770 --rc genhtml_function_coverage=1 00:28:16.770 --rc genhtml_legend=1 00:28:16.770 --rc geninfo_all_blocks=1 00:28:16.770 --rc geninfo_unexecuted_blocks=1 00:28:16.770 00:28:16.770 ' 00:28:16.770 08:26:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:28:16.770 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:16.770 --rc genhtml_branch_coverage=1 00:28:16.770 --rc genhtml_function_coverage=1 00:28:16.770 --rc genhtml_legend=1 00:28:16.770 --rc geninfo_all_blocks=1 00:28:16.770 --rc geninfo_unexecuted_blocks=1 00:28:16.770 00:28:16.770 ' 00:28:16.770 08:26:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:28:16.770 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:16.770 --rc genhtml_branch_coverage=1 00:28:16.770 --rc genhtml_function_coverage=1 00:28:16.770 --rc genhtml_legend=1 00:28:16.770 --rc geninfo_all_blocks=1 00:28:16.770 --rc geninfo_unexecuted_blocks=1 00:28:16.770 00:28:16.770 ' 00:28:16.770 08:26:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:28:16.770 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:16.770 --rc genhtml_branch_coverage=1 00:28:16.770 --rc genhtml_function_coverage=1 00:28:16.770 --rc genhtml_legend=1 00:28:16.770 --rc geninfo_all_blocks=1 00:28:16.770 --rc geninfo_unexecuted_blocks=1 00:28:16.770 00:28:16.770 ' 00:28:16.770 08:26:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:16.770 08:26:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:28:17.055 08:26:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:17.055 08:26:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:17.055 08:26:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:17.055 08:26:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:17.055 08:26:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:17.055 08:26:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:17.055 08:26:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:17.055 08:26:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:17.055 08:26:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:17.055 08:26:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:17.055 08:26:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:28:17.055 08:26:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:28:17.055 08:26:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:17.055 08:26:59 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:17.055 08:26:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:17.055 08:26:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:17.055 08:26:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:17.055 08:26:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:28:17.055 08:26:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:17.055 08:26:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:17.055 08:26:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:17.055 08:26:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:17.055 08:26:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:17.055 08:26:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:17.055 08:26:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:28:17.055 08:26:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:17.055 08:26:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:28:17.055 08:26:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:17.055 08:26:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:17.055 08:26:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:17.055 08:26:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:17.055 08:26:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:17.055 08:26:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:28:17.055 08:26:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:28:17.055 08:26:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:17.055 08:26:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:17.055 08:26:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:17.055 08:26:59 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:28:17.055 08:26:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:28:17.055 08:26:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:28:17.055 08:26:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:17.055 08:26:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:17.055 08:26:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:17.055 08:26:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:17.055 08:26:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:17.055 08:26:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:17.055 08:26:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:17.055 08:26:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:17.055 08:26:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:17.055 08:26:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:17.055 08:26:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:28:17.055 08:26:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:28:22.332 08:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 
00:28:22.332 08:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:28:22.332 08:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:22.332 08:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:22.332 08:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:22.332 08:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:22.332 08:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:22.332 08:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:28:22.332 08:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:22.332 08:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:28:22.332 08:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:28:22.332 08:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:28:22.332 08:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:28:22.332 08:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:28:22.332 08:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:28:22.332 08:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:22.332 08:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:22.332 08:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@328 
-- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:22.332 08:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:22.332 08:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:22.332 08:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:22.332 08:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:22.332 08:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:22.332 08:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:22.332 08:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:22.332 08:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:22.332 08:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:22.332 08:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:22.333 08:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:22.333 08:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:22.333 08:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:22.333 08:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:22.333 08:27:04 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:22.333 08:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:22.333 08:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:28:22.333 Found 0000:86:00.0 (0x8086 - 0x159b) 00:28:22.333 08:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:22.333 08:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:22.333 08:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:22.333 08:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:22.333 08:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:22.333 08:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:22.333 08:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:28:22.333 Found 0000:86:00.1 (0x8086 - 0x159b) 00:28:22.333 08:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:22.333 08:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:22.333 08:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:22.333 08:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:22.333 08:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:22.333 
08:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:22.333 08:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:22.333 08:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:22.333 08:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:22.333 08:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:22.333 08:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:22.333 08:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:22.333 08:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:22.333 08:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:22.333 08:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:22.333 08:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:28:22.333 Found net devices under 0000:86:00.0: cvl_0_0 00:28:22.333 08:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:22.333 08:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:22.333 08:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:22.333 08:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 
00:28:22.333 08:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:22.333 08:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:22.333 08:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:22.333 08:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:22.333 08:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:28:22.333 Found net devices under 0000:86:00.1: cvl_0_1 00:28:22.333 08:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:22.333 08:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:22.333 08:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # is_hw=yes 00:28:22.333 08:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:22.333 08:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:22.333 08:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:22.333 08:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:22.333 08:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:22.333 08:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:22.333 08:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:22.333 08:27:04 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:22.333 08:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:22.333 08:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:22.333 08:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:22.333 08:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:22.333 08:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:22.333 08:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:22.333 08:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:22.333 08:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:22.333 08:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:22.333 08:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:22.333 08:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:22.333 08:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:22.333 08:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:22.333 08:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link 
set cvl_0_0 up 00:28:22.333 08:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:22.593 08:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:22.593 08:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:22.593 08:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:22.593 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:22.593 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.440 ms 00:28:22.593 00:28:22.593 --- 10.0.0.2 ping statistics --- 00:28:22.593 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:22.593 rtt min/avg/max/mdev = 0.440/0.440/0.440/0.000 ms 00:28:22.593 08:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:22.593 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:22.593 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.150 ms 00:28:22.593 00:28:22.593 --- 10.0.0.1 ping statistics --- 00:28:22.593 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:22.593 rtt min/avg/max/mdev = 0.150/0.150/0.150/0.000 ms 00:28:22.593 08:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:22.593 08:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@450 -- # return 0 00:28:22.593 08:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:22.593 08:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:22.593 08:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:22.593 08:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:22.593 08:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:22.593 08:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:22.593 08:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:22.593 08:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:28:22.593 08:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:22.593 08:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:22.593 08:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:28:22.593 08:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@509 -- # 
nvmfpid=1525091 00:28:22.593 08:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 1525091 00:28:22.593 08:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:28:22.593 08:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@835 -- # '[' -z 1525091 ']' 00:28:22.593 08:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:22.593 08:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:22.593 08:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:22.593 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:22.593 08:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:22.593 08:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:28:22.593 [2024-11-28 08:27:04.724780] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:28:22.593 [2024-11-28 08:27:04.725852] Starting SPDK v25.01-pre git sha1 27aaaa748 / DPDK 24.03.0 initialization... 
00:28:22.593 [2024-11-28 08:27:04.725897] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:22.593 [2024-11-28 08:27:04.795870] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:28:22.593 [2024-11-28 08:27:04.840973] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:22.593 [2024-11-28 08:27:04.841009] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:22.593 [2024-11-28 08:27:04.841016] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:22.593 [2024-11-28 08:27:04.841022] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:22.593 [2024-11-28 08:27:04.841027] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:22.593 [2024-11-28 08:27:04.842434] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:28:22.593 [2024-11-28 08:27:04.842459] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:28:22.593 [2024-11-28 08:27:04.842460] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:22.853 [2024-11-28 08:27:04.911652] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:28:22.853 [2024-11-28 08:27:04.911666] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:28:22.853 [2024-11-28 08:27:04.911790] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
00:28:22.853 [2024-11-28 08:27:04.911900] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:28:22.853 08:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:22.853 08:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@868 -- # return 0 00:28:22.853 08:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:22.853 08:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:22.853 08:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:28:22.853 08:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:22.853 08:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:28:22.853 08:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:22.853 08:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:28:22.853 [2024-11-28 08:27:04.975160] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:22.853 08:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:22.853 08:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:28:22.853 08:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:22.853 08:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 
00:28:22.853 Malloc0 00:28:22.853 08:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:22.853 08:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:28:22.853 08:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:22.853 08:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:28:22.853 Delay0 00:28:22.853 08:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:22.853 08:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:28:22.853 08:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:22.853 08:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:28:22.853 08:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:22.853 08:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:28:22.853 08:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:22.853 08:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:28:22.853 08:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:22.853 08:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 
00:28:22.853 08:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:22.853 08:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:28:22.853 [2024-11-28 08:27:05.043187] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:22.853 08:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:22.853 08:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:28:22.853 08:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:22.853 08:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:28:22.853 08:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:22.853 08:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:28:23.111 [2024-11-28 08:27:05.145672] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:28:25.644 Initializing NVMe Controllers 00:28:25.644 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:28:25.644 controller IO queue size 128 less than required 00:28:25.644 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:28:25.644 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:28:25.644 Initialization complete. Launching workers. 
00:28:25.644 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 127, failed: 36657 00:28:25.644 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 36718, failed to submit 66 00:28:25.644 success 36657, unsuccessful 61, failed 0 00:28:25.644 08:27:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:28:25.644 08:27:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:25.644 08:27:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:28:25.644 08:27:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:25.644 08:27:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:28:25.644 08:27:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:28:25.644 08:27:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:25.644 08:27:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:28:25.645 08:27:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:25.645 08:27:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:28:25.645 08:27:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:25.645 08:27:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:25.645 rmmod nvme_tcp 00:28:25.645 rmmod nvme_fabrics 00:28:25.645 rmmod nvme_keyring 00:28:25.645 08:27:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:25.645 08:27:07 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:28:25.645 08:27:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:28:25.645 08:27:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 1525091 ']' 00:28:25.645 08:27:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 1525091 00:28:25.645 08:27:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@954 -- # '[' -z 1525091 ']' 00:28:25.645 08:27:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@958 -- # kill -0 1525091 00:28:25.645 08:27:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@959 -- # uname 00:28:25.645 08:27:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:25.645 08:27:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1525091 00:28:25.645 08:27:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:28:25.645 08:27:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:28:25.645 08:27:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1525091' 00:28:25.645 killing process with pid 1525091 00:28:25.645 08:27:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@973 -- # kill 1525091 00:28:25.645 08:27:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@978 -- # wait 1525091 00:28:25.645 08:27:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:25.645 08:27:07 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:25.645 08:27:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:25.645 08:27:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:28:25.645 08:27:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-save 00:28:25.645 08:27:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:25.645 08:27:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore 00:28:25.645 08:27:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:25.645 08:27:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:25.645 08:27:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:25.645 08:27:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:25.645 08:27:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:27.551 08:27:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:27.551 00:28:27.551 real 0m10.827s 00:28:27.551 user 0m10.451s 00:28:27.551 sys 0m5.442s 00:28:27.551 08:27:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:27.551 08:27:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:28:27.551 ************************************ 00:28:27.551 END TEST nvmf_abort 00:28:27.551 ************************************ 00:28:27.551 08:27:09 
nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:28:27.551 08:27:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:28:27.551 08:27:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:27.551 08:27:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:28:27.551 ************************************ 00:28:27.551 START TEST nvmf_ns_hotplug_stress 00:28:27.552 ************************************ 00:28:27.552 08:27:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:28:27.811 * Looking for test storage... 
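The `lt 1.15 2` check traced below comes from the cmp_versions helper in scripts/common.sh, which compares dotted version strings field by field. A self-contained sketch of that logic (simplified to '.'-separated numeric fields; the real helper splits on '.', '-' and ':' per the `IFS=.-:` in the trace):

```shell
#!/usr/bin/env bash
# Sketch of a field-by-field "version less than" test, mirroring the
# cmp_versions/lt trace below (numeric '.'-separated fields only).
lt() {
  local IFS=.
  local -a v1=($1) v2=($2)
  local i n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
  for (( i = 0; i < n; i++ )); do
    # Missing trailing fields compare as 0, so "2" and "2.0" are equal.
    (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
    (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
  done
  return 1  # equal versions are not "less than"
}

lt 1.15 2 && echo '1.15 < 2'
```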
00:28:27.811 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:28:27.811 08:27:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:28:27.811 08:27:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:28:27.811 08:27:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # lcov --version 00:28:27.811 08:27:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:28:27.811 08:27:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:27.811 08:27:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:27.811 08:27:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:27.811 08:27:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:28:27.811 08:27:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:28:27.811 08:27:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:28:27.811 08:27:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:28:27.811 08:27:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:28:27.811 08:27:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:28:27.811 08:27:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:28:27.811 08:27:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:27.811 08:27:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:28:27.811 08:27:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:28:27.811 08:27:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:27.811 08:27:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:28:27.811 08:27:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:28:27.811 08:27:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:28:27.811 08:27:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:27.811 08:27:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:28:27.811 08:27:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:28:27.811 08:27:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:28:27.811 08:27:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:28:27.811 08:27:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:27.811 08:27:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:28:27.811 08:27:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:28:27.811 08:27:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:27.811 08:27:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:27.811 08:27:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:28:27.811 08:27:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:27.811 08:27:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:28:27.811 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:27.811 --rc genhtml_branch_coverage=1 00:28:27.811 --rc genhtml_function_coverage=1 00:28:27.811 --rc genhtml_legend=1 00:28:27.811 --rc geninfo_all_blocks=1 00:28:27.811 --rc geninfo_unexecuted_blocks=1 00:28:27.811 00:28:27.811 ' 00:28:27.811 08:27:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:28:27.811 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:27.811 --rc genhtml_branch_coverage=1 00:28:27.811 --rc genhtml_function_coverage=1 00:28:27.811 --rc genhtml_legend=1 00:28:27.811 --rc geninfo_all_blocks=1 00:28:27.811 --rc geninfo_unexecuted_blocks=1 00:28:27.811 00:28:27.811 ' 00:28:27.811 08:27:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:28:27.811 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:27.811 --rc genhtml_branch_coverage=1 00:28:27.811 --rc genhtml_function_coverage=1 00:28:27.812 --rc genhtml_legend=1 00:28:27.812 --rc geninfo_all_blocks=1 00:28:27.812 --rc geninfo_unexecuted_blocks=1 00:28:27.812 00:28:27.812 ' 00:28:27.812 08:27:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:28:27.812 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:27.812 --rc genhtml_branch_coverage=1 00:28:27.812 --rc genhtml_function_coverage=1 00:28:27.812 --rc genhtml_legend=1 00:28:27.812 --rc geninfo_all_blocks=1 00:28:27.812 --rc geninfo_unexecuted_blocks=1 00:28:27.812 00:28:27.812 ' 00:28:27.812 08:27:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:27.812 08:27:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:28:27.812 08:27:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:27.812 08:27:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:27.812 08:27:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:27.812 08:27:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:27.812 08:27:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:27.812 08:27:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:27.812 08:27:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:27.812 08:27:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:27.812 08:27:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:27.812 08:27:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:27.812 08:27:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:28:27.812 08:27:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:28:27.812 08:27:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:27.812 08:27:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:27.812 08:27:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:27.812 08:27:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:27.812 08:27:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:27.812 08:27:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:28:27.812 08:27:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:27.812 08:27:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:27.812 08:27:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:27.812 08:27:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:27.812 08:27:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:27.812 08:27:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:27.812 
08:27:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:28:27.812 08:27:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:27.812 08:27:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:28:27.812 08:27:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:27.812 08:27:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:27.812 08:27:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:27.812 08:27:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:27.812 08:27:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:27.812 08:27:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:28:27.812 08:27:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:28:27.812 08:27:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:27.812 08:27:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:27.812 08:27:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:27.812 08:27:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:28:27.812 08:27:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:28:27.812 08:27:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:27.812 08:27:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:27.812 08:27:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:27.812 08:27:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:27.812 08:27:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:27.812 08:27:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:27.812 08:27:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:27.812 08:27:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:27.812 08:27:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:27.812 08:27:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:28:27.812 08:27:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:28:27.812 08:27:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:28:33.084 08:27:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:33.084 08:27:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:28:33.084 08:27:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:33.084 08:27:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:33.084 08:27:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:33.084 08:27:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:33.084 08:27:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:33.084 08:27:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:28:33.084 08:27:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:33.084 08:27:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:28:33.084 08:27:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # local -ga e810 00:28:33.084 08:27:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:28:33.084 08:27:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:28:33.084 
08:27:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:28:33.084 08:27:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:28:33.085 08:27:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:33.085 08:27:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:33.085 08:27:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:33.085 08:27:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:33.085 08:27:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:33.085 08:27:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:33.085 08:27:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:33.085 08:27:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:33.085 08:27:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:33.085 08:27:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:33.085 08:27:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:33.085 08:27:15 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:33.085 08:27:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:33.085 08:27:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:33.085 08:27:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:33.085 08:27:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:33.085 08:27:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:33.085 08:27:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:33.085 08:27:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:33.085 08:27:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:28:33.085 Found 0000:86:00.0 (0x8086 - 0x159b) 00:28:33.085 08:27:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:33.085 08:27:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:33.085 08:27:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:33.085 08:27:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:33.085 08:27:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:33.085 08:27:15 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:33.085 08:27:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:28:33.085 Found 0000:86:00.1 (0x8086 - 0x159b) 00:28:33.085 08:27:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:33.085 08:27:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:33.085 08:27:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:33.085 08:27:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:33.085 08:27:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:33.085 08:27:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:33.085 08:27:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:33.085 08:27:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:33.085 08:27:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:33.085 08:27:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:33.085 08:27:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:33.085 08:27:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:33.085 
08:27:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:33.085 08:27:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:33.085 08:27:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:33.085 08:27:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:28:33.085 Found net devices under 0000:86:00.0: cvl_0_0 00:28:33.085 08:27:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:33.085 08:27:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:33.085 08:27:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:33.085 08:27:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:33.085 08:27:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:33.085 08:27:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:33.085 08:27:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:33.085 08:27:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:33.085 08:27:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:28:33.085 Found net devices under 0000:86:00.1: cvl_0_1 00:28:33.085 
08:27:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:33.085 08:27:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:33.085 08:27:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:28:33.085 08:27:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:33.085 08:27:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:33.085 08:27:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:33.085 08:27:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:33.085 08:27:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:33.085 08:27:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:33.085 08:27:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:33.085 08:27:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:33.085 08:27:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:33.085 08:27:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:33.085 08:27:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:33.085 08:27:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:33.085 08:27:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:33.085 08:27:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:33.085 08:27:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:33.085 08:27:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:33.085 08:27:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:33.085 08:27:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:33.085 08:27:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:33.085 08:27:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:33.085 08:27:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:33.085 08:27:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:33.085 08:27:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:33.085 08:27:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:33.085 08:27:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:33.085 08:27:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:33.085 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:33.085 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.368 ms 00:28:33.085 00:28:33.085 --- 10.0.0.2 ping statistics --- 00:28:33.085 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:33.085 rtt min/avg/max/mdev = 0.368/0.368/0.368/0.000 ms 00:28:33.085 08:27:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:33.085 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:33.085 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.239 ms 00:28:33.085 00:28:33.085 --- 10.0.0.1 ping statistics --- 00:28:33.085 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:33.085 rtt min/avg/max/mdev = 0.239/0.239/0.239/0.000 ms 00:28:33.085 08:27:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:33.086 08:27:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # return 0 00:28:33.086 08:27:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:33.086 08:27:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:33.086 08:27:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:33.086 08:27:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:33.086 08:27:15 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:33.086 08:27:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:33.086 08:27:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:33.086 08:27:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:28:33.086 08:27:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:33.086 08:27:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:33.086 08:27:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:28:33.086 08:27:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=1529365 00:28:33.086 08:27:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 1529365 00:28:33.086 08:27:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:28:33.086 08:27:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # '[' -z 1529365 ']' 00:28:33.086 08:27:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:33.086 08:27:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:33.086 08:27:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:33.086 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:33.086 08:27:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:33.086 08:27:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:28:33.344 [2024-11-28 08:27:15.380409] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:28:33.344 [2024-11-28 08:27:15.381393] Starting SPDK v25.01-pre git sha1 27aaaa748 / DPDK 24.03.0 initialization... 00:28:33.344 [2024-11-28 08:27:15.381429] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:33.344 [2024-11-28 08:27:15.448019] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:28:33.344 [2024-11-28 08:27:15.490677] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:33.344 [2024-11-28 08:27:15.490712] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:33.344 [2024-11-28 08:27:15.490719] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:33.344 [2024-11-28 08:27:15.490725] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:33.344 [2024-11-28 08:27:15.490730] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:28:33.344 [2024-11-28 08:27:15.492126] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:28:33.344 [2024-11-28 08:27:15.492212] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:28:33.344 [2024-11-28 08:27:15.492213] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:33.344 [2024-11-28 08:27:15.560040] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:28:33.344 [2024-11-28 08:27:15.560060] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:28:33.344 [2024-11-28 08:27:15.560266] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:28:33.344 [2024-11-28 08:27:15.560341] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:28:33.344 08:27:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:33.344 08:27:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@868 -- # return 0 00:28:33.344 08:27:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:33.344 08:27:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:33.344 08:27:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:28:33.603 08:27:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:33.603 08:27:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 
00:28:33.603 08:27:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:28:33.603 [2024-11-28 08:27:15.792907] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:33.603 08:27:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:28:33.862 08:27:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:34.121 [2024-11-28 08:27:16.184978] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:34.121 08:27:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:28:34.380 08:27:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:28:34.380 Malloc0 00:28:34.381 08:27:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:28:34.640 Delay0 00:28:34.640 08:27:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:34.899 08:27:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:28:34.899 NULL1 00:28:34.899 08:27:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:28:35.161 08:27:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=1529690 00:28:35.161 08:27:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:28:35.161 08:27:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1529690 00:28:35.161 08:27:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:36.536 Read completed with error (sct=0, sc=11) 00:28:36.536 08:27:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:36.536 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:36.536 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:36.536 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 
00:28:36.536 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:36.536 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:36.536 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:36.795 08:27:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:28:36.795 08:27:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:28:36.795 true 00:28:36.795 08:27:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1529690 00:28:36.795 08:27:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:37.733 08:27:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:37.992 08:27:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:28:37.992 08:27:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:28:37.992 true 00:28:37.992 08:27:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1529690 00:28:37.992 08:27:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 1 00:28:38.251 08:27:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:38.511 08:27:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:28:38.511 08:27:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:28:38.769 true 00:28:38.769 08:27:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1529690 00:28:38.769 08:27:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:39.706 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:39.706 08:27:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:39.706 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:39.706 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:39.965 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:39.965 08:27:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:28:39.965 08:27:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:28:40.224 true 
00:28:40.224 08:27:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1529690 00:28:40.224 08:27:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:40.483 08:27:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:40.483 08:27:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:28:40.483 08:27:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:28:40.741 true 00:28:40.741 08:27:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1529690 00:28:40.741 08:27:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:42.116 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:42.116 08:27:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:42.116 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:42.116 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:42.116 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:42.116 Message 
suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:42.116 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:42.116 08:27:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:28:42.116 08:27:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:28:42.374 true 00:28:42.374 08:27:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1529690 00:28:42.374 08:27:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:43.308 08:27:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:43.567 08:27:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:28:43.567 08:27:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:28:43.567 true 00:28:43.567 08:27:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1529690 00:28:43.567 08:27:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:43.826 08:27:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:44.085 08:27:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:28:44.085 08:27:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:28:44.345 true 00:28:44.345 08:27:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1529690 00:28:44.345 08:27:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:45.283 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:45.283 08:27:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:45.283 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:45.542 08:27:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:28:45.542 08:27:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:28:45.542 true 00:28:45.801 08:27:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1529690 00:28:45.801 08:27:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:45.801 08:27:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:46.059 08:27:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:28:46.059 08:27:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:28:46.319 true 00:28:46.319 08:27:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1529690 00:28:46.319 08:27:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:47.258 08:27:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:47.258 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:47.517 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:47.517 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:47.517 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:47.517 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:47.517 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:47.517 
08:27:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:28:47.517 08:27:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:28:47.776 true 00:28:47.776 08:27:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1529690 00:28:47.776 08:27:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:48.714 08:27:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:48.714 08:27:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:28:48.714 08:27:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:28:48.973 true 00:28:48.973 08:27:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1529690 00:28:48.973 08:27:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:49.232 08:27:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 
Delay0 00:28:49.491 08:27:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:28:49.491 08:27:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:28:49.491 true 00:28:49.750 08:27:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1529690 00:28:49.750 08:27:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:50.687 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:50.687 08:27:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:50.945 08:27:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:28:50.945 08:27:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:28:50.946 true 00:28:50.946 08:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1529690 00:28:50.946 08:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:51.205 08:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:28:51.465 08:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015
00:28:51.465 08:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015
00:28:51.724 true
00:28:51.724 08:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1529690
00:28:51.724 08:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:28:52.661 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:28:52.661 08:27:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:28:52.661 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:28:52.661 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:28:52.920 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:28:52.920 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:28:52.920 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:28:52.920 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:28:52.920 08:27:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016
00:28:52.920 08:27:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016
00:28:53.178 true
00:28:53.178 08:27:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1529690
00:28:53.178 08:27:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:28:54.115 08:27:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:28:54.115 08:27:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017
00:28:54.115 08:27:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017
00:28:54.374 true
00:28:54.374 08:27:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1529690
00:28:54.374 08:27:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:28:54.633 08:27:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:28:54.891 08:27:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018
00:28:54.891 08:27:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018
00:28:54.891 true
00:28:54.892 08:27:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1529690
00:28:54.892 08:27:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:28:56.271 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:28:56.271 08:27:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:28:56.271 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:28:56.271 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:28:56.271 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:28:56.271 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:28:56.271 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:28:56.271 08:27:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019
00:28:56.271 08:27:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019
00:28:56.529 true
00:28:56.529 08:27:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1529690
00:28:56.529 08:27:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:28:57.466 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:28:57.466 08:27:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:28:57.466 08:27:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020
00:28:57.466 08:27:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020
00:28:57.725 true
00:28:57.725 08:27:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1529690
00:28:57.725 08:27:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:28:57.725 08:27:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:28:57.984 08:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021
00:28:57.984 08:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021
00:28:58.242 true
00:28:58.242 08:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1529690
00:28:58.242 08:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:28:59.619 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:28:59.619 08:27:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:28:59.619 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:28:59.619 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:28:59.619 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:28:59.619 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:28:59.619 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:28:59.619 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:28:59.619 08:27:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022
00:28:59.619 08:27:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022
00:28:59.878 true
00:28:59.878 08:27:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1529690
00:28:59.878 08:27:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:29:00.814 08:27:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:29:00.814 08:27:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023
00:29:00.814 08:27:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023
00:29:01.073 true
00:29:01.073 08:27:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1529690
00:29:01.073 08:27:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:29:01.332 08:27:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:29:01.332 08:27:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024
00:29:01.332 08:27:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024
00:29:01.591 true
00:29:01.591 08:27:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1529690
00:29:01.591 08:27:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:29:02.970 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:29:02.970 08:27:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:29:02.970 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:29:02.970 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:29:02.970 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:29:02.970 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:29:02.970 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:29:02.970 08:27:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025
00:29:02.970 08:27:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025
00:29:03.230 true
00:29:03.230 08:27:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1529690
00:29:03.230 08:27:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:29:04.166 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:29:04.166 08:27:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:29:04.166 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:29:04.166 08:27:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026
00:29:04.166 08:27:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026
00:29:04.425 true
00:29:04.425 08:27:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1529690
00:29:04.425 08:27:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:29:04.684 08:27:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:29:04.684 08:27:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027
00:29:04.684 08:27:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027
00:29:04.943 true
00:29:04.943 08:27:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1529690
00:29:04.943 08:27:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:29:06.321 Initializing NVMe Controllers
00:29:06.321 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:29:06.321 Controller IO queue size 128, less than required.
00:29:06.321 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:29:06.321 Controller IO queue size 128, less than required.
00:29:06.321 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:29:06.321 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:29:06.321 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:29:06.321 Initialization complete. Launching workers.
00:29:06.321 ========================================================
00:29:06.321 Latency(us)
00:29:06.321 Device Information : IOPS MiB/s Average min max
00:29:06.321 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1877.10 0.92 46850.58 2645.11 1031446.60
00:29:06.321 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 17099.00 8.35 7485.35 1562.38 384629.42
00:29:06.321 ========================================================
00:29:06.321 Total : 18976.10 9.27 11379.32 1562.38 1031446.60
00:29:06.321
00:29:06.321 08:27:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:29:06.321 08:27:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028
00:29:06.321 08:27:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028
00:29:06.580 true
00:29:06.580 08:27:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1529690
00:29:06.580 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (1529690) - No such process
00:29:06.580 08:27:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 1529690
00:29:06.580 08:27:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:29:06.580 08:27:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:29:06.840 08:27:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8
00:29:06.840 08:27:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=()
00:29:06.840 08:27:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 ))
00:29:06.840 08:27:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:29:06.840 08:27:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096
00:29:07.099 null0
00:29:07.099 08:27:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:29:07.099 08:27:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:29:07.099 08:27:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096
00:29:07.358 null1
00:29:07.358 08:27:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:29:07.358 08:27:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:29:07.358 08:27:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096
00:29:07.358 null2
00:29:07.358 08:27:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:29:07.358 08:27:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:29:07.358 08:27:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096
00:29:07.618 null3
00:29:07.618 08:27:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:29:07.618 08:27:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:29:07.618 08:27:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096
00:29:07.877 null4
00:29:07.877 08:27:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:29:07.877 08:27:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:29:07.877 08:27:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096
00:29:08.137 null5
00:29:08.137 08:27:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:29:08.137 08:27:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:29:08.137 08:27:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096
00:29:08.137 null6
00:29:08.137 08:27:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:29:08.137 08:27:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:29:08.137 08:27:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096
00:29:08.397 null7
00:29:08.397 08:27:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:29:08.397 08:27:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:29:08.397 08:27:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 ))
00:29:08.397 08:27:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:29:08.397 08:27:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0
00:29:08.397 08:27:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:29:08.397 08:27:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0
00:29:08.397 08:27:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:29:08.397 08:27:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:29:08.397 08:27:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:29:08.397 08:27:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:29:08.397 08:27:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:29:08.398 08:27:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1
00:29:08.398 08:27:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:29:08.398 08:27:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1
00:29:08.398 08:27:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:29:08.398 08:27:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:29:08.398 08:27:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:29:08.398 08:27:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:29:08.398 08:27:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:29:08.398 08:27:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:29:08.398 08:27:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:29:08.398 08:27:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2
00:29:08.398 08:27:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:29:08.398 08:27:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2
00:29:08.398 08:27:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:29:08.398 08:27:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:29:08.398 08:27:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:29:08.398 08:27:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:29:08.398 08:27:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:29:08.398 08:27:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3
00:29:08.398 08:27:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:29:08.398 08:27:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3
00:29:08.398 08:27:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:29:08.398 08:27:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:29:08.398 08:27:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:29:08.398 08:27:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:29:08.398 08:27:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:29:08.398 08:27:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4
00:29:08.398 08:27:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:29:08.398 08:27:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4
00:29:08.398 08:27:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:29:08.398 08:27:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:29:08.398 08:27:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:29:08.398 08:27:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:29:08.398 08:27:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:29:08.398 08:27:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5
00:29:08.398 08:27:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:29:08.398 08:27:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5
00:29:08.398 08:27:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:29:08.398 08:27:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:29:08.398 08:27:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:29:08.398 08:27:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:29:08.398 08:27:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:29:08.398 08:27:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6
00:29:08.398 08:27:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:29:08.398 08:27:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6
00:29:08.398 08:27:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:29:08.398 08:27:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:29:08.398 08:27:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:29:08.398 08:27:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:29:08.398 08:27:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:29:08.398 08:27:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7
00:29:08.398 08:27:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:29:08.398 08:27:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 1535160 1535163 1535166 1535169 1535171 1535175 1535178 1535181
00:29:08.398 08:27:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7
00:29:08.398 08:27:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:29:08.398 08:27:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:29:08.398 08:27:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:29:08.658 08:27:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:29:08.658 08:27:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:29:08.658 08:27:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:29:08.658 08:27:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:29:08.658 08:27:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:29:08.658 08:27:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:29:08.658 08:27:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:29:08.658 08:27:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:29:08.917 08:27:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:29:08.917 08:27:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:29:08.917 08:27:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:29:08.917 08:27:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:29:08.917 08:27:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:29:08.917 08:27:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:29:08.917 08:27:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:29:08.917 08:27:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:29:08.917 08:27:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:29:08.917 08:27:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:29:08.917 08:27:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:29:08.917 08:27:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:29:08.917 08:27:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:29:08.917 08:27:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:29:08.917 08:27:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:29:08.917 08:27:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i
)) 00:29:08.917 08:27:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:08.917 08:27:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:29:08.917 08:27:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:08.917 08:27:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:08.917 08:27:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:29:08.918 08:27:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:08.918 08:27:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:08.918 08:27:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:29:08.918 08:27:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:29:08.918 08:27:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:08.918 08:27:51 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:29:09.180 08:27:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:29:09.180 08:27:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:29:09.180 08:27:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:29:09.180 08:27:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:29:09.180 08:27:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:29:09.180 08:27:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:09.180 08:27:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:09.180 08:27:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:29:09.180 08:27:51 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:09.180 08:27:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:09.180 08:27:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:29:09.180 08:27:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:09.180 08:27:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:09.180 08:27:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:29:09.180 08:27:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:09.180 08:27:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:09.180 08:27:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:29:09.180 08:27:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:09.180 08:27:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:09.180 08:27:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:29:09.180 08:27:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:09.180 08:27:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:09.180 08:27:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:29:09.180 08:27:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:09.180 08:27:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:09.180 08:27:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:09.180 08:27:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:29:09.180 08:27:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:09.180 08:27:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:29:09.439 08:27:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:29:09.439 08:27:51 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:09.439 08:27:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:29:09.439 08:27:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:29:09.439 08:27:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:29:09.439 08:27:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:29:09.439 08:27:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:29:09.439 08:27:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:29:09.698 08:27:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:09.698 08:27:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:09.698 08:27:51 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:29:09.698 08:27:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:09.698 08:27:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:09.698 08:27:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:29:09.698 08:27:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:09.698 08:27:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:09.698 08:27:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:09.698 08:27:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:09.698 08:27:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:09.698 08:27:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:09.698 08:27:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:29:09.698 08:27:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:29:09.698 08:27:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:29:09.698 08:27:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:09.698 08:27:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:09.698 08:27:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:29:09.698 08:27:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:09.698 08:27:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:09.698 08:27:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:29:09.698 08:27:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:09.698 08:27:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:09.698 08:27:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:29:09.957 08:27:51 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:29:09.957 08:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:29:09.957 08:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:29:09.957 08:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:29:09.957 08:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:09.957 08:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:29:09.957 08:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:29:09.957 08:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:29:09.957 08:27:52 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:09.957 08:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:09.957 08:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:29:09.957 08:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:09.957 08:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:09.957 08:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:29:09.957 08:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:09.957 08:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:09.957 08:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:29:09.957 08:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:09.957 08:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:09.957 08:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:29:10.217 08:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:10.217 08:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:10.217 08:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:29:10.217 08:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:10.217 08:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:10.217 08:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:29:10.217 08:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:10.217 08:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:10.217 08:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:29:10.217 08:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:10.217 08:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:10.217 08:27:52 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:29:10.217 08:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:29:10.217 08:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:29:10.217 08:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:29:10.217 08:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:29:10.217 08:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:29:10.217 08:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:29:10.217 08:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:29:10.217 08:27:52 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:10.477 08:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:10.477 08:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:10.477 08:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:29:10.477 08:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:10.477 08:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:10.477 08:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:29:10.477 08:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:10.477 08:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:10.477 08:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:29:10.477 08:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:10.477 08:27:52 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:10.477 08:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:10.477 08:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:10.477 08:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:29:10.477 08:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:29:10.477 08:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:10.477 08:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:10.477 08:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:29:10.477 08:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:10.477 08:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:10.477 08:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:29:10.477 08:27:52 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:10.477 08:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:10.477 08:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:29:10.736 08:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:29:10.736 08:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:29:10.736 08:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:29:10.736 08:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:29:10.737 08:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:29:10.737 08:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:10.737 08:27:52 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:29:10.737 08:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:29:10.996 08:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:10.996 08:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:10.996 08:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:29:10.996 08:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:10.996 08:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:10.996 08:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:29:10.996 08:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:10.996 08:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:10.996 08:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 
3 nqn.2016-06.io.spdk:cnode1 null2 00:29:10.996 08:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:10.996 08:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:10.996 08:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:10.996 08:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:10.996 08:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:29:10.996 08:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:29:10.996 08:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:10.996 08:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:10.996 08:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:29:10.996 08:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:10.996 08:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:10.996 08:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:29:10.996 08:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:10.996 08:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:10.996 08:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:29:10.996 08:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:29:10.996 08:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:29:11.256 08:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:29:11.256 08:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:29:11.256 08:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:29:11.256 08:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:29:11.256 08:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:29:11.256 08:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:11.256 08:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:11.256 08:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:11.256 08:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:29:11.256 08:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:11.256 08:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:11.256 08:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:29:11.256 08:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:11.256 08:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:11.256 08:27:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:29:11.256 08:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:11.256 08:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:11.256 08:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:11.256 08:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:11.256 08:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:29:11.256 08:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:29:11.256 08:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:11.256 08:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:11.257 08:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:29:11.257 08:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:11.257 08:27:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:11.257 08:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:29:11.257 08:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:11.257 08:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:11.257 08:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:29:11.516 08:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:29:11.516 08:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:29:11.516 08:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:29:11.516 08:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:29:11.516 08:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:29:11.516 08:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:29:11.516 08:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:29:11.516 08:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:11.776 08:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:11.776 08:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:11.776 08:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:29:11.776 08:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:11.776 08:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:11.776 08:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:29:11.776 08:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:11.776 08:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:11.776 08:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:29:11.776 08:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:11.776 08:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:11.776 08:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:29:11.776 08:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:11.776 08:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:11.776 08:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:29:11.776 08:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:11.776 08:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:11.776 08:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:11.776 08:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:11.776 08:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:29:11.776 08:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:29:11.776 08:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:11.776 08:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:11.776 08:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:29:12.035 08:27:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:29:12.035 08:27:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:29:12.035 08:27:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:29:12.035 08:27:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:29:12.035 08:27:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:29:12.035 08:27:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:29:12.035 08:27:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:29:12.035 08:27:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:12.035 08:27:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:12.035 08:27:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:12.035 08:27:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:29:12.036 08:27:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:12.036 08:27:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:12.036 08:27:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:29:12.036 08:27:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:12.036 08:27:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:12.036 08:27:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:29:12.036 08:27:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:12.036 08:27:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:12.036 08:27:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:29:12.036 08:27:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:12.036 08:27:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:12.036 08:27:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:29:12.295 08:27:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:12.295 08:27:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:12.295 08:27:54 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:12.295 08:27:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:12.295 08:27:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:29:12.295 08:27:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:29:12.295 08:27:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:12.295 08:27:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:12.295 08:27:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:29:12.295 08:27:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:29:12.295 08:27:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:29:12.295 08:27:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 2 00:29:12.295 08:27:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:29:12.295 08:27:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:29:12.295 08:27:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:12.295 08:27:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:29:12.295 08:27:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:29:12.554 08:27:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:12.554 08:27:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:12.554 08:27:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:12.554 08:27:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:12.554 08:27:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:12.554 08:27:54 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:12.554 08:27:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:12.554 08:27:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:12.554 08:27:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:12.554 08:27:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:12.554 08:27:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:12.554 08:27:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:12.554 08:27:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:12.554 08:27:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:12.554 08:27:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:12.554 08:27:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:12.554 08:27:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:29:12.554 08:27:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:29:12.554 08:27:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:12.554 08:27:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@121 -- # sync 00:29:12.554 08:27:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:12.554 08:27:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:29:12.554 08:27:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:12.554 08:27:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:12.554 rmmod nvme_tcp 00:29:12.554 rmmod nvme_fabrics 00:29:12.554 rmmod nvme_keyring 00:29:12.554 08:27:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:12.554 08:27:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:29:12.554 08:27:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:29:12.554 08:27:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 1529365 ']' 00:29:12.554 08:27:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 1529365 00:29:12.554 08:27:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # '[' -z 1529365 ']' 00:29:12.554 08:27:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # kill -0 1529365 00:29:12.554 08:27:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # uname 00:29:12.554 08:27:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:12.554 08:27:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 
1529365 00:29:12.813 08:27:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:12.813 08:27:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:29:12.813 08:27:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1529365' 00:29:12.813 killing process with pid 1529365 00:29:12.813 08:27:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@973 -- # kill 1529365 00:29:12.813 08:27:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@978 -- # wait 1529365 00:29:12.813 08:27:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:12.813 08:27:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:12.813 08:27:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:12.813 08:27:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:29:12.813 08:27:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-save 00:29:12.813 08:27:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:12.813 08:27:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-restore 00:29:12.813 08:27:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:12.813 08:27:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:12.813 
08:27:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:12.813 08:27:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:12.814 08:27:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:15.349 08:27:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:15.349 00:29:15.349 real 0m47.337s 00:29:15.349 user 2m59.033s 00:29:15.349 sys 0m19.635s 00:29:15.349 08:27:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:15.349 08:27:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:29:15.349 ************************************ 00:29:15.349 END TEST nvmf_ns_hotplug_stress 00:29:15.349 ************************************ 00:29:15.349 08:27:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:29:15.349 08:27:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:29:15.349 08:27:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:15.349 08:27:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:29:15.349 ************************************ 00:29:15.349 START TEST nvmf_delete_subsystem 00:29:15.349 ************************************ 00:29:15.349 08:27:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1129 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:29:15.349 * Looking for test storage... 00:29:15.349 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:29:15.349 08:27:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:29:15.349 08:27:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # lcov --version 00:29:15.349 08:27:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:29:15.349 08:27:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:29:15.349 08:27:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:15.349 08:27:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:15.349 08:27:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:15.349 08:27:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:29:15.349 08:27:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:29:15.350 08:27:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:29:15.350 08:27:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:29:15.350 08:27:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:29:15.350 08:27:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:29:15.350 
08:27:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:29:15.350 08:27:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:15.350 08:27:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:29:15.350 08:27:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:29:15.350 08:27:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:15.350 08:27:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:29:15.350 08:27:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:29:15.350 08:27:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:29:15.350 08:27:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:15.350 08:27:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:29:15.350 08:27:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:29:15.350 08:27:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:29:15.350 08:27:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:29:15.350 08:27:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:15.350 08:27:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:29:15.350 08:27:57 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:29:15.350 08:27:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:15.350 08:27:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:15.350 08:27:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:29:15.350 08:27:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:15.350 08:27:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:29:15.350 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:15.350 --rc genhtml_branch_coverage=1 00:29:15.350 --rc genhtml_function_coverage=1 00:29:15.350 --rc genhtml_legend=1 00:29:15.350 --rc geninfo_all_blocks=1 00:29:15.350 --rc geninfo_unexecuted_blocks=1 00:29:15.350 00:29:15.350 ' 00:29:15.350 08:27:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:29:15.350 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:15.350 --rc genhtml_branch_coverage=1 00:29:15.350 --rc genhtml_function_coverage=1 00:29:15.350 --rc genhtml_legend=1 00:29:15.350 --rc geninfo_all_blocks=1 00:29:15.350 --rc geninfo_unexecuted_blocks=1 00:29:15.350 00:29:15.350 ' 00:29:15.350 08:27:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:29:15.350 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:15.350 --rc genhtml_branch_coverage=1 00:29:15.350 --rc genhtml_function_coverage=1 00:29:15.350 --rc genhtml_legend=1 00:29:15.350 --rc geninfo_all_blocks=1 00:29:15.350 --rc 
geninfo_unexecuted_blocks=1 00:29:15.350 00:29:15.350 ' 00:29:15.350 08:27:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:29:15.350 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:15.350 --rc genhtml_branch_coverage=1 00:29:15.350 --rc genhtml_function_coverage=1 00:29:15.350 --rc genhtml_legend=1 00:29:15.350 --rc geninfo_all_blocks=1 00:29:15.350 --rc geninfo_unexecuted_blocks=1 00:29:15.350 00:29:15.350 ' 00:29:15.350 08:27:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:15.350 08:27:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:29:15.350 08:27:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:15.350 08:27:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:15.350 08:27:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:15.350 08:27:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:15.350 08:27:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:15.350 08:27:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:15.350 08:27:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:15.350 08:27:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:15.350 08:27:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # 
NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:15.350 08:27:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:15.350 08:27:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:29:15.350 08:27:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:29:15.350 08:27:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:15.350 08:27:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:15.350 08:27:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:15.350 08:27:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:15.350 08:27:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:15.350 08:27:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:29:15.350 08:27:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:15.350 08:27:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:15.350 08:27:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:15.350 08:27:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:15.350 08:27:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:15.350 08:27:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:15.350 
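The `cmp_versions` trace earlier in this run (`lt 1.15 2` from scripts/common.sh) splits each version string on separators and compares component-by-component. A minimal standalone sketch of that idea — this is a simplified, hypothetical re-implementation for illustration, not SPDK's exact script (the function name `cmp_versions_lt` is invented here):

```shell
# Simplified sketch of the component-wise version comparison idea from the
# cmp_versions trace above. Splits on '.', pads missing components with 0,
# and lets the first differing component decide.
cmp_versions_lt() {
    local IFS=.
    local -a ver1 ver2
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$2"
    local i max=${#ver1[@]}
    (( ${#ver2[@]} > max )) && max=${#ver2[@]}
    for (( i = 0; i < max; i++ )); do
        local a=${ver1[i]:-0} b=${ver2[i]:-0}
        (( a < b )) && return 0   # first differing component decides
        (( a > b )) && return 1
    done
    return 1                      # equal is not less-than
}

cmp_versions_lt 1.15 2 && echo "1.15 < 2"
```

This mirrors why the test enables the lcov `--rc` options: the installed lcov (1.15) compares less-than 2, so the pre-2.0 option spelling is selected.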
08:27:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:29:15.350 08:27:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:15.350 08:27:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:29:15.350 08:27:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:15.350 08:27:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:15.350 08:27:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:15.350 08:27:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:15.350 08:27:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:15.350 08:27:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:29:15.350 08:27:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:29:15.350 08:27:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:15.350 08:27:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:15.350 08:27:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:15.350 08:27:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:29:15.350 08:27:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:15.351 08:27:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:15.351 08:27:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:15.351 08:27:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:15.351 08:27:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:15.351 08:27:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:15.351 08:27:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:15.351 08:27:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:15.351 08:27:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:15.351 08:27:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:15.351 08:27:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:29:15.351 08:27:57 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:20.624 08:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:20.624 08:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:29:20.624 08:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:20.624 08:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:20.624 08:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:20.624 08:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:20.624 08:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:20.624 08:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:29:20.624 08:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:20.624 08:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:29:20.624 08:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:29:20.624 08:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:29:20.624 08:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # local -ga x722 00:29:20.624 08:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:29:20.624 08:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@322 -- # local -ga mlx 00:29:20.624 08:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:20.624 08:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:20.624 08:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:20.624 08:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:20.624 08:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:20.624 08:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:20.624 08:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:20.624 08:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:20.624 08:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:20.624 08:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:20.624 08:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:20.624 08:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:20.624 08:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:20.624 08:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:20.624 08:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:20.624 08:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:20.624 08:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:20.624 08:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:20.624 08:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:20.625 08:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:29:20.625 Found 0000:86:00.0 (0x8086 - 0x159b) 00:29:20.625 08:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:20.625 08:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:20.625 08:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:20.625 08:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:20.625 08:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:20.625 08:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:20.625 08:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 
0000:86:00.1 (0x8086 - 0x159b)' 00:29:20.625 Found 0000:86:00.1 (0x8086 - 0x159b) 00:29:20.625 08:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:20.625 08:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:20.625 08:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:20.625 08:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:20.625 08:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:20.625 08:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:20.625 08:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:20.625 08:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:20.625 08:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:20.625 08:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:20.625 08:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:20.625 08:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:20.625 08:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:20.625 08:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:20.625 08:28:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:20.625 08:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:29:20.625 Found net devices under 0000:86:00.0: cvl_0_0 00:29:20.625 08:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:20.625 08:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:20.625 08:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:20.625 08:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:20.625 08:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:20.625 08:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:20.625 08:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:20.625 08:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:20.625 08:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:29:20.625 Found net devices under 0000:86:00.1: cvl_0_1 00:29:20.625 08:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:20.625 08:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:20.625 08:28:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # is_hw=yes 00:29:20.625 08:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:20.625 08:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:20.625 08:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:20.625 08:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:20.625 08:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:20.625 08:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:20.625 08:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:20.625 08:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:20.625 08:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:20.625 08:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:20.625 08:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:20.625 08:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:20.625 08:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:20.625 08:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:20.625 08:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:20.625 08:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:20.625 08:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:20.625 08:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:20.625 08:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:20.625 08:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:20.625 08:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:20.625 08:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:20.625 08:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:20.625 08:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:20.625 08:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:20.625 08:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 
00:29:20.625 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:20.625 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.391 ms 00:29:20.625 00:29:20.625 --- 10.0.0.2 ping statistics --- 00:29:20.625 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:20.625 rtt min/avg/max/mdev = 0.391/0.391/0.391/0.000 ms 00:29:20.625 08:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:20.625 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:20.625 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.240 ms 00:29:20.625 00:29:20.625 --- 10.0.0.1 ping statistics --- 00:29:20.625 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:20.625 rtt min/avg/max/mdev = 0.240/0.240/0.240/0.000 ms 00:29:20.625 08:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:20.625 08:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # return 0 00:29:20.625 08:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:20.625 08:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:20.625 08:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:20.625 08:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:20.625 08:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:20.625 08:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:20.625 08:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:20.625 08:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:29:20.625 08:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:20.625 08:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:20.625 08:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:20.625 08:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=1539327 00:29:20.625 08:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 1539327 00:29:20.625 08:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:29:20.625 08:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # '[' -z 1539327 ']' 00:29:20.625 08:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:20.625 08:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:20.625 08:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:20.625 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
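The `nvmf_tcp_init` sequence traced above wires the two ports of one physical NIC into a loopback-style test topology: `cvl_0_0` is moved into the `cvl_0_0_ns_spdk` namespace as the target side (10.0.0.2), while `cvl_0_1` stays in the host namespace as the initiator side (10.0.0.1), with port 4420 opened in iptables. A dry-run sketch of the same steps follows; interface names and addresses are taken from the log, and `IP="echo ip"` prints each command instead of executing it, since the real sequence needs root:

```shell
# Dry-run sketch of the namespace plumbing from nvmf/common.sh's nvmf_tcp_init.
# Set IP="ip" (as root) to actually apply it; "echo ip" just prints each step.
IP="echo ip"
NS=cvl_0_0_ns_spdk          # namespace holding the target-side interface
TGT_IF=cvl_0_0              # target interface -> 10.0.0.2 inside the netns
INI_IF=cvl_0_1              # initiator interface -> 10.0.0.1 on the host

$IP netns add "$NS"
$IP link set "$TGT_IF" netns "$NS"
$IP addr add 10.0.0.1/24 dev "$INI_IF"
$IP netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
$IP link set "$INI_IF" up
$IP netns exec "$NS" ip link set "$TGT_IF" up
$IP netns exec "$NS" ip link set lo up
```

The two `ping -c 1` probes in the log then verify both directions of this path before the target is started inside the namespace via `ip netns exec`.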
00:29:20.625 08:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:20.626 08:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:20.885 [2024-11-28 08:28:02.897591] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:29:20.885 [2024-11-28 08:28:02.898589] Starting SPDK v25.01-pre git sha1 27aaaa748 / DPDK 24.03.0 initialization... 00:29:20.885 [2024-11-28 08:28:02.898630] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:20.885 [2024-11-28 08:28:02.966920] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:29:20.885 [2024-11-28 08:28:03.008183] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:20.885 [2024-11-28 08:28:03.008221] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:20.885 [2024-11-28 08:28:03.008228] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:20.885 [2024-11-28 08:28:03.008236] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:20.885 [2024-11-28 08:28:03.008245] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:20.885 [2024-11-28 08:28:03.009419] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:20.885 [2024-11-28 08:28:03.009423] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:20.885 [2024-11-28 08:28:03.079032] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
00:29:20.885 [2024-11-28 08:28:03.079265] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:29:20.885 [2024-11-28 08:28:03.079324] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:29:20.885 08:28:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:20.885 08:28:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@868 -- # return 0 00:29:20.885 08:28:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:20.885 08:28:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:20.885 08:28:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:20.885 08:28:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:20.885 08:28:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:20.885 08:28:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:20.885 08:28:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:20.885 [2024-11-28 08:28:03.146020] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:20.885 08:28:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:20.885 08:28:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:29:20.885 08:28:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:20.885 08:28:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:21.144 08:28:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:21.144 08:28:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:21.144 08:28:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:21.144 08:28:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:21.144 [2024-11-28 08:28:03.162282] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:21.144 08:28:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:21.144 08:28:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:29:21.144 08:28:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:21.144 08:28:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:21.144 NULL1 00:29:21.144 08:28:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:21.144 08:28:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd 
bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:29:21.144 08:28:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:21.144 08:28:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:21.144 Delay0 00:29:21.144 08:28:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:21.144 08:28:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:21.144 08:28:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:21.144 08:28:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:21.144 08:28:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:21.144 08:28:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=1539432 00:29:21.144 08:28:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:29:21.144 08:28:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:29:21.144 [2024-11-28 08:28:03.244750] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
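Stripped of the xtrace noise, the target setup that the trace above records is the following RPC sequence. This is a sketch, not a verbatim extract: the log drives these calls through the framework's `rpc_cmd` wrapper, so the bare `rpc.py` invocation (and its default local socket) is my assumption, as is the units comment on the delay bdev.

```shell
# Assumes a running nvmf_tgt and SPDK's scripts/rpc.py on PATH (hypothetical setup).
rpc.py nvmf_create_transport -t tcp -o -u 8192
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
rpc.py bdev_null_create NULL1 1000 512     # 1000 MiB backing size, 512 B block size
# Wrap NULL1 in a delay bdev; the four values are injected latencies
# (average/p99 read, average/p99 write), which I believe are in microseconds.
rpc.py bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
```

With the namespace exported, the script launches the `spdk_nvme_perf` command shown above against 10.0.0.2:4420 and then deletes the subsystem underneath it.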
00:29:23.047 08:28:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:29:23.047 08:28:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:23.047 08:28:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
[... repeated "Read/Write completed with error (sct=0, sc=8)" completions and "starting I/O failed: -6" markers, interleaved with the ERROR records below, omitted ...]
00:29:23.047 [2024-11-28 08:28:05.293211] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fa85c00d020 is same with the state(6) to be set
00:29:23.048 [2024-11-28 08:28:05.293865] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20a1680 is same with the state(6) to be set
00:29:24.426 [2024-11-28 08:28:06.257871] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20a29b0 is same with the state(6) to be set
00:29:24.426 [2024-11-28 08:28:06.295363] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fa85c00d350 is same with the state(6) to be set
00:29:24.426 [2024-11-28 08:28:06.295680] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20a14a0 is same with the state(6) to be set
00:29:24.426 [2024-11-28 08:28:06.295942] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20a1860 is same with the state(6) to be set
00:29:24.427 [2024-11-28 08:28:06.296523] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20a12c0 is same with the state(6) to be set
00:29:24.427 Initializing NVMe Controllers
00:29:24.427 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:29:24.427 Controller IO queue size 128, less than required.
00:29:24.427 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:29:24.427 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:29:24.427 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:29:24.427 Initialization complete. Launching workers.
00:29:24.427 ======================================================== 00:29:24.427 Latency(us) 00:29:24.427 Device Information : IOPS MiB/s Average min max 00:29:24.427 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 189.04 0.09 949695.78 748.75 1012178.45 00:29:24.427 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 155.30 0.08 878008.75 359.77 1011698.56 00:29:24.427 ======================================================== 00:29:24.427 Total : 344.33 0.17 917364.31 359.77 1012178.45 00:29:24.427 00:29:24.427 [2024-11-28 08:28:06.297141] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20a29b0 (9): Bad file descriptor 00:29:24.427 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:29:24.427 08:28:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:24.427 08:28:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:29:24.427 08:28:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 1539432 00:29:24.427 08:28:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:29:24.686 08:28:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:29:24.686 08:28:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 1539432 00:29:24.686 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (1539432) - No such process 00:29:24.686 08:28:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 1539432 00:29:24.686 08:28:06 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # local es=0 00:29:24.687 08:28:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@654 -- # valid_exec_arg wait 1539432 00:29:24.687 08:28:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # local arg=wait 00:29:24.687 08:28:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:24.687 08:28:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # type -t wait 00:29:24.687 08:28:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:24.687 08:28:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # wait 1539432 00:29:24.687 08:28:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # es=1 00:29:24.687 08:28:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:29:24.687 08:28:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:29:24.687 08:28:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:29:24.687 08:28:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:29:24.687 08:28:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:24.687 08:28:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 
00:29:24.687 08:28:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:24.687 08:28:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:24.687 08:28:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:24.687 08:28:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:24.687 [2024-11-28 08:28:06.818230] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:24.687 08:28:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:24.687 08:28:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:24.687 08:28:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:24.687 08:28:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:24.687 08:28:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:24.687 08:28:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=1540036 00:29:24.687 08:28:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:29:24.687 08:28:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1540036 00:29:24.687 08:28:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
target/delete_subsystem.sh@58 -- # sleep 0.5 00:29:24.687 08:28:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:29:24.687 [2024-11-28 08:28:06.885467] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:29:25.254 08:28:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:29:25.254 08:28:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1540036 00:29:25.254 08:28:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:29:25.822 08:28:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:29:25.822 08:28:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1540036 00:29:25.822 08:28:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:29:26.081 08:28:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:29:26.081 08:28:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1540036 00:29:26.081 08:28:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:29:26.650 08:28:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:29:26.650 08:28:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1540036 00:29:26.650 08:28:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:29:27.218 08:28:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:29:27.218 08:28:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1540036 00:29:27.218 08:28:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:29:27.786 08:28:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:29:27.786 08:28:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1540036 00:29:27.786 08:28:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:29:28.045 Initializing NVMe Controllers 00:29:28.045 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:28.045 Controller IO queue size 128, less than required. 00:29:28.045 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:28.045 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:29:28.045 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:29:28.045 Initialization complete. Launching workers. 
00:29:28.045 ======================================================== 00:29:28.045 Latency(us) 00:29:28.045 Device Information : IOPS MiB/s Average min max 00:29:28.045 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1003890.54 1000197.47 1042852.27 00:29:28.045 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1005538.04 1000139.92 1041251.87 00:29:28.045 ======================================================== 00:29:28.045 Total : 256.00 0.12 1004714.29 1000139.92 1042852.27 00:29:28.045 00:29:28.306 08:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:29:28.306 08:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1540036 00:29:28.306 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (1540036) - No such process 00:29:28.306 08:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 1540036 00:29:28.306 08:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:29:28.306 08:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:29:28.306 08:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:28.306 08:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync 00:29:28.306 08:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:28.306 08:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e 00:29:28.306 08:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@125 -- # for i in {1..20} 00:29:28.306 08:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:28.306 rmmod nvme_tcp 00:29:28.306 rmmod nvme_fabrics 00:29:28.306 rmmod nvme_keyring 00:29:28.306 08:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:28.306 08:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:29:28.306 08:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:29:28.306 08:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 1539327 ']' 00:29:28.306 08:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 1539327 00:29:28.306 08:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # '[' -z 1539327 ']' 00:29:28.306 08:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # kill -0 1539327 00:29:28.306 08:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # uname 00:29:28.306 08:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:28.306 08:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1539327 00:29:28.306 08:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:28.306 08:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:28.306 08:28:10 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1539327' 00:29:28.306 killing process with pid 1539327 00:29:28.306 08:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@973 -- # kill 1539327 00:29:28.306 08:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@978 -- # wait 1539327 00:29:28.566 08:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:28.566 08:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:28.566 08:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:28.566 08:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:29:28.566 08:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save 00:29:28.566 08:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:28.566 08:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore 00:29:28.566 08:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:28.566 08:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:28.566 08:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:28.566 08:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:28.566 08:28:10 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:30.472 08:28:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:30.472 00:29:30.472 real 0m15.544s 00:29:30.472 user 0m25.718s 00:29:30.472 sys 0m5.686s 00:29:30.472 08:28:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:30.472 08:28:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:30.472 ************************************ 00:29:30.472 END TEST nvmf_delete_subsystem 00:29:30.472 ************************************ 00:29:30.733 08:28:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:29:30.733 08:28:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:29:30.733 08:28:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:30.733 08:28:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:29:30.733 ************************************ 00:29:30.733 START TEST nvmf_host_management 00:29:30.733 ************************************ 00:29:30.733 08:28:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:29:30.733 * Looking for test storage... 
00:29:30.733 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:29:30.733 08:28:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:29:30.733 08:28:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1693 -- # lcov --version 00:29:30.733 08:28:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:29:30.733 08:28:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:29:30.733 08:28:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:30.733 08:28:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:30.733 08:28:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:30.733 08:28:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:29:30.733 08:28:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:29:30.733 08:28:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:29:30.733 08:28:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:29:30.733 08:28:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:29:30.733 08:28:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:29:30.733 08:28:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:29:30.733 08:28:12 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:30.733 08:28:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:29:30.733 08:28:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:29:30.733 08:28:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:30.733 08:28:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:29:30.733 08:28:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:29:30.733 08:28:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:29:30.733 08:28:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:30.733 08:28:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:29:30.733 08:28:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:29:30.733 08:28:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:29:30.733 08:28:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:29:30.733 08:28:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:30.733 08:28:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:29:30.733 08:28:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:29:30.733 08:28:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:30.733 08:28:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:30.733 08:28:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:29:30.733 08:28:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:30.733 08:28:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:29:30.733 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:30.733 --rc genhtml_branch_coverage=1 00:29:30.734 --rc genhtml_function_coverage=1 00:29:30.734 --rc genhtml_legend=1 00:29:30.734 --rc geninfo_all_blocks=1 00:29:30.734 --rc geninfo_unexecuted_blocks=1 00:29:30.734 00:29:30.734 ' 00:29:30.734 08:28:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:29:30.734 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:30.734 --rc genhtml_branch_coverage=1 00:29:30.734 --rc genhtml_function_coverage=1 00:29:30.734 --rc genhtml_legend=1 00:29:30.734 --rc geninfo_all_blocks=1 00:29:30.734 --rc geninfo_unexecuted_blocks=1 00:29:30.734 00:29:30.734 ' 00:29:30.734 08:28:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:29:30.734 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:30.734 --rc genhtml_branch_coverage=1 00:29:30.734 --rc genhtml_function_coverage=1 00:29:30.734 --rc genhtml_legend=1 00:29:30.734 --rc geninfo_all_blocks=1 00:29:30.734 --rc geninfo_unexecuted_blocks=1 00:29:30.734 00:29:30.734 ' 00:29:30.734 08:28:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:29:30.734 
--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:30.734 --rc genhtml_branch_coverage=1 00:29:30.734 --rc genhtml_function_coverage=1 00:29:30.734 --rc genhtml_legend=1 00:29:30.734 --rc geninfo_all_blocks=1 00:29:30.734 --rc geninfo_unexecuted_blocks=1 00:29:30.734 00:29:30.734 ' 00:29:30.734 08:28:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:30.734 08:28:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:29:30.734 08:28:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:30.734 08:28:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:30.734 08:28:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:30.734 08:28:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:30.734 08:28:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:30.734 08:28:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:30.734 08:28:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:30.734 08:28:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:30.734 08:28:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:30.734 08:28:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:30.734 08:28:12 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:29:30.734 08:28:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:29:30.734 08:28:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:30.734 08:28:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:30.734 08:28:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:30.734 08:28:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:30.734 08:28:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:30.734 08:28:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:29:30.734 08:28:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:30.734 08:28:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:30.734 08:28:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:30.734 08:28:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:30.734 08:28:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:30.734 08:28:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:30.734 
08:28:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:29:30.734 08:28:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:30.734 08:28:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:29:30.734 08:28:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:30.734 08:28:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:30.734 08:28:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:30.734 08:28:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:30.734 08:28:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:30.734 08:28:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:29:30.734 08:28:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:29:30.734 08:28:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@37 -- # '[' 
-n '' ']' 00:29:30.734 08:28:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:30.734 08:28:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:30.734 08:28:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:29:30.734 08:28:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:29:30.735 08:28:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:29:30.735 08:28:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:30.735 08:28:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:30.735 08:28:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:30.735 08:28:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:30.735 08:28:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:30.735 08:28:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:30.735 08:28:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:30.735 08:28:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:30.735 08:28:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:30.735 08:28:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:30.735 08:28:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:29:30.735 08:28:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:29:36.008 08:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:36.008 08:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:29:36.008 08:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:36.008 08:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:36.008 08:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:36.008 08:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:36.008 08:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:36.008 08:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:29:36.008 08:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:36.008 08:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:29:36.008 08:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # local -ga e810 00:29:36.008 08:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:29:36.008 08:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:29:36.008 
08:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:29:36.008 08:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:29:36.008 08:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:36.008 08:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:36.008 08:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:36.008 08:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:36.008 08:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:36.008 08:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:36.008 08:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:36.008 08:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:36.008 08:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:36.008 08:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:36.008 08:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:36.008 08:28:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:36.008 08:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:36.008 08:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:36.008 08:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:36.008 08:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:36.008 08:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:36.008 08:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:36.008 08:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:36.008 08:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:29:36.008 Found 0000:86:00.0 (0x8086 - 0x159b) 00:29:36.008 08:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:36.008 08:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:36.008 08:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:36.008 08:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:36.008 08:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:36.008 08:28:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:36.008 08:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:29:36.008 Found 0000:86:00.1 (0x8086 - 0x159b) 00:29:36.008 08:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:36.008 08:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:36.008 08:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:36.008 08:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:36.008 08:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:36.008 08:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:36.008 08:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:36.008 08:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:36.008 08:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:36.008 08:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:36.008 08:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:36.008 08:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:36.008 08:28:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:36.008 08:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:36.008 08:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:36.008 08:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:29:36.008 Found net devices under 0000:86:00.0: cvl_0_0 00:29:36.008 08:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:36.008 08:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:36.008 08:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:36.009 08:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:36.009 08:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:36.009 08:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:36.009 08:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:36.009 08:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:36.009 08:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:29:36.009 Found net devices under 0000:86:00.1: cvl_0_1 00:29:36.009 08:28:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:36.009 08:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:36.009 08:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # is_hw=yes 00:29:36.009 08:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:36.009 08:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:36.009 08:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:36.009 08:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:36.009 08:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:36.009 08:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:36.009 08:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:36.009 08:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:36.009 08:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:36.009 08:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:36.009 08:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:36.009 08:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 
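The nvmf/common.sh@410-429 loop traced above resolves each PCI function to its kernel net interface by globbing sysfs and stripping the path prefix. A minimal stand-alone sketch of that discovery step, using a throwaway mock sysfs tree in place of real NICs (the directory layout mirrors /sys/bus/pci/devices/&lt;bdf&gt;/net/&lt;ifname&gt;; the addresses and interface names are the ones from this run, used illustratively):

```shell
#!/usr/bin/env bash
# Sketch of the per-PCI net-device discovery seen in nvmf/common.sh.
# A mock sysfs tree stands in for real hardware so this runs anywhere.
set -euo pipefail

sysfs=$(mktemp -d)
mkdir -p "$sysfs/0000:86:00.0/net/cvl_0_0" "$sysfs/0000:86:00.1/net/cvl_0_1"

pci_devs=("0000:86:00.0" "0000:86:00.1")
net_devs=()

for pci in "${pci_devs[@]}"; do
    # Glob the kernel-exposed net interfaces for this PCI function.
    pci_net_devs=("$sysfs/$pci/net/"*)
    # Keep only the interface names, dropping the sysfs path prefix.
    pci_net_devs=("${pci_net_devs[@]##*/}")
    echo "Found net devices under $pci: ${pci_net_devs[*]}"
    net_devs+=("${pci_net_devs[@]}")
done

echo "total=${#net_devs[@]}"
rm -rf "$sysfs"
```

With two single-port entries this yields net_devs=(cvl_0_0 cvl_0_1), which is exactly what the later "(( 2 == 0 ))" guard and TCP_INTERFACE_LIST assignment consume.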
00:29:36.009 08:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:36.009 08:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:36.009 08:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:36.009 08:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:36.009 08:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:36.009 08:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:36.009 08:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:36.009 08:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:36.009 08:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:36.009 08:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:36.268 08:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:36.268 08:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:36.268 08:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j 
ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:36.268 08:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:36.268 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:36.268 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.397 ms 00:29:36.268 00:29:36.268 --- 10.0.0.2 ping statistics --- 00:29:36.268 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:36.268 rtt min/avg/max/mdev = 0.397/0.397/0.397/0.000 ms 00:29:36.268 08:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:36.268 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:36.268 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.221 ms 00:29:36.268 00:29:36.268 --- 10.0.0.1 ping statistics --- 00:29:36.268 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:36.268 rtt min/avg/max/mdev = 0.221/0.221/0.221/0.000 ms 00:29:36.268 08:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:36.268 08:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@450 -- # return 0 00:29:36.268 08:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:36.268 08:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:36.268 08:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:36.268 08:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:36.268 08:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 
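The nvmf_tcp_init sequence above (common.sh@250-291) moves the target-side interface into its own network namespace, assigns 10.0.0.1/24 and 10.0.0.2/24 to the two ends, opens port 4420 in iptables, and pings in both directions. A dry-run sketch of the same sequence; `run` only records and echoes each command here, since applying them for real requires root plus the physical cvl_0_0/cvl_0_1 pair:

```shell
#!/usr/bin/env bash
# Dry-run sketch of nvmf_tcp_init: isolate the target interface in a
# netns so initiator and target can talk NVMe/TCP over one host.
# `run` only echoes; replace its body with "$@" (as root) to apply.
set -euo pipefail

TARGET_IF=cvl_0_0 INITIATOR_IF=cvl_0_1 NS=cvl_0_0_ns_spdk
TARGET_IP=10.0.0.2 INITIATOR_IP=10.0.0.1

declare -a cmds=()
run() { cmds+=("$*"); echo "+ $*"; }

run ip -4 addr flush "$TARGET_IF"
run ip -4 addr flush "$INITIATOR_IF"
run ip netns add "$NS"
run ip link set "$TARGET_IF" netns "$NS"           # target side lives in the namespace
run ip addr add "$INITIATOR_IP/24" dev "$INITIATOR_IF"
run ip netns exec "$NS" ip addr add "$TARGET_IP/24" dev "$TARGET_IF"
run ip link set "$INITIATOR_IF" up
run ip netns exec "$NS" ip link set "$TARGET_IF" up
run ip netns exec "$NS" ip link set lo up
run iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT  # NVMe/TCP listener port
run ping -c 1 "$TARGET_IP"                          # initiator -> target
run ip netns exec "$NS" ping -c 1 "$INITIATOR_IP"   # target -> initiator
```

The two pings at the end correspond to the successful 0% packet-loss checks in the log; only after both succeed does common.sh@450 return 0 and let the target start.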
00:29:36.268 08:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:36.268 08:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:36.268 08:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:29:36.268 08:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:29:36.268 08:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:29:36.268 08:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:36.268 08:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:36.268 08:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:29:36.268 08:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=1544030 00:29:36.268 08:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 1544030 00:29:36.268 08:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1E 00:29:36.268 08:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 1544030 ']' 00:29:36.268 08:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:36.269 08:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
common/autotest_common.sh@840 -- # local max_retries=100 00:29:36.269 08:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:36.269 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:36.269 08:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:36.269 08:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:29:36.269 [2024-11-28 08:28:18.394943] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:29:36.269 [2024-11-28 08:28:18.395864] Starting SPDK v25.01-pre git sha1 27aaaa748 / DPDK 24.03.0 initialization... 00:29:36.269 [2024-11-28 08:28:18.395897] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:36.269 [2024-11-28 08:28:18.461891] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:36.269 [2024-11-28 08:28:18.505800] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:36.269 [2024-11-28 08:28:18.505837] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:36.269 [2024-11-28 08:28:18.505844] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:36.269 [2024-11-28 08:28:18.505850] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:36.269 [2024-11-28 08:28:18.505856] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:29:36.269 [2024-11-28 08:28:18.507479] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:29:36.269 [2024-11-28 08:28:18.507564] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:29:36.269 [2024-11-28 08:28:18.507674] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:36.269 [2024-11-28 08:28:18.507675] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:29:36.529 [2024-11-28 08:28:18.576102] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:29:36.529 [2024-11-28 08:28:18.576246] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:29:36.529 [2024-11-28 08:28:18.576681] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:29:36.529 [2024-11-28 08:28:18.576713] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:29:36.529 [2024-11-28 08:28:18.576863] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
00:29:36.529 08:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:36.529 08:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:29:36.529 08:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:36.529 08:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:36.529 08:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:29:36.529 08:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:36.529 08:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:36.529 08:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:36.529 08:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:29:36.529 [2024-11-28 08:28:18.644393] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:36.529 08:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:36.529 08:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:29:36.529 08:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:36.529 08:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:29:36.529 08:28:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:29:36.529 08:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:29:36.529 08:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:29:36.529 08:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:36.529 08:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:29:36.529 Malloc0 00:29:36.529 [2024-11-28 08:28:18.720303] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:36.529 08:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:36.529 08:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:29:36.529 08:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:36.529 08:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:29:36.529 08:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=1544074 00:29:36.529 08:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 1544074 /var/tmp/bdevperf.sock 00:29:36.529 08:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 1544074 ']' 00:29:36.529 08:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/bdevperf.sock 00:29:36.529 08:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:29:36.529 08:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:29:36.529 08:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:36.529 08:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:29:36.529 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:29:36.529 08:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:29:36.529 08:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:36.529 08:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:29:36.529 08:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:29:36.529 08:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:36.529 08:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:36.529 { 00:29:36.529 "params": { 00:29:36.529 "name": "Nvme$subsystem", 00:29:36.529 "trtype": "$TEST_TRANSPORT", 00:29:36.529 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:36.529 "adrfam": "ipv4", 00:29:36.529 "trsvcid": "$NVMF_PORT", 00:29:36.529 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:29:36.529 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:36.529 "hdgst": ${hdgst:-false}, 00:29:36.529 "ddgst": ${ddgst:-false} 00:29:36.529 }, 00:29:36.529 "method": "bdev_nvme_attach_controller" 00:29:36.529 } 00:29:36.529 EOF 00:29:36.529 )") 00:29:36.529 08:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:29:36.529 08:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:29:36.529 08:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:29:36.529 08:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:29:36.529 "params": { 00:29:36.529 "name": "Nvme0", 00:29:36.529 "trtype": "tcp", 00:29:36.529 "traddr": "10.0.0.2", 00:29:36.529 "adrfam": "ipv4", 00:29:36.529 "trsvcid": "4420", 00:29:36.529 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:36.529 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:29:36.529 "hdgst": false, 00:29:36.529 "ddgst": false 00:29:36.529 }, 00:29:36.529 "method": "bdev_nvme_attach_controller" 00:29:36.529 }' 00:29:36.789 [2024-11-28 08:28:18.816629] Starting SPDK v25.01-pre git sha1 27aaaa748 / DPDK 24.03.0 initialization... 00:29:36.789 [2024-11-28 08:28:18.816677] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1544074 ] 00:29:36.789 [2024-11-28 08:28:18.882105] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:36.789 [2024-11-28 08:28:18.923745] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:37.048 Running I/O for 10 seconds... 
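The gen_nvmf_target_json step above expands a per-subsystem heredoc into the bdev_nvme_attach_controller fragment that bdevperf reads via `--json /dev/fd/63`. A sketch of that expansion using the values from this run; the fragment shape is copied from the trace itself rather than from any SPDK schema reference, so treat field names beyond what the log shows as assumptions:

```shell
#!/usr/bin/env bash
# Sketch of gen_nvmf_target_json: expand a heredoc template into the
# JSON params for bdev_nvme_attach_controller, as the trace shows.
set -euo pipefail

subsystem=0
TEST_TRANSPORT=tcp
NVMF_FIRST_TARGET_IP=10.0.0.2
NVMF_PORT=4420
hdgst=false
ddgst=false

config=$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
)
echo "$config"
# Quick sanity check before the fragment would be fed to bdevperf.
echo "$config" | grep -q '"method": "bdev_nvme_attach_controller"' && echo "config fragment ready"
```

This matches the printf '%s\n' output in the log, where the placeholders have already been substituted with Nvme0, tcp, 10.0.0.2, and port 4420.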
00:29:37.048 08:28:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:37.048 08:28:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:29:37.048 08:28:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:29:37.048 08:28:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:37.048 08:28:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:29:37.048 08:28:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:37.048 08:28:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:37.048 08:28:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:29:37.048 08:28:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:29:37.048 08:28:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:29:37.048 08:28:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:29:37.048 08:28:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:29:37.048 08:28:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:29:37.048 08:28:19 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:29:37.048 08:28:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:29:37.048 08:28:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:29:37.048 08:28:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:37.048 08:28:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:29:37.048 08:28:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:37.048 08:28:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=78 00:29:37.048 08:28:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@58 -- # '[' 78 -ge 100 ']' 00:29:37.048 08:28:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:29:37.309 08:28:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 00:29:37.309 08:28:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:29:37.309 08:28:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:29:37.309 08:28:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:29:37.309 08:28:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 
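The waitforio helper (host_management.sh@45-64) traced here polls bdev_get_iostat over the bdevperf RPC socket until Nvme0n1 has completed at least 100 reads, retrying up to 10 times with a 0.25 s pause (78 on the first poll in this run, 643 on the second). A stand-alone sketch of that loop; rpc_cmd is mocked with canned iostat JSON so it runs without a live socket, and sed stands in for the `jq -r '.bdevs[0].num_read_ops'` extraction:

```shell
#!/usr/bin/env bash
# Sketch of the waitforio loop: poll num_read_ops until I/O is flowing.
# rpc_cmd is a mock of: rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1
set -euo pipefail

samples=(78 643)   # first poll is below the threshold, second is not
poll=0
rpc_cmd() {
    echo "{\"bdevs\":[{\"name\":\"Nvme0n1\",\"num_read_ops\":${samples[$poll]}}]}"
}

waitforio() {
    local ret=1 i count
    for ((i = 10; i != 0; i--)); do
        count=$(rpc_cmd | sed -n 's/.*"num_read_ops":\([0-9]*\).*/\1/p')
        echo "read_io_count=$count"
        if [ "$count" -ge 100 ]; then
            ret=0
            break
        fi
        sleep 0.25
        poll=$((poll + 1))
    done
    return $ret
}

waitforio && echo "I/O is flowing"
```

Once the threshold is met the helper returns 0, which is why the log proceeds straight to the nvmf_subsystem_remove_host step of the host-management test.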
00:29:37.309 08:28:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:29:37.309 08:28:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:37.309 08:28:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=643 00:29:37.309 08:28:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@58 -- # '[' 643 -ge 100 ']' 00:29:37.309 08:28:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:29:37.309 08:28:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@60 -- # break 00:29:37.309 08:28:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:29:37.309 08:28:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:29:37.309 08:28:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:37.309 08:28:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:29:37.309 [2024-11-28 08:28:19.520097] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fb1d70 is same with the state(6) to be set 00:29:37.309 [2024-11-28 08:28:19.520139] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fb1d70 is same with the state(6) to be set 00:29:37.309 [2024-11-28 08:28:19.520147] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fb1d70 is same with the state(6) to be set 00:29:37.309 [2024-11-28 08:28:19.520153] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv 
state of tqpair=0x1fb1d70 is same with the state(6) to be set 00:29:37.309 [... identical 'The recv state of tqpair=0x1fb1d70 is same with the state(6) to be set' ERROR line repeated for the remaining 08:28:19.520xxx state transitions ...] 00:29:37.310 [2024-11-28 08:28:19.521759] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:37.310 [2024-11-28 08:28:19.521792] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.310 [2024-11-28 08:28:19.521802] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:37.310 [2024-11-28 08:28:19.521809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.310 [2024-11-28 08:28:19.521817] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:37.310 [2024-11-28 08:28:19.521824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.310 [2024-11-28 08:28:19.521831] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:37.310 [2024-11-28 08:28:19.521838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.310 [2024-11-28 08:28:19.521845] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbbf510 is same with the state(6) to be set 00:29:37.310 08:28:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:37.310 08:28:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:29:37.310 08:28:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:37.310 08:28:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:29:37.310 [2024-11-28 08:28:19.530557] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:97792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.310 [2024-11-28 08:28:19.530582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.310 [2024-11-28 08:28:19.530601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:97920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.310 [2024-11-28 08:28:19.530608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.310 [2024-11-28 08:28:19.530617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:98048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.310 [2024-11-28 08:28:19.530624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.310 [2024-11-28 08:28:19.530632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:98176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.310 [2024-11-28 08:28:19.530639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.310 [2024-11-28 08:28:19.530646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.310 [2024-11-28 08:28:19.530653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.310 [2024-11-28 08:28:19.530661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:98432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.310 [2024-11-28 08:28:19.530668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.310 [2024-11-28 08:28:19.530676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:98560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.310 [2024-11-28 08:28:19.530682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.310 [2024-11-28 08:28:19.530690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:98688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.310 [2024-11-28 08:28:19.530697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.310 [2024-11-28 08:28:19.530705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:98816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.310 [2024-11-28 08:28:19.530712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.310 [2024-11-28 08:28:19.530720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:98944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.310 [2024-11-28 08:28:19.530726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.310 [2024-11-28 08:28:19.530735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:99072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.310 [2024-11-28 08:28:19.530741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.310 [2024-11-28 08:28:19.530749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:99200 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:29:37.310 [2024-11-28 08:28:19.530755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.310 [2024-11-28 08:28:19.530763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:99328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.310 [2024-11-28 08:28:19.530770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.310 [2024-11-28 08:28:19.530778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:99456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.310 [2024-11-28 08:28:19.530786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.310 [2024-11-28 08:28:19.530794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:99584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.310 [2024-11-28 08:28:19.530801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.310 [2024-11-28 08:28:19.530809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:99712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.310 [2024-11-28 08:28:19.530815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.310 [2024-11-28 08:28:19.530823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:99840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.310 [2024-11-28 08:28:19.530829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.310 [2024-11-28 08:28:19.530837] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:99968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.310 [2024-11-28 08:28:19.530845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.310 [2024-11-28 08:28:19.530854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:100096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.310 [2024-11-28 08:28:19.530860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.310 [2024-11-28 08:28:19.530868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:100224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.310 [2024-11-28 08:28:19.530875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.310 [2024-11-28 08:28:19.530884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:100352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.310 [2024-11-28 08:28:19.530890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.310 [2024-11-28 08:28:19.530899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:100480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.310 [2024-11-28 08:28:19.530905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.310 [2024-11-28 08:28:19.530914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:100608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.310 [2024-11-28 08:28:19.530920] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.310 [2024-11-28 08:28:19.530928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:100736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.310 [2024-11-28 08:28:19.530935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.310 [2024-11-28 08:28:19.530942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:100864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.310 [2024-11-28 08:28:19.530955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.310 [2024-11-28 08:28:19.530963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:100992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.310 [2024-11-28 08:28:19.530969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.311 [2024-11-28 08:28:19.530980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:101120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.311 [2024-11-28 08:28:19.530987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.311 [2024-11-28 08:28:19.530995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:101248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.311 [2024-11-28 08:28:19.531001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.311 [2024-11-28 08:28:19.531010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:24 nsid:1 lba:101376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.311 [2024-11-28 08:28:19.531016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.311 [2024-11-28 08:28:19.531024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:101504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.311 [2024-11-28 08:28:19.531031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.311 [2024-11-28 08:28:19.531039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:101632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.311 [2024-11-28 08:28:19.531046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.311 [2024-11-28 08:28:19.531055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:101760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.311 [2024-11-28 08:28:19.531062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.311 [2024-11-28 08:28:19.531070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:101888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.311 [2024-11-28 08:28:19.531076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.311 [2024-11-28 08:28:19.531085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:102016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.311 [2024-11-28 08:28:19.531092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.311 [2024-11-28 08:28:19.531100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:102144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.311 [2024-11-28 08:28:19.531107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.311 [2024-11-28 08:28:19.531115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:102272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.311 [2024-11-28 08:28:19.531121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.311 [2024-11-28 08:28:19.531129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:102400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.311 [2024-11-28 08:28:19.531136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.311 [2024-11-28 08:28:19.531144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:102528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.311 [2024-11-28 08:28:19.531151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.311 [2024-11-28 08:28:19.531159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:102656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.311 [2024-11-28 08:28:19.531167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.311 [2024-11-28 08:28:19.531175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:102784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:29:37.311 [2024-11-28 08:28:19.531182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.311 [2024-11-28 08:28:19.531190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:102912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.311 [2024-11-28 08:28:19.531197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.311 [2024-11-28 08:28:19.531204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:103040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.311 [2024-11-28 08:28:19.531211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.311 [2024-11-28 08:28:19.531219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:103168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.311 [2024-11-28 08:28:19.531226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.311 [2024-11-28 08:28:19.531233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:103296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.311 [2024-11-28 08:28:19.531240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.311 [2024-11-28 08:28:19.531248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:103424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.311 [2024-11-28 08:28:19.531255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.311 [2024-11-28 08:28:19.531263] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:103552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.311 [2024-11-28 08:28:19.531269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.311 [2024-11-28 08:28:19.531277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:103680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.311 [2024-11-28 08:28:19.531283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.311 [2024-11-28 08:28:19.531292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:103808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.311 [2024-11-28 08:28:19.531298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.311 [2024-11-28 08:28:19.531306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:103936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.311 [2024-11-28 08:28:19.531313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.311 [2024-11-28 08:28:19.531320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:104064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.311 [2024-11-28 08:28:19.531328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.311 [2024-11-28 08:28:19.531336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:104192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.311 [2024-11-28 08:28:19.531342] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.311 [2024-11-28 08:28:19.531356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:104320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.311 [2024-11-28 08:28:19.531363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.311 [2024-11-28 08:28:19.531371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:104448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.311 [2024-11-28 08:28:19.531378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.311 [2024-11-28 08:28:19.531386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:104576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.311 [2024-11-28 08:28:19.531393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.311 [2024-11-28 08:28:19.531402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:104704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.311 [2024-11-28 08:28:19.531409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.311 [2024-11-28 08:28:19.531417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:104832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.311 [2024-11-28 08:28:19.531423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.311 [2024-11-28 08:28:19.531431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:52 nsid:1 lba:104960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.311 [2024-11-28 08:28:19.531438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.311 [2024-11-28 08:28:19.531447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:105088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.311 [2024-11-28 08:28:19.531454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.311 [2024-11-28 08:28:19.531462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:105216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.311 [2024-11-28 08:28:19.531468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.311 [2024-11-28 08:28:19.531476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:105344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.311 [2024-11-28 08:28:19.531483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.311 [2024-11-28 08:28:19.531490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:105472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.311 [2024-11-28 08:28:19.531497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.311 [2024-11-28 08:28:19.531505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:105600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.311 [2024-11-28 08:28:19.531512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.311 [2024-11-28 08:28:19.531520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:105728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.311 [2024-11-28 08:28:19.531526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.311 [2024-11-28 08:28:19.531534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:105856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.311 [2024-11-28 08:28:19.531543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.311 [2024-11-28 08:28:19.532494] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:29:37.311 task offset: 97792 on job bdev=Nvme0n1 fails 00:29:37.311 00:29:37.311 Latency(us) 00:29:37.311 [2024-11-28T07:28:19.580Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:37.311 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:37.312 Job: Nvme0n1 ended in about 0.41 seconds with error 00:29:37.312 Verification LBA range: start 0x0 length 0x400 00:29:37.312 Nvme0n1 : 0.41 1860.73 116.30 155.87 0.00 30887.38 1332.09 27696.08 00:29:37.312 [2024-11-28T07:28:19.581Z] =================================================================================================================== 00:29:37.312 [2024-11-28T07:28:19.581Z] Total : 1860.73 116.30 155.87 0.00 30887.38 1332.09 27696.08 00:29:37.312 [2024-11-28 08:28:19.534885] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:29:37.312 [2024-11-28 08:28:19.534905] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbbf510 (9): Bad file descriptor 00:29:37.312 08:28:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:37.312 08:28:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:29:37.312 [2024-11-28 08:28:19.538150] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 00:29:38.689 08:28:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 1544074 00:29:38.689 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (1544074) - No such process 00:29:38.689 08:28:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # true 00:29:38.689 08:28:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:29:38.689 08:28:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:29:38.689 08:28:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:29:38.689 08:28:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:29:38.689 08:28:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:29:38.689 08:28:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:38.689 08:28:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:38.689 { 00:29:38.689 "params": { 00:29:38.689 "name": "Nvme$subsystem", 00:29:38.689 
"trtype": "$TEST_TRANSPORT", 00:29:38.689 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:38.689 "adrfam": "ipv4", 00:29:38.689 "trsvcid": "$NVMF_PORT", 00:29:38.689 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:38.689 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:38.689 "hdgst": ${hdgst:-false}, 00:29:38.689 "ddgst": ${ddgst:-false} 00:29:38.689 }, 00:29:38.689 "method": "bdev_nvme_attach_controller" 00:29:38.689 } 00:29:38.689 EOF 00:29:38.689 )") 00:29:38.689 08:28:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:29:38.689 08:28:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:29:38.689 08:28:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:29:38.689 08:28:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:29:38.689 "params": { 00:29:38.689 "name": "Nvme0", 00:29:38.689 "trtype": "tcp", 00:29:38.689 "traddr": "10.0.0.2", 00:29:38.689 "adrfam": "ipv4", 00:29:38.689 "trsvcid": "4420", 00:29:38.689 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:38.689 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:29:38.689 "hdgst": false, 00:29:38.689 "ddgst": false 00:29:38.689 }, 00:29:38.689 "method": "bdev_nvme_attach_controller" 00:29:38.689 }' 00:29:38.689 [2024-11-28 08:28:20.592939] Starting SPDK v25.01-pre git sha1 27aaaa748 / DPDK 24.03.0 initialization... 
00:29:38.689 [2024-11-28 08:28:20.592998] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1544466 ] 00:29:38.689 [2024-11-28 08:28:20.655889] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:38.689 [2024-11-28 08:28:20.697502] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:38.689 Running I/O for 1 seconds... 00:29:39.627 1920.00 IOPS, 120.00 MiB/s 00:29:39.627 Latency(us) 00:29:39.627 [2024-11-28T07:28:21.896Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:39.627 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:39.627 Verification LBA range: start 0x0 length 0x400 00:29:39.627 Nvme0n1 : 1.01 1958.08 122.38 0.00 0.00 32174.76 4843.97 27924.03 00:29:39.627 [2024-11-28T07:28:21.896Z] =================================================================================================================== 00:29:39.627 [2024-11-28T07:28:21.896Z] Total : 1958.08 122.38 0.00 0.00 32174.76 4843.97 27924.03 00:29:39.886 08:28:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:29:39.886 08:28:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:29:39.886 08:28:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:29:39.886 08:28:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:29:39.886 08:28:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
target/host_management.sh@40 -- # nvmftestfini 00:29:39.886 08:28:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:39.886 08:28:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:29:39.886 08:28:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:39.886 08:28:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:29:39.886 08:28:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:39.886 08:28:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:39.886 rmmod nvme_tcp 00:29:39.886 rmmod nvme_fabrics 00:29:39.886 rmmod nvme_keyring 00:29:39.886 08:28:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:39.886 08:28:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:29:39.886 08:28:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:29:39.886 08:28:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 1544030 ']' 00:29:39.886 08:28:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 1544030 00:29:39.886 08:28:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 1544030 ']' 00:29:39.886 08:28:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 1544030 00:29:39.886 08:28:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@959 -- # uname 00:29:39.886 08:28:22 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:39.886 08:28:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1544030 00:29:40.146 08:28:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:40.146 08:28:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:29:40.146 08:28:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1544030' 00:29:40.146 killing process with pid 1544030 00:29:40.146 08:28:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 1544030 00:29:40.146 08:28:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 1544030 00:29:40.146 [2024-11-28 08:28:22.319184] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:29:40.146 08:28:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:40.146 08:28:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:40.146 08:28:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:40.146 08:28:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:29:40.146 08:28:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:29:40.146 08:28:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:40.146 08:28:22 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:29:40.146 08:28:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:40.146 08:28:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:40.146 08:28:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:40.147 08:28:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:40.147 08:28:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:42.681 08:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:42.681 08:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:29:42.681 00:29:42.681 real 0m11.637s 00:29:42.681 user 0m17.226s 00:29:42.681 sys 0m5.828s 00:29:42.681 08:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:42.681 08:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:29:42.682 ************************************ 00:29:42.682 END TEST nvmf_host_management 00:29:42.682 ************************************ 00:29:42.682 08:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:29:42.682 08:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:29:42.682 
08:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:42.682 08:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:29:42.682 ************************************ 00:29:42.682 START TEST nvmf_lvol 00:29:42.682 ************************************ 00:29:42.682 08:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:29:42.682 * Looking for test storage... 00:29:42.682 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:29:42.682 08:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:29:42.682 08:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1693 -- # lcov --version 00:29:42.682 08:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:29:42.682 08:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:29:42.682 08:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:42.682 08:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:42.682 08:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:42.682 08:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:29:42.682 08:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:29:42.682 08:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:29:42.682 08:28:24 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:29:42.682 08:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:29:42.682 08:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:29:42.682 08:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:29:42.682 08:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:42.682 08:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:29:42.682 08:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:29:42.682 08:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:42.682 08:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:42.682 08:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:29:42.682 08:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:29:42.682 08:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:42.682 08:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:29:42.682 08:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:29:42.682 08:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:29:42.682 08:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:29:42.682 08:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:42.682 08:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:29:42.682 08:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:29:42.682 08:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:42.682 08:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:42.682 08:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:29:42.682 08:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:42.682 08:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:29:42.682 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:42.682 --rc genhtml_branch_coverage=1 00:29:42.682 --rc 
genhtml_function_coverage=1 00:29:42.682 --rc genhtml_legend=1 00:29:42.682 --rc geninfo_all_blocks=1 00:29:42.682 --rc geninfo_unexecuted_blocks=1 00:29:42.682 00:29:42.682 ' 00:29:42.682 08:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:29:42.682 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:42.682 --rc genhtml_branch_coverage=1 00:29:42.682 --rc genhtml_function_coverage=1 00:29:42.682 --rc genhtml_legend=1 00:29:42.682 --rc geninfo_all_blocks=1 00:29:42.682 --rc geninfo_unexecuted_blocks=1 00:29:42.682 00:29:42.682 ' 00:29:42.682 08:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:29:42.682 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:42.682 --rc genhtml_branch_coverage=1 00:29:42.682 --rc genhtml_function_coverage=1 00:29:42.682 --rc genhtml_legend=1 00:29:42.682 --rc geninfo_all_blocks=1 00:29:42.682 --rc geninfo_unexecuted_blocks=1 00:29:42.682 00:29:42.682 ' 00:29:42.682 08:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:29:42.682 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:42.682 --rc genhtml_branch_coverage=1 00:29:42.682 --rc genhtml_function_coverage=1 00:29:42.682 --rc genhtml_legend=1 00:29:42.682 --rc geninfo_all_blocks=1 00:29:42.682 --rc geninfo_unexecuted_blocks=1 00:29:42.682 00:29:42.682 ' 00:29:42.682 08:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:42.682 08:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:29:42.682 08:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:42.682 08:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@9 -- # 
NVMF_PORT=4420 00:29:42.682 08:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:42.682 08:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:42.682 08:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:42.682 08:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:42.682 08:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:42.682 08:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:42.682 08:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:42.682 08:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:42.682 08:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:29:42.682 08:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:29:42.682 08:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:42.682 08:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:42.682 08:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:42.682 08:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:42.682 08:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:42.682 08:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:29:42.682 08:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:42.682 08:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:42.682 08:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:42.682 08:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:42.682 08:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:42.682 08:28:24 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:42.682 08:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:29:42.683 08:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:42.683 08:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:29:42.683 08:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:42.683 08:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:42.683 08:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:42.683 08:28:24 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:42.683 08:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:42.683 08:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:29:42.683 08:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:29:42.683 08:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:42.683 08:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:42.683 08:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:42.683 08:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:29:42.683 08:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:29:42.683 08:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:29:42.683 08:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:29:42.683 08:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:42.683 08:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:29:42.683 08:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:42.683 08:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:42.683 08:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@476 -- # 
prepare_net_devs 00:29:42.683 08:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:42.683 08:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:42.683 08:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:42.683 08:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:42.683 08:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:42.683 08:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:42.683 08:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:42.683 08:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:29:42.683 08:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:29:47.957 08:28:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:47.957 08:28:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:29:47.957 08:28:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:47.957 08:28:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:47.957 08:28:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:47.957 08:28:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:47.957 08:28:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 
00:29:47.957 08:28:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:29:47.957 08:28:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:47.957 08:28:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:29:47.957 08:28:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:29:47.957 08:28:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:29:47.957 08:28:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:29:47.957 08:28:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # mlx=() 00:29:47.957 08:28:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:29:47.957 08:28:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:47.957 08:28:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:47.957 08:28:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:47.957 08:28:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:47.957 08:28:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:47.957 08:28:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:47.957 08:28:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:47.957 08:28:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@338 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:47.957 08:28:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:47.957 08:28:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:47.957 08:28:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:47.957 08:28:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:47.957 08:28:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:47.957 08:28:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:47.957 08:28:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:47.957 08:28:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:47.957 08:28:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:47.957 08:28:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:47.957 08:28:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:47.957 08:28:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:29:47.957 Found 0000:86:00.0 (0x8086 - 0x159b) 00:29:47.957 08:28:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:47.957 08:28:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:47.957 08:28:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- 
# [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:47.957 08:28:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:47.957 08:28:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:47.957 08:28:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:47.957 08:28:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:29:47.958 Found 0000:86:00.1 (0x8086 - 0x159b) 00:29:47.958 08:28:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:47.958 08:28:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:47.958 08:28:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:47.958 08:28:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:47.958 08:28:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:47.958 08:28:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:47.958 08:28:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:47.958 08:28:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:47.958 08:28:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:47.958 08:28:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:47.958 08:28:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:47.958 08:28:29 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:47.958 08:28:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:47.958 08:28:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:47.958 08:28:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:47.958 08:28:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:29:47.958 Found net devices under 0000:86:00.0: cvl_0_0 00:29:47.958 08:28:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:47.958 08:28:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:47.958 08:28:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:47.958 08:28:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:47.958 08:28:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:47.958 08:28:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:47.958 08:28:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:47.958 08:28:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:47.958 08:28:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:29:47.958 Found net devices under 0000:86:00.1: cvl_0_1 00:29:47.958 08:28:29 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:47.958 08:28:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:47.958 08:28:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # is_hw=yes 00:29:47.958 08:28:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:47.958 08:28:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:47.958 08:28:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:47.958 08:28:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:47.958 08:28:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:47.958 08:28:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:47.958 08:28:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:47.958 08:28:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:47.958 08:28:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:47.958 08:28:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:47.958 08:28:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:47.958 08:28:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:47.958 08:28:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:47.958 08:28:29 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:47.958 08:28:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:47.958 08:28:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:47.958 08:28:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:47.958 08:28:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:47.958 08:28:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:47.958 08:28:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:47.958 08:28:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:47.958 08:28:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:47.958 08:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:47.958 08:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:47.958 08:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:47.958 08:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:47.958 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:29:47.958 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.420 ms 00:29:47.958 00:29:47.958 --- 10.0.0.2 ping statistics --- 00:29:47.958 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:47.958 rtt min/avg/max/mdev = 0.420/0.420/0.420/0.000 ms 00:29:47.958 08:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:47.958 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:47.958 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.210 ms 00:29:47.958 00:29:47.958 --- 10.0.0.1 ping statistics --- 00:29:47.958 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:47.958 rtt min/avg/max/mdev = 0.210/0.210/0.210/0.000 ms 00:29:47.958 08:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:47.958 08:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@450 -- # return 0 00:29:47.958 08:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:47.958 08:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:47.958 08:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:47.958 08:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:47.958 08:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:47.958 08:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:47.958 08:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:47.958 08:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:29:47.958 
08:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:47.958 08:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:47.958 08:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:29:47.958 08:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=1548070 00:29:47.958 08:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x7 00:29:47.958 08:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 1548070 00:29:47.958 08:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 1548070 ']' 00:29:47.958 08:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:47.958 08:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:47.958 08:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:47.958 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:47.958 08:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:47.958 08:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:29:47.958 [2024-11-28 08:28:30.182214] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 
00:29:47.958 [2024-11-28 08:28:30.183177] Starting SPDK v25.01-pre git sha1 27aaaa748 / DPDK 24.03.0 initialization... 00:29:47.958 [2024-11-28 08:28:30.183211] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:48.217 [2024-11-28 08:28:30.250632] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:29:48.217 [2024-11-28 08:28:30.294274] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:48.217 [2024-11-28 08:28:30.294311] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:48.217 [2024-11-28 08:28:30.294318] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:48.217 [2024-11-28 08:28:30.294324] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:48.217 [2024-11-28 08:28:30.294330] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:48.217 [2024-11-28 08:28:30.295740] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:48.217 [2024-11-28 08:28:30.295758] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:29:48.217 [2024-11-28 08:28:30.295761] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:48.217 [2024-11-28 08:28:30.365975] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:29:48.217 [2024-11-28 08:28:30.366064] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:29:48.217 [2024-11-28 08:28:30.366087] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
00:29:48.217 [2024-11-28 08:28:30.366258] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:29:48.217 08:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:48.217 08:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 00:29:48.217 08:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:48.217 08:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:48.217 08:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:29:48.217 08:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:48.217 08:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:29:48.476 [2024-11-28 08:28:30.604483] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:48.476 08:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:29:48.734 08:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:29:48.734 08:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:29:48.993 08:28:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:29:48.993 08:28:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:29:48.993 08:28:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:29:49.251 08:28:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=5ee417b2-dea8-4ead-a3d5-0c2b2c54e7fb 00:29:49.251 08:28:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 5ee417b2-dea8-4ead-a3d5-0c2b2c54e7fb lvol 20 00:29:49.511 08:28:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=5d6b6109-603c-4873-a9f1-1334f06edf06 00:29:49.511 08:28:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:29:49.770 08:28:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 5d6b6109-603c-4873-a9f1-1334f06edf06 00:29:49.770 08:28:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:29:50.029 [2024-11-28 08:28:32.168454] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:50.029 08:28:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:50.288 
08:28:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=1548552 00:29:50.288 08:28:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:29:50.288 08:28:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:29:51.225 08:28:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 5d6b6109-603c-4873-a9f1-1334f06edf06 MY_SNAPSHOT 00:29:51.483 08:28:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=9b45832f-dfd6-4951-b916-d053e53623a5 00:29:51.483 08:28:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 5d6b6109-603c-4873-a9f1-1334f06edf06 30 00:29:51.742 08:28:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 9b45832f-dfd6-4951-b916-d053e53623a5 MY_CLONE 00:29:52.001 08:28:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=9dcc24da-348b-4e48-9338-19a9f5049d2c 00:29:52.001 08:28:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 9dcc24da-348b-4e48-9338-19a9f5049d2c 00:29:52.569 08:28:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 1548552 00:30:00.709 Initializing NVMe Controllers 00:30:00.709 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:30:00.709 
Controller IO queue size 128, less than required. 00:30:00.709 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:30:00.709 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:30:00.709 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:30:00.709 Initialization complete. Launching workers. 00:30:00.709 ======================================================== 00:30:00.709 Latency(us) 00:30:00.709 Device Information : IOPS MiB/s Average min max 00:30:00.709 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 12233.72 47.79 10463.86 1639.27 73470.69 00:30:00.709 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 12094.23 47.24 10586.52 2107.83 60231.69 00:30:00.709 ======================================================== 00:30:00.709 Total : 24327.95 95.03 10524.84 1639.27 73470.69 00:30:00.709 00:30:00.709 08:28:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:30:00.709 08:28:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 5d6b6109-603c-4873-a9f1-1334f06edf06 00:30:00.968 08:28:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 5ee417b2-dea8-4ead-a3d5-0c2b2c54e7fb 00:30:01.227 08:28:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:30:01.227 08:28:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:30:01.227 08:28:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@64 -- 
# nvmftestfini 00:30:01.227 08:28:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:01.227 08:28:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:30:01.227 08:28:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:01.227 08:28:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:30:01.227 08:28:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:01.227 08:28:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:01.227 rmmod nvme_tcp 00:30:01.227 rmmod nvme_fabrics 00:30:01.227 rmmod nvme_keyring 00:30:01.228 08:28:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:01.228 08:28:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:30:01.228 08:28:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:30:01.228 08:28:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 1548070 ']' 00:30:01.228 08:28:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 1548070 00:30:01.228 08:28:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 1548070 ']' 00:30:01.228 08:28:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 1548070 00:30:01.228 08:28:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@959 -- # uname 00:30:01.228 08:28:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:01.228 08:28:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@960 -- # ps 
--no-headers -o comm= 1548070 00:30:01.487 08:28:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:01.487 08:28:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:01.487 08:28:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1548070' 00:30:01.487 killing process with pid 1548070 00:30:01.487 08:28:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@973 -- # kill 1548070 00:30:01.487 08:28:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 1548070 00:30:01.487 08:28:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:01.487 08:28:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:01.487 08:28:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:01.487 08:28:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:30:01.487 08:28:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:01.487 08:28:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:30:01.487 08:28:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:30:01.487 08:28:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:01.487 08:28:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:01.487 08:28:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:01.488 08:28:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:01.488 08:28:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:04.023 08:28:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:04.023 00:30:04.023 real 0m21.297s 00:30:04.023 user 0m55.328s 00:30:04.023 sys 0m9.612s 00:30:04.023 08:28:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:04.023 08:28:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:30:04.023 ************************************ 00:30:04.023 END TEST nvmf_lvol 00:30:04.023 ************************************ 00:30:04.023 08:28:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:30:04.023 08:28:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:30:04.023 08:28:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:04.023 08:28:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:30:04.023 ************************************ 00:30:04.023 START TEST nvmf_lvs_grow 00:30:04.023 ************************************ 00:30:04.023 08:28:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:30:04.023 * Looking for test storage... 
00:30:04.023 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:04.023 08:28:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:30:04.023 08:28:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lcov --version 00:30:04.023 08:28:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:30:04.023 08:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:30:04.023 08:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:04.023 08:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:04.023 08:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:04.023 08:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:30:04.023 08:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:30:04.023 08:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:30:04.023 08:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:30:04.023 08:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:30:04.023 08:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:30:04.023 08:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:30:04.023 08:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:04.023 08:28:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:30:04.023 08:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:30:04.023 08:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:04.023 08:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:30:04.023 08:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:30:04.023 08:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:30:04.023 08:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:04.023 08:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:30:04.023 08:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:30:04.023 08:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:30:04.023 08:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:30:04.023 08:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:04.023 08:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:30:04.023 08:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:30:04.023 08:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:04.023 08:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:04.023 08:28:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:30:04.023 08:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:04.023 08:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:30:04.023 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:04.023 --rc genhtml_branch_coverage=1 00:30:04.023 --rc genhtml_function_coverage=1 00:30:04.023 --rc genhtml_legend=1 00:30:04.023 --rc geninfo_all_blocks=1 00:30:04.023 --rc geninfo_unexecuted_blocks=1 00:30:04.023 00:30:04.023 ' 00:30:04.023 08:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:30:04.023 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:04.023 --rc genhtml_branch_coverage=1 00:30:04.023 --rc genhtml_function_coverage=1 00:30:04.023 --rc genhtml_legend=1 00:30:04.023 --rc geninfo_all_blocks=1 00:30:04.023 --rc geninfo_unexecuted_blocks=1 00:30:04.023 00:30:04.023 ' 00:30:04.023 08:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:30:04.023 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:04.023 --rc genhtml_branch_coverage=1 00:30:04.023 --rc genhtml_function_coverage=1 00:30:04.024 --rc genhtml_legend=1 00:30:04.024 --rc geninfo_all_blocks=1 00:30:04.024 --rc geninfo_unexecuted_blocks=1 00:30:04.024 00:30:04.024 ' 00:30:04.024 08:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:30:04.024 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:04.024 --rc genhtml_branch_coverage=1 00:30:04.024 --rc genhtml_function_coverage=1 00:30:04.024 --rc genhtml_legend=1 00:30:04.024 --rc geninfo_all_blocks=1 00:30:04.024 --rc 
geninfo_unexecuted_blocks=1 00:30:04.024 00:30:04.024 ' 00:30:04.024 08:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:04.024 08:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:30:04.024 08:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:04.024 08:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:04.024 08:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:04.024 08:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:04.024 08:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:04.024 08:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:04.024 08:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:04.024 08:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:04.024 08:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:04.024 08:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:04.024 08:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:30:04.024 08:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:30:04.024 08:28:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:04.024 08:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:04.024 08:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:04.024 08:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:04.024 08:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:04.024 08:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:30:04.024 08:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:04.024 08:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:04.024 08:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:04.024 08:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:04.024 08:28:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:04.024 08:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:04.024 08:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:30:04.024 08:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:04.024 08:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:30:04.024 08:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:04.024 08:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:04.024 08:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:04.024 08:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:04.024 08:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:04.024 08:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:30:04.024 08:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:30:04.024 08:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:04.024 08:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:04.024 08:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:04.024 08:28:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:04.024 08:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:30:04.024 08:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:30:04.024 08:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:04.024 08:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:04.024 08:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:04.024 08:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:04.024 08:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:04.024 08:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:04.024 08:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:04.024 08:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:04.024 08:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:04.024 08:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:04.024 08:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:30:04.024 08:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:30:09.341 
08:28:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:09.341 08:28:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:30:09.341 08:28:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:09.341 08:28:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:09.341 08:28:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:09.341 08:28:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:09.341 08:28:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:09.341 08:28:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:30:09.341 08:28:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:09.341 08:28:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:30:09.341 08:28:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:30:09.341 08:28:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:30:09.341 08:28:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:30:09.341 08:28:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:30:09.341 08:28:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local -ga mlx 00:30:09.341 08:28:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:09.341 08:28:51 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:09.341 08:28:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:09.341 08:28:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:09.341 08:28:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:09.341 08:28:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:09.341 08:28:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:09.341 08:28:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:09.341 08:28:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:09.341 08:28:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:09.341 08:28:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:09.341 08:28:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:09.341 08:28:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:09.341 08:28:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:09.341 08:28:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:09.341 08:28:51 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:09.341 08:28:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:09.341 08:28:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:09.341 08:28:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:09.341 08:28:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:30:09.341 Found 0000:86:00.0 (0x8086 - 0x159b) 00:30:09.341 08:28:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:09.341 08:28:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:09.341 08:28:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:09.341 08:28:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:09.341 08:28:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:09.341 08:28:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:09.341 08:28:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:30:09.341 Found 0000:86:00.1 (0x8086 - 0x159b) 00:30:09.341 08:28:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:09.341 08:28:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:09.341 08:28:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # 
[[ 0x159b == \0\x\1\0\1\7 ]] 00:30:09.341 08:28:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:09.341 08:28:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:09.341 08:28:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:09.341 08:28:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:09.341 08:28:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:09.341 08:28:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:09.341 08:28:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:09.341 08:28:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:09.341 08:28:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:09.341 08:28:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:09.341 08:28:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:09.341 08:28:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:09.341 08:28:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:30:09.341 Found net devices under 0000:86:00.0: cvl_0_0 00:30:09.341 08:28:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:09.341 08:28:51 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:09.341 08:28:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:09.341 08:28:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:09.341 08:28:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:09.341 08:28:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:09.341 08:28:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:09.341 08:28:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:09.341 08:28:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:30:09.341 Found net devices under 0000:86:00.1: cvl_0_1 00:30:09.341 08:28:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:09.341 08:28:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:09.341 08:28:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # is_hw=yes 00:30:09.341 08:28:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:09.341 08:28:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:09.341 08:28:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:09.341 08:28:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:09.341 
08:28:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:09.341 08:28:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:09.341 08:28:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:09.341 08:28:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:09.342 08:28:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:09.342 08:28:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:09.342 08:28:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:09.342 08:28:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:09.342 08:28:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:09.342 08:28:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:09.342 08:28:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:09.342 08:28:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:09.342 08:28:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:09.342 08:28:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:09.342 08:28:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 
10.0.0.1/24 dev cvl_0_1 00:30:09.342 08:28:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:09.342 08:28:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:09.342 08:28:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:09.342 08:28:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:09.342 08:28:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:09.342 08:28:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:09.342 08:28:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:09.342 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:09.342 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.463 ms 00:30:09.342 00:30:09.342 --- 10.0.0.2 ping statistics --- 00:30:09.342 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:09.342 rtt min/avg/max/mdev = 0.463/0.463/0.463/0.000 ms 00:30:09.342 08:28:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:09.342 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:09.342 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.219 ms 00:30:09.342 00:30:09.342 --- 10.0.0.1 ping statistics --- 00:30:09.342 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:09.342 rtt min/avg/max/mdev = 0.219/0.219/0.219/0.000 ms 00:30:09.342 08:28:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:09.342 08:28:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@450 -- # return 0 00:30:09.342 08:28:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:09.342 08:28:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:09.342 08:28:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:09.342 08:28:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:09.342 08:28:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:09.342 08:28:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:09.342 08:28:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:09.342 08:28:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:30:09.342 08:28:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:09.342 08:28:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:09.342 08:28:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:30:09.342 08:28:51 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:30:09.342 08:28:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=1553689 00:30:09.342 08:28:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 1553689 00:30:09.342 08:28:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 1553689 ']' 00:30:09.342 08:28:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:09.342 08:28:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:09.342 08:28:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:09.342 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:09.342 08:28:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:09.342 08:28:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:30:09.342 [2024-11-28 08:28:51.510859] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:30:09.342 [2024-11-28 08:28:51.511802] Starting SPDK v25.01-pre git sha1 27aaaa748 / DPDK 24.03.0 initialization... 
00:30:09.342 [2024-11-28 08:28:51.511835] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:09.342 [2024-11-28 08:28:51.576959] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:09.667 [2024-11-28 08:28:51.624385] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:09.667 [2024-11-28 08:28:51.624418] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:09.667 [2024-11-28 08:28:51.624426] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:09.667 [2024-11-28 08:28:51.624433] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:09.667 [2024-11-28 08:28:51.624438] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:09.667 [2024-11-28 08:28:51.624969] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:09.667 [2024-11-28 08:28:51.693888] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:30:09.667 [2024-11-28 08:28:51.694127] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:30:09.667 08:28:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:09.667 08:28:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 00:30:09.667 08:28:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:09.667 08:28:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:09.667 08:28:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:30:09.667 08:28:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:09.667 08:28:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:30:09.996 [2024-11-28 08:28:51.933397] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:09.996 08:28:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:30:09.996 08:28:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:30:09.996 08:28:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:09.996 08:28:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:30:09.996 ************************************ 00:30:09.996 START TEST lvs_grow_clean 00:30:09.996 ************************************ 00:30:09.996 08:28:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # lvs_grow 00:30:09.996 08:28:51 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:30:09.996 08:28:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:30:09.996 08:28:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:30:09.996 08:28:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:30:09.996 08:28:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:30:09.996 08:28:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:30:09.996 08:28:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:30:09.996 08:28:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:30:09.996 08:28:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:30:09.996 08:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:30:09.996 08:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:30:10.286 08:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=c9afb7d9-655b-4a12-90d8-cc8a691ff430 00:30:10.286 08:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:30:10.286 08:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c9afb7d9-655b-4a12-90d8-cc8a691ff430 00:30:10.571 08:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:30:10.571 08:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:30:10.571 08:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u c9afb7d9-655b-4a12-90d8-cc8a691ff430 lvol 150 00:30:10.571 08:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=b4b021b3-f663-4f7e-ab3a-fb2eff74dbc3 00:30:10.571 08:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:30:10.571 08:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:30:10.904 [2024-11-28 08:28:52.961349] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:30:10.904 [2024-11-28 08:28:52.961474] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:30:10.904 true 00:30:10.904 08:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c9afb7d9-655b-4a12-90d8-cc8a691ff430 00:30:10.904 08:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:30:10.904 08:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:30:10.904 08:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:30:11.227 08:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 b4b021b3-f663-4f7e-ab3a-fb2eff74dbc3 00:30:11.486 08:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:30:11.486 [2024-11-28 08:28:53.725820] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:11.486 08:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:30:11.745 08:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1554195 00:30:11.745 08:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:30:11.745 08:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:30:11.745 08:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1554195 /var/tmp/bdevperf.sock 00:30:11.745 08:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 1554195 ']' 00:30:11.745 08:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:30:11.745 08:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:11.745 08:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:30:11.745 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:30:11.745 08:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:11.745 08:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:30:11.745 [2024-11-28 08:28:53.981761] Starting SPDK v25.01-pre git sha1 27aaaa748 / DPDK 24.03.0 initialization... 00:30:11.745 [2024-11-28 08:28:53.981809] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1554195 ] 00:30:12.004 [2024-11-28 08:28:54.044337] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:12.004 [2024-11-28 08:28:54.088606] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:12.004 08:28:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:12.004 08:28:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 00:30:12.004 08:28:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:30:12.262 Nvme0n1 00:30:12.262 08:28:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:30:12.521 [ 00:30:12.521 { 00:30:12.521 "name": "Nvme0n1", 00:30:12.521 "aliases": [ 00:30:12.521 "b4b021b3-f663-4f7e-ab3a-fb2eff74dbc3" 00:30:12.521 ], 00:30:12.521 "product_name": "NVMe disk", 00:30:12.521 
"block_size": 4096, 00:30:12.521 "num_blocks": 38912, 00:30:12.521 "uuid": "b4b021b3-f663-4f7e-ab3a-fb2eff74dbc3", 00:30:12.521 "numa_id": 1, 00:30:12.521 "assigned_rate_limits": { 00:30:12.521 "rw_ios_per_sec": 0, 00:30:12.521 "rw_mbytes_per_sec": 0, 00:30:12.521 "r_mbytes_per_sec": 0, 00:30:12.521 "w_mbytes_per_sec": 0 00:30:12.521 }, 00:30:12.521 "claimed": false, 00:30:12.521 "zoned": false, 00:30:12.521 "supported_io_types": { 00:30:12.521 "read": true, 00:30:12.521 "write": true, 00:30:12.521 "unmap": true, 00:30:12.521 "flush": true, 00:30:12.521 "reset": true, 00:30:12.521 "nvme_admin": true, 00:30:12.521 "nvme_io": true, 00:30:12.521 "nvme_io_md": false, 00:30:12.521 "write_zeroes": true, 00:30:12.521 "zcopy": false, 00:30:12.521 "get_zone_info": false, 00:30:12.521 "zone_management": false, 00:30:12.521 "zone_append": false, 00:30:12.521 "compare": true, 00:30:12.521 "compare_and_write": true, 00:30:12.521 "abort": true, 00:30:12.521 "seek_hole": false, 00:30:12.521 "seek_data": false, 00:30:12.521 "copy": true, 00:30:12.521 "nvme_iov_md": false 00:30:12.521 }, 00:30:12.521 "memory_domains": [ 00:30:12.521 { 00:30:12.521 "dma_device_id": "system", 00:30:12.521 "dma_device_type": 1 00:30:12.521 } 00:30:12.521 ], 00:30:12.521 "driver_specific": { 00:30:12.521 "nvme": [ 00:30:12.521 { 00:30:12.521 "trid": { 00:30:12.521 "trtype": "TCP", 00:30:12.521 "adrfam": "IPv4", 00:30:12.521 "traddr": "10.0.0.2", 00:30:12.521 "trsvcid": "4420", 00:30:12.521 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:30:12.522 }, 00:30:12.522 "ctrlr_data": { 00:30:12.522 "cntlid": 1, 00:30:12.522 "vendor_id": "0x8086", 00:30:12.522 "model_number": "SPDK bdev Controller", 00:30:12.522 "serial_number": "SPDK0", 00:30:12.522 "firmware_revision": "25.01", 00:30:12.522 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:12.522 "oacs": { 00:30:12.522 "security": 0, 00:30:12.522 "format": 0, 00:30:12.522 "firmware": 0, 00:30:12.522 "ns_manage": 0 00:30:12.522 }, 00:30:12.522 "multi_ctrlr": true, 
00:30:12.522 "ana_reporting": false 00:30:12.522 }, 00:30:12.522 "vs": { 00:30:12.522 "nvme_version": "1.3" 00:30:12.522 }, 00:30:12.522 "ns_data": { 00:30:12.522 "id": 1, 00:30:12.522 "can_share": true 00:30:12.522 } 00:30:12.522 } 00:30:12.522 ], 00:30:12.522 "mp_policy": "active_passive" 00:30:12.522 } 00:30:12.522 } 00:30:12.522 ] 00:30:12.522 08:28:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=1554297 00:30:12.522 08:28:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:30:12.522 08:28:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:30:12.522 Running I/O for 10 seconds... 00:30:13.457 Latency(us) 00:30:13.457 [2024-11-28T07:28:55.726Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:13.457 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:13.457 Nvme0n1 : 1.00 22225.00 86.82 0.00 0.00 0.00 0.00 0.00 00:30:13.457 [2024-11-28T07:28:55.727Z] =================================================================================================================== 00:30:13.458 [2024-11-28T07:28:55.727Z] Total : 22225.00 86.82 0.00 0.00 0.00 0.00 0.00 00:30:13.458 00:30:14.394 08:28:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u c9afb7d9-655b-4a12-90d8-cc8a691ff430 00:30:14.653 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:14.653 Nvme0n1 : 2.00 22415.50 87.56 0.00 0.00 0.00 0.00 0.00 00:30:14.653 [2024-11-28T07:28:56.922Z] 
=================================================================================================================== 00:30:14.653 [2024-11-28T07:28:56.922Z] Total : 22415.50 87.56 0.00 0.00 0.00 0.00 0.00 00:30:14.653 00:30:14.653 true 00:30:14.653 08:28:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c9afb7d9-655b-4a12-90d8-cc8a691ff430 00:30:14.653 08:28:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:30:14.912 08:28:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:30:14.912 08:28:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:30:14.912 08:28:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 1554297 00:30:15.480 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:15.480 Nvme0n1 : 3.00 22436.67 87.64 0.00 0.00 0.00 0.00 0.00 00:30:15.480 [2024-11-28T07:28:57.749Z] =================================================================================================================== 00:30:15.480 [2024-11-28T07:28:57.749Z] Total : 22436.67 87.64 0.00 0.00 0.00 0.00 0.00 00:30:15.480 00:30:16.857 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:16.857 Nvme0n1 : 4.00 22510.75 87.93 0.00 0.00 0.00 0.00 0.00 00:30:16.857 [2024-11-28T07:28:59.126Z] =================================================================================================================== 00:30:16.857 [2024-11-28T07:28:59.126Z] Total : 22510.75 87.93 0.00 0.00 0.00 0.00 0.00 00:30:16.857 00:30:17.794 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO 
size: 4096) 00:30:17.794 Nvme0n1 : 5.00 22555.20 88.11 0.00 0.00 0.00 0.00 0.00 00:30:17.794 [2024-11-28T07:29:00.063Z] =================================================================================================================== 00:30:17.794 [2024-11-28T07:29:00.063Z] Total : 22555.20 88.11 0.00 0.00 0.00 0.00 0.00 00:30:17.794 00:30:18.731 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:18.731 Nvme0n1 : 6.00 22563.67 88.14 0.00 0.00 0.00 0.00 0.00 00:30:18.731 [2024-11-28T07:29:01.000Z] =================================================================================================================== 00:30:18.731 [2024-11-28T07:29:01.000Z] Total : 22563.67 88.14 0.00 0.00 0.00 0.00 0.00 00:30:18.731 00:30:19.667 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:19.667 Nvme0n1 : 7.00 22606.00 88.30 0.00 0.00 0.00 0.00 0.00 00:30:19.667 [2024-11-28T07:29:01.936Z] =================================================================================================================== 00:30:19.667 [2024-11-28T07:29:01.936Z] Total : 22606.00 88.30 0.00 0.00 0.00 0.00 0.00 00:30:19.667 00:30:20.604 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:20.604 Nvme0n1 : 8.00 22621.88 88.37 0.00 0.00 0.00 0.00 0.00 00:30:20.604 [2024-11-28T07:29:02.873Z] =================================================================================================================== 00:30:20.604 [2024-11-28T07:29:02.873Z] Total : 22621.88 88.37 0.00 0.00 0.00 0.00 0.00 00:30:20.604 00:30:21.539 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:21.539 Nvme0n1 : 9.00 22641.33 88.44 0.00 0.00 0.00 0.00 0.00 00:30:21.539 [2024-11-28T07:29:03.808Z] =================================================================================================================== 00:30:21.539 [2024-11-28T07:29:03.808Z] Total : 22641.33 88.44 0.00 0.00 0.00 0.00 0.00 00:30:21.539 
00:30:22.916 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:22.916 Nvme0n1 : 10.00 22661.70 88.52 0.00 0.00 0.00 0.00 0.00 00:30:22.916 [2024-11-28T07:29:05.185Z] =================================================================================================================== 00:30:22.916 [2024-11-28T07:29:05.185Z] Total : 22661.70 88.52 0.00 0.00 0.00 0.00 0.00 00:30:22.916 00:30:22.916 00:30:22.916 Latency(us) 00:30:22.916 [2024-11-28T07:29:05.185Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:22.916 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:22.916 Nvme0n1 : 10.00 22663.36 88.53 0.00 0.00 5644.77 3219.81 15044.79 00:30:22.916 [2024-11-28T07:29:05.185Z] =================================================================================================================== 00:30:22.916 [2024-11-28T07:29:05.185Z] Total : 22663.36 88.53 0.00 0.00 5644.77 3219.81 15044.79 00:30:22.916 { 00:30:22.916 "results": [ 00:30:22.916 { 00:30:22.916 "job": "Nvme0n1", 00:30:22.916 "core_mask": "0x2", 00:30:22.916 "workload": "randwrite", 00:30:22.916 "status": "finished", 00:30:22.916 "queue_depth": 128, 00:30:22.916 "io_size": 4096, 00:30:22.916 "runtime": 10.004917, 00:30:22.916 "iops": 22663.35642764453, 00:30:22.916 "mibps": 88.52873604548644, 00:30:22.916 "io_failed": 0, 00:30:22.916 "io_timeout": 0, 00:30:22.916 "avg_latency_us": 5644.7655663141995, 00:30:22.916 "min_latency_us": 3219.8121739130434, 00:30:22.916 "max_latency_us": 15044.786086956521 00:30:22.916 } 00:30:22.916 ], 00:30:22.916 "core_count": 1 00:30:22.916 } 00:30:22.916 08:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1554195 00:30:22.916 08:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 1554195 ']' 00:30:22.916 08:29:04 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 1554195 00:30:22.916 08:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 00:30:22.916 08:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:22.916 08:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1554195 00:30:22.916 08:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:30:22.916 08:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:30:22.916 08:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1554195' 00:30:22.916 killing process with pid 1554195 00:30:22.916 08:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 1554195 00:30:22.916 Received shutdown signal, test time was about 10.000000 seconds 00:30:22.916 00:30:22.916 Latency(us) 00:30:22.916 [2024-11-28T07:29:05.185Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:22.916 [2024-11-28T07:29:05.185Z] =================================================================================================================== 00:30:22.916 [2024-11-28T07:29:05.185Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:30:22.916 08:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 1554195 00:30:22.916 08:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:30:22.916 08:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:30:23.174 08:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c9afb7d9-655b-4a12-90d8-cc8a691ff430 00:30:23.174 08:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:30:23.431 08:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:30:23.431 08:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:30:23.431 08:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:30:23.689 [2024-11-28 08:29:05.757371] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:30:23.689 08:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c9afb7d9-655b-4a12-90d8-cc8a691ff430 00:30:23.689 08:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0 00:30:23.689 08:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c9afb7d9-655b-4a12-90d8-cc8a691ff430 00:30:23.689 08:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:23.689 08:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:23.689 08:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:23.689 08:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:23.689 08:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:23.689 08:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:23.689 08:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:23.689 08:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:30:23.689 08:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c9afb7d9-655b-4a12-90d8-cc8a691ff430 00:30:23.948 request: 00:30:23.948 { 00:30:23.948 "uuid": "c9afb7d9-655b-4a12-90d8-cc8a691ff430", 00:30:23.948 "method": 
"bdev_lvol_get_lvstores", 00:30:23.948 "req_id": 1 00:30:23.948 } 00:30:23.948 Got JSON-RPC error response 00:30:23.948 response: 00:30:23.948 { 00:30:23.948 "code": -19, 00:30:23.948 "message": "No such device" 00:30:23.948 } 00:30:23.948 08:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1 00:30:23.948 08:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:30:23.948 08:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:30:23.948 08:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:30:23.948 08:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:30:23.948 aio_bdev 00:30:23.948 08:29:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev b4b021b3-f663-4f7e-ab3a-fb2eff74dbc3 00:30:23.948 08:29:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=b4b021b3-f663-4f7e-ab3a-fb2eff74dbc3 00:30:23.948 08:29:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:30:23.948 08:29:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 00:30:23.948 08:29:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:30:23.948 08:29:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- 
common/autotest_common.sh@906 -- # bdev_timeout=2000 00:30:23.948 08:29:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:30:24.207 08:29:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b b4b021b3-f663-4f7e-ab3a-fb2eff74dbc3 -t 2000 00:30:24.466 [ 00:30:24.466 { 00:30:24.466 "name": "b4b021b3-f663-4f7e-ab3a-fb2eff74dbc3", 00:30:24.466 "aliases": [ 00:30:24.466 "lvs/lvol" 00:30:24.466 ], 00:30:24.466 "product_name": "Logical Volume", 00:30:24.466 "block_size": 4096, 00:30:24.466 "num_blocks": 38912, 00:30:24.466 "uuid": "b4b021b3-f663-4f7e-ab3a-fb2eff74dbc3", 00:30:24.466 "assigned_rate_limits": { 00:30:24.466 "rw_ios_per_sec": 0, 00:30:24.466 "rw_mbytes_per_sec": 0, 00:30:24.466 "r_mbytes_per_sec": 0, 00:30:24.466 "w_mbytes_per_sec": 0 00:30:24.466 }, 00:30:24.466 "claimed": false, 00:30:24.466 "zoned": false, 00:30:24.466 "supported_io_types": { 00:30:24.466 "read": true, 00:30:24.466 "write": true, 00:30:24.466 "unmap": true, 00:30:24.466 "flush": false, 00:30:24.466 "reset": true, 00:30:24.466 "nvme_admin": false, 00:30:24.466 "nvme_io": false, 00:30:24.466 "nvme_io_md": false, 00:30:24.466 "write_zeroes": true, 00:30:24.466 "zcopy": false, 00:30:24.466 "get_zone_info": false, 00:30:24.466 "zone_management": false, 00:30:24.466 "zone_append": false, 00:30:24.466 "compare": false, 00:30:24.466 "compare_and_write": false, 00:30:24.466 "abort": false, 00:30:24.466 "seek_hole": true, 00:30:24.466 "seek_data": true, 00:30:24.466 "copy": false, 00:30:24.466 "nvme_iov_md": false 00:30:24.466 }, 00:30:24.466 "driver_specific": { 00:30:24.466 "lvol": { 00:30:24.466 "lvol_store_uuid": "c9afb7d9-655b-4a12-90d8-cc8a691ff430", 00:30:24.466 "base_bdev": "aio_bdev", 00:30:24.466 
"thin_provision": false, 00:30:24.466 "num_allocated_clusters": 38, 00:30:24.466 "snapshot": false, 00:30:24.466 "clone": false, 00:30:24.466 "esnap_clone": false 00:30:24.466 } 00:30:24.466 } 00:30:24.466 } 00:30:24.466 ] 00:30:24.466 08:29:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:30:24.466 08:29:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c9afb7d9-655b-4a12-90d8-cc8a691ff430 00:30:24.466 08:29:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:30:24.725 08:29:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:30:24.725 08:29:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:30:24.725 08:29:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c9afb7d9-655b-4a12-90d8-cc8a691ff430 00:30:24.725 08:29:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:30:24.725 08:29:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete b4b021b3-f663-4f7e-ab3a-fb2eff74dbc3 00:30:24.984 08:29:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u c9afb7d9-655b-4a12-90d8-cc8a691ff430 
00:30:25.243 08:29:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:30:25.502 08:29:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:30:25.503 00:30:25.503 real 0m15.560s 00:30:25.503 user 0m15.147s 00:30:25.503 sys 0m1.450s 00:30:25.503 08:29:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:25.503 08:29:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:30:25.503 ************************************ 00:30:25.503 END TEST lvs_grow_clean 00:30:25.503 ************************************ 00:30:25.503 08:29:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:30:25.503 08:29:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:30:25.503 08:29:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:25.503 08:29:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:30:25.503 ************************************ 00:30:25.503 START TEST lvs_grow_dirty 00:30:25.503 ************************************ 00:30:25.503 08:29:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:30:25.503 08:29:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:30:25.503 08:29:07 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:30:25.503 08:29:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:30:25.503 08:29:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:30:25.503 08:29:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:30:25.503 08:29:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:30:25.503 08:29:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:30:25.503 08:29:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:30:25.503 08:29:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:30:25.762 08:29:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:30:25.762 08:29:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:30:26.020 08:29:08 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=aa75670d-93db-41d3-99a0-4f2d1ff7aa24 00:30:26.020 08:29:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:30:26.020 08:29:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u aa75670d-93db-41d3-99a0-4f2d1ff7aa24 00:30:26.020 08:29:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:30:26.020 08:29:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:30:26.020 08:29:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u aa75670d-93db-41d3-99a0-4f2d1ff7aa24 lvol 150 00:30:26.279 08:29:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=1717b46d-dcbc-45b4-b538-9011a5662fcb 00:30:26.279 08:29:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:30:26.279 08:29:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:30:26.537 [2024-11-28 08:29:08.617357] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:30:26.537 [2024-11-28 
08:29:08.617430] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:30:26.538 true 00:30:26.538 08:29:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:30:26.538 08:29:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u aa75670d-93db-41d3-99a0-4f2d1ff7aa24 00:30:26.796 08:29:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:30:26.796 08:29:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:30:26.796 08:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 1717b46d-dcbc-45b4-b538-9011a5662fcb 00:30:27.055 08:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:30:27.314 [2024-11-28 08:29:09.417767] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:27.314 08:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:30:27.573 08:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1556779 00:30:27.573 08:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:30:27.573 08:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:30:27.573 08:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1556779 /var/tmp/bdevperf.sock 00:30:27.573 08:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 1556779 ']' 00:30:27.573 08:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:30:27.573 08:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:27.573 08:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:30:27.573 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:30:27.573 08:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:27.573 08:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:30:27.573 [2024-11-28 08:29:09.688322] Starting SPDK v25.01-pre git sha1 27aaaa748 / DPDK 24.03.0 initialization... 
00:30:27.573 [2024-11-28 08:29:09.688371] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1556779 ] 00:30:27.573 [2024-11-28 08:29:09.750775] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:27.573 [2024-11-28 08:29:09.793448] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:27.832 08:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:27.832 08:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:30:27.832 08:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:30:28.090 Nvme0n1 00:30:28.090 08:29:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:30:28.349 [ 00:30:28.349 { 00:30:28.349 "name": "Nvme0n1", 00:30:28.349 "aliases": [ 00:30:28.349 "1717b46d-dcbc-45b4-b538-9011a5662fcb" 00:30:28.349 ], 00:30:28.349 "product_name": "NVMe disk", 00:30:28.349 "block_size": 4096, 00:30:28.349 "num_blocks": 38912, 00:30:28.349 "uuid": "1717b46d-dcbc-45b4-b538-9011a5662fcb", 00:30:28.349 "numa_id": 1, 00:30:28.349 "assigned_rate_limits": { 00:30:28.349 "rw_ios_per_sec": 0, 00:30:28.349 "rw_mbytes_per_sec": 0, 00:30:28.349 "r_mbytes_per_sec": 0, 00:30:28.349 "w_mbytes_per_sec": 0 00:30:28.349 }, 00:30:28.349 "claimed": false, 00:30:28.349 "zoned": false, 
00:30:28.349 "supported_io_types": { 00:30:28.349 "read": true, 00:30:28.349 "write": true, 00:30:28.349 "unmap": true, 00:30:28.349 "flush": true, 00:30:28.349 "reset": true, 00:30:28.349 "nvme_admin": true, 00:30:28.349 "nvme_io": true, 00:30:28.349 "nvme_io_md": false, 00:30:28.349 "write_zeroes": true, 00:30:28.349 "zcopy": false, 00:30:28.349 "get_zone_info": false, 00:30:28.349 "zone_management": false, 00:30:28.349 "zone_append": false, 00:30:28.349 "compare": true, 00:30:28.349 "compare_and_write": true, 00:30:28.349 "abort": true, 00:30:28.349 "seek_hole": false, 00:30:28.349 "seek_data": false, 00:30:28.349 "copy": true, 00:30:28.349 "nvme_iov_md": false 00:30:28.349 }, 00:30:28.349 "memory_domains": [ 00:30:28.349 { 00:30:28.349 "dma_device_id": "system", 00:30:28.349 "dma_device_type": 1 00:30:28.350 } 00:30:28.350 ], 00:30:28.350 "driver_specific": { 00:30:28.350 "nvme": [ 00:30:28.350 { 00:30:28.350 "trid": { 00:30:28.350 "trtype": "TCP", 00:30:28.350 "adrfam": "IPv4", 00:30:28.350 "traddr": "10.0.0.2", 00:30:28.350 "trsvcid": "4420", 00:30:28.350 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:30:28.350 }, 00:30:28.350 "ctrlr_data": { 00:30:28.350 "cntlid": 1, 00:30:28.350 "vendor_id": "0x8086", 00:30:28.350 "model_number": "SPDK bdev Controller", 00:30:28.350 "serial_number": "SPDK0", 00:30:28.350 "firmware_revision": "25.01", 00:30:28.350 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:28.350 "oacs": { 00:30:28.350 "security": 0, 00:30:28.350 "format": 0, 00:30:28.350 "firmware": 0, 00:30:28.350 "ns_manage": 0 00:30:28.350 }, 00:30:28.350 "multi_ctrlr": true, 00:30:28.350 "ana_reporting": false 00:30:28.350 }, 00:30:28.350 "vs": { 00:30:28.350 "nvme_version": "1.3" 00:30:28.350 }, 00:30:28.350 "ns_data": { 00:30:28.350 "id": 1, 00:30:28.350 "can_share": true 00:30:28.350 } 00:30:28.350 } 00:30:28.350 ], 00:30:28.350 "mp_policy": "active_passive" 00:30:28.350 } 00:30:28.350 } 00:30:28.350 ] 00:30:28.350 08:29:10 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=1556916 00:30:28.350 08:29:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:30:28.350 08:29:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:30:28.609 Running I/O for 10 seconds... 00:30:29.545 Latency(us) 00:30:29.545 [2024-11-28T07:29:11.814Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:29.545 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:29.545 Nvme0n1 : 1.00 22352.00 87.31 0.00 0.00 0.00 0.00 0.00 00:30:29.545 [2024-11-28T07:29:11.814Z] =================================================================================================================== 00:30:29.545 [2024-11-28T07:29:11.814Z] Total : 22352.00 87.31 0.00 0.00 0.00 0.00 0.00 00:30:29.545 00:30:30.482 08:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u aa75670d-93db-41d3-99a0-4f2d1ff7aa24 00:30:30.482 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:30.482 Nvme0n1 : 2.00 22479.00 87.81 0.00 0.00 0.00 0.00 0.00 00:30:30.482 [2024-11-28T07:29:12.751Z] =================================================================================================================== 00:30:30.482 [2024-11-28T07:29:12.751Z] Total : 22479.00 87.81 0.00 0.00 0.00 0.00 0.00 00:30:30.482 00:30:30.482 true 00:30:30.482 08:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_lvol_get_lvstores -u aa75670d-93db-41d3-99a0-4f2d1ff7aa24 00:30:30.482 08:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:30:30.741 08:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:30:30.741 08:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:30:30.741 08:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 1556916 00:30:31.677 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:31.677 Nvme0n1 : 3.00 22521.33 87.97 0.00 0.00 0.00 0.00 0.00 00:30:31.677 [2024-11-28T07:29:13.946Z] =================================================================================================================== 00:30:31.677 [2024-11-28T07:29:13.946Z] Total : 22521.33 87.97 0.00 0.00 0.00 0.00 0.00 00:30:31.677 00:30:32.613 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:32.613 Nvme0n1 : 4.00 22590.25 88.24 0.00 0.00 0.00 0.00 0.00 00:30:32.613 [2024-11-28T07:29:14.882Z] =================================================================================================================== 00:30:32.613 [2024-11-28T07:29:14.882Z] Total : 22590.25 88.24 0.00 0.00 0.00 0.00 0.00 00:30:32.613 00:30:33.548 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:33.548 Nvme0n1 : 5.00 22644.20 88.45 0.00 0.00 0.00 0.00 0.00 00:30:33.548 [2024-11-28T07:29:15.817Z] =================================================================================================================== 00:30:33.548 [2024-11-28T07:29:15.817Z] Total : 22644.20 88.45 0.00 0.00 0.00 0.00 0.00 00:30:33.548 00:30:34.486 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 
00:30:34.486 Nvme0n1 : 6.00 22680.17 88.59 0.00 0.00 0.00 0.00 0.00 00:30:34.486 [2024-11-28T07:29:16.755Z] =================================================================================================================== 00:30:34.486 [2024-11-28T07:29:16.755Z] Total : 22680.17 88.59 0.00 0.00 0.00 0.00 0.00 00:30:34.486 00:30:35.423 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:35.423 Nvme0n1 : 7.00 22705.86 88.69 0.00 0.00 0.00 0.00 0.00 00:30:35.423 [2024-11-28T07:29:17.692Z] =================================================================================================================== 00:30:35.423 [2024-11-28T07:29:17.692Z] Total : 22705.86 88.69 0.00 0.00 0.00 0.00 0.00 00:30:35.423 00:30:36.801 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:36.801 Nvme0n1 : 8.00 22725.12 88.77 0.00 0.00 0.00 0.00 0.00 00:30:36.801 [2024-11-28T07:29:19.071Z] =================================================================================================================== 00:30:36.802 [2024-11-28T07:29:19.071Z] Total : 22725.12 88.77 0.00 0.00 0.00 0.00 0.00 00:30:36.802 00:30:37.739 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:37.739 Nvme0n1 : 9.00 22740.11 88.83 0.00 0.00 0.00 0.00 0.00 00:30:37.739 [2024-11-28T07:29:20.008Z] =================================================================================================================== 00:30:37.739 [2024-11-28T07:29:20.008Z] Total : 22740.11 88.83 0.00 0.00 0.00 0.00 0.00 00:30:37.739 00:30:38.677 00:30:38.677 Latency(us) 00:30:38.677 [2024-11-28T07:29:20.946Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:38.677 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:38.677 Nvme0n1 : 10.00 22723.37 88.76 0.00 0.00 5629.96 3761.20 15158.76 00:30:38.677 [2024-11-28T07:29:20.946Z] 
=================================================================================================================== 00:30:38.677 [2024-11-28T07:29:20.946Z] Total : 22723.37 88.76 0.00 0.00 5629.96 3761.20 15158.76 00:30:38.677 { 00:30:38.677 "results": [ 00:30:38.677 { 00:30:38.677 "job": "Nvme0n1", 00:30:38.677 "core_mask": "0x2", 00:30:38.677 "workload": "randwrite", 00:30:38.677 "status": "finished", 00:30:38.677 "queue_depth": 128, 00:30:38.677 "io_size": 4096, 00:30:38.677 "runtime": 10.00151, 00:30:38.677 "iops": 22723.36877131553, 00:30:38.677 "mibps": 88.76315926295129, 00:30:38.677 "io_failed": 0, 00:30:38.677 "io_timeout": 0, 00:30:38.677 "avg_latency_us": 5629.956170313386, 00:30:38.677 "min_latency_us": 3761.1965217391303, 00:30:38.677 "max_latency_us": 15158.761739130436 00:30:38.677 } 00:30:38.677 ], 00:30:38.677 "core_count": 1 00:30:38.677 } 00:30:38.677 08:29:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1556779 00:30:38.677 08:29:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 1556779 ']' 00:30:38.677 08:29:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 1556779 00:30:38.677 08:29:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 00:30:38.677 08:29:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:38.677 08:29:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1556779 00:30:38.677 08:29:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:30:38.677 08:29:20 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:30:38.677 08:29:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1556779' 00:30:38.677 killing process with pid 1556779 00:30:38.677 08:29:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 1556779 00:30:38.677 Received shutdown signal, test time was about 10.000000 seconds 00:30:38.677 00:30:38.677 Latency(us) 00:30:38.677 [2024-11-28T07:29:20.946Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:38.677 [2024-11-28T07:29:20.946Z] =================================================================================================================== 00:30:38.677 [2024-11-28T07:29:20.946Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:30:38.677 08:29:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 1556779 00:30:38.677 08:29:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:30:38.936 08:29:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:30:39.210 08:29:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u aa75670d-93db-41d3-99a0-4f2d1ff7aa24 00:30:39.210 08:29:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r 
'.[0].free_clusters' 00:30:39.469 08:29:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:30:39.469 08:29:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:30:39.469 08:29:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 1553689 00:30:39.469 08:29:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 1553689 00:30:39.470 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 1553689 Killed "${NVMF_APP[@]}" "$@" 00:30:39.470 08:29:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:30:39.470 08:29:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:30:39.470 08:29:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:39.470 08:29:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:39.470 08:29:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:30:39.470 08:29:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=1558626 00:30:39.470 08:29:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 1558626 00:30:39.470 08:29:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:30:39.470 
08:29:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 1558626 ']' 00:30:39.470 08:29:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:39.470 08:29:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:39.470 08:29:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:39.470 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:39.470 08:29:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:39.470 08:29:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:30:39.470 [2024-11-28 08:29:21.567397] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:30:39.470 [2024-11-28 08:29:21.568308] Starting SPDK v25.01-pre git sha1 27aaaa748 / DPDK 24.03.0 initialization... 00:30:39.470 [2024-11-28 08:29:21.568342] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:39.470 [2024-11-28 08:29:21.634022] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:39.470 [2024-11-28 08:29:21.675794] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:39.470 [2024-11-28 08:29:21.675830] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:30:39.470 [2024-11-28 08:29:21.675840] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:39.470 [2024-11-28 08:29:21.675846] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:39.470 [2024-11-28 08:29:21.675851] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:39.470 [2024-11-28 08:29:21.676436] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:39.728 [2024-11-28 08:29:21.744596] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:30:39.728 [2024-11-28 08:29:21.744830] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:30:39.728 08:29:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:39.728 08:29:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:30:39.728 08:29:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:39.729 08:29:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:39.729 08:29:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:30:39.729 08:29:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:39.729 08:29:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:30:39.988 [2024-11-28 08:29:21.995276] blobstore.c:4896:bs_recover: *NOTICE*: Performing recovery on blobstore 00:30:39.988 [2024-11-28 08:29:21.995385] blobstore.c:4843:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:30:39.988 [2024-11-28 08:29:21.995423] blobstore.c:4843:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:30:39.988 08:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:30:39.988 08:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 1717b46d-dcbc-45b4-b538-9011a5662fcb 00:30:39.988 08:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=1717b46d-dcbc-45b4-b538-9011a5662fcb 00:30:39.988 08:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:30:39.988 08:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:30:39.988 08:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:30:39.988 08:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:30:39.988 08:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:30:39.988 08:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 1717b46d-dcbc-45b4-b538-9011a5662fcb -t 2000 00:30:40.247 [ 
00:30:40.247 { 00:30:40.247 "name": "1717b46d-dcbc-45b4-b538-9011a5662fcb", 00:30:40.247 "aliases": [ 00:30:40.247 "lvs/lvol" 00:30:40.247 ], 00:30:40.247 "product_name": "Logical Volume", 00:30:40.247 "block_size": 4096, 00:30:40.247 "num_blocks": 38912, 00:30:40.247 "uuid": "1717b46d-dcbc-45b4-b538-9011a5662fcb", 00:30:40.247 "assigned_rate_limits": { 00:30:40.247 "rw_ios_per_sec": 0, 00:30:40.247 "rw_mbytes_per_sec": 0, 00:30:40.247 "r_mbytes_per_sec": 0, 00:30:40.247 "w_mbytes_per_sec": 0 00:30:40.247 }, 00:30:40.247 "claimed": false, 00:30:40.247 "zoned": false, 00:30:40.247 "supported_io_types": { 00:30:40.247 "read": true, 00:30:40.247 "write": true, 00:30:40.247 "unmap": true, 00:30:40.247 "flush": false, 00:30:40.247 "reset": true, 00:30:40.247 "nvme_admin": false, 00:30:40.247 "nvme_io": false, 00:30:40.247 "nvme_io_md": false, 00:30:40.247 "write_zeroes": true, 00:30:40.247 "zcopy": false, 00:30:40.247 "get_zone_info": false, 00:30:40.247 "zone_management": false, 00:30:40.247 "zone_append": false, 00:30:40.247 "compare": false, 00:30:40.247 "compare_and_write": false, 00:30:40.247 "abort": false, 00:30:40.247 "seek_hole": true, 00:30:40.247 "seek_data": true, 00:30:40.247 "copy": false, 00:30:40.247 "nvme_iov_md": false 00:30:40.247 }, 00:30:40.247 "driver_specific": { 00:30:40.247 "lvol": { 00:30:40.247 "lvol_store_uuid": "aa75670d-93db-41d3-99a0-4f2d1ff7aa24", 00:30:40.247 "base_bdev": "aio_bdev", 00:30:40.247 "thin_provision": false, 00:30:40.247 "num_allocated_clusters": 38, 00:30:40.247 "snapshot": false, 00:30:40.247 "clone": false, 00:30:40.247 "esnap_clone": false 00:30:40.247 } 00:30:40.247 } 00:30:40.247 } 00:30:40.247 ] 00:30:40.247 08:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:30:40.247 08:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u aa75670d-93db-41d3-99a0-4f2d1ff7aa24 00:30:40.247 08:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:30:40.506 08:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:30:40.507 08:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u aa75670d-93db-41d3-99a0-4f2d1ff7aa24 00:30:40.507 08:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:30:40.766 08:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:30:40.766 08:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:30:40.766 [2024-11-28 08:29:22.952795] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:30:40.766 08:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u aa75670d-93db-41d3-99a0-4f2d1ff7aa24 00:30:40.766 08:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:30:40.766 08:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 
aa75670d-93db-41d3-99a0-4f2d1ff7aa24 00:30:40.766 08:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:40.766 08:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:40.766 08:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:40.766 08:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:40.766 08:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:40.766 08:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:40.766 08:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:40.766 08:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:30:40.766 08:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u aa75670d-93db-41d3-99a0-4f2d1ff7aa24 00:30:41.025 request: 00:30:41.025 { 00:30:41.025 "uuid": "aa75670d-93db-41d3-99a0-4f2d1ff7aa24", 00:30:41.025 "method": "bdev_lvol_get_lvstores", 00:30:41.025 "req_id": 1 00:30:41.025 } 00:30:41.025 Got JSON-RPC 
error response 00:30:41.025 response: 00:30:41.025 { 00:30:41.025 "code": -19, 00:30:41.025 "message": "No such device" 00:30:41.025 } 00:30:41.025 08:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 00:30:41.025 08:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:30:41.025 08:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:30:41.025 08:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:30:41.025 08:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:30:41.284 aio_bdev 00:30:41.284 08:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 1717b46d-dcbc-45b4-b538-9011a5662fcb 00:30:41.284 08:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=1717b46d-dcbc-45b4-b538-9011a5662fcb 00:30:41.284 08:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:30:41.284 08:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:30:41.284 08:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:30:41.284 08:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:30:41.284 08:29:23 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:30:41.543 08:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 1717b46d-dcbc-45b4-b538-9011a5662fcb -t 2000 00:30:41.543 [ 00:30:41.543 { 00:30:41.543 "name": "1717b46d-dcbc-45b4-b538-9011a5662fcb", 00:30:41.543 "aliases": [ 00:30:41.543 "lvs/lvol" 00:30:41.543 ], 00:30:41.543 "product_name": "Logical Volume", 00:30:41.543 "block_size": 4096, 00:30:41.543 "num_blocks": 38912, 00:30:41.543 "uuid": "1717b46d-dcbc-45b4-b538-9011a5662fcb", 00:30:41.543 "assigned_rate_limits": { 00:30:41.543 "rw_ios_per_sec": 0, 00:30:41.543 "rw_mbytes_per_sec": 0, 00:30:41.543 "r_mbytes_per_sec": 0, 00:30:41.543 "w_mbytes_per_sec": 0 00:30:41.543 }, 00:30:41.543 "claimed": false, 00:30:41.543 "zoned": false, 00:30:41.543 "supported_io_types": { 00:30:41.543 "read": true, 00:30:41.543 "write": true, 00:30:41.543 "unmap": true, 00:30:41.543 "flush": false, 00:30:41.543 "reset": true, 00:30:41.543 "nvme_admin": false, 00:30:41.543 "nvme_io": false, 00:30:41.543 "nvme_io_md": false, 00:30:41.543 "write_zeroes": true, 00:30:41.543 "zcopy": false, 00:30:41.543 "get_zone_info": false, 00:30:41.543 "zone_management": false, 00:30:41.543 "zone_append": false, 00:30:41.543 "compare": false, 00:30:41.543 "compare_and_write": false, 00:30:41.543 "abort": false, 00:30:41.543 "seek_hole": true, 00:30:41.543 "seek_data": true, 00:30:41.543 "copy": false, 00:30:41.543 "nvme_iov_md": false 00:30:41.543 }, 00:30:41.543 "driver_specific": { 00:30:41.543 "lvol": { 00:30:41.543 "lvol_store_uuid": "aa75670d-93db-41d3-99a0-4f2d1ff7aa24", 00:30:41.543 "base_bdev": "aio_bdev", 00:30:41.543 "thin_provision": false, 00:30:41.543 "num_allocated_clusters": 38, 00:30:41.543 
"snapshot": false, 00:30:41.543 "clone": false, 00:30:41.543 "esnap_clone": false 00:30:41.543 } 00:30:41.543 } 00:30:41.543 } 00:30:41.543 ] 00:30:41.544 08:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:30:41.544 08:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u aa75670d-93db-41d3-99a0-4f2d1ff7aa24 00:30:41.544 08:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:30:41.802 08:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:30:41.802 08:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:30:41.802 08:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u aa75670d-93db-41d3-99a0-4f2d1ff7aa24 00:30:42.061 08:29:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:30:42.061 08:29:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 1717b46d-dcbc-45b4-b538-9011a5662fcb 00:30:42.320 08:29:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u aa75670d-93db-41d3-99a0-4f2d1ff7aa24 00:30:42.320 08:29:24 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:30:42.579 08:29:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:30:42.579 00:30:42.579 real 0m17.181s 00:30:42.579 user 0m34.588s 00:30:42.579 sys 0m3.730s 00:30:42.579 08:29:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:42.580 08:29:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:30:42.580 ************************************ 00:30:42.580 END TEST lvs_grow_dirty 00:30:42.580 ************************************ 00:30:42.580 08:29:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:30:42.580 08:29:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 00:30:42.580 08:29:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:30:42.580 08:29:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:30:42.580 08:29:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:30:42.580 08:29:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:30:42.580 08:29:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:30:42.580 08:29:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@824 -- # for n in $shm_files 
00:30:42.580 08:29:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:30:42.580 nvmf_trace.0 00:30:42.839 08:29:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:30:42.839 08:29:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:30:42.839 08:29:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:42.839 08:29:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:30:42.839 08:29:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:42.839 08:29:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:30:42.839 08:29:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:42.839 08:29:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:42.839 rmmod nvme_tcp 00:30:42.839 rmmod nvme_fabrics 00:30:42.839 rmmod nvme_keyring 00:30:42.839 08:29:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:42.839 08:29:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:30:42.839 08:29:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:30:42.839 08:29:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 1558626 ']' 00:30:42.839 08:29:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 1558626 00:30:42.839 08:29:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- 
common/autotest_common.sh@954 -- # '[' -z 1558626 ']' 00:30:42.839 08:29:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 1558626 00:30:42.839 08:29:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:30:42.839 08:29:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:42.839 08:29:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1558626 00:30:42.839 08:29:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:42.839 08:29:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:42.839 08:29:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1558626' 00:30:42.839 killing process with pid 1558626 00:30:42.839 08:29:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 1558626 00:30:42.839 08:29:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 1558626 00:30:43.098 08:29:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:43.098 08:29:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:43.098 08:29:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:43.098 08:29:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:30:43.098 08:29:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:30:43.098 08:29:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- 
nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:43.098 08:29:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:30:43.098 08:29:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:43.098 08:29:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:43.098 08:29:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:43.098 08:29:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:43.098 08:29:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:45.001 08:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:45.001 00:30:45.001 real 0m41.378s 00:30:45.001 user 0m52.075s 00:30:45.001 sys 0m9.688s 00:30:45.001 08:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:45.001 08:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:30:45.001 ************************************ 00:30:45.001 END TEST nvmf_lvs_grow 00:30:45.001 ************************************ 00:30:45.261 08:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:30:45.261 08:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:30:45.261 08:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:45.261 08:29:27 
nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:30:45.261 ************************************ 00:30:45.261 START TEST nvmf_bdev_io_wait 00:30:45.261 ************************************ 00:30:45.261 08:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:30:45.261 * Looking for test storage... 00:30:45.261 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:45.261 08:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:30:45.261 08:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lcov --version 00:30:45.261 08:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:30:45.261 08:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:30:45.261 08:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:45.261 08:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:45.261 08:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:45.261 08:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:30:45.261 08:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:30:45.261 08:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:30:45.261 08:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- 
scripts/common.sh@337 -- # read -ra ver2 00:30:45.261 08:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:30:45.261 08:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:30:45.261 08:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:30:45.261 08:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:45.261 08:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:30:45.261 08:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:30:45.261 08:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:45.261 08:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:45.261 08:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:30:45.261 08:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:30:45.261 08:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:45.261 08:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:30:45.261 08:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:30:45.261 08:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:30:45.261 08:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:30:45.261 08:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:45.261 08:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:30:45.261 08:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:30:45.261 08:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:45.261 08:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:45.261 08:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:30:45.261 08:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:45.261 08:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:30:45.261 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:45.261 --rc genhtml_branch_coverage=1 00:30:45.261 --rc genhtml_function_coverage=1 00:30:45.261 --rc genhtml_legend=1 00:30:45.261 --rc geninfo_all_blocks=1 00:30:45.261 --rc geninfo_unexecuted_blocks=1 00:30:45.261 00:30:45.261 ' 00:30:45.261 08:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:30:45.261 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:45.261 --rc genhtml_branch_coverage=1 00:30:45.261 --rc genhtml_function_coverage=1 00:30:45.261 --rc genhtml_legend=1 00:30:45.261 --rc geninfo_all_blocks=1 00:30:45.261 --rc geninfo_unexecuted_blocks=1 00:30:45.261 00:30:45.261 ' 00:30:45.261 08:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:30:45.261 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:45.261 --rc genhtml_branch_coverage=1 00:30:45.261 --rc genhtml_function_coverage=1 00:30:45.261 --rc genhtml_legend=1 00:30:45.261 --rc geninfo_all_blocks=1 00:30:45.261 --rc geninfo_unexecuted_blocks=1 00:30:45.261 00:30:45.261 ' 00:30:45.261 08:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:30:45.261 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:45.261 --rc genhtml_branch_coverage=1 00:30:45.261 --rc genhtml_function_coverage=1 00:30:45.261 --rc genhtml_legend=1 00:30:45.261 --rc geninfo_all_blocks=1 00:30:45.261 --rc geninfo_unexecuted_blocks=1 00:30:45.261 00:30:45.261 ' 00:30:45.261 08:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:45.261 08:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:30:45.261 08:29:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:45.261 08:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:45.261 08:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:45.261 08:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:45.261 08:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:45.261 08:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:45.261 08:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:45.261 08:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:45.261 08:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:45.261 08:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:45.261 08:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:30:45.261 08:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:30:45.261 08:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:45.261 08:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:45.261 08:29:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:45.261 08:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:45.261 08:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:45.261 08:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:30:45.261 08:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:45.261 08:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:45.262 08:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:45.262 08:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:45.262 08:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:45.262 08:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:45.262 08:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:30:45.262 08:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:45.262 08:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:30:45.262 08:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:45.262 08:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:45.262 08:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:45.262 08:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:45.262 08:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:45.262 08:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:30:45.262 08:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:30:45.262 08:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:45.262 08:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:45.262 08:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # 
have_pci_nics=0 00:30:45.262 08:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:30:45.262 08:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:30:45.262 08:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:30:45.262 08:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:45.262 08:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:45.262 08:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:45.262 08:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:45.262 08:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:45.262 08:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:45.262 08:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:45.262 08:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:45.262 08:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:45.521 08:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:45.521 08:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:30:45.521 08:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- 
# set +x 00:30:50.792 08:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:50.792 08:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:30:50.792 08:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:50.792 08:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:50.792 08:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:50.792 08:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:50.792 08:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:50.792 08:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:30:50.792 08:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:50.792 08:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:30:50.792 08:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:30:50.792 08:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:30:50.792 08:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:30:50.792 08:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 00:30:50.792 08:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:30:50.792 08:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # 
e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:50.792 08:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:50.792 08:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:50.792 08:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:50.792 08:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:50.792 08:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:50.792 08:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:50.792 08:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:50.792 08:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:50.792 08:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:50.792 08:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:50.792 08:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:50.792 08:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:50.792 08:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:50.792 08:29:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:50.792 08:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:50.792 08:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:50.792 08:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:50.792 08:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:50.792 08:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:30:50.792 Found 0000:86:00.0 (0x8086 - 0x159b) 00:30:50.792 08:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:50.792 08:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:50.793 08:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:50.793 08:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:50.793 08:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:50.793 08:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:50.793 08:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:30:50.793 Found 0000:86:00.1 (0x8086 - 0x159b) 00:30:50.793 08:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:50.793 08:29:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:50.793 08:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:50.793 08:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:50.793 08:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:50.793 08:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:50.793 08:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:50.793 08:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:50.793 08:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:50.793 08:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:50.793 08:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:50.793 08:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:50.793 08:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:50.793 08:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:50.793 08:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:50.793 08:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 
00:30:50.793 Found net devices under 0000:86:00.0: cvl_0_0 00:30:50.793 08:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:50.793 08:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:50.793 08:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:50.793 08:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:50.793 08:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:50.793 08:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:50.793 08:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:50.793 08:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:50.793 08:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:30:50.793 Found net devices under 0000:86:00.1: cvl_0_1 00:30:50.793 08:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:50.793 08:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:50.793 08:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # is_hw=yes 00:30:50.793 08:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:50.793 08:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@445 -- # [[ tcp == 
tcp ]] 00:30:50.793 08:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:50.793 08:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:50.793 08:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:50.793 08:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:50.793 08:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:50.793 08:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:50.793 08:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:50.793 08:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:50.793 08:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:50.793 08:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:50.793 08:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:50.793 08:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:50.793 08:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:50.793 08:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:50.793 08:29:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:50.793 08:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:50.793 08:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:50.793 08:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:50.793 08:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:50.793 08:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:50.793 08:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:50.793 08:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:50.793 08:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:50.793 08:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:50.793 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:30:50.793 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.444 ms 00:30:50.793 00:30:50.793 --- 10.0.0.2 ping statistics --- 00:30:50.793 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:50.793 rtt min/avg/max/mdev = 0.444/0.444/0.444/0.000 ms 00:30:50.793 08:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:50.793 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:50.793 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.216 ms 00:30:50.793 00:30:50.793 --- 10.0.0.1 ping statistics --- 00:30:50.793 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:50.793 rtt min/avg/max/mdev = 0.216/0.216/0.216/0.000 ms 00:30:50.793 08:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:50.793 08:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # return 0 00:30:50.793 08:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:50.793 08:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:50.793 08:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:50.793 08:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:50.793 08:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:50.793 08:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:50.793 08:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:50.793 08:29:32 
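The commands traced above (nvmf/common.sh@267-291) move one port of the e810 pair into a private network namespace, address both sides, open the NVMe/TCP port, and verify connectivity with a ping in each direction before the target starts. A minimal sketch of that topology setup follows; the interface, namespace, and address names (cvl_0_0, cvl_0_1, cvl_0_0_ns_spdk, 10.0.0.x) are copied from this log and are host-specific, and the `run=echo` runner only prints the commands, since the real ones need root and the actual NICs.

```shell
#!/usr/bin/env bash
# Sketch of the namespace topology built in the trace above.
# With run=echo the commands are printed rather than executed.
setup_tcp_topology() {
  local tgt_if=$1 ini_if=$2 ns=$3 run=${4:-echo}
  $run ip netns add "$ns"
  $run ip link set "$tgt_if" netns "$ns"                           # target port lives inside the netns
  $run ip addr add 10.0.0.1/24 dev "$ini_if"                       # initiator side stays in the root ns
  $run ip netns exec "$ns" ip addr add 10.0.0.2/24 dev "$tgt_if"   # target side
  $run ip link set "$ini_if" up
  $run ip netns exec "$ns" ip link set "$tgt_if" up
  $run iptables -I INPUT 1 -i "$ini_if" -p tcp --dport 4420 -j ACCEPT  # open the NVMe/TCP port
  $run ping -c 1 10.0.0.2                                          # initiator -> target reachability check
}

setup_tcp_topology cvl_0_0 cvl_0_1 cvl_0_0_ns_spdk echo
```

Keeping the target's port in its own namespace is what lets a single two-port NIC loop traffic back to itself through the kernel TCP stack, which is why every target-side command in the log is wrapped in `ip netns exec cvl_0_0_ns_spdk`.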
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:30:50.793 08:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:50.793 08:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:50.793 08:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:50.793 08:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=1562663 00:30:50.793 08:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 1562663 00:30:50.793 08:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF --wait-for-rpc 00:30:50.793 08:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 1562663 ']' 00:30:50.793 08:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:50.793 08:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:50.793 08:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:50.793 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:30:50.793 08:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:50.793 08:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:50.793 [2024-11-28 08:29:32.990916] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:30:50.793 [2024-11-28 08:29:32.991863] Starting SPDK v25.01-pre git sha1 27aaaa748 / DPDK 24.03.0 initialization... 00:30:50.793 [2024-11-28 08:29:32.991897] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:51.053 [2024-11-28 08:29:33.059404] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:51.053 [2024-11-28 08:29:33.104067] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:51.053 [2024-11-28 08:29:33.104104] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:51.053 [2024-11-28 08:29:33.104111] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:51.053 [2024-11-28 08:29:33.104118] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:51.053 [2024-11-28 08:29:33.104123] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:30:51.054 [2024-11-28 08:29:33.105556] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:51.054 [2024-11-28 08:29:33.105672] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:30:51.054 [2024-11-28 08:29:33.105840] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:30:51.054 [2024-11-28 08:29:33.105843] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:51.054 [2024-11-28 08:29:33.106153] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:30:51.054 08:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:51.054 08:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:30:51.054 08:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:51.054 08:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:51.054 08:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:51.054 08:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:51.054 08:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:30:51.054 08:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:51.054 08:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:51.054 08:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:51.054 08:29:33 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:30:51.054 08:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:51.054 08:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:51.054 [2024-11-28 08:29:33.233226] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:30:51.054 [2024-11-28 08:29:33.233281] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:30:51.054 [2024-11-28 08:29:33.233866] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:30:51.054 [2024-11-28 08:29:33.234287] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
00:30:51.054 08:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:51.054 08:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:51.054 08:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:51.054 08:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:51.054 [2024-11-28 08:29:33.246311] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:51.054 08:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:51.054 08:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:30:51.054 08:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:51.054 08:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:51.054 Malloc0 00:30:51.054 08:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:51.054 08:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:51.054 08:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:51.054 08:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:51.054 08:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:51.054 08:29:33 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:51.054 08:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:51.054 08:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:51.054 08:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:51.054 08:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:51.054 08:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:51.054 08:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:51.054 [2024-11-28 08:29:33.302451] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:51.054 08:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:51.054 08:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=1562747 00:30:51.054 08:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:30:51.054 08:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:30:51.054 08:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=1562750 00:30:51.054 08:29:33 
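The `rpc_cmd` calls traced in bdev_io_wait.sh@18-25 configure the target over its RPC socket once `nvmf_tgt` is up with `--wait-for-rpc`. Collected into one sketch below, with `rpc=echo` printing each `scripts/rpc.py` invocation instead of issuing it; the method names and arguments mirror exactly what this log shows, but treating `scripts/rpc.py` as the transport for them is an assumption about how `rpc_cmd` is wired on this rig.

```shell
# The target-setup RPC sequence from the trace above, as one function.
# rpc=echo (the default here) prints the invocations; drop it to run
# against a live target's RPC socket.
target_setup() {
  local rpc="${1:-echo} scripts/rpc.py"
  $rpc bdev_set_options -p 5 -c 1            # tiny bdev_io pool, to exercise IO_WAIT
  $rpc framework_start_init                  # leave the --wait-for-rpc holding state
  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc bdev_malloc_create 64 512 -b Malloc0  # 64 MiB RAM-backed bdev, 512 B blocks
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
}

target_setup echo
```

The ordering matters: `bdev_set_options` is only accepted before `framework_start_init`, which is the whole reason the target was started with `--wait-for-rpc`, and the listener comes last so initiators never see a subsystem without its namespace.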
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:30:51.054 08:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:30:51.054 08:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:51.054 08:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:51.054 { 00:30:51.054 "params": { 00:30:51.054 "name": "Nvme$subsystem", 00:30:51.054 "trtype": "$TEST_TRANSPORT", 00:30:51.054 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:51.054 "adrfam": "ipv4", 00:30:51.054 "trsvcid": "$NVMF_PORT", 00:30:51.054 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:51.054 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:51.054 "hdgst": ${hdgst:-false}, 00:30:51.054 "ddgst": ${ddgst:-false} 00:30:51.054 }, 00:30:51.054 "method": "bdev_nvme_attach_controller" 00:30:51.054 } 00:30:51.054 EOF 00:30:51.054 )") 00:30:51.054 08:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:30:51.054 08:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:30:51.054 08:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=1562753 00:30:51.054 08:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:30:51.054 08:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:30:51.054 08:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:51.054 08:29:33 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:51.054 { 00:30:51.054 "params": { 00:30:51.054 "name": "Nvme$subsystem", 00:30:51.054 "trtype": "$TEST_TRANSPORT", 00:30:51.054 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:51.054 "adrfam": "ipv4", 00:30:51.054 "trsvcid": "$NVMF_PORT", 00:30:51.054 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:51.054 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:51.054 "hdgst": ${hdgst:-false}, 00:30:51.054 "ddgst": ${ddgst:-false} 00:30:51.054 }, 00:30:51.054 "method": "bdev_nvme_attach_controller" 00:30:51.054 } 00:30:51.054 EOF 00:30:51.054 )") 00:30:51.054 08:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:30:51.054 08:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:30:51.054 08:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=1562757 00:30:51.054 08:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:30:51.054 08:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:30:51.054 08:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:30:51.054 08:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:30:51.054 08:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:51.054 08:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 
0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:30:51.054 08:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:30:51.054 08:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:51.054 { 00:30:51.054 "params": { 00:30:51.054 "name": "Nvme$subsystem", 00:30:51.054 "trtype": "$TEST_TRANSPORT", 00:30:51.054 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:51.054 "adrfam": "ipv4", 00:30:51.054 "trsvcid": "$NVMF_PORT", 00:30:51.054 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:51.054 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:51.054 "hdgst": ${hdgst:-false}, 00:30:51.054 "ddgst": ${ddgst:-false} 00:30:51.054 }, 00:30:51.054 "method": "bdev_nvme_attach_controller" 00:30:51.054 } 00:30:51.054 EOF 00:30:51.054 )") 00:30:51.054 08:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:30:51.054 08:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:30:51.054 08:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:30:51.055 08:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:51.055 08:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:51.055 { 00:30:51.055 "params": { 00:30:51.055 "name": "Nvme$subsystem", 00:30:51.055 "trtype": "$TEST_TRANSPORT", 00:30:51.055 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:51.055 "adrfam": "ipv4", 00:30:51.055 "trsvcid": "$NVMF_PORT", 00:30:51.055 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:51.055 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:51.055 "hdgst": ${hdgst:-false}, 00:30:51.055 "ddgst": ${ddgst:-false} 00:30:51.055 }, 00:30:51.055 "method": 
"bdev_nvme_attach_controller" 00:30:51.055 } 00:30:51.055 EOF 00:30:51.055 )") 00:30:51.055 08:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:30:51.055 08:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 1562747 00:30:51.055 08:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:30:51.055 08:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:30:51.314 08:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:30:51.314 08:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:30:51.314 08:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:30:51.314 08:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:30:51.314 "params": { 00:30:51.314 "name": "Nvme1", 00:30:51.314 "trtype": "tcp", 00:30:51.314 "traddr": "10.0.0.2", 00:30:51.314 "adrfam": "ipv4", 00:30:51.314 "trsvcid": "4420", 00:30:51.314 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:51.314 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:51.314 "hdgst": false, 00:30:51.314 "ddgst": false 00:30:51.314 }, 00:30:51.314 "method": "bdev_nvme_attach_controller" 00:30:51.314 }' 00:30:51.314 08:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
00:30:51.314 08:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:30:51.314 08:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:30:51.314 "params": { 00:30:51.314 "name": "Nvme1", 00:30:51.314 "trtype": "tcp", 00:30:51.314 "traddr": "10.0.0.2", 00:30:51.314 "adrfam": "ipv4", 00:30:51.314 "trsvcid": "4420", 00:30:51.314 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:51.314 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:51.314 "hdgst": false, 00:30:51.314 "ddgst": false 00:30:51.314 }, 00:30:51.314 "method": "bdev_nvme_attach_controller" 00:30:51.314 }' 00:30:51.314 08:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:30:51.314 08:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:30:51.314 "params": { 00:30:51.314 "name": "Nvme1", 00:30:51.314 "trtype": "tcp", 00:30:51.314 "traddr": "10.0.0.2", 00:30:51.314 "adrfam": "ipv4", 00:30:51.314 "trsvcid": "4420", 00:30:51.314 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:51.314 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:51.314 "hdgst": false, 00:30:51.314 "ddgst": false 00:30:51.314 }, 00:30:51.314 "method": "bdev_nvme_attach_controller" 00:30:51.314 }' 00:30:51.314 08:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:30:51.314 08:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:30:51.314 "params": { 00:30:51.314 "name": "Nvme1", 00:30:51.314 "trtype": "tcp", 00:30:51.314 "traddr": "10.0.0.2", 00:30:51.314 "adrfam": "ipv4", 00:30:51.314 "trsvcid": "4420", 00:30:51.314 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:51.314 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:51.314 "hdgst": false, 00:30:51.314 "ddgst": false 00:30:51.314 }, 00:30:51.314 "method": "bdev_nvme_attach_controller" 
00:30:51.314 }' 00:30:51.314 [2024-11-28 08:29:33.353862] Starting SPDK v25.01-pre git sha1 27aaaa748 / DPDK 24.03.0 initialization... 00:30:51.314 [2024-11-28 08:29:33.353917] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:30:51.314 [2024-11-28 08:29:33.354859] Starting SPDK v25.01-pre git sha1 27aaaa748 / DPDK 24.03.0 initialization... 00:30:51.314 [2024-11-28 08:29:33.354905] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:30:51.314 [2024-11-28 08:29:33.356862] Starting SPDK v25.01-pre git sha1 27aaaa748 / DPDK 24.03.0 initialization... 00:30:51.314 [2024-11-28 08:29:33.356909] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:30:51.314 [2024-11-28 08:29:33.358456] Starting SPDK v25.01-pre git sha1 27aaaa748 / DPDK 24.03.0 initialization... 
00:30:51.314 [2024-11-28 08:29:33.358499] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:30:51.314 [2024-11-28 08:29:33.548256] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:51.573 [2024-11-28 08:29:33.591320] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:30:51.573 [2024-11-28 08:29:33.640604] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:51.573 [2024-11-28 08:29:33.691931] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:51.573 [2024-11-28 08:29:33.695674] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:30:51.573 [2024-11-28 08:29:33.734858] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:30:51.573 [2024-11-28 08:29:33.751930] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:51.573 [2024-11-28 08:29:33.794698] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:30:51.832 Running I/O for 1 seconds... 00:30:51.832 Running I/O for 1 seconds... 00:30:51.832 Running I/O for 1 seconds... 00:30:51.832 Running I/O for 1 seconds... 
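Each of the four bdevperf instances above (write/read/flush/unmap on core masks 0x10/0x20/0x40/0x80) receives its controller config from `gen_nvmf_target_json`, whose expanded `printf` output appears in the trace, delivered through process substitution; that is why every command line shows `--json /dev/fd/63`. A sketch of that generator follows; the `params` fields are copied from the log's output, but the outer `subsystems`/`bdev` wrapper is an assumption, since the trace only shows the per-controller stanza.

```shell
# Sketch of the per-bdevperf JSON config seen in the trace. The inner
# params object matches the log verbatim; the surrounding "subsystems"
# wrapper is assumed, not shown in the trace.
gen_target_json() {
  local traddr=$1
  cat <<EOF
{"subsystems": [{"subsystem": "bdev", "config": [{
  "params": {"name": "Nvme1", "trtype": "tcp", "traddr": "$traddr",
             "adrfam": "ipv4", "trsvcid": "4420",
             "subnqn": "nqn.2016-06.io.spdk:cnode1",
             "hostnqn": "nqn.2016-06.io.spdk:host1",
             "hdgst": false, "ddgst": false},
  "method": "bdev_nvme_attach_controller"}]}]}
EOF
}

gen_target_json 10.0.0.2
# usage, matching the write-workload instance in the trace:
#   bdevperf -m 0x10 -i 1 --json <(gen_target_json 10.0.0.2) \
#     -q 128 -o 4096 -w write -t 1 -s 256
```

Feeding the config through `<(...)` keeps it out of the filesystem and lets the four concurrent instances each see their own `/dev/fd/63`, so no temp files need cleanup when a run is killed.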
00:30:52.769 236824.00 IOPS, 925.09 MiB/s 00:30:52.769 Latency(us) 00:30:52.769 [2024-11-28T07:29:35.038Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:52.769 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:30:52.769 Nvme1n1 : 1.00 236453.91 923.65 0.00 0.00 538.38 231.51 1567.17 00:30:52.769 [2024-11-28T07:29:35.038Z] =================================================================================================================== 00:30:52.769 [2024-11-28T07:29:35.038Z] Total : 236453.91 923.65 0.00 0.00 538.38 231.51 1567.17 00:30:52.769 8351.00 IOPS, 32.62 MiB/s 00:30:52.769 Latency(us) 00:30:52.769 [2024-11-28T07:29:35.038Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:52.769 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:30:52.769 Nvme1n1 : 1.02 8357.13 32.65 0.00 0.00 15198.65 3447.76 23023.08 00:30:52.769 [2024-11-28T07:29:35.038Z] =================================================================================================================== 00:30:52.769 [2024-11-28T07:29:35.038Z] Total : 8357.13 32.65 0.00 0.00 15198.65 3447.76 23023.08 00:30:52.769 11681.00 IOPS, 45.63 MiB/s 00:30:52.769 Latency(us) 00:30:52.769 [2024-11-28T07:29:35.038Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:52.769 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:30:52.769 Nvme1n1 : 1.01 11742.69 45.87 0.00 0.00 10864.65 5128.90 16070.57 00:30:52.769 [2024-11-28T07:29:35.038Z] =================================================================================================================== 00:30:52.769 [2024-11-28T07:29:35.038Z] Total : 11742.69 45.87 0.00 0.00 10864.65 5128.90 16070.57 00:30:52.769 7612.00 IOPS, 29.73 MiB/s 00:30:52.769 Latency(us) 00:30:52.769 [2024-11-28T07:29:35.038Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:52.769 Job: Nvme1n1 (Core Mask 
0x20, workload: read, depth: 128, IO size: 4096) 00:30:52.769 Nvme1n1 : 1.00 7711.88 30.12 0.00 0.00 16562.17 3234.06 31229.33 00:30:52.769 [2024-11-28T07:29:35.038Z] =================================================================================================================== 00:30:52.769 [2024-11-28T07:29:35.038Z] Total : 7711.88 30.12 0.00 0.00 16562.17 3234.06 31229.33 00:30:53.027 08:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 1562750 00:30:53.027 08:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 1562753 00:30:53.027 08:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 1562757 00:30:53.027 08:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:53.027 08:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:53.027 08:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:53.027 08:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:53.027 08:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:30:53.027 08:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:30:53.027 08:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:53.027 08:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:30:53.027 08:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:53.027 08:29:35 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:30:53.027 08:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:53.027 08:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:53.027 rmmod nvme_tcp 00:30:53.027 rmmod nvme_fabrics 00:30:53.027 rmmod nvme_keyring 00:30:53.027 08:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:53.027 08:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:30:53.027 08:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:30:53.027 08:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 1562663 ']' 00:30:53.027 08:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 1562663 00:30:53.027 08:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 1562663 ']' 00:30:53.027 08:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 1562663 00:30:53.027 08:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 00:30:53.027 08:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:53.027 08:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1562663 00:30:53.027 08:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:53.027 08:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:53.027 08:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1562663' 00:30:53.027 killing process with pid 1562663 00:30:53.027 08:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 1562663 00:30:53.027 08:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 1562663 00:30:53.285 08:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:53.285 08:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:53.285 08:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:53.285 08:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:30:53.285 08:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:53.285 08:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:30:53.285 08:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:30:53.285 08:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:53.285 08:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:53.285 08:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:53.285 08:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:53.285 
08:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:55.818 08:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:55.818 00:30:55.818 real 0m10.179s 00:30:55.818 user 0m14.849s 00:30:55.818 sys 0m5.954s 00:30:55.818 08:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:55.818 08:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:55.818 ************************************ 00:30:55.818 END TEST nvmf_bdev_io_wait 00:30:55.818 ************************************ 00:30:55.818 08:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:30:55.818 08:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:30:55.818 08:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:55.818 08:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:30:55.818 ************************************ 00:30:55.818 START TEST nvmf_queue_depth 00:30:55.818 ************************************ 00:30:55.818 08:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:30:55.818 * Looking for test storage... 
00:30:55.818 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:55.818 08:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:30:55.818 08:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lcov --version 00:30:55.818 08:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:30:55.818 08:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:30:55.818 08:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:55.818 08:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:55.818 08:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:55.818 08:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:30:55.818 08:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:30:55.818 08:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:30:55.818 08:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:30:55.819 08:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:30:55.819 08:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:30:55.819 08:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:30:55.819 08:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 
eq=0 v 00:30:55.819 08:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:30:55.819 08:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:30:55.819 08:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:55.819 08:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:30:55.819 08:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:30:55.819 08:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:30:55.819 08:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:55.819 08:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:30:55.819 08:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:30:55.819 08:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:30:55.819 08:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:30:55.819 08:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:55.819 08:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:30:55.819 08:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:30:55.819 08:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:55.819 08:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < 
ver2[v] )) 00:30:55.819 08:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:30:55.819 08:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:55.819 08:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:30:55.819 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:55.819 --rc genhtml_branch_coverage=1 00:30:55.819 --rc genhtml_function_coverage=1 00:30:55.819 --rc genhtml_legend=1 00:30:55.819 --rc geninfo_all_blocks=1 00:30:55.819 --rc geninfo_unexecuted_blocks=1 00:30:55.819 00:30:55.819 ' 00:30:55.819 08:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:30:55.819 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:55.819 --rc genhtml_branch_coverage=1 00:30:55.819 --rc genhtml_function_coverage=1 00:30:55.819 --rc genhtml_legend=1 00:30:55.819 --rc geninfo_all_blocks=1 00:30:55.819 --rc geninfo_unexecuted_blocks=1 00:30:55.819 00:30:55.819 ' 00:30:55.819 08:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:30:55.819 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:55.819 --rc genhtml_branch_coverage=1 00:30:55.819 --rc genhtml_function_coverage=1 00:30:55.819 --rc genhtml_legend=1 00:30:55.819 --rc geninfo_all_blocks=1 00:30:55.819 --rc geninfo_unexecuted_blocks=1 00:30:55.819 00:30:55.819 ' 00:30:55.819 08:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:30:55.819 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:55.819 --rc genhtml_branch_coverage=1 00:30:55.819 --rc genhtml_function_coverage=1 00:30:55.819 --rc genhtml_legend=1 00:30:55.819 --rc 
geninfo_all_blocks=1 00:30:55.819 --rc geninfo_unexecuted_blocks=1 00:30:55.819 00:30:55.819 ' 00:30:55.819 08:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:55.819 08:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:30:55.819 08:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:55.819 08:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:55.819 08:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:55.819 08:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:55.819 08:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:55.819 08:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:55.819 08:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:55.819 08:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:55.819 08:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:55.819 08:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:55.819 08:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:30:55.819 08:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@18 -- # 
NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:30:55.819 08:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:55.819 08:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:55.819 08:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:55.819 08:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:55.819 08:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:55.819 08:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:30:55.819 08:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:55.819 08:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:55.819 08:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:55.819 08:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:55.819 08:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:55.819 08:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:55.819 08:29:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:30:55.819 08:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:55.819 08:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:30:55.819 08:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:55.819 08:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:55.819 08:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:55.819 08:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:55.819 08:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:55.819 08:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:30:55.819 08:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:30:55.819 08:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:55.819 08:29:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:55.819 08:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:55.819 08:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:30:55.819 08:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:30:55.819 08:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:30:55.819 08:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:30:55.820 08:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:55.820 08:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:55.820 08:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:55.820 08:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:55.820 08:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:55.820 08:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:55.820 08:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:55.820 08:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:55.820 08:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:55.820 08:29:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:55.820 08:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:30:55.820 08:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:31:01.089 08:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:01.090 08:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:31:01.090 08:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:01.090 08:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:01.090 08:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:01.090 08:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:01.090 08:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:01.090 08:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:31:01.090 08:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:01.090 08:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:31:01.090 08:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:31:01.090 08:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:31:01.090 08:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:31:01.090 
08:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:31:01.090 08:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:31:01.090 08:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:01.090 08:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:01.090 08:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:01.090 08:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:01.090 08:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:01.090 08:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:01.090 08:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:01.090 08:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:01.090 08:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:01.090 08:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:01.090 08:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:01.090 08:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@344 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:01.090 08:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:01.090 08:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:01.090 08:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:01.090 08:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:01.090 08:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:01.090 08:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:01.090 08:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:01.090 08:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:31:01.090 Found 0000:86:00.0 (0x8086 - 0x159b) 00:31:01.090 08:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:01.090 08:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:01.090 08:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:01.090 08:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:01.090 08:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:01.090 08:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:01.090 08:29:42 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:31:01.090 Found 0000:86:00.1 (0x8086 - 0x159b) 00:31:01.090 08:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:01.090 08:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:01.090 08:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:01.090 08:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:01.090 08:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:01.090 08:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:01.090 08:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:01.090 08:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:01.090 08:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:01.090 08:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:01.090 08:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:01.090 08:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:01.090 08:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:01.090 08:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 
)) 00:31:01.090 08:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:01.090 08:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:31:01.090 Found net devices under 0000:86:00.0: cvl_0_0 00:31:01.090 08:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:01.090 08:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:01.090 08:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:01.090 08:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:01.090 08:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:01.090 08:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:01.090 08:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:01.090 08:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:01.090 08:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:31:01.090 Found net devices under 0000:86:00.1: cvl_0_1 00:31:01.090 08:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:01.090 08:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:01.090 08:29:42 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # is_hw=yes 00:31:01.090 08:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:01.090 08:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:31:01.090 08:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:31:01.090 08:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:01.090 08:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:01.090 08:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:01.090 08:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:01.090 08:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:01.090 08:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:01.090 08:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:01.090 08:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:01.090 08:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:01.090 08:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:01.090 08:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 
00:31:01.090 08:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:01.090 08:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:01.090 08:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:01.090 08:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:01.090 08:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:01.090 08:29:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:01.090 08:29:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:01.090 08:29:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:01.090 08:29:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:01.091 08:29:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:01.091 08:29:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:01.091 08:29:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:01.091 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:31:01.091 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.432 ms 00:31:01.091 00:31:01.091 --- 10.0.0.2 ping statistics --- 00:31:01.091 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:01.091 rtt min/avg/max/mdev = 0.432/0.432/0.432/0.000 ms 00:31:01.091 08:29:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:01.091 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:31:01.091 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.219 ms 00:31:01.091 00:31:01.091 --- 10.0.0.1 ping statistics --- 00:31:01.091 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:01.091 rtt min/avg/max/mdev = 0.219/0.219/0.219/0.000 ms 00:31:01.091 08:29:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:01.091 08:29:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@450 -- # return 0 00:31:01.091 08:29:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:31:01.091 08:29:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:01.091 08:29:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:31:01.091 08:29:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:31:01.091 08:29:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:01.091 08:29:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:31:01.091 08:29:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:31:01.091 08:29:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:31:01.091 08:29:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:31:01.091 08:29:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:01.091 08:29:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:31:01.091 08:29:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=1566472 00:31:01.091 08:29:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 1566472 00:31:01.091 08:29:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:31:01.091 08:29:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 1566472 ']' 00:31:01.091 08:29:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:01.091 08:29:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:01.091 08:29:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:01.091 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:31:01.091 08:29:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:01.091 08:29:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:31:01.091 [2024-11-28 08:29:43.199772] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:31:01.091 [2024-11-28 08:29:43.200721] Starting SPDK v25.01-pre git sha1 27aaaa748 / DPDK 24.03.0 initialization... 00:31:01.091 [2024-11-28 08:29:43.200755] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:01.091 [2024-11-28 08:29:43.269275] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:01.091 [2024-11-28 08:29:43.310702] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:01.091 [2024-11-28 08:29:43.310739] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:01.091 [2024-11-28 08:29:43.310746] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:01.091 [2024-11-28 08:29:43.310752] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:01.091 [2024-11-28 08:29:43.310757] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:01.091 [2024-11-28 08:29:43.311336] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:01.350 [2024-11-28 08:29:43.379513] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:31:01.350 [2024-11-28 08:29:43.379750] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:31:01.350 08:29:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:01.350 08:29:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:31:01.350 08:29:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:31:01.350 08:29:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:01.350 08:29:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:31:01.350 08:29:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:01.350 08:29:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:31:01.350 08:29:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:01.350 08:29:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:31:01.350 [2024-11-28 08:29:43.447893] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:01.350 08:29:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:01.350 08:29:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:31:01.350 08:29:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:01.350 08:29:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:31:01.350 Malloc0 00:31:01.350 08:29:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:01.350 08:29:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:31:01.350 08:29:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:01.350 08:29:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:31:01.350 08:29:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:01.350 08:29:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:31:01.350 08:29:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:01.350 08:29:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:31:01.350 08:29:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:01.350 08:29:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:01.350 08:29:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:01.350 08:29:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:31:01.350 [2024-11-28 08:29:43.503874] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:01.350 08:29:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:01.350 
08:29:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=1566629 00:31:01.350 08:29:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:31:01.350 08:29:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:31:01.350 08:29:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 1566629 /var/tmp/bdevperf.sock 00:31:01.351 08:29:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 1566629 ']' 00:31:01.351 08:29:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:31:01.351 08:29:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:01.351 08:29:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:31:01.351 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:31:01.351 08:29:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:01.351 08:29:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:31:01.351 [2024-11-28 08:29:43.552429] Starting SPDK v25.01-pre git sha1 27aaaa748 / DPDK 24.03.0 initialization... 
00:31:01.351 [2024-11-28 08:29:43.552472] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1566629 ] 00:31:01.351 [2024-11-28 08:29:43.613878] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:01.610 [2024-11-28 08:29:43.658092] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:01.610 08:29:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:01.610 08:29:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:31:01.610 08:29:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:31:01.610 08:29:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:01.610 08:29:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:31:01.868 NVMe0n1 00:31:01.868 08:29:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:01.868 08:29:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:31:01.868 Running I/O for 10 seconds... 
00:31:03.813 11245.00 IOPS, 43.93 MiB/s [2024-11-28T07:29:47.462Z] 11626.50 IOPS, 45.42 MiB/s [2024-11-28T07:29:48.399Z] 11869.67 IOPS, 46.37 MiB/s [2024-11-28T07:29:49.335Z] 11918.25 IOPS, 46.56 MiB/s [2024-11-28T07:29:50.271Z] 11954.20 IOPS, 46.70 MiB/s [2024-11-28T07:29:51.207Z] 11939.67 IOPS, 46.64 MiB/s [2024-11-28T07:29:52.143Z] 11981.57 IOPS, 46.80 MiB/s [2024-11-28T07:29:53.522Z] 11977.75 IOPS, 46.79 MiB/s [2024-11-28T07:29:54.458Z] 12020.78 IOPS, 46.96 MiB/s [2024-11-28T07:29:54.458Z] 12031.30 IOPS, 47.00 MiB/s 00:31:12.189 Latency(us) 00:31:12.189 [2024-11-28T07:29:54.458Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:12.189 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:31:12.189 Verification LBA range: start 0x0 length 0x4000 00:31:12.189 NVMe0n1 : 10.05 12045.14 47.05 0.00 0.00 84701.16 13848.04 56531.92 00:31:12.190 [2024-11-28T07:29:54.459Z] =================================================================================================================== 00:31:12.190 [2024-11-28T07:29:54.459Z] Total : 12045.14 47.05 0.00 0.00 84701.16 13848.04 56531.92 00:31:12.190 { 00:31:12.190 "results": [ 00:31:12.190 { 00:31:12.190 "job": "NVMe0n1", 00:31:12.190 "core_mask": "0x1", 00:31:12.190 "workload": "verify", 00:31:12.190 "status": "finished", 00:31:12.190 "verify_range": { 00:31:12.190 "start": 0, 00:31:12.190 "length": 16384 00:31:12.190 }, 00:31:12.190 "queue_depth": 1024, 00:31:12.190 "io_size": 4096, 00:31:12.190 "runtime": 10.054764, 00:31:12.190 "iops": 12045.136017115867, 00:31:12.190 "mibps": 47.051312566858854, 00:31:12.190 "io_failed": 0, 00:31:12.190 "io_timeout": 0, 00:31:12.190 "avg_latency_us": 84701.16434457358, 00:31:12.190 "min_latency_us": 13848.041739130435, 00:31:12.190 "max_latency_us": 56531.92347826087 00:31:12.190 } 00:31:12.190 ], 00:31:12.190 "core_count": 1 00:31:12.190 } 00:31:12.190 08:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- 
target/queue_depth.sh@39 -- # killprocess 1566629 00:31:12.190 08:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 1566629 ']' 00:31:12.190 08:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 1566629 00:31:12.190 08:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:31:12.190 08:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:12.190 08:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1566629 00:31:12.190 08:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:31:12.190 08:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:31:12.190 08:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1566629' 00:31:12.190 killing process with pid 1566629 00:31:12.190 08:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 1566629 00:31:12.190 Received shutdown signal, test time was about 10.000000 seconds 00:31:12.190 00:31:12.190 Latency(us) 00:31:12.190 [2024-11-28T07:29:54.459Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:12.190 [2024-11-28T07:29:54.459Z] =================================================================================================================== 00:31:12.190 [2024-11-28T07:29:54.459Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:31:12.190 08:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 1566629 00:31:12.190 08:29:54 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:31:12.190 08:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:31:12.190 08:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:12.190 08:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:31:12.190 08:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:12.190 08:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:31:12.190 08:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:12.190 08:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:12.190 rmmod nvme_tcp 00:31:12.190 rmmod nvme_fabrics 00:31:12.190 rmmod nvme_keyring 00:31:12.190 08:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:12.190 08:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:31:12.190 08:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:31:12.190 08:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 1566472 ']' 00:31:12.190 08:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 1566472 00:31:12.190 08:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 1566472 ']' 00:31:12.190 08:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 1566472 00:31:12.190 08:29:54 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:31:12.449 08:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:12.449 08:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1566472 00:31:12.449 08:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:31:12.449 08:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:31:12.449 08:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1566472' 00:31:12.449 killing process with pid 1566472 00:31:12.449 08:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 1566472 00:31:12.449 08:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 1566472 00:31:12.450 08:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:12.450 08:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:12.450 08:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:12.450 08:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:31:12.450 08:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:31:12.450 08:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:12.450 08:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 
00:31:12.450 08:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:12.450 08:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:12.450 08:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:12.450 08:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:12.450 08:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:14.985 08:29:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:14.985 00:31:14.985 real 0m19.203s 00:31:14.985 user 0m22.886s 00:31:14.985 sys 0m5.724s 00:31:14.985 08:29:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:14.985 08:29:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:31:14.985 ************************************ 00:31:14.985 END TEST nvmf_queue_depth 00:31:14.985 ************************************ 00:31:14.985 08:29:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:31:14.985 08:29:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:31:14.985 08:29:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:14.985 08:29:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:31:14.985 ************************************ 00:31:14.985 START 
TEST nvmf_target_multipath 00:31:14.985 ************************************ 00:31:14.985 08:29:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:31:14.985 * Looking for test storage... 00:31:14.985 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:14.985 08:29:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:31:14.985 08:29:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lcov --version 00:31:14.985 08:29:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:31:14.985 08:29:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:31:14.985 08:29:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:14.985 08:29:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:14.985 08:29:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:14.985 08:29:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:31:14.985 08:29:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:31:14.985 08:29:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:31:14.985 08:29:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:31:14.985 08:29:56 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:31:14.985 08:29:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:31:14.986 08:29:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:31:14.986 08:29:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:14.986 08:29:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:31:14.986 08:29:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:31:14.986 08:29:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:14.986 08:29:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:14.986 08:29:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:31:14.986 08:29:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:31:14.986 08:29:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:14.986 08:29:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:31:14.986 08:29:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:31:14.986 08:29:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:31:14.986 08:29:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:31:14.986 08:29:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:14.986 08:29:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:31:14.986 08:29:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:31:14.986 08:29:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:14.986 08:29:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:14.986 08:29:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:31:14.986 08:29:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:14.986 08:29:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:31:14.986 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:14.986 --rc genhtml_branch_coverage=1 00:31:14.986 --rc genhtml_function_coverage=1 00:31:14.986 --rc genhtml_legend=1 00:31:14.986 --rc geninfo_all_blocks=1 00:31:14.986 --rc geninfo_unexecuted_blocks=1 00:31:14.986 00:31:14.986 ' 00:31:14.986 08:29:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:31:14.986 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:14.986 --rc genhtml_branch_coverage=1 00:31:14.986 --rc genhtml_function_coverage=1 00:31:14.986 --rc genhtml_legend=1 00:31:14.986 --rc geninfo_all_blocks=1 00:31:14.986 --rc geninfo_unexecuted_blocks=1 00:31:14.986 00:31:14.986 ' 00:31:14.986 08:29:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:31:14.986 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:14.986 --rc genhtml_branch_coverage=1 00:31:14.986 --rc genhtml_function_coverage=1 00:31:14.986 --rc genhtml_legend=1 00:31:14.986 --rc geninfo_all_blocks=1 00:31:14.986 --rc geninfo_unexecuted_blocks=1 00:31:14.986 00:31:14.986 ' 00:31:14.986 08:29:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:31:14.986 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:14.986 --rc genhtml_branch_coverage=1 00:31:14.986 --rc genhtml_function_coverage=1 00:31:14.986 --rc genhtml_legend=1 00:31:14.986 --rc geninfo_all_blocks=1 00:31:14.986 --rc geninfo_unexecuted_blocks=1 00:31:14.986 00:31:14.986 ' 00:31:14.986 08:29:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:14.986 08:29:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@7 -- # uname -s 00:31:14.986 08:29:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:14.986 08:29:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:14.986 08:29:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:14.986 08:29:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:14.986 08:29:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:14.986 08:29:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:14.986 08:29:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:14.986 08:29:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:14.986 08:29:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:14.986 08:29:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:14.986 08:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:31:14.986 08:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:31:14.986 08:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:14.986 08:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:14.986 08:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:14.986 08:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:14.986 08:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:14.986 08:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:31:14.986 08:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:14.986 08:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:14.986 08:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:14.986 08:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:14.986 08:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:14.986 08:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:14.986 08:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:31:14.986 08:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:14.986 08:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:31:14.986 08:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:14.986 08:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:14.986 08:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:14.986 08:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:14.986 08:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:14.986 08:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:31:14.986 08:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:31:14.986 08:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:14.986 08:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:14.986 08:29:57 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:14.986 08:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:31:14.986 08:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:31:14.986 08:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:31:14.986 08:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:31:14.986 08:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:31:14.986 08:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:14.987 08:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:14.987 08:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:14.987 08:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:14.987 08:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:14.987 08:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:14.987 08:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:14.987 08:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:14.987 08:29:57 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:14.987 08:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:14.987 08:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:31:14.987 08:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:31:20.262 08:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:20.262 08:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:31:20.262 08:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:20.263 08:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:20.263 08:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:20.263 08:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:20.263 08:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:20.263 08:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # net_devs=() 00:31:20.263 08:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:20.263 08:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:31:20.263 08:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:31:20.263 08:30:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:31:20.263 08:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:31:20.263 08:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:31:20.263 08:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:31:20.263 08:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:20.263 08:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:20.263 08:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:20.263 08:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:20.263 08:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:20.263 08:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:20.263 08:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:20.263 08:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:20.263 08:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:20.263 08:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@341 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:20.263 08:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:20.263 08:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:20.263 08:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:20.263 08:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:20.263 08:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:20.263 08:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:20.263 08:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:20.263 08:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:20.263 08:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:20.263 08:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:31:20.263 Found 0000:86:00.0 (0x8086 - 0x159b) 00:31:20.263 08:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:20.263 08:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:20.263 08:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:20.263 08:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:20.263 08:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:20.263 08:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:20.263 08:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:31:20.263 Found 0000:86:00.1 (0x8086 - 0x159b) 00:31:20.263 08:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:20.263 08:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:20.263 08:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:20.263 08:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:20.263 08:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:20.263 08:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:20.263 08:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:20.263 08:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:20.263 08:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:20.263 08:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:20.263 08:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 
-- # [[ tcp == tcp ]] 00:31:20.263 08:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:20.263 08:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:20.263 08:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:20.263 08:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:20.263 08:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:31:20.263 Found net devices under 0000:86:00.0: cvl_0_0 00:31:20.263 08:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:20.263 08:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:20.263 08:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:20.263 08:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:20.263 08:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:20.263 08:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:20.263 08:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:20.263 08:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:20.263 08:30:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:31:20.263 Found net devices under 0000:86:00.1: cvl_0_1 00:31:20.263 08:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:20.263 08:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:20.263 08:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # is_hw=yes 00:31:20.263 08:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:20.263 08:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:31:20.263 08:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:31:20.263 08:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:20.263 08:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:20.263 08:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:20.264 08:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:20.264 08:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:20.264 08:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:20.264 08:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:20.264 08:30:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:20.264 08:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:20.264 08:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:20.264 08:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:20.264 08:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:20.264 08:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:20.264 08:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:20.264 08:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:20.264 08:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:20.264 08:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:20.264 08:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:20.264 08:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:20.264 08:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:20.264 08:30:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:20.264 08:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:20.264 08:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:20.264 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:20.264 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.379 ms 00:31:20.264 00:31:20.264 --- 10.0.0.2 ping statistics --- 00:31:20.264 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:20.264 rtt min/avg/max/mdev = 0.379/0.379/0.379/0.000 ms 00:31:20.264 08:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:20.264 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:20.264 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.196 ms 00:31:20.264 00:31:20.264 --- 10.0.0.1 ping statistics --- 00:31:20.264 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:20.264 rtt min/avg/max/mdev = 0.196/0.196/0.196/0.000 ms 00:31:20.264 08:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:20.264 08:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@450 -- # return 0 00:31:20.264 08:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:31:20.264 08:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:20.264 08:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:31:20.264 08:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:31:20.264 08:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:20.264 08:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:31:20.264 08:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:31:20.264 08:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:31:20.264 08:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:31:20.264 only one NIC for nvmf test 00:31:20.264 08:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:31:20.264 08:30:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:20.264 08:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:31:20.264 08:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:20.264 08:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:31:20.264 08:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:20.264 08:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:20.264 rmmod nvme_tcp 00:31:20.264 rmmod nvme_fabrics 00:31:20.264 rmmod nvme_keyring 00:31:20.264 08:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:20.264 08:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:31:20.264 08:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:31:20.264 08:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:31:20.264 08:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:20.264 08:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:20.264 08:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:20.264 08:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:31:20.524 08:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:31:20.524 08:30:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:20.524 08:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:31:20.524 08:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:20.524 08:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:20.524 08:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:20.524 08:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:20.524 08:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:22.428 08:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:22.428 08:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:31:22.428 08:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:31:22.428 08:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:22.428 08:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:31:22.428 08:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:22.428 08:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:31:22.428 08:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 
00:31:22.428 08:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:22.428 08:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:22.428 08:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:31:22.428 08:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:31:22.428 08:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:31:22.428 08:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:22.428 08:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:22.428 08:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:22.428 08:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:31:22.428 08:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:31:22.428 08:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:22.428 08:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:31:22.428 08:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:22.428 08:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:22.428 08:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:22.428 
08:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:22.428 08:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:22.428 08:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:22.428 00:31:22.428 real 0m7.822s 00:31:22.428 user 0m1.746s 00:31:22.428 sys 0m4.089s 00:31:22.428 08:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:22.428 08:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:31:22.428 ************************************ 00:31:22.428 END TEST nvmf_target_multipath 00:31:22.428 ************************************ 00:31:22.428 08:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:31:22.428 08:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:31:22.428 08:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:22.429 08:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:31:22.689 ************************************ 00:31:22.689 START TEST nvmf_zcopy 00:31:22.689 ************************************ 00:31:22.689 08:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:31:22.689 * Looking for test storage... 
00:31:22.689 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:22.689 08:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:31:22.689 08:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lcov --version 00:31:22.689 08:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:31:22.689 08:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:31:22.689 08:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:22.689 08:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:22.689 08:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:22.689 08:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:31:22.689 08:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:31:22.689 08:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:31:22.689 08:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:31:22.689 08:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:31:22.689 08:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:31:22.689 08:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:31:22.689 08:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:22.689 08:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
scripts/common.sh@344 -- # case "$op" in 00:31:22.689 08:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:31:22.689 08:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:22.689 08:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:31:22.689 08:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:31:22.689 08:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:31:22.689 08:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:22.689 08:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:31:22.689 08:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:31:22.689 08:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:31:22.689 08:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:31:22.689 08:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:22.689 08:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:31:22.689 08:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:31:22.689 08:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:22.689 08:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:22.689 08:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:31:22.689 08:30:04 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:22.689 08:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:31:22.689 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:22.689 --rc genhtml_branch_coverage=1 00:31:22.689 --rc genhtml_function_coverage=1 00:31:22.689 --rc genhtml_legend=1 00:31:22.689 --rc geninfo_all_blocks=1 00:31:22.689 --rc geninfo_unexecuted_blocks=1 00:31:22.689 00:31:22.689 ' 00:31:22.689 08:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:31:22.689 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:22.689 --rc genhtml_branch_coverage=1 00:31:22.689 --rc genhtml_function_coverage=1 00:31:22.689 --rc genhtml_legend=1 00:31:22.689 --rc geninfo_all_blocks=1 00:31:22.689 --rc geninfo_unexecuted_blocks=1 00:31:22.689 00:31:22.689 ' 00:31:22.690 08:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:31:22.690 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:22.690 --rc genhtml_branch_coverage=1 00:31:22.690 --rc genhtml_function_coverage=1 00:31:22.690 --rc genhtml_legend=1 00:31:22.690 --rc geninfo_all_blocks=1 00:31:22.690 --rc geninfo_unexecuted_blocks=1 00:31:22.690 00:31:22.690 ' 00:31:22.690 08:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:31:22.690 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:22.690 --rc genhtml_branch_coverage=1 00:31:22.690 --rc genhtml_function_coverage=1 00:31:22.690 --rc genhtml_legend=1 00:31:22.690 --rc geninfo_all_blocks=1 00:31:22.690 --rc geninfo_unexecuted_blocks=1 00:31:22.690 00:31:22.690 ' 00:31:22.690 08:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:22.690 08:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:31:22.690 08:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:22.690 08:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:22.690 08:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:22.690 08:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:22.690 08:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:22.690 08:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:22.690 08:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:22.690 08:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:22.690 08:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:22.690 08:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:22.690 08:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:31:22.690 08:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:31:22.690 08:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:22.690 08:30:04 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:22.690 08:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:22.690 08:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:22.690 08:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:22.690 08:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:31:22.690 08:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:22.690 08:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:22.690 08:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:22.690 08:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:22.690 08:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:22.690 08:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:22.690 08:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:31:22.690 08:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:22.690 08:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:31:22.690 08:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:22.690 08:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:22.690 08:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:22.690 08:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:22.690 08:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:22.690 08:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:31:22.690 08:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:31:22.690 08:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:22.690 08:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:22.690 08:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:22.690 08:30:04 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:31:22.690 08:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:22.690 08:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:22.690 08:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:22.690 08:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:22.690 08:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:22.690 08:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:22.690 08:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:22.690 08:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:22.690 08:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:22.690 08:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:22.690 08:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:31:22.691 08:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:31:27.975 08:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:27.975 08:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:31:27.975 08:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:27.975 
08:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:27.975 08:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:27.975 08:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:27.975 08:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:27.975 08:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:31:27.975 08:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:27.975 08:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:31:27.975 08:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:31:27.975 08:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:31:27.975 08:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:31:27.975 08:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:31:27.975 08:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:31:27.975 08:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:27.975 08:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:27.975 08:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:27.975 08:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:27.975 08:30:10 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:27.975 08:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:27.975 08:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:27.975 08:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:27.975 08:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:27.975 08:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:27.975 08:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:27.975 08:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:27.975 08:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:27.975 08:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:27.975 08:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:27.975 08:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:27.975 08:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:27.975 08:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:27.975 08:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 
00:31:27.975 08:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:31:27.975 Found 0000:86:00.0 (0x8086 - 0x159b) 00:31:27.975 08:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:27.975 08:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:27.975 08:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:27.975 08:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:27.975 08:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:27.975 08:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:27.975 08:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:31:27.975 Found 0000:86:00.1 (0x8086 - 0x159b) 00:31:27.975 08:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:27.975 08:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:27.975 08:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:27.975 08:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:27.975 08:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:27.975 08:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:27.975 08:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 
00:31:27.975 08:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:27.975 08:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:27.975 08:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:27.975 08:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:27.975 08:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:27.975 08:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:27.975 08:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:27.975 08:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:27.975 08:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:31:27.975 Found net devices under 0000:86:00.0: cvl_0_0 00:31:27.975 08:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:27.975 08:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:27.975 08:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:27.975 08:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:27.975 08:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:27.975 08:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
nvmf/common.sh@418 -- # [[ up == up ]] 00:31:27.975 08:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:27.975 08:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:27.975 08:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:31:27.975 Found net devices under 0000:86:00.1: cvl_0_1 00:31:27.975 08:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:27.975 08:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:27.975 08:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # is_hw=yes 00:31:27.975 08:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:27.975 08:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:31:27.975 08:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:31:27.975 08:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:27.975 08:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:27.975 08:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:27.975 08:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:27.975 08:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:27.975 08:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 
00:31:27.975 08:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:27.975 08:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:27.975 08:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:27.975 08:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:27.975 08:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:27.975 08:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:27.976 08:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:27.976 08:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:27.976 08:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:27.976 08:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:27.976 08:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:27.976 08:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:27.976 08:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:28.233 08:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:28.233 08:30:10 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:28.233 08:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:28.233 08:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:28.233 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:28.233 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.441 ms 00:31:28.233 00:31:28.233 --- 10.0.0.2 ping statistics --- 00:31:28.233 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:28.233 rtt min/avg/max/mdev = 0.441/0.441/0.441/0.000 ms 00:31:28.233 08:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:28.233 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:28.233 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.236 ms
00:31:28.233
00:31:28.233 --- 10.0.0.1 ping statistics ---
00:31:28.233 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:31:28.233 rtt min/avg/max/mdev = 0.236/0.236/0.236/0.000 ms
00:31:28.233 08:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:31:28.233 08:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@450 -- # return 0
00:31:28.233 08:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:31:28.233 08:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:31:28.233 08:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:31:28.233 08:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:31:28.233 08:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:31:28.233 08:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:31:28.233 08:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:31:28.233 08:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2
00:31:28.233 08:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:31:28.233 08:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable
00:31:28.233 08:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:31:28.233 08:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2
00:31:28.233 08:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=1575651
00:31:28.233 08:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 1575651
00:31:28.233 08:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 1575651 ']'
00:31:28.234 08:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:31:28.234 08:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100
00:31:28.234 08:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:31:28.234 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:31:28.234 08:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable
00:31:28.234 08:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:31:28.234 [2024-11-28 08:30:10.357073] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode.
00:31:28.234 [2024-11-28 08:30:10.357963] Starting SPDK v25.01-pre git sha1 27aaaa748 / DPDK 24.03.0 initialization...
00:31:28.234 [2024-11-28 08:30:10.357995] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:28.234 [2024-11-28 08:30:10.425369] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:28.234 [2024-11-28 08:30:10.467105] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:28.234 [2024-11-28 08:30:10.467141] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:28.234 [2024-11-28 08:30:10.467148] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:28.234 [2024-11-28 08:30:10.467154] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:28.234 [2024-11-28 08:30:10.467160] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:28.234 [2024-11-28 08:30:10.467722] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:28.492 [2024-11-28 08:30:10.536131] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:31:28.492 [2024-11-28 08:30:10.536350] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:31:28.492 08:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:28.492 08:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0 00:31:28.492 08:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:31:28.492 08:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:28.492 08:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:31:28.492 08:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:28.492 08:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:31:28.492 08:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:31:28.492 08:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:28.492 08:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:31:28.492 [2024-11-28 08:30:10.596133] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:28.492 08:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:28.492 08:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:31:28.492 08:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:28.492 08:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:31:28.492 
08:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:28.492 08:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:28.492 08:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:28.492 08:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:31:28.492 [2024-11-28 08:30:10.616279] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:28.492 08:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:28.492 08:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:31:28.492 08:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:28.492 08:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:31:28.492 08:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:28.492 08:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:31:28.492 08:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:28.492 08:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:31:28.492 malloc0 00:31:28.492 08:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:28.492 08:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:31:28.492 08:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:28.492 08:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:31:28.492 08:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:28.492 08:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:31:28.492 08:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:31:28.492 08:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:31:28.492 08:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:31:28.492 08:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:31:28.492 08:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:31:28.492 { 00:31:28.492 "params": { 00:31:28.492 "name": "Nvme$subsystem", 00:31:28.492 "trtype": "$TEST_TRANSPORT", 00:31:28.492 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:28.492 "adrfam": "ipv4", 00:31:28.492 "trsvcid": "$NVMF_PORT", 00:31:28.492 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:28.492 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:28.492 "hdgst": ${hdgst:-false}, 00:31:28.492 "ddgst": ${ddgst:-false} 00:31:28.492 }, 00:31:28.492 "method": "bdev_nvme_attach_controller" 00:31:28.492 } 00:31:28.492 EOF 00:31:28.492 )") 00:31:28.492 08:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:31:28.492 08:30:10 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:31:28.492 08:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:31:28.492 08:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:31:28.492 "params": { 00:31:28.492 "name": "Nvme1", 00:31:28.493 "trtype": "tcp", 00:31:28.493 "traddr": "10.0.0.2", 00:31:28.493 "adrfam": "ipv4", 00:31:28.493 "trsvcid": "4420", 00:31:28.493 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:28.493 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:31:28.493 "hdgst": false, 00:31:28.493 "ddgst": false 00:31:28.493 }, 00:31:28.493 "method": "bdev_nvme_attach_controller" 00:31:28.493 }' 00:31:28.493 [2024-11-28 08:30:10.687538] Starting SPDK v25.01-pre git sha1 27aaaa748 / DPDK 24.03.0 initialization... 00:31:28.493 [2024-11-28 08:30:10.687582] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1575749 ] 00:31:28.493 [2024-11-28 08:30:10.749958] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:28.751 [2024-11-28 08:30:10.791600] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:28.751 Running I/O for 10 seconds... 
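The `gen_nvmf_target_json` trace above expands the heredoc template into a single `bdev_nvme_attach_controller` entry for `Nvme1`. A minimal sketch of that expansion (not code from the SPDK tree; the function name and defaults here are illustrative, with the shell variables already resolved to the values printed in the log: `tcp`, `10.0.0.2`, port `4420`):

```python
import json

# Hypothetical helper: rebuild the attach-controller entry that the
# gen_nvmf_target_json trace above prints for subsystem 1.
def attach_controller_entry(subsystem: int = 1) -> dict:
    return {
        "params": {
            "name": f"Nvme{subsystem}",
            "trtype": "tcp",            # $TEST_TRANSPORT in the template
            "traddr": "10.0.0.2",       # $NVMF_FIRST_TARGET_IP
            "adrfam": "ipv4",
            "trsvcid": "4420",          # $NVMF_PORT
            "subnqn": f"nqn.2016-06.io.spdk:cnode{subsystem}",
            "hostnqn": f"nqn.2016-06.io.spdk:host{subsystem}",
            "hdgst": False,             # ${hdgst:-false}
            "ddgst": False,             # ${ddgst:-false}
        },
        "method": "bdev_nvme_attach_controller",
    }

print(json.dumps(attach_controller_entry(), indent=2))
```

bdevperf then reads this JSON from `/dev/fd/62`, so the attach happens at startup without a separate RPC call.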
00:31:31.065 8290.00 IOPS, 64.77 MiB/s
[2024-11-28T07:30:14.270Z] 8354.00 IOPS, 65.27 MiB/s
[2024-11-28T07:30:15.206Z] 8367.00 IOPS, 65.37 MiB/s
[2024-11-28T07:30:16.143Z] 8384.50 IOPS, 65.50 MiB/s
[2024-11-28T07:30:17.081Z] 8391.60 IOPS, 65.56 MiB/s
[2024-11-28T07:30:18.018Z] 8389.00 IOPS, 65.54 MiB/s
[2024-11-28T07:30:19.416Z] 8399.29 IOPS, 65.62 MiB/s
[2024-11-28T07:30:19.989Z] 8395.25 IOPS, 65.59 MiB/s
[2024-11-28T07:30:21.369Z] 8404.22 IOPS, 65.66 MiB/s
[2024-11-28T07:30:21.369Z] 8399.90 IOPS, 65.62 MiB/s
00:31:39.100 Latency(us)
00:31:39.100 [2024-11-28T07:30:21.369Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:31:39.100 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192)
00:31:39.100 Verification LBA range: start 0x0 length 0x1000
00:31:39.100 Nvme1n1 : 10.01 8401.79 65.64 0.00 0.00 15190.93 2578.70 21427.42
00:31:39.100 [2024-11-28T07:30:21.369Z] ===================================================================================================================
00:31:39.100 [2024-11-28T07:30:21.369Z] Total : 8401.79 65.64 0.00 0.00 15190.93 2578.70 21427.42
00:31:39.100 08:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192
00:31:39.100 08:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=1577494
00:31:39.100 08:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json
00:31:39.100 08:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=()
00:31:39.100 08:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable
00:31:39.100 08:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config
00:31:39.100 08:30:21
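The per-second samples and the final summary above report both IOPS and MiB/s. Since this bdevperf run uses `-o 8192` (8 KiB per IO), the two columns are related by a fixed conversion, which a quick sketch (not part of the test suite) can check against the logged figures:

```python
# Sanity-check sketch: with 8 KiB IOs, throughput in MiB/s is IOPS * 8192 / 2**20.
def iops_to_mibs(iops: float, io_size: int = 8192) -> float:
    return iops * io_size / (1 << 20)

# (IOPS, reported MiB/s) pairs taken from the samples and summary above.
for iops, reported in [(8290.00, 64.77), (8354.00, 65.27), (8401.79, 65.64)]:
    assert abs(iops_to_mibs(iops) - reported) < 0.01
print("MiB/s figures are consistent with 8 KiB IOs")
```

The agreement confirms the MiB/s column is derived from completed IOs, not wire-level bytes.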
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:31:39.100 08:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:31:39.100 08:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:31:39.100 { 00:31:39.100 "params": { 00:31:39.100 "name": "Nvme$subsystem", 00:31:39.100 "trtype": "$TEST_TRANSPORT", 00:31:39.100 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:39.100 "adrfam": "ipv4", 00:31:39.100 "trsvcid": "$NVMF_PORT", 00:31:39.100 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:39.100 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:39.100 "hdgst": ${hdgst:-false}, 00:31:39.100 "ddgst": ${ddgst:-false} 00:31:39.100 }, 00:31:39.100 "method": "bdev_nvme_attach_controller" 00:31:39.100 } 00:31:39.100 EOF 00:31:39.100 )") 00:31:39.100 08:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:31:39.100 [2024-11-28 08:30:21.152074] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:39.100 [2024-11-28 08:30:21.152110] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:39.100 08:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
00:31:39.100 08:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:31:39.100 08:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:31:39.100 "params": { 00:31:39.100 "name": "Nvme1", 00:31:39.100 "trtype": "tcp", 00:31:39.100 "traddr": "10.0.0.2", 00:31:39.100 "adrfam": "ipv4", 00:31:39.100 "trsvcid": "4420", 00:31:39.100 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:39.100 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:31:39.100 "hdgst": false, 00:31:39.100 "ddgst": false 00:31:39.100 }, 00:31:39.100 "method": "bdev_nvme_attach_controller" 00:31:39.100 }' 00:31:39.100 [2024-11-28 08:30:21.164034] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:39.100 [2024-11-28 08:30:21.164047] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:39.100 [2024-11-28 08:30:21.172966] Starting SPDK v25.01-pre git sha1 27aaaa748 / DPDK 24.03.0 initialization... 
00:31:39.100 [2024-11-28 08:30:21.173007] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1577494 ] 00:31:39.100 [2024-11-28 08:30:21.176029] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:39.100 [2024-11-28 08:30:21.176039] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:39.100 [2024-11-28 08:30:21.188026] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:39.100 [2024-11-28 08:30:21.188035] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:39.100 [2024-11-28 08:30:21.200043] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:39.100 [2024-11-28 08:30:21.200053] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:39.100 [2024-11-28 08:30:21.212026] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:39.100 [2024-11-28 08:30:21.212036] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:39.100 [2024-11-28 08:30:21.224025] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:39.100 [2024-11-28 08:30:21.224035] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:39.100 [2024-11-28 08:30:21.234220] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:39.100 [2024-11-28 08:30:21.236025] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:39.100 [2024-11-28 08:30:21.236034] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:39.100 [2024-11-28 08:30:21.248029] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:31:39.100 [2024-11-28 08:30:21.248045] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:39.100 [2024-11-28 08:30:21.260025] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:39.100 [2024-11-28 08:30:21.260035] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:39.100 [2024-11-28 08:30:21.272032] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:39.100 [2024-11-28 08:30:21.272045] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:39.100 [2024-11-28 08:30:21.276426] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:39.100 [2024-11-28 08:30:21.284027] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:39.100 [2024-11-28 08:30:21.284037] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:39.100 [2024-11-28 08:30:21.296039] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:39.100 [2024-11-28 08:30:21.296057] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:39.100 [2024-11-28 08:30:21.308032] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:39.100 [2024-11-28 08:30:21.308048] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:39.100 [2024-11-28 08:30:21.320029] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:39.100 [2024-11-28 08:30:21.320041] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:39.100 [2024-11-28 08:30:21.332030] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:39.100 [2024-11-28 08:30:21.332041] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:39.100 [2024-11-28 08:30:21.344030] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:39.100 [2024-11-28 08:30:21.344041] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:39.100 [2024-11-28 08:30:21.356027] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:39.100 [2024-11-28 08:30:21.356036] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:39.360 [2024-11-28 08:30:21.368046] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:39.360 [2024-11-28 08:30:21.368068] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:39.360 [2024-11-28 08:30:21.380035] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:39.360 [2024-11-28 08:30:21.380049] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:39.360 [2024-11-28 08:30:21.392029] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:39.360 [2024-11-28 08:30:21.392043] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:39.360 [2024-11-28 08:30:21.404026] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:39.360 [2024-11-28 08:30:21.404036] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:39.360 [2024-11-28 08:30:21.416025] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:39.360 [2024-11-28 08:30:21.416035] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:39.360 [2024-11-28 08:30:21.428027] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:39.360 [2024-11-28 08:30:21.428040] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:39.360 [2024-11-28 08:30:21.440032] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:31:39.360 [2024-11-28 08:30:21.440046] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:39.360 [2024-11-28 08:30:21.452028] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:39.360 [2024-11-28 08:30:21.452037] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:39.360 [2024-11-28 08:30:21.464027] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:39.360 [2024-11-28 08:30:21.464036] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:39.360 [2024-11-28 08:30:21.476027] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:39.360 [2024-11-28 08:30:21.476036] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:39.360 [2024-11-28 08:30:21.488030] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:39.360 [2024-11-28 08:30:21.488044] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:39.360 [2024-11-28 08:30:21.500030] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:39.360 [2024-11-28 08:30:21.500043] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:39.360 [2024-11-28 08:30:21.512028] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:39.360 [2024-11-28 08:30:21.512038] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:39.360 [2024-11-28 08:30:21.524027] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:39.360 [2024-11-28 08:30:21.524038] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:39.360 [2024-11-28 08:30:21.536027] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:39.360 
[2024-11-28 08:30:21.536039] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:39.360 [2024-11-28 08:30:21.548026] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:39.360 [2024-11-28 08:30:21.548035] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:39.360 [2024-11-28 08:30:21.560026] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:39.360 [2024-11-28 08:30:21.560035] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:39.360 [2024-11-28 08:30:21.572027] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:39.360 [2024-11-28 08:30:21.572038] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:39.360 [2024-11-28 08:30:21.584033] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:39.360 [2024-11-28 08:30:21.584051] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:39.360 Running I/O for 5 seconds... 
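During this second, 5-second run the log interleaves bdevperf I/O with repeated `nvmf_subsystem_add_ns` RPCs that are each rejected with the paired records `Requested NSID 1 already in use` / `Unable to add namespace` (apparently the test issuing RPCs against the live subsystem on purpose). A small illustrative parser (hypothetical, not part of the test) showing how these records pair up:

```python
import re

# Two attempt/rejection pairs copied verbatim from the log above.
records = """\
[2024-11-28 08:30:21.584033] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
[2024-11-28 08:30:21.584051] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[2024-11-28 08:30:21.601231] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
[2024-11-28 08:30:21.601251] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
"""

# Each add_ns attempt is immediately followed by the nvmf_rpc_ns_paused rejection.
pairs = re.findall(
    r"NSID 1 already in use\n\[[^]]+\] nvmf_rpc\.c:\d+:nvmf_rpc_ns_paused", records)
print(f"{len(pairs)} add_ns attempts, each rejected while I/O continues")
```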
00:31:39.360 [2024-11-28 08:30:21.601231] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:39.360 [2024-11-28 08:30:21.601251] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:39.360 [2024-11-28 08:30:21.616366] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:39.360 [2024-11-28 08:30:21.616383] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:39.620 [2024-11-28 08:30:21.632345] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:39.620 [2024-11-28 08:30:21.632363] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:39.620 [2024-11-28 08:30:21.648086] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:39.620 [2024-11-28 08:30:21.648105] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:39.620 [2024-11-28 08:30:21.659703] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:39.620 [2024-11-28 08:30:21.659722] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:39.620 [2024-11-28 08:30:21.674402] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:39.620 [2024-11-28 08:30:21.674420] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:39.620 [2024-11-28 08:30:21.689734] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:39.620 [2024-11-28 08:30:21.689754] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:39.620 [2024-11-28 08:30:21.705167] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:39.620 [2024-11-28 08:30:21.705186] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:39.620 [2024-11-28 08:30:21.720573] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:39.620 [2024-11-28 08:30:21.720591] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:39.620 [2024-11-28 08:30:21.736364] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:39.620 [2024-11-28 08:30:21.736382] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:39.620 [2024-11-28 08:30:21.752622] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:39.620 [2024-11-28 08:30:21.752640] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:39.620 [2024-11-28 08:30:21.768811] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:39.620 [2024-11-28 08:30:21.768829] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:39.620 [2024-11-28 08:30:21.784781] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:39.620 [2024-11-28 08:30:21.784800] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:39.620 [2024-11-28 08:30:21.799821] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:39.620 [2024-11-28 08:30:21.799840] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:39.620 [2024-11-28 08:30:21.813261] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:39.620 [2024-11-28 08:30:21.813279] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:39.620 [2024-11-28 08:30:21.828607] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:39.620 [2024-11-28 08:30:21.828625] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:39.620 [2024-11-28 08:30:21.844415] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:31:39.620 [2024-11-28 08:30:21.844432] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:39.620 [2024-11-28 08:30:21.857147] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:39.620 [2024-11-28 08:30:21.857164] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:39.620 [2024-11-28 08:30:21.872264] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:39.620 [2024-11-28 08:30:21.872282] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:39.620 [2024-11-28 08:30:21.883274] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:39.620 [2024-11-28 08:30:21.883293] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:39.880 [2024-11-28 08:30:21.897648] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:39.880 [2024-11-28 08:30:21.897666] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:39.880 [2024-11-28 08:30:21.912684] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:39.880 [2024-11-28 08:30:21.912703] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:39.880 [2024-11-28 08:30:21.927958] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:39.880 [2024-11-28 08:30:21.927994] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:39.880 [2024-11-28 08:30:21.939121] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:39.880 [2024-11-28 08:30:21.939147] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:39.880 [2024-11-28 08:30:21.954318] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:39.880 
[2024-11-28 08:30:21.954337] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:39.880 [2024-11-28 08:30:21.969395] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:39.880 [2024-11-28 08:30:21.969414] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:39.880 [2024-11-28 08:30:21.984637] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:39.880 [2024-11-28 08:30:21.984655] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:39.880 [2024-11-28 08:30:22.000487] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:39.880 [2024-11-28 08:30:22.000505] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:39.880 [2024-11-28 08:30:22.015843] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:39.880 [2024-11-28 08:30:22.015862] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:39.880 [2024-11-28 08:30:22.030613] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:39.880 [2024-11-28 08:30:22.030631] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:39.880 [2024-11-28 08:30:22.046204] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:39.880 [2024-11-28 08:30:22.046223] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:39.880 [2024-11-28 08:30:22.061311] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:39.880 [2024-11-28 08:30:22.061329] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:39.880 [2024-11-28 08:30:22.076175] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:39.880 [2024-11-28 08:30:22.076194] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:39.880 [2024-11-28 08:30:22.086953] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:39.880 [2024-11-28 08:30:22.086972] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[identical spdk_nvmf_subsystem_add_ns_ext "Requested NSID 1 already in use" / nvmf_rpc_ns_paused "Unable to add namespace" error pairs repeated, timestamps 08:30:22.101 through 08:30:22.588]
00:31:40.399 16227.00 IOPS, 126.77 MiB/s [2024-11-28T07:30:22.668Z]
[identical error pairs repeated, timestamps 08:30:22.601 through 08:30:23.586]
00:31:41.439 16225.00 IOPS, 126.76 MiB/s [2024-11-28T07:30:23.708Z]
[identical error pairs repeated, timestamps 08:30:23.602 through 08:30:24.336]
00:31:42.285 [2024-11-28 08:30:24.349279] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:42.285 [2024-11-28 08:30:24.349297] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:42.285 [2024-11-28 08:30:24.363908] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:42.285 [2024-11-28 08:30:24.363927] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:42.285 [2024-11-28 08:30:24.378626] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:42.285 [2024-11-28 08:30:24.378644] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:42.285 [2024-11-28 08:30:24.393331] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:42.285 [2024-11-28 08:30:24.393349] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:42.285 [2024-11-28 08:30:24.408478] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:42.285 [2024-11-28 08:30:24.408496] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:42.285 [2024-11-28 08:30:24.424754] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:42.285 [2024-11-28 08:30:24.424772] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:42.285 [2024-11-28 08:30:24.440034] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:42.285 [2024-11-28 08:30:24.440053] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:42.285 [2024-11-28 08:30:24.451061] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:42.285 [2024-11-28 08:30:24.451080] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:42.285 [2024-11-28 08:30:24.465858] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:31:42.285 [2024-11-28 08:30:24.465877] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:42.285 [2024-11-28 08:30:24.480885] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:42.285 [2024-11-28 08:30:24.480903] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:42.285 [2024-11-28 08:30:24.496577] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:42.285 [2024-11-28 08:30:24.496596] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:42.285 [2024-11-28 08:30:24.511912] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:42.285 [2024-11-28 08:30:24.511932] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:42.285 [2024-11-28 08:30:24.525974] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:42.285 [2024-11-28 08:30:24.525993] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:42.556 [2024-11-28 08:30:24.541568] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:42.556 [2024-11-28 08:30:24.541586] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:42.556 [2024-11-28 08:30:24.556716] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:42.556 [2024-11-28 08:30:24.556734] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:42.556 [2024-11-28 08:30:24.572298] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:42.556 [2024-11-28 08:30:24.572317] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:42.556 [2024-11-28 08:30:24.582896] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:42.556 
[2024-11-28 08:30:24.582914] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:42.556 [2024-11-28 08:30:24.597765] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:42.556 [2024-11-28 08:30:24.597783] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:42.556 16246.00 IOPS, 126.92 MiB/s [2024-11-28T07:30:24.825Z] [2024-11-28 08:30:24.613050] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:42.556 [2024-11-28 08:30:24.613068] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:42.556 [2024-11-28 08:30:24.627893] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:42.556 [2024-11-28 08:30:24.627911] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:42.556 [2024-11-28 08:30:24.640870] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:42.556 [2024-11-28 08:30:24.640888] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:42.556 [2024-11-28 08:30:24.656205] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:42.556 [2024-11-28 08:30:24.656234] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:42.556 [2024-11-28 08:30:24.670124] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:42.557 [2024-11-28 08:30:24.670142] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:42.557 [2024-11-28 08:30:24.685945] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:42.557 [2024-11-28 08:30:24.685971] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:42.557 [2024-11-28 08:30:24.701135] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:42.557 
[2024-11-28 08:30:24.701153] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:42.557 [2024-11-28 08:30:24.716470] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:42.557 [2024-11-28 08:30:24.716487] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:42.557 [2024-11-28 08:30:24.732229] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:42.557 [2024-11-28 08:30:24.732248] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:42.557 [2024-11-28 08:30:24.745411] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:42.557 [2024-11-28 08:30:24.745429] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:42.557 [2024-11-28 08:30:24.760747] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:42.557 [2024-11-28 08:30:24.760765] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:42.557 [2024-11-28 08:30:24.775985] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:42.557 [2024-11-28 08:30:24.776004] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:42.557 [2024-11-28 08:30:24.790087] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:42.557 [2024-11-28 08:30:24.790105] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:42.557 [2024-11-28 08:30:24.805371] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:42.557 [2024-11-28 08:30:24.805390] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:42.875 [2024-11-28 08:30:24.820684] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:42.875 [2024-11-28 08:30:24.820702] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:42.875 [2024-11-28 08:30:24.832683] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:42.875 [2024-11-28 08:30:24.832701] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:42.875 [2024-11-28 08:30:24.845698] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:42.875 [2024-11-28 08:30:24.845716] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:42.875 [2024-11-28 08:30:24.861093] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:42.875 [2024-11-28 08:30:24.861111] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:42.875 [2024-11-28 08:30:24.876710] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:42.875 [2024-11-28 08:30:24.876728] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:42.875 [2024-11-28 08:30:24.893424] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:42.875 [2024-11-28 08:30:24.893441] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:42.875 [2024-11-28 08:30:24.908986] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:42.875 [2024-11-28 08:30:24.909004] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:42.875 [2024-11-28 08:30:24.924530] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:42.875 [2024-11-28 08:30:24.924548] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:42.875 [2024-11-28 08:30:24.939988] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:42.875 [2024-11-28 08:30:24.940007] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:31:42.875 [2024-11-28 08:30:24.954217] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:42.875 [2024-11-28 08:30:24.954236] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:42.875 [2024-11-28 08:30:24.969265] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:42.875 [2024-11-28 08:30:24.969283] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:42.875 [2024-11-28 08:30:24.984553] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:42.875 [2024-11-28 08:30:24.984571] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:42.875 [2024-11-28 08:30:24.999567] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:42.875 [2024-11-28 08:30:24.999586] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:42.875 [2024-11-28 08:30:25.014430] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:42.875 [2024-11-28 08:30:25.014449] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:42.875 [2024-11-28 08:30:25.029811] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:42.875 [2024-11-28 08:30:25.029829] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:42.875 [2024-11-28 08:30:25.044966] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:42.875 [2024-11-28 08:30:25.045000] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:42.875 [2024-11-28 08:30:25.060351] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:42.875 [2024-11-28 08:30:25.060369] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:42.875 [2024-11-28 08:30:25.073151] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:42.875 [2024-11-28 08:30:25.073169] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:42.875 [2024-11-28 08:30:25.088553] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:42.875 [2024-11-28 08:30:25.088571] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:42.875 [2024-11-28 08:30:25.104185] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:42.875 [2024-11-28 08:30:25.104203] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:42.875 [2024-11-28 08:30:25.118081] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:42.875 [2024-11-28 08:30:25.118099] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:42.875 [2024-11-28 08:30:25.133351] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:42.875 [2024-11-28 08:30:25.133370] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:43.207 [2024-11-28 08:30:25.148565] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:43.207 [2024-11-28 08:30:25.148583] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:43.207 [2024-11-28 08:30:25.164136] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:43.207 [2024-11-28 08:30:25.164155] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:43.207 [2024-11-28 08:30:25.174961] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:43.207 [2024-11-28 08:30:25.174981] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:43.207 [2024-11-28 08:30:25.189511] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:31:43.207 [2024-11-28 08:30:25.189530] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:43.207 [2024-11-28 08:30:25.204662] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:43.207 [2024-11-28 08:30:25.204680] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:43.207 [2024-11-28 08:30:25.219960] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:43.207 [2024-11-28 08:30:25.219979] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:43.207 [2024-11-28 08:30:25.233992] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:43.207 [2024-11-28 08:30:25.234011] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:43.207 [2024-11-28 08:30:25.249614] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:43.207 [2024-11-28 08:30:25.249637] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:43.207 [2024-11-28 08:30:25.264685] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:43.207 [2024-11-28 08:30:25.264702] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:43.207 [2024-11-28 08:30:25.280061] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:43.207 [2024-11-28 08:30:25.280080] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:43.207 [2024-11-28 08:30:25.291735] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:43.207 [2024-11-28 08:30:25.291754] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:43.207 [2024-11-28 08:30:25.306319] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:43.207 
[2024-11-28 08:30:25.306337] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:43.207 [2024-11-28 08:30:25.321935] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:43.207 [2024-11-28 08:30:25.321962] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:43.207 [2024-11-28 08:30:25.337721] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:43.207 [2024-11-28 08:30:25.337740] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:43.207 [2024-11-28 08:30:25.353058] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:43.207 [2024-11-28 08:30:25.353076] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:43.207 [2024-11-28 08:30:25.368754] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:43.207 [2024-11-28 08:30:25.368773] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:43.207 [2024-11-28 08:30:25.384213] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:43.207 [2024-11-28 08:30:25.384231] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:43.207 [2024-11-28 08:30:25.397733] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:43.207 [2024-11-28 08:30:25.397751] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:43.207 [2024-11-28 08:30:25.412565] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:43.207 [2024-11-28 08:30:25.412583] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:43.207 [2024-11-28 08:30:25.429150] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:43.207 [2024-11-28 08:30:25.429167] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:43.207 [2024-11-28 08:30:25.444767] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:43.207 [2024-11-28 08:30:25.444786] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:43.467 [2024-11-28 08:30:25.459915] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:43.467 [2024-11-28 08:30:25.459934] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:43.467 [2024-11-28 08:30:25.471441] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:43.467 [2024-11-28 08:30:25.471460] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:43.467 [2024-11-28 08:30:25.486155] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:43.467 [2024-11-28 08:30:25.486173] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:43.467 [2024-11-28 08:30:25.501582] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:43.467 [2024-11-28 08:30:25.501601] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:43.467 [2024-11-28 08:30:25.516765] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:43.467 [2024-11-28 08:30:25.516783] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:43.467 [2024-11-28 08:30:25.532230] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:43.467 [2024-11-28 08:30:25.532254] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:43.467 [2024-11-28 08:30:25.546128] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:43.467 [2024-11-28 08:30:25.546148] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:31:43.467 [2024-11-28 08:30:25.562031] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:43.467 [2024-11-28 08:30:25.562050] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:43.467 [2024-11-28 08:30:25.577217] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:43.467 [2024-11-28 08:30:25.577235] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:43.467 [2024-11-28 08:30:25.592502] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:43.467 [2024-11-28 08:30:25.592520] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:43.467 16223.50 IOPS, 126.75 MiB/s [2024-11-28T07:30:25.736Z] [2024-11-28 08:30:25.608011] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:43.467 [2024-11-28 08:30:25.608029] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:43.467 [2024-11-28 08:30:25.621946] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:43.467 [2024-11-28 08:30:25.621969] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:43.467 [2024-11-28 08:30:25.637172] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:43.467 [2024-11-28 08:30:25.637190] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:43.467 [2024-11-28 08:30:25.652317] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:43.467 [2024-11-28 08:30:25.652334] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:43.467 [2024-11-28 08:30:25.667916] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:43.467 [2024-11-28 08:30:25.667934] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:31:43.467 [2024-11-28 08:30:25.682251] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:43.467 [2024-11-28 08:30:25.682270] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:43.467 [2024-11-28 08:30:25.696813] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:43.467 [2024-11-28 08:30:25.696831] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:43.467 [2024-11-28 08:30:25.712825] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:43.467 [2024-11-28 08:30:25.712843] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:43.467 [2024-11-28 08:30:25.727984] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:43.467 [2024-11-28 08:30:25.728003] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:43.727 [2024-11-28 08:30:25.742704] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:43.727 [2024-11-28 08:30:25.742722] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:43.727 [2024-11-28 08:30:25.758113] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:43.727 [2024-11-28 08:30:25.758143] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:43.727 [2024-11-28 08:30:25.773535] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:43.727 [2024-11-28 08:30:25.773553] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:43.727 [2024-11-28 08:30:25.789269] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:43.727 [2024-11-28 08:30:25.789287] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:43.727 [2024-11-28 08:30:25.804134] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:43.727 [2024-11-28 08:30:25.804152] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:43.727 [2024-11-28 08:30:25.816675] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:43.727 [2024-11-28 08:30:25.816697] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:43.727 [2024-11-28 08:30:25.831884] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:43.727 [2024-11-28 08:30:25.831903] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:43.727 [2024-11-28 08:30:25.846107] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:43.727 [2024-11-28 08:30:25.846125] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:43.727 [2024-11-28 08:30:25.861188] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:43.727 [2024-11-28 08:30:25.861206] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:43.727 [2024-11-28 08:30:25.875736] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:43.727 [2024-11-28 08:30:25.875755] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:43.727 [2024-11-28 08:30:25.888665] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:43.727 [2024-11-28 08:30:25.888683] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:43.727 [2024-11-28 08:30:25.904194] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:43.727 [2024-11-28 08:30:25.904212] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:43.727 [2024-11-28 08:30:25.916550] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:31:43.727 [2024-11-28 08:30:25.916568] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:43.727 [2024-11-28 08:30:25.932053] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:43.727 [2024-11-28 08:30:25.932071] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:43.727 [2024-11-28 08:30:25.944387] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:43.727 [2024-11-28 08:30:25.944406] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:43.727 [2024-11-28 08:30:25.959667] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:43.727 [2024-11-28 08:30:25.959686] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:43.728 [2024-11-28 08:30:25.972386] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:43.728 [2024-11-28 08:30:25.972403] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:43.728 [2024-11-28 08:30:25.985897] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:43.728 [2024-11-28 08:30:25.985915] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:43.987 [2024-11-28 08:30:26.000841] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:43.987 [2024-11-28 08:30:26.000859] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:43.987 [2024-11-28 08:30:26.016534] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:43.987 [2024-11-28 08:30:26.016552] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:43.987 [2024-11-28 08:30:26.029091] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:43.987 
[2024-11-28 08:30:26.029108] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:43.987 [2024-11-28 08:30:26.044063] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:43.987 [2024-11-28 08:30:26.044082] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:43.987 [2024-11-28 08:30:26.054917] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:43.987 [2024-11-28 08:30:26.054935] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:43.987 [2024-11-28 08:30:26.070445] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:43.987 [2024-11-28 08:30:26.070464] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:43.987 [2024-11-28 08:30:26.085443] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:43.987 [2024-11-28 08:30:26.085461] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:43.987 [2024-11-28 08:30:26.100272] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:43.987 [2024-11-28 08:30:26.100290] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:43.987 [2024-11-28 08:30:26.112121] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:43.987 [2024-11-28 08:30:26.112149] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:43.987 [2024-11-28 08:30:26.125956] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:43.987 [2024-11-28 08:30:26.125973] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:43.987 [2024-11-28 08:30:26.141602] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:43.987 [2024-11-28 08:30:26.141620] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:43.987 [2024-11-28 08:30:26.156944] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:43.987 [2024-11-28 08:30:26.156968] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:43.987 [2024-11-28 08:30:26.171811] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:43.987 [2024-11-28 08:30:26.171830] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:43.987 [2024-11-28 08:30:26.183456] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:43.987 [2024-11-28 08:30:26.183475] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:43.987 [2024-11-28 08:30:26.198258] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:43.987 [2024-11-28 08:30:26.198276] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:43.987 [2024-11-28 08:30:26.213047] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:43.987 [2024-11-28 08:30:26.213065] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:43.987 [2024-11-28 08:30:26.228274] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:43.987 [2024-11-28 08:30:26.228292] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:43.987 [2024-11-28 08:30:26.242140] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:43.987 [2024-11-28 08:30:26.242157] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:44.245 [2024-11-28 08:30:26.257431] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:44.245 [2024-11-28 08:30:26.257449] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:31:44.245 [2024-11-28 08:30:26.272424] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:44.245 [2024-11-28 08:30:26.272442] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:44.245 [2024-11-28 08:30:26.288148] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:44.245 [2024-11-28 08:30:26.288167] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:44.245 [2024-11-28 08:30:26.301801] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:44.245 [2024-11-28 08:30:26.301819] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:44.245 [2024-11-28 08:30:26.317267] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:44.245 [2024-11-28 08:30:26.317285] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:44.245 [2024-11-28 08:30:26.332090] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:44.245 [2024-11-28 08:30:26.332109] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:44.245 [2024-11-28 08:30:26.343176] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:44.245 [2024-11-28 08:30:26.343194] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:44.246 [2024-11-28 08:30:26.358644] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:44.246 [2024-11-28 08:30:26.358663] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:44.246 [2024-11-28 08:30:26.373164] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:44.246 [2024-11-28 08:30:26.373183] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:44.246 [2024-11-28 08:30:26.388285] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:44.246 [2024-11-28 08:30:26.388304] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:44.246 [2024-11-28 08:30:26.400460] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:44.246 [2024-11-28 08:30:26.400479] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:44.246 [2024-11-28 08:30:26.414121] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:44.246 [2024-11-28 08:30:26.414140] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:44.246 [2024-11-28 08:30:26.429569] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:44.246 [2024-11-28 08:30:26.429589] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:44.246 [2024-11-28 08:30:26.443954] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:44.246 [2024-11-28 08:30:26.443974] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:44.246 [2024-11-28 08:30:26.456996] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:44.246 [2024-11-28 08:30:26.457014] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:44.246 [2024-11-28 08:30:26.472031] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:44.246 [2024-11-28 08:30:26.472051] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:44.246 [2024-11-28 08:30:26.482770] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:44.246 [2024-11-28 08:30:26.482789] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:44.246 [2024-11-28 08:30:26.498277] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:31:44.246 [2024-11-28 08:30:26.498296] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:44.505 [2024-11-28 08:30:26.513419] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:44.505 [2024-11-28 08:30:26.513438] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:44.505 [2024-11-28 08:30:26.528719] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:44.505 [2024-11-28 08:30:26.528737] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:44.505 [2024-11-28 08:30:26.544080] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:44.505 [2024-11-28 08:30:26.544099] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:44.505 [2024-11-28 08:30:26.557982] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:44.505 [2024-11-28 08:30:26.558002] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:44.505 [2024-11-28 08:30:26.573486] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:44.505 [2024-11-28 08:30:26.573505] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:44.505 [2024-11-28 08:30:26.588763] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:44.505 [2024-11-28 08:30:26.588781] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:44.505 [2024-11-28 08:30:26.603721] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:44.505 [2024-11-28 08:30:26.603740] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:44.505 16237.80 IOPS, 126.86 MiB/s [2024-11-28T07:30:26.774Z] [2024-11-28 08:30:26.612037] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:31:44.505 [2024-11-28 08:30:26.612059] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:44.505 00:31:44.505 Latency(us) 00:31:44.505 [2024-11-28T07:30:26.774Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:44.505 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:31:44.505 Nvme1n1 : 5.01 16240.57 126.88 0.00 0.00 7874.10 1994.57 13848.04 00:31:44.505 [2024-11-28T07:30:26.774Z] =================================================================================================================== 00:31:44.505 [2024-11-28T07:30:26.774Z] Total : 16240.57 126.88 0.00 0.00 7874.10 1994.57 13848.04 00:31:44.505 [2024-11-28 08:30:26.624033] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:44.506 [2024-11-28 08:30:26.624050] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:44.506 [2024-11-28 08:30:26.636033] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:44.506 [2024-11-28 08:30:26.636045] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:44.506 [2024-11-28 08:30:26.648054] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:44.506 [2024-11-28 08:30:26.648073] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:44.506 [2024-11-28 08:30:26.660029] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:44.506 [2024-11-28 08:30:26.660043] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:44.506 [2024-11-28 08:30:26.672034] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:44.506 [2024-11-28 08:30:26.672047] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:44.506 [2024-11-28 08:30:26.684030] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:44.506 [2024-11-28 08:30:26.684043] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:44.506 [2024-11-28 08:30:26.696049] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:44.506 [2024-11-28 08:30:26.696063] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:44.506 [2024-11-28 08:30:26.708032] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:44.506 [2024-11-28 08:30:26.708048] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:44.506 [2024-11-28 08:30:26.720044] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:44.506 [2024-11-28 08:30:26.720058] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:44.506 [2024-11-28 08:30:26.732027] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:44.506 [2024-11-28 08:30:26.732037] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:44.506 [2024-11-28 08:30:26.744030] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:44.506 [2024-11-28 08:30:26.744042] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:44.506 [2024-11-28 08:30:26.756029] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:44.506 [2024-11-28 08:30:26.756040] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:44.506 [2024-11-28 08:30:26.768026] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:44.506 [2024-11-28 08:30:26.768036] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:44.765 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 
42: kill: (1577494) - No such process 00:31:44.765 08:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 1577494 00:31:44.765 08:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:44.765 08:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:44.765 08:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:31:44.765 08:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:44.765 08:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:31:44.765 08:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:44.765 08:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:31:44.765 delay0 00:31:44.765 08:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:44.765 08:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:31:44.765 08:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:44.765 08:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:31:44.765 08:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:44.765 08:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w 
randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:31:44.765 [2024-11-28 08:30:26.841975] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:31:51.330 Initializing NVMe Controllers 00:31:51.330 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:51.330 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:31:51.330 Initialization complete. Launching workers. 00:31:51.330 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 1125 00:31:51.330 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 1402, failed to submit 43 00:31:51.330 success 1269, unsuccessful 133, failed 0 00:31:51.330 08:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:31:51.330 08:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:31:51.330 08:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:51.330 08:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:31:51.330 08:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:51.330 08:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:31:51.330 08:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:51.330 08:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:51.330 rmmod nvme_tcp 00:31:51.330 rmmod nvme_fabrics 00:31:51.330 rmmod nvme_keyring 00:31:51.330 08:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 
00:31:51.330 08:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:31:51.330 08:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:31:51.330 08:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 1575651 ']' 00:31:51.330 08:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 1575651 00:31:51.330 08:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@954 -- # '[' -z 1575651 ']' 00:31:51.330 08:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 1575651 00:31:51.330 08:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@959 -- # uname 00:31:51.330 08:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:51.330 08:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1575651 00:31:51.330 08:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:31:51.330 08:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:31:51.330 08:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1575651' 00:31:51.330 killing process with pid 1575651 00:31:51.330 08:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 1575651 00:31:51.330 08:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 1575651 00:31:51.330 08:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:51.330 08:30:33 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:51.330 08:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:51.330 08:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:31:51.330 08:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save 00:31:51.330 08:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:51.330 08:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore 00:31:51.330 08:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:51.330 08:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:51.330 08:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:51.330 08:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:51.330 08:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:53.866 08:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:53.866 00:31:53.866 real 0m30.831s 00:31:53.866 user 0m40.534s 00:31:53.866 sys 0m11.860s 00:31:53.866 08:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:53.866 08:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:31:53.866 ************************************ 00:31:53.866 END TEST nvmf_zcopy 00:31:53.866 ************************************ 00:31:53.867 08:30:35 
nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:31:53.867 08:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:31:53.867 08:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:53.867 08:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:31:53.867 ************************************ 00:31:53.867 START TEST nvmf_nmic 00:31:53.867 ************************************ 00:31:53.867 08:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:31:53.867 * Looking for test storage... 00:31:53.867 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:53.867 08:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:31:53.867 08:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1693 -- # lcov --version 00:31:53.867 08:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:31:53.867 08:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:31:53.867 08:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:53.867 08:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:53.867 08:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:53.867 08:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
scripts/common.sh@336 -- # IFS=.-: 00:31:53.867 08:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:31:53.867 08:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:31:53.867 08:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:31:53.867 08:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:31:53.867 08:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:31:53.867 08:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:31:53.867 08:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:53.867 08:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:31:53.867 08:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:31:53.867 08:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:53.867 08:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:53.867 08:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:31:53.867 08:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:31:53.867 08:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:53.867 08:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:31:53.867 08:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:31:53.867 08:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:31:53.867 08:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:31:53.867 08:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:53.867 08:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:31:53.867 08:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:31:53.867 08:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:53.867 08:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:53.867 08:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:31:53.867 08:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:53.867 08:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:31:53.867 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:53.867 --rc genhtml_branch_coverage=1 00:31:53.867 --rc 
genhtml_function_coverage=1 00:31:53.867 --rc genhtml_legend=1 00:31:53.867 --rc geninfo_all_blocks=1 00:31:53.867 --rc geninfo_unexecuted_blocks=1 00:31:53.867 00:31:53.867 ' 00:31:53.867 08:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:31:53.867 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:53.867 --rc genhtml_branch_coverage=1 00:31:53.867 --rc genhtml_function_coverage=1 00:31:53.867 --rc genhtml_legend=1 00:31:53.867 --rc geninfo_all_blocks=1 00:31:53.867 --rc geninfo_unexecuted_blocks=1 00:31:53.867 00:31:53.867 ' 00:31:53.867 08:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:31:53.867 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:53.867 --rc genhtml_branch_coverage=1 00:31:53.867 --rc genhtml_function_coverage=1 00:31:53.867 --rc genhtml_legend=1 00:31:53.867 --rc geninfo_all_blocks=1 00:31:53.867 --rc geninfo_unexecuted_blocks=1 00:31:53.867 00:31:53.867 ' 00:31:53.867 08:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:31:53.867 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:53.867 --rc genhtml_branch_coverage=1 00:31:53.867 --rc genhtml_function_coverage=1 00:31:53.867 --rc genhtml_legend=1 00:31:53.867 --rc geninfo_all_blocks=1 00:31:53.867 --rc geninfo_unexecuted_blocks=1 00:31:53.867 00:31:53.867 ' 00:31:53.867 08:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:53.867 08:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:31:53.867 08:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:53.867 08:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 
00:31:53.867 08:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:53.867 08:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:53.867 08:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:53.867 08:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:53.867 08:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:53.867 08:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:53.867 08:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:53.867 08:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:53.867 08:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:31:53.867 08:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:31:53.867 08:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:53.867 08:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:53.867 08:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:53.867 08:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:53.867 08:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:53.867 08:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:31:53.867 08:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:53.867 08:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:53.867 08:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:53.867 08:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:53.867 08:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:53.867 08:30:35 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:53.867 08:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:31:53.867 08:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:53.868 08:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:31:53.868 08:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:53.868 08:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:53.868 08:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:53.868 08:30:35 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:53.868 08:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:53.868 08:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:31:53.868 08:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:31:53.868 08:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:53.868 08:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:53.868 08:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:53.868 08:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:31:53.868 08:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:31:53.868 08:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:31:53.868 08:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:53.868 08:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:53.868 08:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:53.868 08:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:53.868 08:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:53.868 08:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:53.868 08:30:35 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:53.868 08:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:53.868 08:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:53.868 08:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:53.868 08:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:31:53.868 08:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:59.143 08:30:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:59.143 08:30:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:31:59.143 08:30:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:59.143 08:30:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:59.143 08:30:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:59.143 08:30:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:59.143 08:30:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:59.143 08:30:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:31:59.143 08:30:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:59.143 08:30:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:31:59.143 08:30:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
nvmf/common.sh@320 -- # local -ga e810 00:31:59.143 08:30:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:31:59.143 08:30:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:31:59.143 08:30:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:31:59.143 08:30:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:31:59.143 08:30:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:59.143 08:30:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:59.143 08:30:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:59.143 08:30:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:59.143 08:30:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:59.143 08:30:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:59.143 08:30:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:59.143 08:30:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:59.143 08:30:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:59.143 08:30:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:59.143 08:30:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:59.143 08:30:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:59.143 08:30:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:59.143 08:30:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:59.143 08:30:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:59.143 08:30:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:59.143 08:30:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:59.143 08:30:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:59.143 08:30:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:59.143 08:30:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:31:59.143 Found 0000:86:00.0 (0x8086 - 0x159b) 00:31:59.143 08:30:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:59.143 08:30:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:59.143 08:30:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:59.143 08:30:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:59.143 08:30:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:59.143 08:30:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 
00:31:59.143 08:30:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:31:59.143 Found 0000:86:00.1 (0x8086 - 0x159b) 00:31:59.143 08:30:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:59.143 08:30:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:59.143 08:30:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:59.143 08:30:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:59.143 08:30:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:59.143 08:30:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:59.143 08:30:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:59.143 08:30:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:59.143 08:30:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:59.143 08:30:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:59.143 08:30:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:59.143 08:30:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:59.143 08:30:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:59.143 08:30:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:59.143 08:30:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:59.143 08:30:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:31:59.143 Found net devices under 0000:86:00.0: cvl_0_0 00:31:59.143 08:30:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:59.143 08:30:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:59.143 08:30:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:59.143 08:30:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:59.143 08:30:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:59.143 08:30:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:59.143 08:30:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:59.143 08:30:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:59.143 08:30:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:31:59.143 Found net devices under 0000:86:00.1: cvl_0_1 00:31:59.143 08:30:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:59.143 08:30:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:59.143 08:30:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # is_hw=yes 00:31:59.143 08:30:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@444 -- # [[ yes == yes ]] 
00:31:59.143 08:30:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:31:59.143 08:30:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:31:59.143 08:30:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:59.143 08:30:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:59.143 08:30:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:59.143 08:30:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:59.143 08:30:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:59.143 08:30:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:59.143 08:30:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:59.143 08:30:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:59.143 08:30:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:59.143 08:30:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:59.143 08:30:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:59.143 08:30:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:59.143 08:30:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:59.143 08:30:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:59.143 08:30:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:59.143 08:30:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:59.143 08:30:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:59.143 08:30:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:59.143 08:30:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:59.403 08:30:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:59.403 08:30:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:59.403 08:30:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:59.403 08:30:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:59.403 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:31:59.403 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.406 ms 00:31:59.403 00:31:59.403 --- 10.0.0.2 ping statistics --- 00:31:59.403 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:59.403 rtt min/avg/max/mdev = 0.406/0.406/0.406/0.000 ms 00:31:59.403 08:30:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:59.403 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:31:59.403 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.220 ms 00:31:59.403 00:31:59.403 --- 10.0.0.1 ping statistics --- 00:31:59.403 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:59.403 rtt min/avg/max/mdev = 0.220/0.220/0.220/0.000 ms 00:31:59.403 08:30:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:59.404 08:30:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@450 -- # return 0 00:31:59.404 08:30:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:31:59.404 08:30:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:59.404 08:30:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:31:59.404 08:30:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:31:59.404 08:30:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:59.404 08:30:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:31:59.404 08:30:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:31:59.404 08:30:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:31:59.404 08:30:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:31:59.404 08:30:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:59.404 08:30:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:59.404 08:30:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=1582861 00:31:59.404 08:30:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:31:59.404 08:30:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 1582861 00:31:59.404 08:30:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 1582861 ']' 00:31:59.404 08:30:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:59.404 08:30:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:59.404 08:30:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:59.404 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:59.404 08:30:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:59.404 08:30:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:59.404 [2024-11-28 08:30:41.609421] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 
00:31:59.404 [2024-11-28 08:30:41.610411] Starting SPDK v25.01-pre git sha1 27aaaa748 / DPDK 24.03.0 initialization... 00:31:59.404 [2024-11-28 08:30:41.610469] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:59.664 [2024-11-28 08:30:41.679264] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:59.664 [2024-11-28 08:30:41.723293] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:59.664 [2024-11-28 08:30:41.723330] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:59.664 [2024-11-28 08:30:41.723337] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:59.664 [2024-11-28 08:30:41.723345] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:59.664 [2024-11-28 08:30:41.723350] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:59.664 [2024-11-28 08:30:41.724778] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:59.664 [2024-11-28 08:30:41.724875] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:31:59.664 [2024-11-28 08:30:41.724939] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:31:59.664 [2024-11-28 08:30:41.724940] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:59.664 [2024-11-28 08:30:41.793368] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:31:59.664 [2024-11-28 08:30:41.793458] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
00:31:59.664 [2024-11-28 08:30:41.793626] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:31:59.664 [2024-11-28 08:30:41.793903] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:31:59.664 [2024-11-28 08:30:41.794090] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:31:59.664 08:30:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:59.664 08:30:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:31:59.664 08:30:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:31:59.664 08:30:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:59.664 08:30:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:59.664 08:30:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:59.664 08:30:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:31:59.664 08:30:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:59.664 08:30:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:59.664 [2024-11-28 08:30:41.861703] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:59.664 08:30:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:59.664 08:30:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:31:59.664 08:30:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:59.664 08:30:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:59.664 Malloc0 00:31:59.664 08:30:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:59.664 08:30:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:31:59.665 08:30:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:59.665 08:30:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:59.665 08:30:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:59.665 08:30:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:31:59.665 08:30:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:59.665 08:30:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:59.665 08:30:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:59.665 08:30:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:59.665 08:30:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:59.665 08:30:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:59.665 [2024-11-28 
08:30:41.917670] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:59.665 08:30:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:59.665 08:30:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:31:59.665 test case1: single bdev can't be used in multiple subsystems 00:31:59.665 08:30:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:31:59.665 08:30:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:59.665 08:30:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:59.665 08:30:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:59.924 08:30:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:31:59.924 08:30:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:59.924 08:30:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:59.924 08:30:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:59.924 08:30:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:31:59.924 08:30:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:31:59.924 08:30:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:59.924 08:30:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:59.924 [2024-11-28 08:30:41.941393] bdev.c:8515:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:31:59.924 [2024-11-28 08:30:41.941414] subsystem.c:2156:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:31:59.924 [2024-11-28 08:30:41.941421] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:59.924 request: 00:31:59.924 { 00:31:59.924 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:31:59.924 "namespace": { 00:31:59.924 "bdev_name": "Malloc0", 00:31:59.924 "no_auto_visible": false, 00:31:59.924 "hide_metadata": false 00:31:59.924 }, 00:31:59.924 "method": "nvmf_subsystem_add_ns", 00:31:59.924 "req_id": 1 00:31:59.924 } 00:31:59.924 Got JSON-RPC error response 00:31:59.924 response: 00:31:59.924 { 00:31:59.924 "code": -32602, 00:31:59.924 "message": "Invalid parameters" 00:31:59.924 } 00:31:59.924 08:30:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:31:59.924 08:30:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:31:59.924 08:30:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:31:59.924 08:30:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:31:59.924 Adding namespace failed - expected result. 
00:31:59.924 08:30:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:31:59.924 test case2: host connect to nvmf target in multiple paths 00:31:59.924 08:30:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:31:59.924 08:30:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:59.924 08:30:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:59.924 [2024-11-28 08:30:41.953493] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:31:59.924 08:30:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:59.924 08:30:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:32:00.184 08:30:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:32:00.443 08:30:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:32:00.443 08:30:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 00:32:00.443 08:30:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:32:00.443 08:30:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic 
-- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:32:00.443 08:30:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2 00:32:02.349 08:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:32:02.349 08:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:32:02.349 08:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:32:02.349 08:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:32:02.349 08:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:32:02.349 08:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0 00:32:02.349 08:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:32:02.349 [global] 00:32:02.349 thread=1 00:32:02.349 invalidate=1 00:32:02.349 rw=write 00:32:02.349 time_based=1 00:32:02.349 runtime=1 00:32:02.349 ioengine=libaio 00:32:02.349 direct=1 00:32:02.349 bs=4096 00:32:02.349 iodepth=1 00:32:02.349 norandommap=0 00:32:02.349 numjobs=1 00:32:02.349 00:32:02.349 verify_dump=1 00:32:02.349 verify_backlog=512 00:32:02.349 verify_state_save=0 00:32:02.349 do_verify=1 00:32:02.349 verify=crc32c-intel 00:32:02.349 [job0] 00:32:02.349 filename=/dev/nvme0n1 00:32:02.349 Could not set queue depth (nvme0n1) 00:32:02.608 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:32:02.608 fio-3.35 00:32:02.608 Starting 1 thread 00:32:03.983 00:32:03.983 job0: (groupid=0, jobs=1): err= 0: pid=1583478: Thu Nov 28 
08:30:45 2024 00:32:03.983 read: IOPS=22, BW=89.8KiB/s (92.0kB/s)(92.0KiB/1024msec) 00:32:03.983 slat (nsec): min=9875, max=25643, avg=22241.87, stdev=2810.37 00:32:03.983 clat (usec): min=40873, max=41296, avg=40981.96, stdev=85.60 00:32:03.983 lat (usec): min=40895, max=41306, avg=41004.20, stdev=83.49 00:32:03.983 clat percentiles (usec): 00:32:03.983 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:32:03.983 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:32:03.983 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:32:03.983 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:32:03.983 | 99.99th=[41157] 00:32:03.983 write: IOPS=500, BW=2000KiB/s (2048kB/s)(2048KiB/1024msec); 0 zone resets 00:32:03.983 slat (nsec): min=9181, max=40175, avg=10269.28, stdev=1707.08 00:32:03.983 clat (usec): min=134, max=379, avg=144.32, stdev=11.22 00:32:03.983 lat (usec): min=144, max=420, avg=154.59, stdev=12.54 00:32:03.983 clat percentiles (usec): 00:32:03.983 | 1.00th=[ 137], 5.00th=[ 139], 10.00th=[ 141], 20.00th=[ 141], 00:32:03.983 | 30.00th=[ 143], 40.00th=[ 143], 50.00th=[ 143], 60.00th=[ 145], 00:32:03.983 | 70.00th=[ 147], 80.00th=[ 147], 90.00th=[ 149], 95.00th=[ 151], 00:32:03.983 | 99.00th=[ 155], 99.50th=[ 163], 99.90th=[ 379], 99.95th=[ 379], 00:32:03.983 | 99.99th=[ 379] 00:32:03.983 bw ( KiB/s): min= 4087, max= 4087, per=100.00%, avg=4087.00, stdev= 0.00, samples=1 00:32:03.983 iops : min= 1021, max= 1021, avg=1021.00, stdev= 0.00, samples=1 00:32:03.983 lat (usec) : 250=95.51%, 500=0.19% 00:32:03.983 lat (msec) : 50=4.30% 00:32:03.983 cpu : usr=0.29%, sys=0.49%, ctx=535, majf=0, minf=1 00:32:03.983 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:03.983 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:03.983 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:03.983 issued rwts: 
total=23,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:03.983 latency : target=0, window=0, percentile=100.00%, depth=1 00:32:03.983 00:32:03.983 Run status group 0 (all jobs): 00:32:03.984 READ: bw=89.8KiB/s (92.0kB/s), 89.8KiB/s-89.8KiB/s (92.0kB/s-92.0kB/s), io=92.0KiB (94.2kB), run=1024-1024msec 00:32:03.984 WRITE: bw=2000KiB/s (2048kB/s), 2000KiB/s-2000KiB/s (2048kB/s-2048kB/s), io=2048KiB (2097kB), run=1024-1024msec 00:32:03.984 00:32:03.984 Disk stats (read/write): 00:32:03.984 nvme0n1: ios=68/512, merge=0/0, ticks=880/69, in_queue=949, util=95.39% 00:32:03.984 08:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:32:03.984 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:32:03.984 08:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:32:03.984 08:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 00:32:03.984 08:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:32:03.984 08:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:32:03.984 08:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:32:03.984 08:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:32:03.984 08:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 00:32:03.984 08:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:32:03.984 08:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:32:03.984 08:30:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:32:03.984 08:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:32:03.984 08:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:03.984 08:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:32:03.984 08:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:03.984 08:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:03.984 rmmod nvme_tcp 00:32:03.984 rmmod nvme_fabrics 00:32:03.984 rmmod nvme_keyring 00:32:04.242 08:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:04.242 08:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:32:04.242 08:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:32:04.242 08:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 1582861 ']' 00:32:04.242 08:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 1582861 00:32:04.242 08:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 1582861 ']' 00:32:04.242 08:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 1582861 00:32:04.242 08:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 00:32:04.242 08:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:04.242 08:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1582861 
00:32:04.242 08:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:32:04.242 08:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:32:04.242 08:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1582861' 00:32:04.242 killing process with pid 1582861 00:32:04.242 08:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@973 -- # kill 1582861 00:32:04.242 08:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@978 -- # wait 1582861 00:32:04.501 08:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:32:04.501 08:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:32:04.501 08:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:32:04.501 08:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:32:04.501 08:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:32:04.501 08:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:32:04.501 08:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:32:04.501 08:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:04.501 08:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:04.501 08:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:04.501 08:30:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:04.501 08:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:06.405 08:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:06.405 00:32:06.405 real 0m12.965s 00:32:06.405 user 0m24.050s 00:32:06.405 sys 0m5.892s 00:32:06.405 08:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:06.405 08:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:32:06.405 ************************************ 00:32:06.405 END TEST nvmf_nmic 00:32:06.405 ************************************ 00:32:06.405 08:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:32:06.405 08:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:32:06.405 08:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:06.405 08:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:32:06.405 ************************************ 00:32:06.405 START TEST nvmf_fio_target 00:32:06.405 ************************************ 00:32:06.405 08:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:32:06.665 * Looking for test storage... 
00:32:06.665 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:32:06.665 08:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:32:06.665 08:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lcov --version 00:32:06.665 08:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:32:06.665 08:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:32:06.665 08:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:06.665 08:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:06.665 08:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:06.665 08:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:32:06.665 08:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:32:06.665 08:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:32:06.665 08:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:32:06.665 08:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:32:06.665 08:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:32:06.665 08:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:32:06.665 08:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 
00:32:06.665 08:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:32:06.665 08:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:32:06.665 08:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:06.665 08:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:32:06.665 08:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:32:06.665 08:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:32:06.665 08:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:06.665 08:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:32:06.665 08:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:32:06.665 08:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:32:06.665 08:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:32:06.665 08:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:06.665 08:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:32:06.665 08:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:32:06.665 08:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:06.665 08:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:06.665 
08:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:32:06.665 08:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:06.665 08:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:32:06.665 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:06.665 --rc genhtml_branch_coverage=1 00:32:06.665 --rc genhtml_function_coverage=1 00:32:06.665 --rc genhtml_legend=1 00:32:06.665 --rc geninfo_all_blocks=1 00:32:06.665 --rc geninfo_unexecuted_blocks=1 00:32:06.665 00:32:06.665 ' 00:32:06.665 08:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:32:06.665 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:06.665 --rc genhtml_branch_coverage=1 00:32:06.665 --rc genhtml_function_coverage=1 00:32:06.665 --rc genhtml_legend=1 00:32:06.665 --rc geninfo_all_blocks=1 00:32:06.665 --rc geninfo_unexecuted_blocks=1 00:32:06.665 00:32:06.665 ' 00:32:06.665 08:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:32:06.665 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:06.665 --rc genhtml_branch_coverage=1 00:32:06.665 --rc genhtml_function_coverage=1 00:32:06.665 --rc genhtml_legend=1 00:32:06.665 --rc geninfo_all_blocks=1 00:32:06.665 --rc geninfo_unexecuted_blocks=1 00:32:06.665 00:32:06.665 ' 00:32:06.665 08:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:32:06.665 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:06.666 --rc genhtml_branch_coverage=1 00:32:06.666 --rc genhtml_function_coverage=1 00:32:06.666 --rc genhtml_legend=1 00:32:06.666 --rc geninfo_all_blocks=1 
00:32:06.666 --rc geninfo_unexecuted_blocks=1 00:32:06.666 00:32:06.666 ' 00:32:06.666 08:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:06.666 08:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:32:06.666 08:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:06.666 08:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:06.666 08:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:06.666 08:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:06.666 08:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:06.666 08:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:06.666 08:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:06.666 08:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:06.666 08:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:06.666 08:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:06.666 08:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:32:06.666 08:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:32:06.666 
08:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:06.666 08:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:06.666 08:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:06.666 08:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:06.666 08:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:06.666 08:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:32:06.666 08:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:06.666 08:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:06.666 08:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:06.666 08:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:06.666 08:30:48 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:06.666 08:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:06.666 08:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:32:06.666 08:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:06.666 08:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:32:06.666 08:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:06.666 08:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:06.666 08:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:06.666 08:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:06.666 08:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:06.666 08:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:32:06.666 08:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:32:06.666 08:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:06.666 08:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:06.666 08:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:06.666 
08:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:32:06.666 08:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:32:06.666 08:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:06.666 08:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:32:06.666 08:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:32:06.666 08:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:06.666 08:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:32:06.666 08:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:32:06.666 08:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:32:06.666 08:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:06.666 08:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:06.666 08:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:06.666 08:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:32:06.666 08:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:32:06.666 08:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:32:06.666 08:30:48 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:32:11.939 08:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:11.939 08:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:32:11.939 08:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:11.939 08:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:11.939 08:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:11.939 08:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:11.939 08:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:11.939 08:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:32:11.939 08:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:11.939 08:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:32:11.939 08:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:32:11.939 08:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:32:11.939 08:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:32:11.939 08:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:32:11.939 08:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:32:11.939 08:30:54 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:11.939 08:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:11.939 08:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:11.939 08:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:11.939 08:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:11.939 08:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:11.939 08:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:11.939 08:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:11.939 08:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:11.939 08:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:11.939 08:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:11.939 08:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:11.939 08:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:11.939 08:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:11.939 08:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:11.939 08:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:11.939 08:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:11.939 08:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:11.939 08:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:11.939 08:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:32:11.939 Found 0000:86:00.0 (0x8086 - 0x159b) 00:32:11.939 08:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:11.939 08:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:11.939 08:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:11.939 08:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:11.939 08:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:11.939 08:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:11.939 08:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:32:11.939 Found 0000:86:00.1 (0x8086 - 0x159b) 00:32:11.939 08:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:11.939 
08:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:11.940 08:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:11.940 08:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:11.940 08:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:11.940 08:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:11.940 08:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:11.940 08:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:11.940 08:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:11.940 08:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:11.940 08:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:11.940 08:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:11.940 08:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:11.940 08:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:11.940 08:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:11.940 08:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:32:11.940 Found net 
devices under 0000:86:00.0: cvl_0_0 00:32:11.940 08:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:11.940 08:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:11.940 08:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:11.940 08:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:11.940 08:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:11.940 08:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:11.940 08:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:11.940 08:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:11.940 08:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:32:11.940 Found net devices under 0000:86:00.1: cvl_0_1 00:32:11.940 08:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:11.940 08:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:32:11.940 08:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # is_hw=yes 00:32:11.940 08:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:32:11.940 08:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:32:11.940 08:30:54 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:32:11.940 08:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:11.940 08:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:11.940 08:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:11.940 08:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:11.940 08:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:11.940 08:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:11.940 08:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:11.940 08:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:11.940 08:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:11.940 08:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:11.940 08:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:11.940 08:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:11.940 08:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:11.940 08:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add 
cvl_0_0_ns_spdk 00:32:11.940 08:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:11.940 08:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:11.940 08:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:11.940 08:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:11.940 08:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:12.199 08:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:12.199 08:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:12.199 08:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:12.199 08:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:12.199 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:12.199 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.416 ms 00:32:12.199 00:32:12.199 --- 10.0.0.2 ping statistics --- 00:32:12.199 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:12.199 rtt min/avg/max/mdev = 0.416/0.416/0.416/0.000 ms 00:32:12.199 08:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:12.199 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:12.199 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.196 ms 00:32:12.199 00:32:12.199 --- 10.0.0.1 ping statistics --- 00:32:12.199 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:12.199 rtt min/avg/max/mdev = 0.196/0.196/0.196/0.000 ms 00:32:12.199 08:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:12.199 08:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@450 -- # return 0 00:32:12.199 08:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:32:12.200 08:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:12.200 08:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:32:12.200 08:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:32:12.200 08:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:12.200 08:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:32:12.200 08:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:32:12.200 08:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:32:12.200 08:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:32:12.200 08:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:12.200 08:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:32:12.200 08:30:54 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=1587229 00:32:12.200 08:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 1587229 00:32:12.200 08:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:32:12.200 08:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 1587229 ']' 00:32:12.200 08:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:12.200 08:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:12.200 08:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:12.200 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:12.200 08:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:12.200 08:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:32:12.200 [2024-11-28 08:30:54.359614] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:32:12.200 [2024-11-28 08:30:54.360625] Starting SPDK v25.01-pre git sha1 27aaaa748 / DPDK 24.03.0 initialization... 
00:32:12.200 [2024-11-28 08:30:54.360667] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:12.200 [2024-11-28 08:30:54.428060] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:32:12.459 [2024-11-28 08:30:54.472832] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:12.459 [2024-11-28 08:30:54.472866] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:12.459 [2024-11-28 08:30:54.472873] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:12.459 [2024-11-28 08:30:54.472879] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:12.459 [2024-11-28 08:30:54.472884] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:12.459 [2024-11-28 08:30:54.474255] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:12.459 [2024-11-28 08:30:54.474274] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:32:12.459 [2024-11-28 08:30:54.474367] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:32:12.459 [2024-11-28 08:30:54.474369] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:12.459 [2024-11-28 08:30:54.544041] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:32:12.459 [2024-11-28 08:30:54.544157] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:32:12.459 [2024-11-28 08:30:54.544286] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
00:32:12.459 [2024-11-28 08:30:54.544514] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:32:12.459 [2024-11-28 08:30:54.544697] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:32:12.459 08:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:12.459 08:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 00:32:12.459 08:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:32:12.459 08:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:12.459 08:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:32:12.459 08:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:12.459 08:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:32:12.720 [2024-11-28 08:30:54.774935] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:12.720 08:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:32:12.981 08:30:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:32:12.981 08:30:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 
00:32:12.981 08:30:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:32:12.981 08:30:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:32:13.239 08:30:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:32:13.239 08:30:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:32:13.498 08:30:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:32:13.498 08:30:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:32:13.757 08:30:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:32:14.016 08:30:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:32:14.016 08:30:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:32:14.016 08:30:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:32:14.016 08:30:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:32:14.275 08:30:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 
00:32:14.275 08:30:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:32:14.534 08:30:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:32:14.793 08:30:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:32:14.793 08:30:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:32:14.793 08:30:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:32:14.793 08:30:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:32:15.052 08:30:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:15.311 [2024-11-28 08:30:57.431014] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:15.311 08:30:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:32:15.570 08:30:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:32:15.828 08:30:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:32:15.828 08:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:32:15.828 08:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0 00:32:15.829 08:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:32:15.829 08:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]] 00:32:15.829 08:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4 00:32:15.829 08:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2 00:32:18.363 08:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:32:18.363 08:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:32:18.364 08:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:32:18.364 08:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4 00:32:18.364 08:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:32:18.364 08:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
common/autotest_common.sh@1212 -- # return 0 00:32:18.364 08:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:32:18.364 [global] 00:32:18.364 thread=1 00:32:18.364 invalidate=1 00:32:18.364 rw=write 00:32:18.364 time_based=1 00:32:18.364 runtime=1 00:32:18.364 ioengine=libaio 00:32:18.364 direct=1 00:32:18.364 bs=4096 00:32:18.364 iodepth=1 00:32:18.364 norandommap=0 00:32:18.364 numjobs=1 00:32:18.364 00:32:18.364 verify_dump=1 00:32:18.364 verify_backlog=512 00:32:18.364 verify_state_save=0 00:32:18.364 do_verify=1 00:32:18.364 verify=crc32c-intel 00:32:18.364 [job0] 00:32:18.364 filename=/dev/nvme0n1 00:32:18.364 [job1] 00:32:18.364 filename=/dev/nvme0n2 00:32:18.364 [job2] 00:32:18.364 filename=/dev/nvme0n3 00:32:18.364 [job3] 00:32:18.364 filename=/dev/nvme0n4 00:32:18.364 Could not set queue depth (nvme0n1) 00:32:18.364 Could not set queue depth (nvme0n2) 00:32:18.364 Could not set queue depth (nvme0n3) 00:32:18.364 Could not set queue depth (nvme0n4) 00:32:18.364 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:32:18.364 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:32:18.364 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:32:18.364 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:32:18.364 fio-3.35 00:32:18.364 Starting 4 threads 00:32:19.742 00:32:19.742 job0: (groupid=0, jobs=1): err= 0: pid=1588348: Thu Nov 28 08:31:01 2024 00:32:19.742 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:32:19.742 slat (nsec): min=6516, max=42543, avg=8309.79, stdev=1730.97 00:32:19.742 clat (usec): min=232, max=41194, avg=434.94, stdev=2541.39 00:32:19.742 lat (usec): min=240, 
max=41204, avg=443.25, stdev=2542.19 00:32:19.742 clat percentiles (usec): 00:32:19.742 | 1.00th=[ 245], 5.00th=[ 249], 10.00th=[ 251], 20.00th=[ 255], 00:32:19.742 | 30.00th=[ 258], 40.00th=[ 262], 50.00th=[ 265], 60.00th=[ 269], 00:32:19.742 | 70.00th=[ 273], 80.00th=[ 285], 90.00th=[ 330], 95.00th=[ 379], 00:32:19.742 | 99.00th=[ 404], 99.50th=[ 412], 99.90th=[41157], 99.95th=[41157], 00:32:19.742 | 99.99th=[41157] 00:32:19.742 write: IOPS=1590, BW=6362KiB/s (6514kB/s)(6368KiB/1001msec); 0 zone resets 00:32:19.742 slat (nsec): min=4756, max=40363, avg=11608.81, stdev=2138.95 00:32:19.742 clat (usec): min=153, max=282, avg=182.99, stdev=14.29 00:32:19.742 lat (usec): min=161, max=307, avg=194.60, stdev=14.69 00:32:19.742 clat percentiles (usec): 00:32:19.742 | 1.00th=[ 161], 5.00th=[ 165], 10.00th=[ 167], 20.00th=[ 172], 00:32:19.742 | 30.00th=[ 176], 40.00th=[ 178], 50.00th=[ 182], 60.00th=[ 184], 00:32:19.742 | 70.00th=[ 188], 80.00th=[ 192], 90.00th=[ 200], 95.00th=[ 210], 00:32:19.742 | 99.00th=[ 233], 99.50th=[ 243], 99.90th=[ 273], 99.95th=[ 285], 00:32:19.742 | 99.99th=[ 285] 00:32:19.742 bw ( KiB/s): min= 4600, max= 4600, per=21.20%, avg=4600.00, stdev= 0.00, samples=1 00:32:19.742 iops : min= 1150, max= 1150, avg=1150.00, stdev= 0.00, samples=1 00:32:19.742 lat (usec) : 250=54.19%, 500=45.62% 00:32:19.743 lat (msec) : 50=0.19% 00:32:19.743 cpu : usr=2.50%, sys=4.70%, ctx=3131, majf=0, minf=1 00:32:19.743 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:19.743 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:19.743 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:19.743 issued rwts: total=1536,1592,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:19.743 latency : target=0, window=0, percentile=100.00%, depth=1 00:32:19.743 job1: (groupid=0, jobs=1): err= 0: pid=1588349: Thu Nov 28 08:31:01 2024 00:32:19.743 read: IOPS=21, BW=87.9KiB/s (90.0kB/s)(88.0KiB/1001msec) 
00:32:19.743 slat (nsec): min=9778, max=26429, avg=20050.14, stdev=4960.18 00:32:19.743 clat (usec): min=40683, max=41103, avg=40955.01, stdev=85.01 00:32:19.743 lat (usec): min=40692, max=41118, avg=40975.06, stdev=86.99 00:32:19.743 clat percentiles (usec): 00:32:19.743 | 1.00th=[40633], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:32:19.743 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:32:19.743 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:32:19.743 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:32:19.743 | 99.99th=[41157] 00:32:19.743 write: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec); 0 zone resets 00:32:19.743 slat (nsec): min=9022, max=39535, avg=10013.27, stdev=1488.02 00:32:19.743 clat (usec): min=154, max=350, avg=181.80, stdev=14.18 00:32:19.743 lat (usec): min=164, max=389, avg=191.81, stdev=14.89 00:32:19.743 clat percentiles (usec): 00:32:19.743 | 1.00th=[ 161], 5.00th=[ 165], 10.00th=[ 169], 20.00th=[ 172], 00:32:19.743 | 30.00th=[ 176], 40.00th=[ 178], 50.00th=[ 180], 60.00th=[ 182], 00:32:19.743 | 70.00th=[ 186], 80.00th=[ 190], 90.00th=[ 198], 95.00th=[ 204], 00:32:19.743 | 99.00th=[ 217], 99.50th=[ 233], 99.90th=[ 351], 99.95th=[ 351], 00:32:19.743 | 99.99th=[ 351] 00:32:19.743 bw ( KiB/s): min= 4096, max= 4096, per=18.87%, avg=4096.00, stdev= 0.00, samples=1 00:32:19.743 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:32:19.743 lat (usec) : 250=95.51%, 500=0.37% 00:32:19.743 lat (msec) : 50=4.12% 00:32:19.743 cpu : usr=0.20%, sys=0.50%, ctx=534, majf=0, minf=2 00:32:19.743 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:19.743 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:19.743 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:19.743 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:19.743 latency : target=0, window=0, 
percentile=100.00%, depth=1 00:32:19.743 job2: (groupid=0, jobs=1): err= 0: pid=1588350: Thu Nov 28 08:31:01 2024 00:32:19.743 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:32:19.743 slat (nsec): min=7405, max=36279, avg=8703.51, stdev=1707.97 00:32:19.743 clat (usec): min=193, max=41130, avg=265.14, stdev=903.53 00:32:19.743 lat (usec): min=201, max=41139, avg=273.85, stdev=903.53 00:32:19.743 clat percentiles (usec): 00:32:19.743 | 1.00th=[ 219], 5.00th=[ 231], 10.00th=[ 235], 20.00th=[ 239], 00:32:19.743 | 30.00th=[ 241], 40.00th=[ 243], 50.00th=[ 245], 60.00th=[ 247], 00:32:19.743 | 70.00th=[ 249], 80.00th=[ 251], 90.00th=[ 255], 95.00th=[ 265], 00:32:19.743 | 99.00th=[ 281], 99.50th=[ 297], 99.90th=[ 429], 99.95th=[ 461], 00:32:19.743 | 99.99th=[41157] 00:32:19.743 write: IOPS=2300, BW=9203KiB/s (9424kB/s)(9212KiB/1001msec); 0 zone resets 00:32:19.743 slat (nsec): min=6722, max=38260, avg=12118.30, stdev=2957.75 00:32:19.743 clat (usec): min=138, max=352, avg=172.74, stdev=19.87 00:32:19.743 lat (usec): min=149, max=382, avg=184.86, stdev=20.32 00:32:19.743 clat percentiles (usec): 00:32:19.743 | 1.00th=[ 143], 5.00th=[ 147], 10.00th=[ 149], 20.00th=[ 157], 00:32:19.743 | 30.00th=[ 163], 40.00th=[ 167], 50.00th=[ 172], 60.00th=[ 176], 00:32:19.743 | 70.00th=[ 180], 80.00th=[ 186], 90.00th=[ 198], 95.00th=[ 210], 00:32:19.743 | 99.00th=[ 229], 99.50th=[ 249], 99.90th=[ 306], 99.95th=[ 322], 00:32:19.743 | 99.99th=[ 355] 00:32:19.743 bw ( KiB/s): min= 8448, max= 8448, per=38.93%, avg=8448.00, stdev= 0.00, samples=1 00:32:19.743 iops : min= 2112, max= 2112, avg=2112.00, stdev= 0.00, samples=1 00:32:19.743 lat (usec) : 250=89.31%, 500=10.66% 00:32:19.743 lat (msec) : 50=0.02% 00:32:19.743 cpu : usr=3.90%, sys=6.70%, ctx=4353, majf=0, minf=1 00:32:19.743 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:19.743 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:19.743 complete : 0=0.0%, 
4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:19.743 issued rwts: total=2048,2303,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:19.743 latency : target=0, window=0, percentile=100.00%, depth=1 00:32:19.743 job3: (groupid=0, jobs=1): err= 0: pid=1588351: Thu Nov 28 08:31:01 2024 00:32:19.743 read: IOPS=755, BW=3021KiB/s (3093kB/s)(3024KiB/1001msec) 00:32:19.743 slat (nsec): min=8830, max=28745, avg=10081.38, stdev=2263.83 00:32:19.743 clat (usec): min=235, max=41128, avg=1023.59, stdev=5491.88 00:32:19.743 lat (usec): min=245, max=41139, avg=1033.67, stdev=5493.53 00:32:19.743 clat percentiles (usec): 00:32:19.743 | 1.00th=[ 243], 5.00th=[ 247], 10.00th=[ 251], 20.00th=[ 255], 00:32:19.743 | 30.00th=[ 260], 40.00th=[ 265], 50.00th=[ 265], 60.00th=[ 273], 00:32:19.743 | 70.00th=[ 277], 80.00th=[ 285], 90.00th=[ 297], 95.00th=[ 318], 00:32:19.743 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:32:19.743 | 99.99th=[41157] 00:32:19.743 write: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec); 0 zone resets 00:32:19.743 slat (nsec): min=11317, max=80437, avg=15321.82, stdev=6293.36 00:32:19.743 clat (usec): min=141, max=365, avg=192.24, stdev=18.59 00:32:19.743 lat (usec): min=153, max=378, avg=207.56, stdev=20.34 00:32:19.743 clat percentiles (usec): 00:32:19.743 | 1.00th=[ 151], 5.00th=[ 169], 10.00th=[ 176], 20.00th=[ 182], 00:32:19.743 | 30.00th=[ 184], 40.00th=[ 188], 50.00th=[ 190], 60.00th=[ 194], 00:32:19.743 | 70.00th=[ 198], 80.00th=[ 204], 90.00th=[ 215], 95.00th=[ 223], 00:32:19.743 | 99.00th=[ 249], 99.50th=[ 265], 99.90th=[ 359], 99.95th=[ 367], 00:32:19.743 | 99.99th=[ 367] 00:32:19.743 bw ( KiB/s): min= 4096, max= 4096, per=18.87%, avg=4096.00, stdev= 0.00, samples=1 00:32:19.743 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:32:19.743 lat (usec) : 250=60.84%, 500=38.37% 00:32:19.743 lat (msec) : 50=0.79% 00:32:19.743 cpu : usr=1.30%, sys=3.60%, ctx=1781, majf=0, minf=1 00:32:19.743 IO depths : 
1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:19.743 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:19.743 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:19.743 issued rwts: total=756,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:19.743 latency : target=0, window=0, percentile=100.00%, depth=1 00:32:19.743 00:32:19.743 Run status group 0 (all jobs): 00:32:19.743 READ: bw=17.0MiB/s (17.8MB/s), 87.9KiB/s-8184KiB/s (90.0kB/s-8380kB/s), io=17.0MiB (17.9MB), run=1001-1001msec 00:32:19.743 WRITE: bw=21.2MiB/s (22.2MB/s), 2046KiB/s-9203KiB/s (2095kB/s-9424kB/s), io=21.2MiB (22.2MB), run=1001-1001msec 00:32:19.743 00:32:19.743 Disk stats (read/write): 00:32:19.743 nvme0n1: ios=1150/1536, merge=0/0, ticks=708/255, in_queue=963, util=85.67% 00:32:19.743 nvme0n2: ios=68/512, merge=0/0, ticks=807/96, in_queue=903, util=90.96% 00:32:19.743 nvme0n3: ios=1696/2048, merge=0/0, ticks=1333/317, in_queue=1650, util=93.44% 00:32:19.743 nvme0n4: ios=537/660, merge=0/0, ticks=1580/115, in_queue=1695, util=94.22% 00:32:19.743 08:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:32:19.743 [global] 00:32:19.743 thread=1 00:32:19.743 invalidate=1 00:32:19.743 rw=randwrite 00:32:19.743 time_based=1 00:32:19.743 runtime=1 00:32:19.743 ioengine=libaio 00:32:19.743 direct=1 00:32:19.743 bs=4096 00:32:19.743 iodepth=1 00:32:19.743 norandommap=0 00:32:19.743 numjobs=1 00:32:19.743 00:32:19.743 verify_dump=1 00:32:19.743 verify_backlog=512 00:32:19.743 verify_state_save=0 00:32:19.743 do_verify=1 00:32:19.743 verify=crc32c-intel 00:32:19.743 [job0] 00:32:19.743 filename=/dev/nvme0n1 00:32:19.743 [job1] 00:32:19.743 filename=/dev/nvme0n2 00:32:19.743 [job2] 00:32:19.743 filename=/dev/nvme0n3 00:32:19.743 [job3] 00:32:19.743 filename=/dev/nvme0n4 00:32:19.743 Could 
not set queue depth (nvme0n1) 00:32:19.743 Could not set queue depth (nvme0n2) 00:32:19.743 Could not set queue depth (nvme0n3) 00:32:19.743 Could not set queue depth (nvme0n4) 00:32:19.743 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:32:19.743 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:32:19.743 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:32:19.743 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:32:19.743 fio-3.35 00:32:19.743 Starting 4 threads 00:32:21.121 00:32:21.121 job0: (groupid=0, jobs=1): err= 0: pid=1588717: Thu Nov 28 08:31:03 2024 00:32:21.121 read: IOPS=21, BW=86.7KiB/s (88.8kB/s)(88.0KiB/1015msec) 00:32:21.121 slat (nsec): min=11897, max=35308, avg=17837.73, stdev=5695.59 00:32:21.121 clat (usec): min=40893, max=42043, avg=41124.32, stdev=367.61 00:32:21.121 lat (usec): min=40911, max=42064, avg=41142.16, stdev=367.64 00:32:21.121 clat percentiles (usec): 00:32:21.121 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:32:21.121 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:32:21.121 | 70.00th=[41157], 80.00th=[41157], 90.00th=[42206], 95.00th=[42206], 00:32:21.121 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:32:21.121 | 99.99th=[42206] 00:32:21.121 write: IOPS=504, BW=2018KiB/s (2066kB/s)(2048KiB/1015msec); 0 zone resets 00:32:21.121 slat (nsec): min=10371, max=34770, avg=12792.57, stdev=1712.07 00:32:21.122 clat (usec): min=163, max=320, avg=197.19, stdev=17.63 00:32:21.122 lat (usec): min=175, max=332, avg=209.98, stdev=17.90 00:32:21.122 clat percentiles (usec): 00:32:21.122 | 1.00th=[ 174], 5.00th=[ 178], 10.00th=[ 182], 20.00th=[ 186], 00:32:21.122 | 30.00th=[ 188], 40.00th=[ 192], 50.00th=[ 194], 60.00th=[ 198], 
00:32:21.122 | 70.00th=[ 202], 80.00th=[ 206], 90.00th=[ 217], 95.00th=[ 229], 00:32:21.122 | 99.00th=[ 273], 99.50th=[ 281], 99.90th=[ 322], 99.95th=[ 322], 00:32:21.122 | 99.99th=[ 322] 00:32:21.122 bw ( KiB/s): min= 4096, max= 4096, per=26.35%, avg=4096.00, stdev= 0.00, samples=1 00:32:21.122 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:32:21.122 lat (usec) : 250=94.38%, 500=1.50% 00:32:21.122 lat (msec) : 50=4.12% 00:32:21.122 cpu : usr=0.30%, sys=1.08%, ctx=534, majf=0, minf=1 00:32:21.122 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:21.122 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:21.122 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:21.122 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:21.122 latency : target=0, window=0, percentile=100.00%, depth=1 00:32:21.122 job1: (groupid=0, jobs=1): err= 0: pid=1588718: Thu Nov 28 08:31:03 2024 00:32:21.122 read: IOPS=22, BW=90.9KiB/s (93.1kB/s)(92.0KiB/1012msec) 00:32:21.122 slat (nsec): min=10181, max=52212, avg=22161.74, stdev=7811.22 00:32:21.122 clat (usec): min=287, max=41984, avg=39251.28, stdev=8497.48 00:32:21.122 lat (usec): min=301, max=42004, avg=39273.45, stdev=8499.13 00:32:21.122 clat percentiles (usec): 00:32:21.122 | 1.00th=[ 289], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:32:21.122 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:32:21.122 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41681], 00:32:21.122 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:32:21.122 | 99.99th=[42206] 00:32:21.122 write: IOPS=505, BW=2024KiB/s (2072kB/s)(2048KiB/1012msec); 0 zone resets 00:32:21.122 slat (nsec): min=10031, max=42113, avg=12717.73, stdev=2871.64 00:32:21.122 clat (usec): min=150, max=394, avg=194.94, stdev=21.33 00:32:21.122 lat (usec): min=162, max=432, avg=207.66, stdev=21.93 
00:32:21.122 clat percentiles (usec): 00:32:21.122 | 1.00th=[ 155], 5.00th=[ 169], 10.00th=[ 176], 20.00th=[ 182], 00:32:21.122 | 30.00th=[ 186], 40.00th=[ 188], 50.00th=[ 192], 60.00th=[ 196], 00:32:21.122 | 70.00th=[ 200], 80.00th=[ 206], 90.00th=[ 217], 95.00th=[ 229], 00:32:21.122 | 99.00th=[ 269], 99.50th=[ 289], 99.90th=[ 396], 99.95th=[ 396], 00:32:21.122 | 99.99th=[ 396] 00:32:21.122 bw ( KiB/s): min= 4096, max= 4096, per=26.35%, avg=4096.00, stdev= 0.00, samples=1 00:32:21.122 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:32:21.122 lat (usec) : 250=93.27%, 500=2.62% 00:32:21.122 lat (msec) : 50=4.11% 00:32:21.122 cpu : usr=0.20%, sys=1.19%, ctx=535, majf=0, minf=1 00:32:21.122 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:21.122 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:21.122 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:21.122 issued rwts: total=23,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:21.122 latency : target=0, window=0, percentile=100.00%, depth=1 00:32:21.122 job2: (groupid=0, jobs=1): err= 0: pid=1588719: Thu Nov 28 08:31:03 2024 00:32:21.122 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:32:21.122 slat (nsec): min=5765, max=32133, avg=7433.55, stdev=1204.72 00:32:21.122 clat (usec): min=196, max=41010, avg=277.60, stdev=1308.69 00:32:21.122 lat (usec): min=204, max=41018, avg=285.03, stdev=1308.80 00:32:21.122 clat percentiles (usec): 00:32:21.122 | 1.00th=[ 200], 5.00th=[ 202], 10.00th=[ 204], 20.00th=[ 208], 00:32:21.122 | 30.00th=[ 210], 40.00th=[ 221], 50.00th=[ 233], 60.00th=[ 245], 00:32:21.122 | 70.00th=[ 247], 80.00th=[ 249], 90.00th=[ 253], 95.00th=[ 260], 00:32:21.122 | 99.00th=[ 293], 99.50th=[ 343], 99.90th=[14353], 99.95th=[40633], 00:32:21.122 | 99.99th=[41157] 00:32:21.122 write: IOPS=2406, BW=9626KiB/s (9857kB/s)(9636KiB/1001msec); 0 zone resets 00:32:21.122 slat (nsec): min=9092, 
max=35621, avg=10145.67, stdev=1235.70 00:32:21.122 clat (usec): min=135, max=988, avg=158.91, stdev=29.61 00:32:21.122 lat (usec): min=145, max=998, avg=169.06, stdev=29.70 00:32:21.122 clat percentiles (usec): 00:32:21.122 | 1.00th=[ 141], 5.00th=[ 143], 10.00th=[ 143], 20.00th=[ 145], 00:32:21.122 | 30.00th=[ 145], 40.00th=[ 147], 50.00th=[ 149], 60.00th=[ 149], 00:32:21.122 | 70.00th=[ 153], 80.00th=[ 180], 90.00th=[ 198], 95.00th=[ 208], 00:32:21.122 | 99.00th=[ 243], 99.50th=[ 247], 99.90th=[ 343], 99.95th=[ 343], 00:32:21.122 | 99.99th=[ 988] 00:32:21.122 bw ( KiB/s): min= 8192, max= 8192, per=52.69%, avg=8192.00, stdev= 0.00, samples=1 00:32:21.122 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:32:21.122 lat (usec) : 250=91.83%, 500=8.05%, 750=0.02%, 1000=0.02% 00:32:21.122 lat (msec) : 20=0.02%, 50=0.04% 00:32:21.122 cpu : usr=1.60%, sys=4.50%, ctx=4457, majf=0, minf=1 00:32:21.122 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:21.122 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:21.122 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:21.122 issued rwts: total=2048,2409,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:21.122 latency : target=0, window=0, percentile=100.00%, depth=1 00:32:21.122 job3: (groupid=0, jobs=1): err= 0: pid=1588720: Thu Nov 28 08:31:03 2024 00:32:21.122 read: IOPS=185, BW=741KiB/s (759kB/s)(752KiB/1015msec) 00:32:21.122 slat (nsec): min=6834, max=26131, avg=9508.68, stdev=4879.98 00:32:21.122 clat (usec): min=226, max=42034, avg=4823.94, stdev=12905.20 00:32:21.122 lat (usec): min=233, max=42056, avg=4833.45, stdev=12909.46 00:32:21.122 clat percentiles (usec): 00:32:21.122 | 1.00th=[ 231], 5.00th=[ 245], 10.00th=[ 247], 20.00th=[ 249], 00:32:21.122 | 30.00th=[ 251], 40.00th=[ 253], 50.00th=[ 255], 60.00th=[ 260], 00:32:21.122 | 70.00th=[ 265], 80.00th=[ 277], 90.00th=[41157], 95.00th=[41157], 00:32:21.122 | 99.00th=[42206], 
99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:32:21.122 | 99.99th=[42206] 00:32:21.122 write: IOPS=504, BW=2018KiB/s (2066kB/s)(2048KiB/1015msec); 0 zone resets 00:32:21.122 slat (nsec): min=7857, max=34759, avg=10522.63, stdev=1813.91 00:32:21.122 clat (usec): min=160, max=298, avg=192.41, stdev=15.39 00:32:21.122 lat (usec): min=170, max=332, avg=202.94, stdev=15.99 00:32:21.122 clat percentiles (usec): 00:32:21.122 | 1.00th=[ 169], 5.00th=[ 174], 10.00th=[ 178], 20.00th=[ 182], 00:32:21.122 | 30.00th=[ 184], 40.00th=[ 188], 50.00th=[ 190], 60.00th=[ 192], 00:32:21.122 | 70.00th=[ 198], 80.00th=[ 204], 90.00th=[ 210], 95.00th=[ 217], 00:32:21.122 | 99.00th=[ 235], 99.50th=[ 289], 99.90th=[ 297], 99.95th=[ 297], 00:32:21.122 | 99.99th=[ 297] 00:32:21.122 bw ( KiB/s): min= 4096, max= 4096, per=26.35%, avg=4096.00, stdev= 0.00, samples=1 00:32:21.122 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:32:21.122 lat (usec) : 250=80.71%, 500=16.29% 00:32:21.122 lat (msec) : 50=3.00% 00:32:21.122 cpu : usr=0.59%, sys=0.39%, ctx=703, majf=0, minf=1 00:32:21.122 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:21.122 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:21.122 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:21.122 issued rwts: total=188,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:21.122 latency : target=0, window=0, percentile=100.00%, depth=1 00:32:21.122 00:32:21.122 Run status group 0 (all jobs): 00:32:21.122 READ: bw=8989KiB/s (9205kB/s), 86.7KiB/s-8184KiB/s (88.8kB/s-8380kB/s), io=9124KiB (9343kB), run=1001-1015msec 00:32:21.122 WRITE: bw=15.2MiB/s (15.9MB/s), 2018KiB/s-9626KiB/s (2066kB/s-9857kB/s), io=15.4MiB (16.2MB), run=1001-1015msec 00:32:21.122 00:32:21.122 Disk stats (read/write): 00:32:21.123 nvme0n1: ios=58/512, merge=0/0, ticks=773/100, in_queue=873, util=86.57% 00:32:21.123 nvme0n2: ios=18/512, merge=0/0, ticks=740/92, 
in_queue=832, util=86.78% 00:32:21.123 nvme0n3: ios=1666/2048, merge=0/0, ticks=482/323, in_queue=805, util=88.95% 00:32:21.123 nvme0n4: ios=213/512, merge=0/0, ticks=1212/100, in_queue=1312, util=98.63% 00:32:21.123 08:31:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:32:21.123 [global] 00:32:21.123 thread=1 00:32:21.123 invalidate=1 00:32:21.123 rw=write 00:32:21.123 time_based=1 00:32:21.123 runtime=1 00:32:21.123 ioengine=libaio 00:32:21.123 direct=1 00:32:21.123 bs=4096 00:32:21.123 iodepth=128 00:32:21.123 norandommap=0 00:32:21.123 numjobs=1 00:32:21.123 00:32:21.123 verify_dump=1 00:32:21.123 verify_backlog=512 00:32:21.123 verify_state_save=0 00:32:21.123 do_verify=1 00:32:21.123 verify=crc32c-intel 00:32:21.123 [job0] 00:32:21.123 filename=/dev/nvme0n1 00:32:21.123 [job1] 00:32:21.123 filename=/dev/nvme0n2 00:32:21.123 [job2] 00:32:21.123 filename=/dev/nvme0n3 00:32:21.123 [job3] 00:32:21.123 filename=/dev/nvme0n4 00:32:21.123 Could not set queue depth (nvme0n1) 00:32:21.123 Could not set queue depth (nvme0n2) 00:32:21.123 Could not set queue depth (nvme0n3) 00:32:21.123 Could not set queue depth (nvme0n4) 00:32:21.381 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:32:21.381 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:32:21.381 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:32:21.381 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:32:21.381 fio-3.35 00:32:21.381 Starting 4 threads 00:32:22.761 00:32:22.761 job0: (groupid=0, jobs=1): err= 0: pid=1589093: Thu Nov 28 08:31:04 2024 00:32:22.761 read: IOPS=3534, BW=13.8MiB/s (14.5MB/s)(14.0MiB/1014msec) 00:32:22.761 
slat (nsec): min=1501, max=15821k, avg=141896.97, stdev=1033742.07 00:32:22.761 clat (usec): min=3266, max=57743, avg=16741.52, stdev=9580.92 00:32:22.761 lat (usec): min=3277, max=57745, avg=16883.41, stdev=9669.08 00:32:22.761 clat percentiles (usec): 00:32:22.761 | 1.00th=[ 6652], 5.00th=[ 8717], 10.00th=[ 9110], 20.00th=[ 9372], 00:32:22.761 | 30.00th=[ 9765], 40.00th=[10552], 50.00th=[12780], 60.00th=[16909], 00:32:22.761 | 70.00th=[19530], 80.00th=[22414], 90.00th=[30016], 95.00th=[33424], 00:32:22.761 | 99.00th=[54789], 99.50th=[56886], 99.90th=[57934], 99.95th=[57934], 00:32:22.761 | 99.99th=[57934] 00:32:22.761 write: IOPS=3824, BW=14.9MiB/s (15.7MB/s)(15.1MiB/1014msec); 0 zone resets 00:32:22.761 slat (usec): min=2, max=24340, avg=121.21, stdev=836.71 00:32:22.761 clat (usec): min=1548, max=57744, avg=17690.23, stdev=8525.02 00:32:22.761 lat (usec): min=1561, max=57748, avg=17811.44, stdev=8560.68 00:32:22.761 clat percentiles (usec): 00:32:22.761 | 1.00th=[ 4424], 5.00th=[ 7242], 10.00th=[ 8094], 20.00th=[11338], 00:32:22.761 | 30.00th=[14484], 40.00th=[16319], 50.00th=[17433], 60.00th=[17695], 00:32:22.761 | 70.00th=[17957], 80.00th=[20841], 90.00th=[28705], 95.00th=[32637], 00:32:22.761 | 99.00th=[50070], 99.50th=[50594], 99.90th=[56886], 99.95th=[57934], 00:32:22.761 | 99.99th=[57934] 00:32:22.761 bw ( KiB/s): min=13224, max=16784, per=23.77%, avg=15004.00, stdev=2517.30, samples=2 00:32:22.761 iops : min= 3306, max= 4196, avg=3751.00, stdev=629.33, samples=2 00:32:22.761 lat (msec) : 2=0.03%, 4=0.62%, 10=25.17%, 20=51.23%, 50=21.47% 00:32:22.761 lat (msec) : 100=1.49% 00:32:22.761 cpu : usr=2.27%, sys=4.64%, ctx=375, majf=0, minf=1 00:32:22.761 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:32:22.761 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:22.761 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:32:22.761 issued rwts: total=3584,3878,0,0 short=0,0,0,0 
dropped=0,0,0,0 00:32:22.761 latency : target=0, window=0, percentile=100.00%, depth=128 00:32:22.761 job1: (groupid=0, jobs=1): err= 0: pid=1589094: Thu Nov 28 08:31:04 2024 00:32:22.761 read: IOPS=4504, BW=17.6MiB/s (18.5MB/s)(17.7MiB/1006msec) 00:32:22.761 slat (nsec): min=1223, max=9973.9k, avg=103141.71, stdev=714517.79 00:32:22.761 clat (usec): min=1759, max=29288, avg=13513.54, stdev=5375.79 00:32:22.761 lat (usec): min=1763, max=29556, avg=13616.68, stdev=5422.42 00:32:22.761 clat percentiles (usec): 00:32:22.761 | 1.00th=[ 3490], 5.00th=[ 5276], 10.00th=[ 7373], 20.00th=[ 8586], 00:32:22.761 | 30.00th=[ 9765], 40.00th=[10945], 50.00th=[12780], 60.00th=[15139], 00:32:22.761 | 70.00th=[17171], 80.00th=[18220], 90.00th=[21365], 95.00th=[22414], 00:32:22.761 | 99.00th=[24511], 99.50th=[28181], 99.90th=[29230], 99.95th=[29230], 00:32:22.761 | 99.99th=[29230] 00:32:22.761 write: IOPS=4580, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1006msec); 0 zone resets 00:32:22.761 slat (usec): min=2, max=8983, avg=103.28, stdev=688.50 00:32:22.761 clat (usec): min=242, max=46391, avg=14346.64, stdev=9215.04 00:32:22.761 lat (usec): min=294, max=46397, avg=14449.92, stdev=9297.06 00:32:22.761 clat percentiles (usec): 00:32:22.761 | 1.00th=[ 1156], 5.00th=[ 4948], 10.00th=[ 6259], 20.00th=[ 8356], 00:32:22.761 | 30.00th=[ 9241], 40.00th=[ 9896], 50.00th=[12387], 60.00th=[14353], 00:32:22.761 | 70.00th=[16188], 80.00th=[16581], 90.00th=[30802], 95.00th=[38011], 00:32:22.761 | 99.00th=[42730], 99.50th=[43254], 99.90th=[46400], 99.95th=[46400], 00:32:22.761 | 99.99th=[46400] 00:32:22.761 bw ( KiB/s): min=15816, max=21048, per=29.20%, avg=18432.00, stdev=3699.58, samples=2 00:32:22.761 iops : min= 3954, max= 5262, avg=4608.00, stdev=924.90, samples=2 00:32:22.761 lat (usec) : 250=0.01%, 500=0.02%, 750=0.14% 00:32:22.761 lat (msec) : 2=1.01%, 4=1.28%, 10=33.94%, 20=50.49%, 50=13.11% 00:32:22.761 cpu : usr=2.89%, sys=5.67%, ctx=305, majf=0, minf=1 00:32:22.761 IO depths : 1=0.1%, 2=0.1%, 
4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:32:22.761 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:22.761 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:32:22.761 issued rwts: total=4532,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:22.761 latency : target=0, window=0, percentile=100.00%, depth=128 00:32:22.761 job2: (groupid=0, jobs=1): err= 0: pid=1589095: Thu Nov 28 08:31:04 2024 00:32:22.761 read: IOPS=3555, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1008msec) 00:32:22.761 slat (nsec): min=1113, max=24578k, avg=120831.08, stdev=883682.43 00:32:22.761 clat (usec): min=8205, max=54753, avg=16981.00, stdev=6739.09 00:32:22.761 lat (usec): min=8211, max=54767, avg=17101.83, stdev=6777.44 00:32:22.761 clat percentiles (usec): 00:32:22.761 | 1.00th=[ 8455], 5.00th=[10159], 10.00th=[10683], 20.00th=[11600], 00:32:22.761 | 30.00th=[13435], 40.00th=[14484], 50.00th=[16188], 60.00th=[16909], 00:32:22.761 | 70.00th=[18482], 80.00th=[20317], 90.00th=[22938], 95.00th=[29230], 00:32:22.761 | 99.00th=[45351], 99.50th=[45351], 99.90th=[45351], 99.95th=[45351], 00:32:22.761 | 99.99th=[54789] 00:32:22.761 write: IOPS=4014, BW=15.7MiB/s (16.4MB/s)(15.8MiB/1008msec); 0 zone resets 00:32:22.761 slat (usec): min=2, max=28828, avg=133.19, stdev=902.05 00:32:22.761 clat (usec): min=4383, max=45299, avg=16448.65, stdev=7443.20 00:32:22.761 lat (usec): min=5906, max=45309, avg=16581.84, stdev=7525.45 00:32:22.761 clat percentiles (usec): 00:32:22.761 | 1.00th=[ 6652], 5.00th=[ 9896], 10.00th=[10945], 20.00th=[11469], 00:32:22.761 | 30.00th=[11994], 40.00th=[13173], 50.00th=[14877], 60.00th=[16188], 00:32:22.761 | 70.00th=[16909], 80.00th=[18482], 90.00th=[23987], 95.00th=[36439], 00:32:22.761 | 99.00th=[41157], 99.50th=[43254], 99.90th=[45351], 99.95th=[45351], 00:32:22.761 | 99.99th=[45351] 00:32:22.761 bw ( KiB/s): min=14968, max=16384, per=24.83%, avg=15676.00, stdev=1001.26, samples=2 00:32:22.761 iops : min= 3742, max= 4096, 
avg=3919.00, stdev=250.32, samples=2 00:32:22.761 lat (msec) : 10=5.41%, 20=77.04%, 50=17.53%, 100=0.01% 00:32:22.761 cpu : usr=3.48%, sys=5.76%, ctx=275, majf=0, minf=1 00:32:22.761 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:32:22.761 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:22.761 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:32:22.761 issued rwts: total=3584,4047,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:22.761 latency : target=0, window=0, percentile=100.00%, depth=128 00:32:22.761 job3: (groupid=0, jobs=1): err= 0: pid=1589096: Thu Nov 28 08:31:04 2024 00:32:22.761 read: IOPS=3029, BW=11.8MiB/s (12.4MB/s)(12.0MiB/1014msec) 00:32:22.762 slat (nsec): min=1402, max=17579k, avg=139654.00, stdev=1051660.16 00:32:22.762 clat (usec): min=4299, max=46254, avg=18663.45, stdev=8048.61 00:32:22.762 lat (usec): min=4305, max=46266, avg=18803.10, stdev=8097.34 00:32:22.762 clat percentiles (usec): 00:32:22.762 | 1.00th=[ 8717], 5.00th=[ 9765], 10.00th=[10028], 20.00th=[10552], 00:32:22.762 | 30.00th=[11207], 40.00th=[16712], 50.00th=[17695], 60.00th=[19792], 00:32:22.762 | 70.00th=[22414], 80.00th=[24511], 90.00th=[29492], 95.00th=[34341], 00:32:22.762 | 99.00th=[45876], 99.50th=[46400], 99.90th=[46400], 99.95th=[46400], 00:32:22.762 | 99.99th=[46400] 00:32:22.762 write: IOPS=3422, BW=13.4MiB/s (14.0MB/s)(13.6MiB/1014msec); 0 zone resets 00:32:22.762 slat (usec): min=2, max=28925, avg=159.26, stdev=1012.04 00:32:22.762 clat (usec): min=2422, max=68223, avg=20535.24, stdev=10826.69 00:32:22.762 lat (usec): min=2432, max=68229, avg=20694.50, stdev=10894.13 00:32:22.762 clat percentiles (usec): 00:32:22.762 | 1.00th=[ 5080], 5.00th=[ 9503], 10.00th=[11076], 20.00th=[14615], 00:32:22.762 | 30.00th=[16712], 40.00th=[17433], 50.00th=[17433], 60.00th=[17695], 00:32:22.762 | 70.00th=[19006], 80.00th=[23462], 90.00th=[36963], 95.00th=[45351], 00:32:22.762 | 99.00th=[63701], 
99.50th=[66323], 99.90th=[67634], 99.95th=[68682], 00:32:22.762 | 99.99th=[68682] 00:32:22.762 bw ( KiB/s): min=13224, max=13520, per=21.18%, avg=13372.00, stdev=209.30, samples=2 00:32:22.762 iops : min= 3306, max= 3380, avg=3343.00, stdev=52.33, samples=2 00:32:22.762 lat (msec) : 4=0.23%, 10=7.02%, 20=61.53%, 50=29.72%, 100=1.51% 00:32:22.762 cpu : usr=3.06%, sys=3.16%, ctx=364, majf=0, minf=1 00:32:22.762 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:32:22.762 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:22.762 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:32:22.762 issued rwts: total=3072,3470,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:22.762 latency : target=0, window=0, percentile=100.00%, depth=128 00:32:22.762 00:32:22.762 Run status group 0 (all jobs): 00:32:22.762 READ: bw=56.9MiB/s (59.7MB/s), 11.8MiB/s-17.6MiB/s (12.4MB/s-18.5MB/s), io=57.7MiB (60.5MB), run=1006-1014msec 00:32:22.762 WRITE: bw=61.6MiB/s (64.6MB/s), 13.4MiB/s-17.9MiB/s (14.0MB/s-18.8MB/s), io=62.5MiB (65.5MB), run=1006-1014msec 00:32:22.762 00:32:22.762 Disk stats (read/write): 00:32:22.762 nvme0n1: ios=3122/3191, merge=0/0, ticks=51370/53211, in_queue=104581, util=86.76% 00:32:22.762 nvme0n2: ios=3608/3817, merge=0/0, ticks=32234/32326, in_queue=64560, util=98.17% 00:32:22.762 nvme0n3: ios=3358/3584, merge=0/0, ticks=32935/27462, in_queue=60397, util=98.12% 00:32:22.762 nvme0n4: ios=2560/2783, merge=0/0, ticks=49893/54882, in_queue=104775, util=89.71% 00:32:22.762 08:31:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:32:22.762 [global] 00:32:22.762 thread=1 00:32:22.762 invalidate=1 00:32:22.762 rw=randwrite 00:32:22.762 time_based=1 00:32:22.762 runtime=1 00:32:22.762 ioengine=libaio 00:32:22.762 direct=1 00:32:22.762 bs=4096 00:32:22.762 
iodepth=128 00:32:22.762 norandommap=0 00:32:22.762 numjobs=1 00:32:22.762 00:32:22.762 verify_dump=1 00:32:22.762 verify_backlog=512 00:32:22.762 verify_state_save=0 00:32:22.762 do_verify=1 00:32:22.762 verify=crc32c-intel 00:32:22.762 [job0] 00:32:22.762 filename=/dev/nvme0n1 00:32:22.762 [job1] 00:32:22.762 filename=/dev/nvme0n2 00:32:22.762 [job2] 00:32:22.762 filename=/dev/nvme0n3 00:32:22.762 [job3] 00:32:22.762 filename=/dev/nvme0n4 00:32:22.762 Could not set queue depth (nvme0n1) 00:32:22.762 Could not set queue depth (nvme0n2) 00:32:22.762 Could not set queue depth (nvme0n3) 00:32:22.762 Could not set queue depth (nvme0n4) 00:32:23.034 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:32:23.034 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:32:23.034 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:32:23.034 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:32:23.034 fio-3.35 00:32:23.034 Starting 4 threads 00:32:24.410 00:32:24.410 job0: (groupid=0, jobs=1): err= 0: pid=1589461: Thu Nov 28 08:31:06 2024 00:32:24.410 read: IOPS=4211, BW=16.5MiB/s (17.2MB/s)(16.5MiB/1003msec) 00:32:24.410 slat (nsec): min=1055, max=10584k, avg=124513.97, stdev=713903.35 00:32:24.410 clat (usec): min=529, max=38542, avg=15194.50, stdev=7093.10 00:32:24.410 lat (usec): min=3394, max=38546, avg=15319.01, stdev=7117.94 00:32:24.410 clat percentiles (usec): 00:32:24.410 | 1.00th=[ 6390], 5.00th=[10290], 10.00th=[11731], 20.00th=[12387], 00:32:24.410 | 30.00th=[12518], 40.00th=[12649], 50.00th=[12780], 60.00th=[13042], 00:32:24.410 | 70.00th=[13435], 80.00th=[14484], 90.00th=[26084], 95.00th=[34866], 00:32:24.410 | 99.00th=[38011], 99.50th=[38536], 99.90th=[38536], 99.95th=[38536], 00:32:24.410 | 99.99th=[38536] 00:32:24.410 
write: IOPS=4594, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1003msec); 0 zone resets 00:32:24.410 slat (nsec): min=1756, max=9478.0k, avg=99157.62, stdev=453743.43 00:32:24.410 clat (usec): min=8487, max=33897, avg=13563.47, stdev=4156.93 00:32:24.410 lat (usec): min=8491, max=33906, avg=13662.63, stdev=4157.34 00:32:24.410 clat percentiles (usec): 00:32:24.410 | 1.00th=[ 9241], 5.00th=[10552], 10.00th=[11338], 20.00th=[11863], 00:32:24.410 | 30.00th=[11994], 40.00th=[12125], 50.00th=[12256], 60.00th=[12518], 00:32:24.410 | 70.00th=[12911], 80.00th=[13960], 90.00th=[15401], 95.00th=[25297], 00:32:24.410 | 99.00th=[31065], 99.50th=[32900], 99.90th=[33817], 99.95th=[33817], 00:32:24.410 | 99.99th=[33817] 00:32:24.410 bw ( KiB/s): min=16351, max=20480, per=23.67%, avg=18415.50, stdev=2919.64, samples=2 00:32:24.410 iops : min= 4087, max= 5120, avg=4603.50, stdev=730.44, samples=2 00:32:24.410 lat (usec) : 750=0.01% 00:32:24.410 lat (msec) : 4=0.36%, 10=2.77%, 20=87.40%, 50=9.45% 00:32:24.411 cpu : usr=2.40%, sys=3.49%, ctx=572, majf=0, minf=1 00:32:24.411 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:32:24.411 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:24.411 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:32:24.411 issued rwts: total=4224,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:24.411 latency : target=0, window=0, percentile=100.00%, depth=128 00:32:24.411 job1: (groupid=0, jobs=1): err= 0: pid=1589462: Thu Nov 28 08:31:06 2024 00:32:24.411 read: IOPS=4825, BW=18.9MiB/s (19.8MB/s)(18.9MiB/1005msec) 00:32:24.411 slat (nsec): min=1224, max=10760k, avg=97389.26, stdev=753106.75 00:32:24.411 clat (usec): min=1052, max=72019, avg=12998.19, stdev=3397.64 00:32:24.411 lat (usec): min=5815, max=72022, avg=13095.58, stdev=3452.46 00:32:24.411 clat percentiles (usec): 00:32:24.411 | 1.00th=[ 6587], 5.00th=[ 9765], 10.00th=[10552], 20.00th=[10945], 00:32:24.411 | 30.00th=[11207], 
40.00th=[11469], 50.00th=[11863], 60.00th=[12387], 00:32:24.411 | 70.00th=[14091], 80.00th=[15664], 90.00th=[17171], 95.00th=[18482], 00:32:24.411 | 99.00th=[21627], 99.50th=[21890], 99.90th=[62653], 99.95th=[63177], 00:32:24.411 | 99.99th=[71828] 00:32:24.411 write: IOPS=5094, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1005msec); 0 zone resets 00:32:24.411 slat (nsec): min=1877, max=10329k, avg=86800.61, stdev=579072.82 00:32:24.411 clat (usec): min=3935, max=30100, avg=12543.56, stdev=4393.93 00:32:24.411 lat (usec): min=3945, max=30105, avg=12630.36, stdev=4434.51 00:32:24.411 clat percentiles (usec): 00:32:24.411 | 1.00th=[ 5932], 5.00th=[ 7439], 10.00th=[ 7898], 20.00th=[10290], 00:32:24.411 | 30.00th=[11207], 40.00th=[11731], 50.00th=[11994], 60.00th=[12256], 00:32:24.411 | 70.00th=[12518], 80.00th=[13304], 90.00th=[16057], 95.00th=[23462], 00:32:24.411 | 99.00th=[29492], 99.50th=[29754], 99.90th=[30016], 99.95th=[30016], 00:32:24.411 | 99.99th=[30016] 00:32:24.411 bw ( KiB/s): min=20439, max=20480, per=26.30%, avg=20459.50, stdev=28.99, samples=2 00:32:24.411 iops : min= 5109, max= 5120, avg=5114.50, stdev= 7.78, samples=2 00:32:24.411 lat (msec) : 2=0.01%, 4=0.03%, 10=12.94%, 20=82.81%, 50=4.15% 00:32:24.411 lat (msec) : 100=0.06% 00:32:24.411 cpu : usr=3.88%, sys=5.78%, ctx=381, majf=0, minf=1 00:32:24.411 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:32:24.411 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:24.411 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:32:24.411 issued rwts: total=4850,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:24.411 latency : target=0, window=0, percentile=100.00%, depth=128 00:32:24.411 job2: (groupid=0, jobs=1): err= 0: pid=1589463: Thu Nov 28 08:31:06 2024 00:32:24.411 read: IOPS=4585, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1005msec) 00:32:24.411 slat (nsec): min=1620, max=6614.4k, avg=98390.76, stdev=649814.13 00:32:24.411 clat (usec): min=7994, 
max=24825, avg=13382.98, stdev=2346.58 00:32:24.411 lat (usec): min=7999, max=24832, avg=13481.37, stdev=2382.21 00:32:24.411 clat percentiles (usec): 00:32:24.411 | 1.00th=[ 8848], 5.00th=[ 9765], 10.00th=[10552], 20.00th=[11731], 00:32:24.411 | 30.00th=[12256], 40.00th=[12649], 50.00th=[13042], 60.00th=[13566], 00:32:24.411 | 70.00th=[13829], 80.00th=[15270], 90.00th=[16909], 95.00th=[17957], 00:32:24.411 | 99.00th=[19268], 99.50th=[20579], 99.90th=[22152], 99.95th=[23725], 00:32:24.411 | 99.99th=[24773] 00:32:24.411 write: IOPS=5037, BW=19.7MiB/s (20.6MB/s)(19.8MiB/1005msec); 0 zone resets 00:32:24.411 slat (usec): min=2, max=11016, avg=101.24, stdev=642.90 00:32:24.411 clat (usec): min=1481, max=34204, avg=13004.47, stdev=2410.11 00:32:24.411 lat (usec): min=1507, max=34207, avg=13105.71, stdev=2464.87 00:32:24.411 clat percentiles (usec): 00:32:24.411 | 1.00th=[ 5145], 5.00th=[ 9372], 10.00th=[11076], 20.00th=[11863], 00:32:24.411 | 30.00th=[12518], 40.00th=[13042], 50.00th=[13304], 60.00th=[13566], 00:32:24.411 | 70.00th=[13698], 80.00th=[13829], 90.00th=[14222], 95.00th=[16319], 00:32:24.411 | 99.00th=[20055], 99.50th=[22676], 99.90th=[30278], 99.95th=[34341], 00:32:24.411 | 99.99th=[34341] 00:32:24.411 bw ( KiB/s): min=19008, max=20480, per=25.38%, avg=19744.00, stdev=1040.86, samples=2 00:32:24.411 iops : min= 4752, max= 5120, avg=4936.00, stdev=260.22, samples=2 00:32:24.411 lat (msec) : 2=0.09%, 4=0.25%, 10=5.39%, 20=93.43%, 50=0.84% 00:32:24.411 cpu : usr=3.29%, sys=7.17%, ctx=348, majf=0, minf=1 00:32:24.411 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:32:24.411 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:24.411 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:32:24.411 issued rwts: total=4608,5063,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:24.411 latency : target=0, window=0, percentile=100.00%, depth=128 00:32:24.411 job3: (groupid=0, jobs=1): err= 0: 
pid=1589464: Thu Nov 28 08:31:06 2024 00:32:24.411 read: IOPS=4594, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1003msec) 00:32:24.411 slat (nsec): min=1635, max=6359.4k, avg=101204.03, stdev=651002.33 00:32:24.411 clat (usec): min=8086, max=20072, avg=13460.58, stdev=2202.83 00:32:24.411 lat (usec): min=8092, max=20407, avg=13561.79, stdev=2231.11 00:32:24.411 clat percentiles (usec): 00:32:24.411 | 1.00th=[ 8979], 5.00th=[10028], 10.00th=[10552], 20.00th=[11731], 00:32:24.411 | 30.00th=[12387], 40.00th=[12780], 50.00th=[13304], 60.00th=[13698], 00:32:24.411 | 70.00th=[14484], 80.00th=[15270], 90.00th=[16581], 95.00th=[17695], 00:32:24.411 | 99.00th=[19006], 99.50th=[19006], 99.90th=[20055], 99.95th=[20055], 00:32:24.411 | 99.99th=[20055] 00:32:24.411 write: IOPS=4738, BW=18.5MiB/s (19.4MB/s)(18.6MiB/1003msec); 0 zone resets 00:32:24.411 slat (usec): min=2, max=16852, avg=105.75, stdev=720.14 00:32:24.411 clat (usec): min=867, max=37609, avg=13710.10, stdev=3486.70 00:32:24.411 lat (usec): min=1494, max=37616, avg=13815.84, stdev=3540.62 00:32:24.411 clat percentiles (usec): 00:32:24.411 | 1.00th=[ 7373], 5.00th=[10945], 10.00th=[11469], 20.00th=[11994], 00:32:24.411 | 30.00th=[12387], 40.00th=[13173], 50.00th=[13566], 60.00th=[13829], 00:32:24.411 | 70.00th=[13960], 80.00th=[14091], 90.00th=[14615], 95.00th=[16909], 00:32:24.411 | 99.00th=[33817], 99.50th=[35390], 99.90th=[37487], 99.95th=[37487], 00:32:24.411 | 99.99th=[37487] 00:32:24.411 bw ( KiB/s): min=16528, max=20480, per=23.79%, avg=18504.00, stdev=2794.49, samples=2 00:32:24.411 iops : min= 4132, max= 5120, avg=4626.00, stdev=698.62, samples=2 00:32:24.411 lat (usec) : 1000=0.01% 00:32:24.411 lat (msec) : 2=0.02%, 10=3.65%, 20=94.78%, 50=1.54% 00:32:24.411 cpu : usr=4.49%, sys=6.19%, ctx=329, majf=0, minf=2 00:32:24.411 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:32:24.411 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:24.411 complete : 0=0.0%, 4=100.0%, 
8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:32:24.411 issued rwts: total=4608,4753,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:24.411 latency : target=0, window=0, percentile=100.00%, depth=128 00:32:24.411 00:32:24.411 Run status group 0 (all jobs): 00:32:24.411 READ: bw=71.1MiB/s (74.5MB/s), 16.5MiB/s-18.9MiB/s (17.2MB/s-19.8MB/s), io=71.4MiB (74.9MB), run=1003-1005msec 00:32:24.411 WRITE: bw=76.0MiB/s (79.7MB/s), 17.9MiB/s-19.9MiB/s (18.8MB/s-20.9MB/s), io=76.3MiB (80.1MB), run=1003-1005msec 00:32:24.411 00:32:24.411 Disk stats (read/write): 00:32:24.411 nvme0n1: ios=3634/3727, merge=0/0, ticks=14592/11889, in_queue=26481, util=86.97% 00:32:24.411 nvme0n2: ios=4116/4186, merge=0/0, ticks=40785/38010, in_queue=78795, util=87.01% 00:32:24.411 nvme0n3: ios=4094/4104, merge=0/0, ticks=26502/27133, in_queue=53635, util=89.05% 00:32:24.411 nvme0n4: ios=3877/4096, merge=0/0, ticks=25319/27607, in_queue=52926, util=89.71% 00:32:24.411 08:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:32:24.411 08:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=1589692 00:32:24.411 08:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:32:24.411 08:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:32:24.411 [global] 00:32:24.411 thread=1 00:32:24.411 invalidate=1 00:32:24.411 rw=read 00:32:24.411 time_based=1 00:32:24.411 runtime=10 00:32:24.411 ioengine=libaio 00:32:24.411 direct=1 00:32:24.411 bs=4096 00:32:24.411 iodepth=1 00:32:24.411 norandommap=1 00:32:24.411 numjobs=1 00:32:24.411 00:32:24.411 [job0] 00:32:24.411 filename=/dev/nvme0n1 00:32:24.411 [job1] 00:32:24.411 filename=/dev/nvme0n2 00:32:24.411 [job2] 00:32:24.411 filename=/dev/nvme0n3 00:32:24.411 [job3] 00:32:24.411 filename=/dev/nvme0n4 
00:32:24.411 Could not set queue depth (nvme0n1) 00:32:24.411 Could not set queue depth (nvme0n2) 00:32:24.411 Could not set queue depth (nvme0n3) 00:32:24.411 Could not set queue depth (nvme0n4) 00:32:24.411 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:32:24.411 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:32:24.411 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:32:24.411 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:32:24.411 fio-3.35 00:32:24.411 Starting 4 threads 00:32:27.696 08:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:32:27.696 08:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:32:27.696 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=7114752, buflen=4096 00:32:27.696 fio: pid=1589844, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:32:27.696 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=630784, buflen=4096 00:32:27.696 fio: pid=1589843, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:32:27.696 08:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:32:27.696 08:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:32:27.696 08:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # 
for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:32:27.696 08:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:32:27.955 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=339968, buflen=4096 00:32:27.955 fio: pid=1589841, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:32:27.955 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=21831680, buflen=4096 00:32:27.955 fio: pid=1589842, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:32:27.955 08:31:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:32:27.955 08:31:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:32:27.955 00:32:27.955 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1589841: Thu Nov 28 08:31:10 2024 00:32:27.955 read: IOPS=26, BW=105KiB/s (108kB/s)(332KiB/3158msec) 00:32:27.955 slat (usec): min=9, max=7830, avg=115.56, stdev=851.89 00:32:27.955 clat (usec): min=287, max=42010, avg=37669.76, stdev=11340.85 00:32:27.955 lat (usec): min=312, max=48948, avg=37786.45, stdev=11402.00 00:32:27.955 clat percentiles (usec): 00:32:27.955 | 1.00th=[ 289], 5.00th=[ 424], 10.00th=[40633], 20.00th=[41157], 00:32:27.955 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:32:27.955 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41681], 95.00th=[41681], 00:32:27.955 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:32:27.955 | 99.99th=[42206] 00:32:27.955 bw ( KiB/s): min= 96, max= 136, per=1.22%, avg=106.00, stdev=15.13, 
samples=6 00:32:27.955 iops : min= 24, max= 34, avg=26.50, stdev= 3.78, samples=6 00:32:27.955 lat (usec) : 500=5.95%, 750=1.19% 00:32:27.955 lat (msec) : 2=1.19%, 50=90.48% 00:32:27.955 cpu : usr=0.13%, sys=0.00%, ctx=85, majf=0, minf=1 00:32:27.955 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:27.955 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:27.955 complete : 0=1.2%, 4=98.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:27.955 issued rwts: total=84,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:27.955 latency : target=0, window=0, percentile=100.00%, depth=1 00:32:27.955 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1589842: Thu Nov 28 08:31:10 2024 00:32:27.955 read: IOPS=1590, BW=6360KiB/s (6513kB/s)(20.8MiB/3352msec) 00:32:27.955 slat (usec): min=6, max=14018, avg=20.06, stdev=342.46 00:32:27.955 clat (usec): min=191, max=42089, avg=602.53, stdev=3779.28 00:32:27.955 lat (usec): min=206, max=42099, avg=622.60, stdev=3795.16 00:32:27.955 clat percentiles (usec): 00:32:27.955 | 1.00th=[ 206], 5.00th=[ 212], 10.00th=[ 219], 20.00th=[ 235], 00:32:27.955 | 30.00th=[ 241], 40.00th=[ 243], 50.00th=[ 247], 60.00th=[ 249], 00:32:27.955 | 70.00th=[ 251], 80.00th=[ 253], 90.00th=[ 262], 95.00th=[ 269], 00:32:27.955 | 99.00th=[ 1369], 99.50th=[41157], 99.90th=[41157], 99.95th=[41681], 00:32:27.955 | 99.99th=[42206] 00:32:27.955 bw ( KiB/s): min= 96, max=15140, per=65.34%, avg=5695.33, stdev=6491.09, samples=6 00:32:27.955 iops : min= 24, max= 3785, avg=1423.83, stdev=1622.77, samples=6 00:32:27.955 lat (usec) : 250=68.41%, 500=30.20%, 750=0.26%, 1000=0.02% 00:32:27.955 lat (msec) : 2=0.19%, 10=0.02%, 20=0.02%, 50=0.86% 00:32:27.955 cpu : usr=1.07%, sys=2.48%, ctx=5339, majf=0, minf=2 00:32:27.955 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:27.955 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, 
>=64=0.0% 00:32:27.955 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:27.955 issued rwts: total=5331,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:27.955 latency : target=0, window=0, percentile=100.00%, depth=1 00:32:27.955 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1589843: Thu Nov 28 08:31:10 2024 00:32:27.955 read: IOPS=53, BW=211KiB/s (216kB/s)(616KiB/2922msec) 00:32:27.955 slat (nsec): min=7591, max=37296, avg=15304.83, stdev=7550.03 00:32:27.955 clat (usec): min=275, max=42004, avg=18817.69, stdev=20349.98 00:32:27.955 lat (usec): min=283, max=42027, avg=18832.95, stdev=20356.98 00:32:27.955 clat percentiles (usec): 00:32:27.955 | 1.00th=[ 281], 5.00th=[ 285], 10.00th=[ 289], 20.00th=[ 293], 00:32:27.955 | 30.00th=[ 293], 40.00th=[ 297], 50.00th=[ 310], 60.00th=[41157], 00:32:27.956 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:32:27.956 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:32:27.956 | 99.99th=[42206] 00:32:27.956 bw ( KiB/s): min= 96, max= 760, per=2.64%, avg=230.40, stdev=296.08, samples=5 00:32:27.956 iops : min= 24, max= 190, avg=57.60, stdev=74.02, samples=5 00:32:27.956 lat (usec) : 500=52.90%, 750=1.29% 00:32:27.956 lat (msec) : 50=45.16% 00:32:27.956 cpu : usr=0.14%, sys=0.00%, ctx=155, majf=0, minf=2 00:32:27.956 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:27.956 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:27.956 complete : 0=0.6%, 4=99.4%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:27.956 issued rwts: total=155,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:27.956 latency : target=0, window=0, percentile=100.00%, depth=1 00:32:27.956 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1589844: Thu Nov 28 08:31:10 2024 00:32:27.956 read: IOPS=633, BW=2532KiB/s 
(2593kB/s)(6948KiB/2744msec) 00:32:27.956 slat (nsec): min=6609, max=31126, avg=7973.28, stdev=3041.13 00:32:27.956 clat (usec): min=221, max=42002, avg=1565.35, stdev=7136.07 00:32:27.956 lat (usec): min=228, max=42025, avg=1573.32, stdev=7138.70 00:32:27.956 clat percentiles (usec): 00:32:27.956 | 1.00th=[ 262], 5.00th=[ 265], 10.00th=[ 269], 20.00th=[ 269], 00:32:27.956 | 30.00th=[ 273], 40.00th=[ 273], 50.00th=[ 273], 60.00th=[ 273], 00:32:27.956 | 70.00th=[ 277], 80.00th=[ 277], 90.00th=[ 281], 95.00th=[ 310], 00:32:27.956 | 99.00th=[41157], 99.50th=[41157], 99.90th=[42206], 99.95th=[42206], 00:32:27.956 | 99.99th=[42206] 00:32:27.956 bw ( KiB/s): min= 96, max= 9800, per=31.77%, avg=2769.60, stdev=4236.55, samples=5 00:32:27.956 iops : min= 24, max= 2450, avg=692.40, stdev=1059.14, samples=5 00:32:27.956 lat (usec) : 250=0.81%, 500=95.86%, 750=0.12% 00:32:27.956 lat (msec) : 50=3.16% 00:32:27.956 cpu : usr=0.15%, sys=0.62%, ctx=1738, majf=0, minf=2 00:32:27.956 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:27.956 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:27.956 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:27.956 issued rwts: total=1738,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:27.956 latency : target=0, window=0, percentile=100.00%, depth=1 00:32:27.956 00:32:27.956 Run status group 0 (all jobs): 00:32:27.956 READ: bw=8716KiB/s (8925kB/s), 105KiB/s-6360KiB/s (108kB/s-6513kB/s), io=28.5MiB (29.9MB), run=2744-3352msec 00:32:27.956 00:32:27.956 Disk stats (read/write): 00:32:27.956 nvme0n1: ios=82/0, merge=0/0, ticks=3088/0, in_queue=3088, util=95.50% 00:32:27.956 nvme0n2: ios=4674/0, merge=0/0, ticks=3505/0, in_queue=3505, util=99.54% 00:32:27.956 nvme0n3: ios=152/0, merge=0/0, ticks=2818/0, in_queue=2818, util=96.55% 00:32:27.956 nvme0n4: ios=1733/0, merge=0/0, ticks=2555/0, in_queue=2555, util=96.48% 00:32:28.214 08:31:10 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:32:28.214 08:31:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:32:28.473 08:31:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:32:28.473 08:31:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:32:28.732 08:31:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:32:28.732 08:31:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:32:28.732 08:31:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:32:28.732 08:31:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:32:28.991 08:31:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:32:28.991 08:31:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # wait 1589692 00:32:28.991 08:31:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:32:28.991 08:31:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:32:29.249 
NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:32:29.249 08:31:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:32:29.249 08:31:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 00:32:29.249 08:31:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:32:29.249 08:31:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:32:29.249 08:31:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:32:29.249 08:31:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:32:29.249 08:31:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 00:32:29.249 08:31:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:32:29.249 08:31:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:32:29.249 nvmf hotplug test: fio failed as expected 00:32:29.249 08:31:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:32:29.508 08:31:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:32:29.508 08:31:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:32:29.508 08:31:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@87 -- # rm -f 
./local-job2-2-verify.state 00:32:29.508 08:31:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:32:29.508 08:31:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:32:29.508 08:31:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:32:29.508 08:31:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:32:29.508 08:31:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:29.508 08:31:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:32:29.508 08:31:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:29.508 08:31:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:29.508 rmmod nvme_tcp 00:32:29.508 rmmod nvme_fabrics 00:32:29.508 rmmod nvme_keyring 00:32:29.508 08:31:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:29.508 08:31:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:32:29.508 08:31:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:32:29.508 08:31:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 1587229 ']' 00:32:29.508 08:31:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 1587229 00:32:29.508 08:31:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 1587229 ']' 00:32:29.508 08:31:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 1587229 00:32:29.508 08:31:11 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname 00:32:29.508 08:31:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:29.508 08:31:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1587229 00:32:29.508 08:31:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:32:29.508 08:31:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:32:29.508 08:31:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1587229' 00:32:29.508 killing process with pid 1587229 00:32:29.508 08:31:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 1587229 00:32:29.508 08:31:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 1587229 00:32:29.768 08:31:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:32:29.768 08:31:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:32:29.768 08:31:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:32:29.768 08:31:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:32:29.768 08:31:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:32:29.768 08:31:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 00:32:29.768 08:31:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:32:29.768 
08:31:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:29.768 08:31:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:29.768 08:31:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:29.768 08:31:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:29.768 08:31:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:31.674 08:31:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:31.674 00:32:31.674 real 0m25.275s 00:32:31.674 user 1m31.015s 00:32:31.674 sys 0m10.297s 00:32:31.674 08:31:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:31.674 08:31:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:32:31.674 ************************************ 00:32:31.674 END TEST nvmf_fio_target 00:32:31.674 ************************************ 00:32:31.934 08:31:13 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:32:31.934 08:31:13 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:32:31.934 08:31:13 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:31.934 08:31:13 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:32:31.934 ************************************ 00:32:31.934 START TEST nvmf_bdevio 00:32:31.934 
************************************ 00:32:31.934 08:31:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:32:31.934 * Looking for test storage... 00:32:31.934 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:32:31.934 08:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:32:31.934 08:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lcov --version 00:32:31.934 08:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:32:31.934 08:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:32:31.934 08:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:31.934 08:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:31.934 08:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:31.934 08:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:32:31.934 08:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:32:31.934 08:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:32:31.934 08:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:32:31.934 08:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:32:31.934 08:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 
00:32:31.934 08:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:32:31.934 08:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:31.934 08:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:32:31.934 08:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:32:31.934 08:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:31.934 08:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:32:31.934 08:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:32:31.934 08:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:32:31.934 08:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:31.934 08:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:32:31.934 08:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:32:31.934 08:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:32:31.934 08:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:32:31.934 08:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:31.934 08:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:32:31.934 08:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:32:31.934 08:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:31.934 08:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:31.934 08:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:32:31.934 08:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:31.934 08:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:32:31.934 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:31.934 --rc genhtml_branch_coverage=1 00:32:31.934 --rc genhtml_function_coverage=1 00:32:31.934 --rc genhtml_legend=1 00:32:31.934 --rc geninfo_all_blocks=1 00:32:31.934 --rc geninfo_unexecuted_blocks=1 00:32:31.934 00:32:31.934 ' 00:32:31.934 08:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:32:31.934 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:31.934 --rc genhtml_branch_coverage=1 00:32:31.934 --rc genhtml_function_coverage=1 00:32:31.934 --rc genhtml_legend=1 00:32:31.934 --rc geninfo_all_blocks=1 00:32:31.934 --rc geninfo_unexecuted_blocks=1 00:32:31.934 00:32:31.934 ' 00:32:31.934 08:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:32:31.934 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:31.934 --rc genhtml_branch_coverage=1 00:32:31.934 --rc genhtml_function_coverage=1 00:32:31.934 --rc genhtml_legend=1 00:32:31.934 --rc geninfo_all_blocks=1 00:32:31.934 --rc geninfo_unexecuted_blocks=1 00:32:31.934 00:32:31.934 ' 00:32:31.934 08:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:32:31.934 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:32:31.934 --rc genhtml_branch_coverage=1 00:32:31.934 --rc genhtml_function_coverage=1 00:32:31.934 --rc genhtml_legend=1 00:32:31.934 --rc geninfo_all_blocks=1 00:32:31.934 --rc geninfo_unexecuted_blocks=1 00:32:31.934 00:32:31.934 ' 00:32:31.934 08:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:31.934 08:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:32:31.934 08:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:31.934 08:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:31.934 08:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:31.934 08:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:31.934 08:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:31.934 08:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:31.934 08:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:31.934 08:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:31.934 08:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:31.934 08:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:31.934 08:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:32:31.934 08:31:14 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:32:31.934 08:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:31.934 08:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:31.934 08:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:31.934 08:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:31.934 08:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:31.934 08:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:32:31.934 08:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:31.934 08:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:31.934 08:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:31.934 08:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:31.934 08:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:31.934 08:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:31.934 08:31:14 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:32:31.934 08:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:31.934 08:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:32:31.934 08:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:31.934 08:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:31.934 08:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:31.934 08:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:31.934 08:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:31.934 08:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:32:31.934 08:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:32:31.934 08:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:31.934 08:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@39 
-- # '[' 0 -eq 1 ']' 00:32:31.934 08:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:31.934 08:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:32:31.934 08:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:32:31.934 08:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:32:31.934 08:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:32:31.934 08:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:31.934 08:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:32:31.934 08:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:32:31.934 08:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:32:31.934 08:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:31.934 08:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:31.934 08:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:31.934 08:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:32:31.934 08:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:32:31.934 08:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:32:31.934 08:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
common/autotest_common.sh@10 -- # set +x 00:32:37.207 08:31:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:37.207 08:31:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:32:37.207 08:31:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:37.207 08:31:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:37.207 08:31:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:37.207 08:31:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:37.207 08:31:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:37.207 08:31:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:32:37.207 08:31:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:37.207 08:31:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:32:37.207 08:31:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:32:37.207 08:31:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:32:37.207 08:31:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:32:37.207 08:31:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:32:37.207 08:31:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:32:37.207 08:31:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:37.207 08:31:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:37.207 08:31:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:37.207 08:31:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:37.207 08:31:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:37.207 08:31:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:37.207 08:31:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:37.207 08:31:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:37.207 08:31:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:37.207 08:31:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:37.207 08:31:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:37.207 08:31:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:37.207 08:31:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:37.207 08:31:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:37.207 08:31:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:37.207 08:31:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:37.207 08:31:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:37.207 08:31:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:37.207 08:31:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:37.207 08:31:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:32:37.208 Found 0000:86:00.0 (0x8086 - 0x159b) 00:32:37.208 08:31:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:37.208 08:31:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:37.208 08:31:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:37.208 08:31:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:37.208 08:31:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:37.208 08:31:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:37.208 08:31:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:32:37.208 Found 0000:86:00.1 (0x8086 - 0x159b) 00:32:37.208 08:31:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:37.208 08:31:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:37.208 08:31:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 
00:32:37.208 08:31:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:37.208 08:31:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:37.208 08:31:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:37.208 08:31:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:37.208 08:31:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:37.208 08:31:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:37.208 08:31:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:37.208 08:31:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:37.208 08:31:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:37.208 08:31:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:37.208 08:31:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:37.208 08:31:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:37.208 08:31:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:32:37.208 Found net devices under 0000:86:00.0: cvl_0_0 00:32:37.208 08:31:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:37.208 08:31:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 
00:32:37.208 08:31:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:37.208 08:31:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:37.208 08:31:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:37.208 08:31:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:37.208 08:31:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:37.208 08:31:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:37.208 08:31:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:32:37.208 Found net devices under 0000:86:00.1: cvl_0_1 00:32:37.208 08:31:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:37.208 08:31:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:32:37.208 08:31:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # is_hw=yes 00:32:37.208 08:31:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:32:37.208 08:31:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:32:37.208 08:31:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:32:37.208 08:31:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:37.208 08:31:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:37.208 
08:31:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:37.208 08:31:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:37.208 08:31:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:37.208 08:31:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:37.208 08:31:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:37.208 08:31:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:37.208 08:31:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:37.208 08:31:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:37.208 08:31:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:37.208 08:31:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:37.208 08:31:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:37.208 08:31:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:37.208 08:31:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:37.208 08:31:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:37.208 08:31:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr 
add 10.0.0.2/24 dev cvl_0_0 00:32:37.208 08:31:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:37.208 08:31:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:37.208 08:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:37.208 08:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:37.208 08:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:37.208 08:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:37.208 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:37.208 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.364 ms 00:32:37.208 00:32:37.208 --- 10.0.0.2 ping statistics --- 00:32:37.208 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:37.208 rtt min/avg/max/mdev = 0.364/0.364/0.364/0.000 ms 00:32:37.208 08:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:37.208 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:37.208 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.218 ms 00:32:37.208 00:32:37.208 --- 10.0.0.1 ping statistics --- 00:32:37.208 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:37.208 rtt min/avg/max/mdev = 0.218/0.218/0.218/0.000 ms 00:32:37.208 08:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:37.208 08:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@450 -- # return 0 00:32:37.208 08:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:32:37.208 08:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:37.208 08:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:32:37.208 08:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:32:37.208 08:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:37.208 08:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:32:37.208 08:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:32:37.208 08:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:32:37.208 08:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:32:37.208 08:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:37.208 08:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:32:37.208 08:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
nvmf/common.sh@509 -- # nvmfpid=1594068 00:32:37.208 08:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 1594068 00:32:37.208 08:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 1594068 ']' 00:32:37.208 08:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:37.208 08:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:37.208 08:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:37.208 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:37.208 08:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:37.208 08:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:32:37.208 08:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x78 00:32:37.208 [2024-11-28 08:31:19.172449] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:32:37.208 [2024-11-28 08:31:19.173378] Starting SPDK v25.01-pre git sha1 27aaaa748 / DPDK 24.03.0 initialization... 
00:32:37.208 [2024-11-28 08:31:19.173412] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:37.208 [2024-11-28 08:31:19.242365] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:32:37.209 [2024-11-28 08:31:19.284914] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:37.209 [2024-11-28 08:31:19.284955] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:37.209 [2024-11-28 08:31:19.284962] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:37.209 [2024-11-28 08:31:19.284968] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:37.209 [2024-11-28 08:31:19.284974] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:37.209 [2024-11-28 08:31:19.286591] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:32:37.209 [2024-11-28 08:31:19.286701] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:32:37.209 [2024-11-28 08:31:19.286717] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:32:37.209 [2024-11-28 08:31:19.286722] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:32:37.209 [2024-11-28 08:31:19.354574] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:32:37.209 [2024-11-28 08:31:19.355156] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:32:37.209 [2024-11-28 08:31:19.355158] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
00:32:37.209 [2024-11-28 08:31:19.355347] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:32:37.209 [2024-11-28 08:31:19.355464] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:32:37.209 08:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:37.209 08:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:32:37.209 08:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:32:37.209 08:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:37.209 08:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:32:37.209 08:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:37.209 08:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:32:37.209 08:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:37.209 08:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:32:37.209 [2024-11-28 08:31:19.427543] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:37.209 08:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:37.209 08:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:32:37.209 08:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:32:37.209 08:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:32:37.209 Malloc0 00:32:37.209 08:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:37.209 08:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:32:37.209 08:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:37.209 08:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:32:37.468 08:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:37.468 08:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:32:37.468 08:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:37.468 08:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:32:37.468 08:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:37.468 08:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:37.468 08:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:37.468 08:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:32:37.468 [2024-11-28 08:31:19.491541] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 
00:32:37.468 08:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:37.468 08:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:32:37.468 08:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:32:37.468 08:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:32:37.468 08:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:32:37.468 08:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:32:37.468 08:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:32:37.468 { 00:32:37.468 "params": { 00:32:37.468 "name": "Nvme$subsystem", 00:32:37.468 "trtype": "$TEST_TRANSPORT", 00:32:37.468 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:37.468 "adrfam": "ipv4", 00:32:37.468 "trsvcid": "$NVMF_PORT", 00:32:37.468 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:37.468 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:37.468 "hdgst": ${hdgst:-false}, 00:32:37.468 "ddgst": ${ddgst:-false} 00:32:37.468 }, 00:32:37.468 "method": "bdev_nvme_attach_controller" 00:32:37.468 } 00:32:37.468 EOF 00:32:37.468 )") 00:32:37.468 08:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:32:37.468 08:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 
00:32:37.468 08:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:32:37.468 08:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:32:37.468 "params": { 00:32:37.468 "name": "Nvme1", 00:32:37.468 "trtype": "tcp", 00:32:37.468 "traddr": "10.0.0.2", 00:32:37.468 "adrfam": "ipv4", 00:32:37.468 "trsvcid": "4420", 00:32:37.468 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:32:37.468 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:32:37.468 "hdgst": false, 00:32:37.468 "ddgst": false 00:32:37.468 }, 00:32:37.468 "method": "bdev_nvme_attach_controller" 00:32:37.468 }' 00:32:37.468 [2024-11-28 08:31:19.539306] Starting SPDK v25.01-pre git sha1 27aaaa748 / DPDK 24.03.0 initialization... 00:32:37.468 [2024-11-28 08:31:19.539348] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1594092 ] 00:32:37.468 [2024-11-28 08:31:19.601529] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:32:37.468 [2024-11-28 08:31:19.646643] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:37.468 [2024-11-28 08:31:19.646739] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:32:37.468 [2024-11-28 08:31:19.646742] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:37.726 I/O targets: 00:32:37.726 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:32:37.726 00:32:37.726 00:32:37.726 CUnit - A unit testing framework for C - Version 2.1-3 00:32:37.726 http://cunit.sourceforge.net/ 00:32:37.727 00:32:37.727 00:32:37.727 Suite: bdevio tests on: Nvme1n1 00:32:37.985 Test: blockdev write read block ...passed 00:32:37.985 Test: blockdev write zeroes read block ...passed 00:32:37.985 Test: blockdev write zeroes read no split ...passed 00:32:37.985 Test: blockdev 
write zeroes read split ...passed 00:32:37.985 Test: blockdev write zeroes read split partial ...passed 00:32:37.985 Test: blockdev reset ...[2024-11-28 08:31:20.071066] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:32:37.985 [2024-11-28 08:31:20.071139] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x978350 (9): Bad file descriptor 00:32:37.985 [2024-11-28 08:31:20.205965] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 00:32:37.985 passed 00:32:37.985 Test: blockdev write read 8 blocks ...passed 00:32:37.985 Test: blockdev write read size > 128k ...passed 00:32:37.985 Test: blockdev write read invalid size ...passed 00:32:38.243 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:32:38.243 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:32:38.243 Test: blockdev write read max offset ...passed 00:32:38.243 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:32:38.243 Test: blockdev writev readv 8 blocks ...passed 00:32:38.243 Test: blockdev writev readv 30 x 1block ...passed 00:32:38.243 Test: blockdev writev readv block ...passed 00:32:38.243 Test: blockdev writev readv size > 128k ...passed 00:32:38.243 Test: blockdev writev readv size > 128k in two iovs ...passed 00:32:38.243 Test: blockdev comparev and writev ...[2024-11-28 08:31:20.457978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:32:38.243 [2024-11-28 08:31:20.458008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:38.243 [2024-11-28 08:31:20.458026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:32:38.243 
[2024-11-28 08:31:20.458034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:38.243 [2024-11-28 08:31:20.458347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:32:38.243 [2024-11-28 08:31:20.458358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:32:38.243 [2024-11-28 08:31:20.458370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:32:38.243 [2024-11-28 08:31:20.458377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:32:38.243 [2024-11-28 08:31:20.458663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:32:38.243 [2024-11-28 08:31:20.458674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:32:38.243 [2024-11-28 08:31:20.458686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:32:38.243 [2024-11-28 08:31:20.458693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:32:38.243 [2024-11-28 08:31:20.458987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:32:38.243 [2024-11-28 08:31:20.458999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:32:38.243 [2024-11-28 08:31:20.459010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:32:38.244 [2024-11-28 08:31:20.459018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:32:38.244 passed 00:32:38.502 Test: blockdev nvme passthru rw ...passed 00:32:38.502 Test: blockdev nvme passthru vendor specific ...[2024-11-28 08:31:20.541333] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:32:38.502 [2024-11-28 08:31:20.541349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:32:38.502 [2024-11-28 08:31:20.541471] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:32:38.502 [2024-11-28 08:31:20.541482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:32:38.502 [2024-11-28 08:31:20.541598] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:32:38.502 [2024-11-28 08:31:20.541608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:32:38.502 [2024-11-28 08:31:20.541723] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:32:38.502 [2024-11-28 08:31:20.541733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:32:38.502 passed 00:32:38.502 Test: blockdev nvme admin passthru ...passed 00:32:38.502 Test: blockdev copy ...passed 00:32:38.502 00:32:38.502 Run Summary: Type Total Ran Passed Failed Inactive 00:32:38.502 suites 1 1 n/a 0 0 00:32:38.502 tests 23 23 23 0 0 00:32:38.502 asserts 152 152 152 0 n/a 00:32:38.502 00:32:38.502 Elapsed time = 1.260 
seconds 00:32:38.502 08:31:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:32:38.503 08:31:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:38.503 08:31:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:32:38.503 08:31:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:38.503 08:31:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:32:38.503 08:31:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:32:38.503 08:31:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:32:38.503 08:31:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:32:38.503 08:31:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:38.503 08:31:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:32:38.503 08:31:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:38.503 08:31:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:38.503 rmmod nvme_tcp 00:32:38.503 rmmod nvme_fabrics 00:32:38.761 rmmod nvme_keyring 00:32:38.761 08:31:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:38.761 08:31:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:32:38.761 08:31:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:32:38.761 08:31:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio 
-- nvmf/common.sh@517 -- # '[' -n 1594068 ']' 00:32:38.761 08:31:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 1594068 00:32:38.761 08:31:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@954 -- # '[' -z 1594068 ']' 00:32:38.761 08:31:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 1594068 00:32:38.761 08:31:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 00:32:38.761 08:31:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:38.761 08:31:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1594068 00:32:38.761 08:31:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:32:38.761 08:31:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:32:38.761 08:31:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1594068' 00:32:38.761 killing process with pid 1594068 00:32:38.761 08:31:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 1594068 00:32:38.761 08:31:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 1594068 00:32:38.761 08:31:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:32:38.761 08:31:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:32:38.761 08:31:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:32:38.761 08:31:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
nvmf/common.sh@297 -- # iptr 00:32:39.020 08:31:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:32:39.020 08:31:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:32:39.020 08:31:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:32:39.020 08:31:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:39.020 08:31:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:39.020 08:31:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:39.020 08:31:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:39.020 08:31:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:40.924 08:31:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:40.924 00:32:40.924 real 0m9.117s 00:32:40.924 user 0m9.400s 00:32:40.924 sys 0m4.536s 00:32:40.924 08:31:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:40.924 08:31:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:32:40.924 ************************************ 00:32:40.924 END TEST nvmf_bdevio 00:32:40.924 ************************************ 00:32:40.924 08:31:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:32:40.924 00:32:40.924 real 4m24.503s 00:32:40.924 user 9m4.773s 00:32:40.924 sys 1m44.945s 00:32:40.924 08:31:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1130 
-- # xtrace_disable 00:32:40.924 08:31:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:32:40.924 ************************************ 00:32:40.924 END TEST nvmf_target_core_interrupt_mode 00:32:40.924 ************************************ 00:32:40.924 08:31:23 nvmf_tcp -- nvmf/nvmf.sh@21 -- # run_test nvmf_interrupt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:32:40.924 08:31:23 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:32:40.924 08:31:23 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:40.924 08:31:23 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:40.924 ************************************ 00:32:40.924 START TEST nvmf_interrupt 00:32:40.924 ************************************ 00:32:40.924 08:31:23 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:32:41.184 * Looking for test storage... 
00:32:41.184 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:32:41.184 08:31:23 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:32:41.184 08:31:23 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1693 -- # lcov --version 00:32:41.184 08:31:23 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:32:41.184 08:31:23 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:32:41.184 08:31:23 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:41.184 08:31:23 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:41.184 08:31:23 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:41.184 08:31:23 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # IFS=.-: 00:32:41.184 08:31:23 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # read -ra ver1 00:32:41.184 08:31:23 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # IFS=.-: 00:32:41.184 08:31:23 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # read -ra ver2 00:32:41.184 08:31:23 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@338 -- # local 'op=<' 00:32:41.184 08:31:23 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@340 -- # ver1_l=2 00:32:41.184 08:31:23 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@341 -- # ver2_l=1 00:32:41.184 08:31:23 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:41.184 08:31:23 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@344 -- # case "$op" in 00:32:41.184 08:31:23 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@345 -- # : 1 00:32:41.184 08:31:23 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:41.184 08:31:23 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:32:41.184 08:31:23 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # decimal 1 00:32:41.184 08:31:23 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=1 00:32:41.184 08:31:23 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:41.184 08:31:23 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 1 00:32:41.184 08:31:23 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # ver1[v]=1 00:32:41.184 08:31:23 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # decimal 2 00:32:41.184 08:31:23 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=2 00:32:41.184 08:31:23 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:41.184 08:31:23 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 2 00:32:41.184 08:31:23 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # ver2[v]=2 00:32:41.184 08:31:23 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:41.184 08:31:23 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:41.184 08:31:23 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # return 0 00:32:41.184 08:31:23 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:41.185 08:31:23 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:32:41.185 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:41.185 --rc genhtml_branch_coverage=1 00:32:41.185 --rc genhtml_function_coverage=1 00:32:41.185 --rc genhtml_legend=1 00:32:41.185 --rc geninfo_all_blocks=1 00:32:41.185 --rc geninfo_unexecuted_blocks=1 00:32:41.185 00:32:41.185 ' 00:32:41.185 08:31:23 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:32:41.185 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:41.185 --rc genhtml_branch_coverage=1 00:32:41.185 --rc 
genhtml_function_coverage=1 00:32:41.185 --rc genhtml_legend=1 00:32:41.185 --rc geninfo_all_blocks=1 00:32:41.185 --rc geninfo_unexecuted_blocks=1 00:32:41.185 00:32:41.185 ' 00:32:41.185 08:31:23 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:32:41.185 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:41.185 --rc genhtml_branch_coverage=1 00:32:41.185 --rc genhtml_function_coverage=1 00:32:41.185 --rc genhtml_legend=1 00:32:41.185 --rc geninfo_all_blocks=1 00:32:41.185 --rc geninfo_unexecuted_blocks=1 00:32:41.185 00:32:41.185 ' 00:32:41.185 08:31:23 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:32:41.185 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:41.185 --rc genhtml_branch_coverage=1 00:32:41.185 --rc genhtml_function_coverage=1 00:32:41.185 --rc genhtml_legend=1 00:32:41.185 --rc geninfo_all_blocks=1 00:32:41.185 --rc geninfo_unexecuted_blocks=1 00:32:41.185 00:32:41.185 ' 00:32:41.185 08:31:23 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:41.185 08:31:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # uname -s 00:32:41.185 08:31:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:41.185 08:31:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:41.185 08:31:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:41.185 08:31:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:41.185 08:31:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:41.185 08:31:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:41.185 08:31:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:41.185 08:31:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:41.185 
08:31:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:41.185 08:31:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:41.185 08:31:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:32:41.185 08:31:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:32:41.185 08:31:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:41.185 08:31:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:41.185 08:31:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:41.185 08:31:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:41.185 08:31:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:41.185 08:31:23 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@15 -- # shopt -s extglob 00:32:41.185 08:31:23 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:41.185 08:31:23 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:41.185 08:31:23 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:41.185 08:31:23 nvmf_tcp.nvmf_interrupt -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:41.185 
08:31:23 nvmf_tcp.nvmf_interrupt -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:41.185 08:31:23 nvmf_tcp.nvmf_interrupt -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:41.185 08:31:23 nvmf_tcp.nvmf_interrupt -- paths/export.sh@5 -- # export PATH 00:32:41.185 08:31:23 nvmf_tcp.nvmf_interrupt -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:41.185 08:31:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@51 -- # : 0 00:32:41.185 08:31:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:41.185 08:31:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:41.185 08:31:23 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:41.185 08:31:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:41.185 08:31:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:41.185 08:31:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:32:41.185 08:31:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:32:41.185 08:31:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:41.185 08:31:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:41.185 08:31:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:41.185 08:31:23 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/interrupt/common.sh 00:32:41.185 08:31:23 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@12 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:32:41.185 08:31:23 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@14 -- # nvmftestinit 00:32:41.185 08:31:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:32:41.185 08:31:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:41.185 08:31:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@476 -- # prepare_net_devs 00:32:41.185 08:31:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@438 -- # local -g is_hw=no 00:32:41.185 08:31:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@440 -- # remove_spdk_ns 00:32:41.185 08:31:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:41.185 08:31:23 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:32:41.185 08:31:23 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:41.185 08:31:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:32:41.185 
08:31:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:32:41.185 08:31:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@309 -- # xtrace_disable 00:32:41.185 08:31:23 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:32:47.753 08:31:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:47.753 08:31:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # pci_devs=() 00:32:47.753 08:31:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:47.753 08:31:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:47.753 08:31:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:47.753 08:31:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:47.753 08:31:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:47.753 08:31:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # net_devs=() 00:32:47.753 08:31:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:47.753 08:31:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # e810=() 00:32:47.753 08:31:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # local -ga e810 00:32:47.753 08:31:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # x722=() 00:32:47.753 08:31:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # local -ga x722 00:32:47.753 08:31:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # mlx=() 00:32:47.753 08:31:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # local -ga mlx 00:32:47.753 08:31:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:47.753 08:31:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:47.753 08:31:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:47.753 08:31:28 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:47.753 08:31:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:47.753 08:31:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:47.753 08:31:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:47.753 08:31:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:47.753 08:31:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:47.753 08:31:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:47.753 08:31:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:47.753 08:31:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:47.753 08:31:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:47.753 08:31:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:47.753 08:31:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:47.753 08:31:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:47.753 08:31:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:47.753 08:31:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:47.753 08:31:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:47.753 08:31:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:32:47.753 Found 0000:86:00.0 (0x8086 - 0x159b) 00:32:47.753 08:31:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:47.753 08:31:28 nvmf_tcp.nvmf_interrupt -- 
nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:47.753 08:31:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:47.753 08:31:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:47.753 08:31:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:47.753 08:31:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:47.753 08:31:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:32:47.753 Found 0000:86:00.1 (0x8086 - 0x159b) 00:32:47.753 08:31:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:47.753 08:31:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:47.753 08:31:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:47.753 08:31:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:47.753 08:31:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:47.753 08:31:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:47.753 08:31:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:47.753 08:31:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:47.753 08:31:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:47.753 08:31:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:47.753 08:31:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:47.753 08:31:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:47.753 08:31:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:47.753 08:31:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:47.753 08:31:28 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:47.753 08:31:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:32:47.753 Found net devices under 0000:86:00.0: cvl_0_0 00:32:47.753 08:31:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:47.753 08:31:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:47.753 08:31:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:47.753 08:31:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:47.753 08:31:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:47.753 08:31:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:47.753 08:31:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:47.753 08:31:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:47.753 08:31:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:32:47.753 Found net devices under 0000:86:00.1: cvl_0_1 00:32:47.753 08:31:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:47.753 08:31:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:32:47.753 08:31:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # is_hw=yes 00:32:47.753 08:31:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:32:47.753 08:31:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:32:47.753 08:31:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:32:47.753 08:31:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:47.753 08:31:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@251 -- # 
NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:47.753 08:31:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:47.753 08:31:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:47.753 08:31:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:47.753 08:31:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:47.753 08:31:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:47.753 08:31:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:47.753 08:31:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:47.753 08:31:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:47.753 08:31:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:47.753 08:31:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:47.753 08:31:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:47.753 08:31:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:47.753 08:31:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:47.753 08:31:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:47.753 08:31:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:47.754 08:31:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:47.754 08:31:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:47.754 08:31:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:47.754 08:31:29 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:47.754 08:31:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:47.754 08:31:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:47.754 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:47.754 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.383 ms 00:32:47.754 00:32:47.754 --- 10.0.0.2 ping statistics --- 00:32:47.754 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:47.754 rtt min/avg/max/mdev = 0.383/0.383/0.383/0.000 ms 00:32:47.754 08:31:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:47.754 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:32:47.754 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.205 ms 00:32:47.754 00:32:47.754 --- 10.0.0.1 ping statistics --- 00:32:47.754 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:47.754 rtt min/avg/max/mdev = 0.205/0.205/0.205/0.000 ms 00:32:47.754 08:31:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:47.754 08:31:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@450 -- # return 0 00:32:47.754 08:31:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:32:47.754 08:31:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:47.754 08:31:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:32:47.754 08:31:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:32:47.754 08:31:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:47.754 08:31:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:32:47.754 08:31:29 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:32:47.754 08:31:29 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@15 -- # nvmfappstart -m 0x3
00:32:47.754 08:31:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:32:47.754 08:31:29 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@726 -- # xtrace_disable
00:32:47.754 08:31:29 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x
00:32:47.754 08:31:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@509 -- # nvmfpid=1597847
00:32:47.754 08:31:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@510 -- # waitforlisten 1597847
00:32:47.754 08:31:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3
00:32:47.754 08:31:29 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@835 -- # '[' -z 1597847 ']'
00:32:47.754 08:31:29 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:32:47.754 08:31:29 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@840 -- # local max_retries=100
00:32:47.754 08:31:29 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:32:47.754 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:32:47.754 08:31:29 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@844 -- # xtrace_disable
00:32:47.754 08:31:29 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x
00:32:47.754 [2024-11-28 08:31:29.203156] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode.
00:32:47.754 [2024-11-28 08:31:29.204131] Starting SPDK v25.01-pre git sha1 27aaaa748 / DPDK 24.03.0 initialization...
00:32:47.754 [2024-11-28 08:31:29.204172] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:32:47.754 [2024-11-28 08:31:29.271241] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:32:47.754 [2024-11-28 08:31:29.313874] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:32:47.754 [2024-11-28 08:31:29.313907] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:32:47.754 [2024-11-28 08:31:29.313915] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:32:47.754 [2024-11-28 08:31:29.313921] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:32:47.754 [2024-11-28 08:31:29.313926] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:32:47.754 [2024-11-28 08:31:29.315072] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:32:47.754 [2024-11-28 08:31:29.315075] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:32:47.754 [2024-11-28 08:31:29.383897] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode.
00:32:47.754 [2024-11-28 08:31:29.384128] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode.
00:32:47.754 [2024-11-28 08:31:29.384191] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode.
00:32:47.754 08:31:29 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:32:47.754 08:31:29 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@868 -- # return 0
00:32:47.754 08:31:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:32:47.754 08:31:29 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@732 -- # xtrace_disable
00:32:47.754 08:31:29 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x
00:32:47.754 08:31:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:32:47.754 08:31:29 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@16 -- # setup_bdev_aio
00:32:47.754 08:31:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # uname -s
00:32:47.754 08:31:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # [[ Linux != \F\r\e\e\B\S\D ]]
00:32:47.754 08:31:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@78 -- # dd if=/dev/zero of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile bs=2048 count=5000
00:32:47.754 5000+0 records in
00:32:47.754 5000+0 records out
00:32:47.754 10240000 bytes (10 MB, 9.8 MiB) copied, 0.00757136 s, 1.4 GB/s
00:32:47.754 08:31:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@79 -- # rpc_cmd bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile AIO0 2048
00:32:47.754 08:31:29 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable
00:32:47.754 08:31:29 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x
00:32:47.754 AIO0
00:32:47.754 08:31:29 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:32:47.754 08:31:29 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -q 256
00:32:47.754 08:31:29 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable
00:32:47.754 08:31:29 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x
00:32:47.754 [2024-11-28 08:31:29.487804] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:32:47.754 08:31:29 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:32:47.754 08:31:29 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
00:32:47.754 08:31:29 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable
00:32:47.754 08:31:29 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x
00:32:47.754 08:31:29 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:32:47.754 08:31:29 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 AIO0
00:32:47.754 08:31:29 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable
00:32:47.754 08:31:29 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x
00:32:47.754 08:31:29 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:32:47.754 08:31:29 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:32:47.754 08:31:29 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable
00:32:47.754 08:31:29 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x
00:32:47.754 [2024-11-28 08:31:29.519727] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:32:47.754 08:31:29 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:32:47.754 08:31:29 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1}
00:32:47.754 08:31:29 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 1597847 0
00:32:47.754 08:31:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 1597847 0 idle
00:32:47.754 08:31:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1597847
00:32:47.754 08:31:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0
00:32:47.754 08:31:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle
00:32:47.754 08:31:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65
00:32:47.754 08:31:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30
00:32:47.754 08:31:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]]
00:32:47.754 08:31:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]]
00:32:47.754 08:31:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top
00:32:47.754 08:31:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 ))
00:32:47.754 08:31:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 ))
00:32:47.754 08:31:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1597847 -w 256
00:32:47.754 08:31:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0
00:32:47.754 08:31:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1597847 root 20 0 128.2g 46080 33792 S 0.0 0.0 0:00.23 reactor_0'
00:32:47.754 08:31:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1597847 root 20 0 128.2g 46080 33792 S 0.0 0.0 0:00.23 reactor_0
00:32:47.754 08:31:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g'
00:32:47.754 08:31:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}'
00:32:47.754 08:31:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0
00:32:47.754 08:31:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0
00:32:47.754 08:31:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]]
00:32:47.754 08:31:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]]
00:32:47.754 08:31:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold ))
00:32:47.754 08:31:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0
00:32:47.754 08:31:29 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1}
00:32:47.754 08:31:29 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 1597847 1
00:32:47.754 08:31:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 1597847 1 idle
00:32:47.755 08:31:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1597847
00:32:47.755 08:31:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1
00:32:47.755 08:31:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle
00:32:47.755 08:31:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65
00:32:47.755 08:31:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30
00:32:47.755 08:31:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]]
00:32:47.755 08:31:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]]
00:32:47.755 08:31:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top
00:32:47.755 08:31:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 ))
00:32:47.755 08:31:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 ))
00:32:47.755 08:31:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1597847 -w 256
00:32:47.755 08:31:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1
00:32:47.755 08:31:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1597856 root 20 0 128.2g 46080 33792 S 0.0 0.0 0:00.00 reactor_1'
00:32:47.755 08:31:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1597856 root 20 0 128.2g 46080 33792 S 0.0 0.0 0:00.00 reactor_1
00:32:47.755 08:31:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g'
00:32:47.755 08:31:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}'
00:32:47.755 08:31:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0
00:32:47.755 08:31:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0
00:32:47.755 08:31:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]]
00:32:47.755 08:31:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]]
00:32:47.755 08:31:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold ))
00:32:47.755 08:31:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0
00:32:47.755 08:31:29 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@28 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf
00:32:47.755 08:31:29 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@35 -- # perf_pid=1597919
00:32:47.755 08:31:29 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1}
00:32:47.755 08:31:29 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 256 -o 4096 -w randrw -M 30 -t 10 -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'
00:32:47.755 08:31:29 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30
00:32:47.755 08:31:29 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 1597847 0
00:32:47.755 08:31:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 1597847 0 busy
00:32:47.755 08:31:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1597847
00:32:47.755 08:31:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0
00:32:47.755 08:31:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy
00:32:47.755 08:31:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30
00:32:47.755 08:31:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30
00:32:47.755 08:31:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]]
00:32:47.755 08:31:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top
00:32:47.755 08:31:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 ))
00:32:47.755 08:31:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 ))
00:32:47.755 08:31:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0
00:32:47.755 08:31:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1597847 -w 256
00:32:48.014 08:31:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1597847 root 20 0 128.2g 46848 33792 R 99.9 0.0 0:00.38 reactor_0'
00:32:48.014 08:31:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1597847 root 20 0 128.2g 46848 33792 R 99.9 0.0 0:00.38 reactor_0
00:32:48.014 08:31:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g'
00:32:48.014 08:31:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}'
00:32:48.014 08:31:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=99.9
00:32:48.014 08:31:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=99
00:32:48.014 08:31:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]]
00:32:48.014 08:31:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold ))
00:32:48.014 08:31:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]]
00:32:48.014 08:31:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0
00:32:48.014 08:31:30 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1}
00:32:48.014 08:31:30 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30
00:32:48.014 08:31:30 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 1597847 1
00:32:48.014 08:31:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 1597847 1 busy
00:32:48.014 08:31:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1597847
00:32:48.014 08:31:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1
00:32:48.014 08:31:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy
00:32:48.014 08:31:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30
00:32:48.014 08:31:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30
00:32:48.014 08:31:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]]
00:32:48.014 08:31:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top
00:32:48.014 08:31:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 ))
00:32:48.014 08:31:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 ))
00:32:48.014 08:31:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1597847 -w 256
00:32:48.014 08:31:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1
00:32:48.014 08:31:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1597856 root 20 0 128.2g 46848 33792 R 99.9 0.0 0:00.25 reactor_1'
00:32:48.014 08:31:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1597856 root 20 0 128.2g 46848 33792 R 99.9 0.0 0:00.25 reactor_1
00:32:48.014 08:31:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g'
00:32:48.014 08:31:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}'
00:32:48.014 08:31:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=99.9
00:32:48.014 08:31:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=99
00:32:48.014 08:31:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]]
00:32:48.014 08:31:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold ))
00:32:48.014 08:31:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]]
00:32:48.014 08:31:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0
00:32:48.014 08:31:30 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@42 -- # wait 1597919
00:32:57.991 Initializing NVMe Controllers
00:32:57.991 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:32:57.991 Controller IO queue size 256, less than required.
00:32:57.991 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:32:57.991 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:32:57.991 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:32:57.991 Initialization complete. Launching workers.
00:32:57.991 ========================================================
00:32:57.991 Latency(us)
00:32:57.992 Device Information : IOPS MiB/s Average min max
00:32:57.992 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 16082.40 62.82 15926.54 2762.76 19875.20
00:32:57.992 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 15913.50 62.16 16095.79 4220.45 19558.77
00:32:57.992 ========================================================
00:32:57.992 Total : 31995.89 124.98 16010.72 2762.76 19875.20
00:32:57.992
00:32:57.992 08:31:40 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1}
00:32:57.992 08:31:40 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 1597847 0
00:32:57.992 08:31:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 1597847 0 idle
00:32:57.992 08:31:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1597847
00:32:57.992 08:31:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0
00:32:57.992 08:31:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle
00:32:57.992 08:31:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65
00:32:57.992 08:31:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30
00:32:57.992 08:31:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]]
00:32:57.992 08:31:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]]
00:32:57.992 08:31:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top
00:32:57.992 08:31:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 ))
00:32:57.992 08:31:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 ))
00:32:57.992 08:31:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0
00:32:57.992 08:31:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1597847 -w 256
00:32:57.992 08:31:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1597847 root 20 0 128.2g 46848 33792 S 0.0 0.0 0:20.21 reactor_0'
00:32:57.992 08:31:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1597847 root 20 0 128.2g 46848 33792 S 0.0 0.0 0:20.21 reactor_0
00:32:57.992 08:31:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g'
00:32:57.992 08:31:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}'
00:32:57.992 08:31:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0
00:32:57.992 08:31:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0
00:32:57.992 08:31:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]]
00:32:57.992 08:31:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]]
00:32:57.992 08:31:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold ))
00:32:57.992 08:31:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0
00:32:57.992 08:31:40 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1}
00:32:57.992 08:31:40 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 1597847 1
00:32:57.992 08:31:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 1597847 1 idle
00:32:57.992 08:31:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1597847
00:32:57.992 08:31:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1
00:32:57.992 08:31:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle
00:32:57.992 08:31:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65
00:32:57.992 08:31:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30
00:32:57.992 08:31:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]]
00:32:57.992 08:31:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]]
00:32:57.992 08:31:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top
00:32:57.992 08:31:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 ))
00:32:57.992 08:31:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 ))
00:32:57.992 08:31:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1597847 -w 256
00:32:57.992 08:31:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1
00:32:58.252 08:31:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1597856 root 20 0 128.2g 46848 33792 S 0.0 0.0 0:10.00 reactor_1'
00:32:58.252 08:31:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1597856 root 20 0 128.2g 46848 33792 S 0.0 0.0 0:10.00 reactor_1
00:32:58.252 08:31:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g'
00:32:58.252 08:31:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}'
00:32:58.252 08:31:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0
00:32:58.252 08:31:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0
00:32:58.252 08:31:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]]
00:32:58.252 08:31:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]]
00:32:58.252 08:31:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold ))
00:32:58.252 08:31:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0
00:32:58.252 08:31:40 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@50 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
00:32:58.511 08:31:40 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@51 -- # waitforserial SPDKISFASTANDAWESOME
00:32:58.511 08:31:40 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1202 -- # local i=0
00:32:58.511 08:31:40 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0
00:32:58.511 08:31:40 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1204 -- # [[ -n '' ]]
00:32:58.511 08:31:40 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1209 -- # sleep 2
00:33:01.046 08:31:42 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1210 -- # (( i++ <= 15 ))
00:33:01.046 08:31:42 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL
00:33:01.046 08:31:42 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME
00:33:01.046 08:31:42 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # nvme_devices=1
00:33:01.046 08:31:42 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter ))
00:33:01.046 08:31:42 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1212 -- # return 0
00:33:01.046 08:31:42 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for i in {0..1}
00:33:01.046 08:31:42 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 1597847 0
00:33:01.046 08:31:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 1597847 0 idle
00:33:01.046 08:31:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1597847
00:33:01.046 08:31:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0
00:33:01.046 08:31:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle
00:33:01.046 08:31:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65
00:33:01.046 08:31:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30
00:33:01.046 08:31:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]]
00:33:01.046 08:31:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]]
00:33:01.046 08:31:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top
00:33:01.046 08:31:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 ))
00:33:01.046 08:31:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 ))
00:33:01.046 08:31:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1597847 -w 256
00:33:01.046 08:31:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0
00:33:01.046 08:31:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1597847 root 20 0 128.2g 72960 33792 S 0.0 0.0 0:20.42 reactor_0'
00:33:01.046 08:31:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1597847 root 20 0 128.2g 72960 33792 S 0.0 0.0 0:20.42 reactor_0
00:33:01.046 08:31:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g'
00:33:01.046 08:31:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}'
00:33:01.046 08:31:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0
00:33:01.046 08:31:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0
00:33:01.046 08:31:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]]
00:33:01.046 08:31:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]]
00:33:01.046 08:31:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold ))
00:33:01.046 08:31:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0
00:33:01.046 08:31:42 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for i in {0..1}
00:33:01.046 08:31:42 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 1597847 1
00:33:01.046 08:31:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 1597847 1 idle
00:33:01.046 08:31:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1597847
00:33:01.046 08:31:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1
00:33:01.046 08:31:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle
00:33:01.046 08:31:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65
00:33:01.046 08:31:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30
00:33:01.046 08:31:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]]
00:33:01.046 08:31:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]]
00:33:01.046 08:31:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top
00:33:01.046 08:31:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 ))
00:33:01.046 08:31:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 ))
00:33:01.046 08:31:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1597847 -w 256
00:33:01.046 08:31:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1
00:33:01.046 08:31:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1597856 root 20 0 128.2g 72960 33792 S 0.0 0.0 0:10.07 reactor_1'
00:33:01.046 08:31:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1597856 root 20 0 128.2g 72960 33792 S 0.0 0.0 0:10.07 reactor_1
00:33:01.046 08:31:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g'
00:33:01.046 08:31:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}'
00:33:01.046 08:31:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0
00:33:01.046 08:31:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0
00:33:01.046 08:31:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]]
00:33:01.046 08:31:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]]
00:33:01.046 08:31:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold ))
00:33:01.046 08:31:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0
00:33:01.046 08:31:43 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@55 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:33:01.046 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:33:01.046 08:31:43 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@56 -- # waitforserial_disconnect SPDKISFASTANDAWESOME
00:33:01.046 08:31:43 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1223 -- # local i=0
00:33:01.046 08:31:43 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL
00:33:01.046 08:31:43 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME
00:33:01.046 08:31:43 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL
00:33:01.046 08:31:43 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME
00:33:01.305 08:31:43 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1235 -- # return 0
00:33:01.305 08:31:43 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@58 -- # trap - SIGINT SIGTERM EXIT
00:33:01.305 08:31:43 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@59 -- # nvmftestfini
00:33:01.305 08:31:43 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@516 -- # nvmfcleanup
00:33:01.305 08:31:43 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@121 -- # sync
00:33:01.305 08:31:43 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:33:01.305 08:31:43 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@124 -- # set +e
00:33:01.305 08:31:43 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@125 -- # for i in {1..20}
00:33:01.305 08:31:43 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:33:01.305 rmmod nvme_tcp
00:33:01.305 rmmod nvme_fabrics
00:33:01.305 rmmod nvme_keyring
00:33:01.305 08:31:43 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:33:01.305 08:31:43 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@128 -- # set -e
00:33:01.305 08:31:43 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@129 -- # return 0
00:33:01.305 08:31:43 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@517 -- # '[' -n 1597847 ']'
00:33:01.305 08:31:43 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@518 -- # killprocess 1597847
00:33:01.305 08:31:43 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@954 -- # '[' -z 1597847 ']'
00:33:01.305 08:31:43 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@958 -- # kill -0 1597847
00:33:01.305 08:31:43 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@959 -- # uname
00:33:01.305 08:31:43 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:33:01.305 08:31:43 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1597847
00:33:01.305 08:31:43 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:33:01.305 08:31:43 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:33:01.305 08:31:43 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1597847'
00:33:01.305 killing process with pid 1597847
00:33:01.305 08:31:43 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@973 -- # kill 1597847
00:33:01.305 08:31:43 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@978 -- # wait 1597847
00:33:01.564 08:31:43 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:33:01.564 08:31:43 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:33:01.564 08:31:43 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:33:01.564 08:31:43 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@297 -- # iptr
00:33:01.564 08:31:43 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:33:01.564 08:31:43 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # iptables-save
00:33:01.564 08:31:43 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # iptables-restore
00:33:01.564 08:31:43 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:33:01.564 08:31:43 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@302 -- # remove_spdk_ns
00:33:01.564 08:31:43 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:33:01.564 08:31:43 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:33:01.564 08:31:43 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:33:03.469 08:31:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:33:03.469
00:33:03.469 real 0m22.515s
00:33:03.469 user 0m39.548s
00:33:03.469 sys 0m8.212s
00:33:03.469 08:31:45 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1130 -- # xtrace_disable
00:33:03.469 08:31:45 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x
00:33:03.469 ************************************
00:33:03.469 END TEST nvmf_interrupt
00:33:03.469 ************************************
00:33:03.728
00:33:03.728 real 26m40.995s
00:33:03.728 user 56m5.651s
00:33:03.728 sys 8m54.768s
08:31:45 nvmf_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable
00:33:03.728 08:31:45 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:33:03.728 ************************************
00:33:03.728 END TEST nvmf_tcp
00:33:03.728 ************************************
00:33:03.728 08:31:45 -- spdk/autotest.sh@285 -- # [[ 0 -eq 0 ]]
00:33:03.728 08:31:45 -- spdk/autotest.sh@286 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp
00:33:03.728 08:31:45 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:33:03.728 08:31:45 -- common/autotest_common.sh@1111 -- # xtrace_disable
00:33:03.728 08:31:45 -- common/autotest_common.sh@10 -- # set +x
00:33:03.728 ************************************
00:33:03.728 START TEST spdkcli_nvmf_tcp
00:33:03.728 ************************************
00:33:03.728 08:31:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp
00:33:03.728 * Looking for test storage...
00:33:03.728 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli
00:33:03.728 08:31:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:33:03.728 08:31:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@1693 -- # lcov --version
00:33:03.728 08:31:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:33:03.728 08:31:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:33:03.728 08:31:45 spdkcli_nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:33:03.728 08:31:45 spdkcli_nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l
00:33:03.728 08:31:45 spdkcli_nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l
00:33:03.728 08:31:45 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-:
00:33:03.728 08:31:45 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1
00:33:03.728 08:31:45 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-:
00:33:03.728 08:31:45 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2
00:33:03.728 08:31:45 spdkcli_nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<'
00:33:03.728 08:31:45 spdkcli_nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2
00:33:03.728 08:31:45 spdkcli_nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1
00:33:03.728 08:31:45 spdkcli_nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:33:03.728 08:31:45 spdkcli_nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in
00:33:03.728 08:31:45 spdkcli_nvmf_tcp -- scripts/common.sh@345 -- # : 1
00:33:03.728 08:31:45 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 ))
00:33:03.728 08:31:45 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:33:03.728 08:31:45 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # decimal 1
00:33:03.728 08:31:45 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=1
00:33:03.728 08:31:45 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:33:03.728 08:31:45 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 1
00:33:03.728 08:31:45 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1
00:33:03.728 08:31:45 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # decimal 2
00:33:03.728 08:31:45 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=2
00:33:03.728 08:31:45 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:33:03.728 08:31:45 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 2
00:33:03.728 08:31:45 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2
00:33:03.728 08:31:45 spdkcli_nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:33:03.728 08:31:45 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:33:03.728 08:31:45 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # return 0
00:33:03.728 08:31:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:33:03.728 08:31:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:33:03.728 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:33:03.728 --rc genhtml_branch_coverage=1
00:33:03.728 --rc genhtml_function_coverage=1
00:33:03.728 --rc genhtml_legend=1
00:33:03.728 --rc geninfo_all_blocks=1
00:33:03.728 --rc geninfo_unexecuted_blocks=1
00:33:03.728
00:33:03.728 '
00:33:03.728 08:31:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:33:03.728 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:33:03.728 --rc genhtml_branch_coverage=1
00:33:03.728 --rc geninfo_unexecuted_blocks=1 00:33:03.728 00:33:03.728 ' 00:33:03.728 08:31:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:33:03.728 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:03.728 --rc genhtml_branch_coverage=1 00:33:03.728 --rc genhtml_function_coverage=1 00:33:03.728 --rc genhtml_legend=1 00:33:03.728 --rc geninfo_all_blocks=1 00:33:03.728 --rc geninfo_unexecuted_blocks=1 00:33:03.728 00:33:03.728 ' 00:33:03.728 08:31:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:33:03.728 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:03.728 --rc genhtml_branch_coverage=1 00:33:03.728 --rc genhtml_function_coverage=1 00:33:03.728 --rc genhtml_legend=1 00:33:03.728 --rc geninfo_all_blocks=1 00:33:03.728 --rc geninfo_unexecuted_blocks=1 00:33:03.728 00:33:03.728 ' 00:33:03.728 08:31:45 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:33:03.728 08:31:45 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:33:03.728 08:31:45 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:33:03.729 08:31:45 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:03.729 08:31:45 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:33:03.729 08:31:45 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:03.729 08:31:45 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:03.729 08:31:45 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:03.729 08:31:45 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:03.729 08:31:45 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 
00:33:03.729 08:31:45 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:03.729 08:31:45 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:03.729 08:31:45 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:03.729 08:31:45 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:03.729 08:31:45 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:03.989 08:31:45 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:33:03.989 08:31:45 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:33:03.989 08:31:45 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:03.989 08:31:45 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:03.989 08:31:46 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:03.989 08:31:46 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:03.989 08:31:46 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:03.989 08:31:46 spdkcli_nvmf_tcp -- scripts/common.sh@15 -- # shopt -s extglob 00:33:03.989 08:31:46 spdkcli_nvmf_tcp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:03.989 08:31:46 spdkcli_nvmf_tcp -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:03.989 08:31:46 spdkcli_nvmf_tcp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:03.989 08:31:46 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:03.989 08:31:46 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:03.989 08:31:46 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:03.989 08:31:46 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:33:03.989 08:31:46 spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:03.989 08:31:46 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # : 0 00:33:03.989 08:31:46 spdkcli_nvmf_tcp -- nvmf/common.sh@52 -- # export 
NVMF_APP_SHM_ID 00:33:03.989 08:31:46 spdkcli_nvmf_tcp -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:03.989 08:31:46 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:03.989 08:31:46 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:03.989 08:31:46 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:03.989 08:31:46 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:33:03.989 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:33:03.989 08:31:46 spdkcli_nvmf_tcp -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:03.989 08:31:46 spdkcli_nvmf_tcp -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:03.989 08:31:46 spdkcli_nvmf_tcp -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:03.989 08:31:46 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:33:03.989 08:31:46 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:33:03.989 08:31:46 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:33:03.989 08:31:46 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:33:03.989 08:31:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:03.989 08:31:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:03.989 08:31:46 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:33:03.989 08:31:46 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=1600763 00:33:03.989 08:31:46 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 1600763 00:33:03.989 08:31:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@835 -- # '[' -z 1600763 ']' 00:33:03.989 08:31:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:03.989 08:31:46 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:33:03.989 
08:31:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:03.989 08:31:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:03.989 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:03.989 08:31:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:03.989 08:31:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:03.989 [2024-11-28 08:31:46.068448] Starting SPDK v25.01-pre git sha1 27aaaa748 / DPDK 24.03.0 initialization... 00:33:03.989 [2024-11-28 08:31:46.068500] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1600763 ] 00:33:03.989 [2024-11-28 08:31:46.129471] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:33:03.989 [2024-11-28 08:31:46.173209] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:03.989 [2024-11-28 08:31:46.173212] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:04.250 08:31:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:04.250 08:31:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@868 -- # return 0 00:33:04.250 08:31:46 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:33:04.250 08:31:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:04.250 08:31:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:04.250 08:31:46 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:33:04.250 08:31:46 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:33:04.250 08:31:46 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 
00:33:04.250 08:31:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:04.250 08:31:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:04.250 08:31:46 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:33:04.250 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:33:04.250 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:33:04.250 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:33:04.250 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:33:04.250 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:33:04.250 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:33:04.250 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:33:04.250 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:33:04.250 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:33:04.250 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:33:04.250 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:33:04.250 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:33:04.250 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:33:04.250 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 
00:33:04.250 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:33:04.250 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:33:04.250 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:33:04.250 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:33:04.250 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:33:04.250 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:33:04.250 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:33:04.250 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:33:04.250 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:33:04.250 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:33:04.250 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:33:04.250 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:33:04.250 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:33:04.250 ' 00:33:06.788 [2024-11-28 08:31:48.782537] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:08.166 [2024-11-28 08:31:50.002654] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:33:10.072 [2024-11-28 08:31:52.249614] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening 
on 127.0.0.1 port 4261 *** 00:33:11.979 [2024-11-28 08:31:54.179582] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:33:13.883 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:33:13.883 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:33:13.883 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:33:13.883 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:33:13.883 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:33:13.883 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:33:13.883 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:33:13.883 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:33:13.883 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:33:13.883 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:33:13.883 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:33:13.883 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:33:13.883 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:33:13.883 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:33:13.883 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 
'nqn.2014-08.org.spdk:cnode2', True] 00:33:13.883 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:33:13.883 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:33:13.883 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:33:13.883 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:33:13.883 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:33:13.883 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:33:13.883 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:33:13.883 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:33:13.884 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:33:13.884 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:33:13.884 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:33:13.884 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:33:13.884 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:33:13.884 08:31:55 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:33:13.884 08:31:55 spdkcli_nvmf_tcp -- 
common/autotest_common.sh@732 -- # xtrace_disable 00:33:13.884 08:31:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:13.884 08:31:55 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:33:13.884 08:31:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:13.884 08:31:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:13.884 08:31:55 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:33:13.884 08:31:55 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:33:14.143 08:31:56 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:33:14.143 08:31:56 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:33:14.143 08:31:56 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:33:14.143 08:31:56 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:14.143 08:31:56 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:14.143 08:31:56 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:33:14.143 08:31:56 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:14.143 08:31:56 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:14.143 08:31:56 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:33:14.143 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:33:14.143 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts 
delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:33:14.143 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:33:14.143 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:33:14.143 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:33:14.143 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:33:14.143 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:33:14.143 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:33:14.143 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:33:14.144 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:33:14.144 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:33:14.144 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:33:14.144 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:33:14.144 ' 00:33:19.421 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:33:19.421 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:33:19.421 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:33:19.421 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:33:19.421 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:33:19.421 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:33:19.421 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:33:19.421 
Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:33:19.421 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:33:19.421 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:33:19.421 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:33:19.421 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:33:19.421 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:33:19.421 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:33:19.421 08:32:01 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:33:19.421 08:32:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:19.421 08:32:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:19.421 08:32:01 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 1600763 00:33:19.421 08:32:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # '[' -z 1600763 ']' 00:33:19.421 08:32:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # kill -0 1600763 00:33:19.421 08:32:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@959 -- # uname 00:33:19.421 08:32:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:19.421 08:32:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1600763 00:33:19.421 08:32:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:33:19.421 08:32:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:33:19.421 08:32:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1600763' 00:33:19.421 killing process with pid 1600763 00:33:19.421 08:32:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@973 -- # kill 1600763 00:33:19.421 08:32:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@978 -- # wait 1600763 00:33:19.421 08:32:01 
spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:33:19.421 08:32:01 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:33:19.421 08:32:01 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 1600763 ']' 00:33:19.421 08:32:01 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 1600763 00:33:19.421 08:32:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # '[' -z 1600763 ']' 00:33:19.421 08:32:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # kill -0 1600763 00:33:19.421 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (1600763) - No such process 00:33:19.421 08:32:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@981 -- # echo 'Process with pid 1600763 is not found' 00:33:19.421 Process with pid 1600763 is not found 00:33:19.421 08:32:01 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:33:19.421 08:32:01 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:33:19.421 08:32:01 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:33:19.421 00:33:19.421 real 0m15.850s 00:33:19.421 user 0m33.027s 00:33:19.421 sys 0m0.648s 00:33:19.421 08:32:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:19.421 08:32:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:19.421 ************************************ 00:33:19.421 END TEST spdkcli_nvmf_tcp 00:33:19.421 ************************************ 00:33:19.681 08:32:01 -- spdk/autotest.sh@287 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:33:19.681 08:32:01 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:33:19.681 08:32:01 -- common/autotest_common.sh@1111 -- # xtrace_disable 
00:33:19.681 08:32:01 -- common/autotest_common.sh@10 -- # set +x 00:33:19.682 ************************************ 00:33:19.682 START TEST nvmf_identify_passthru 00:33:19.682 ************************************ 00:33:19.682 08:32:01 nvmf_identify_passthru -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:33:19.682 * Looking for test storage... 00:33:19.682 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:33:19.682 08:32:01 nvmf_identify_passthru -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:33:19.682 08:32:01 nvmf_identify_passthru -- common/autotest_common.sh@1693 -- # lcov --version 00:33:19.682 08:32:01 nvmf_identify_passthru -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:33:19.682 08:32:01 nvmf_identify_passthru -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:33:19.682 08:32:01 nvmf_identify_passthru -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:19.682 08:32:01 nvmf_identify_passthru -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:19.682 08:32:01 nvmf_identify_passthru -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:19.682 08:32:01 nvmf_identify_passthru -- scripts/common.sh@336 -- # IFS=.-: 00:33:19.682 08:32:01 nvmf_identify_passthru -- scripts/common.sh@336 -- # read -ra ver1 00:33:19.682 08:32:01 nvmf_identify_passthru -- scripts/common.sh@337 -- # IFS=.-: 00:33:19.682 08:32:01 nvmf_identify_passthru -- scripts/common.sh@337 -- # read -ra ver2 00:33:19.682 08:32:01 nvmf_identify_passthru -- scripts/common.sh@338 -- # local 'op=<' 00:33:19.682 08:32:01 nvmf_identify_passthru -- scripts/common.sh@340 -- # ver1_l=2 00:33:19.682 08:32:01 nvmf_identify_passthru -- scripts/common.sh@341 -- # ver2_l=1 00:33:19.682 08:32:01 nvmf_identify_passthru -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:19.682 08:32:01 nvmf_identify_passthru -- scripts/common.sh@344 -- # 
case "$op" in 00:33:19.682 08:32:01 nvmf_identify_passthru -- scripts/common.sh@345 -- # : 1 00:33:19.682 08:32:01 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:19.682 08:32:01 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:33:19.682 08:32:01 nvmf_identify_passthru -- scripts/common.sh@365 -- # decimal 1 00:33:19.682 08:32:01 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=1 00:33:19.682 08:32:01 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:19.682 08:32:01 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 1 00:33:19.682 08:32:01 nvmf_identify_passthru -- scripts/common.sh@365 -- # ver1[v]=1 00:33:19.682 08:32:01 nvmf_identify_passthru -- scripts/common.sh@366 -- # decimal 2 00:33:19.682 08:32:01 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=2 00:33:19.682 08:32:01 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:19.682 08:32:01 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 2 00:33:19.682 08:32:01 nvmf_identify_passthru -- scripts/common.sh@366 -- # ver2[v]=2 00:33:19.682 08:32:01 nvmf_identify_passthru -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:19.682 08:32:01 nvmf_identify_passthru -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:19.682 08:32:01 nvmf_identify_passthru -- scripts/common.sh@368 -- # return 0 00:33:19.682 08:32:01 nvmf_identify_passthru -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:19.682 08:32:01 nvmf_identify_passthru -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:33:19.682 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:19.682 --rc genhtml_branch_coverage=1 00:33:19.682 --rc genhtml_function_coverage=1 00:33:19.682 --rc genhtml_legend=1 00:33:19.682 --rc geninfo_all_blocks=1 00:33:19.682 --rc geninfo_unexecuted_blocks=1 00:33:19.682 
00:33:19.682 ' 00:33:19.682 08:32:01 nvmf_identify_passthru -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:33:19.682 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:19.682 --rc genhtml_branch_coverage=1 00:33:19.682 --rc genhtml_function_coverage=1 00:33:19.682 --rc genhtml_legend=1 00:33:19.682 --rc geninfo_all_blocks=1 00:33:19.682 --rc geninfo_unexecuted_blocks=1 00:33:19.682 00:33:19.682 ' 00:33:19.682 08:32:01 nvmf_identify_passthru -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:33:19.682 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:19.682 --rc genhtml_branch_coverage=1 00:33:19.682 --rc genhtml_function_coverage=1 00:33:19.682 --rc genhtml_legend=1 00:33:19.682 --rc geninfo_all_blocks=1 00:33:19.682 --rc geninfo_unexecuted_blocks=1 00:33:19.682 00:33:19.682 ' 00:33:19.682 08:32:01 nvmf_identify_passthru -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:33:19.682 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:19.682 --rc genhtml_branch_coverage=1 00:33:19.682 --rc genhtml_function_coverage=1 00:33:19.682 --rc genhtml_legend=1 00:33:19.682 --rc geninfo_all_blocks=1 00:33:19.682 --rc geninfo_unexecuted_blocks=1 00:33:19.682 00:33:19.682 ' 00:33:19.682 08:32:01 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:19.682 08:32:01 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:33:19.682 08:32:01 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:19.682 08:32:01 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:19.682 08:32:01 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:19.682 08:32:01 nvmf_identify_passthru -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:19.682 08:32:01 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:19.682 08:32:01 nvmf_identify_passthru -- 
nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:19.682 08:32:01 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:19.682 08:32:01 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:19.682 08:32:01 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:19.682 08:32:01 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:19.682 08:32:01 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:33:19.682 08:32:01 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:33:19.682 08:32:01 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:19.682 08:32:01 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:19.682 08:32:01 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:19.682 08:32:01 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:19.682 08:32:01 nvmf_identify_passthru -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:19.682 08:32:01 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:33:19.682 08:32:01 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:19.682 08:32:01 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:19.682 08:32:01 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:19.682 08:32:01 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:19.682 08:32:01 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:19.682 08:32:01 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:19.682 08:32:01 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:33:19.682 08:32:01 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:19.682 08:32:01 nvmf_identify_passthru -- nvmf/common.sh@51 -- # : 0 00:33:19.682 08:32:01 
nvmf_identify_passthru -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:19.682 08:32:01 nvmf_identify_passthru -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:19.682 08:32:01 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:19.682 08:32:01 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:19.682 08:32:01 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:19.682 08:32:01 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:33:19.682 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:33:19.682 08:32:01 nvmf_identify_passthru -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:19.682 08:32:01 nvmf_identify_passthru -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:19.682 08:32:01 nvmf_identify_passthru -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:19.682 08:32:01 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:19.682 08:32:01 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:33:19.682 08:32:01 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:19.682 08:32:01 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:19.682 08:32:01 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:19.682 08:32:01 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:19.683 08:32:01 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:19.683 08:32:01 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:19.683 08:32:01 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:33:19.683 08:32:01 nvmf_identify_passthru -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:19.683 08:32:01 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:33:19.683 08:32:01 nvmf_identify_passthru -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:33:19.683 08:32:01 nvmf_identify_passthru -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:19.683 08:32:01 nvmf_identify_passthru -- nvmf/common.sh@476 -- # prepare_net_devs 00:33:19.683 08:32:01 nvmf_identify_passthru -- nvmf/common.sh@438 -- # local -g is_hw=no 00:33:19.683 08:32:01 nvmf_identify_passthru -- nvmf/common.sh@440 -- # remove_spdk_ns 00:33:19.683 08:32:01 nvmf_identify_passthru -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:19.683 08:32:01 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:33:19.683 08:32:01 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:19.683 08:32:01 nvmf_identify_passthru -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:33:19.683 08:32:01 nvmf_identify_passthru -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:33:19.683 08:32:01 nvmf_identify_passthru -- nvmf/common.sh@309 -- # xtrace_disable 00:33:19.683 08:32:01 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:24.962 08:32:07 nvmf_identify_passthru -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:24.962 08:32:07 nvmf_identify_passthru -- nvmf/common.sh@315 -- # pci_devs=() 00:33:24.962 08:32:07 nvmf_identify_passthru -- nvmf/common.sh@315 
-- # local -a pci_devs 00:33:24.962 08:32:07 nvmf_identify_passthru -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:24.962 08:32:07 nvmf_identify_passthru -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:24.962 08:32:07 nvmf_identify_passthru -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:24.962 08:32:07 nvmf_identify_passthru -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:24.962 08:32:07 nvmf_identify_passthru -- nvmf/common.sh@319 -- # net_devs=() 00:33:24.962 08:32:07 nvmf_identify_passthru -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:24.962 08:32:07 nvmf_identify_passthru -- nvmf/common.sh@320 -- # e810=() 00:33:24.962 08:32:07 nvmf_identify_passthru -- nvmf/common.sh@320 -- # local -ga e810 00:33:24.962 08:32:07 nvmf_identify_passthru -- nvmf/common.sh@321 -- # x722=() 00:33:24.962 08:32:07 nvmf_identify_passthru -- nvmf/common.sh@321 -- # local -ga x722 00:33:24.962 08:32:07 nvmf_identify_passthru -- nvmf/common.sh@322 -- # mlx=() 00:33:24.962 08:32:07 nvmf_identify_passthru -- nvmf/common.sh@322 -- # local -ga mlx 00:33:24.962 08:32:07 nvmf_identify_passthru -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:24.962 08:32:07 nvmf_identify_passthru -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:24.962 08:32:07 nvmf_identify_passthru -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:24.962 08:32:07 nvmf_identify_passthru -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:24.962 08:32:07 nvmf_identify_passthru -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:24.962 08:32:07 nvmf_identify_passthru -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:24.962 08:32:07 nvmf_identify_passthru -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:24.962 08:32:07 nvmf_identify_passthru -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:24.962 
08:32:07 nvmf_identify_passthru -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:24.962 08:32:07 nvmf_identify_passthru -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:24.962 08:32:07 nvmf_identify_passthru -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:24.962 08:32:07 nvmf_identify_passthru -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:24.962 08:32:07 nvmf_identify_passthru -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:24.962 08:32:07 nvmf_identify_passthru -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:24.962 08:32:07 nvmf_identify_passthru -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:24.962 08:32:07 nvmf_identify_passthru -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:24.962 08:32:07 nvmf_identify_passthru -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:24.962 08:32:07 nvmf_identify_passthru -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:33:24.962 08:32:07 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:24.962 08:32:07 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:33:24.962 Found 0000:86:00.0 (0x8086 - 0x159b) 00:33:24.962 08:32:07 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:24.962 08:32:07 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:24.962 08:32:07 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:24.962 08:32:07 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:24.962 08:32:07 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:24.962 08:32:07 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:24.962 08:32:07 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:33:24.962 Found 0000:86:00.1 
(0x8086 - 0x159b) 00:33:24.962 08:32:07 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:24.962 08:32:07 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:24.962 08:32:07 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:24.962 08:32:07 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:24.962 08:32:07 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:24.962 08:32:07 nvmf_identify_passthru -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:24.962 08:32:07 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:33:24.962 08:32:07 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:24.962 08:32:07 nvmf_identify_passthru -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:24.962 08:32:07 nvmf_identify_passthru -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:24.962 08:32:07 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:24.962 08:32:07 nvmf_identify_passthru -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:24.962 08:32:07 nvmf_identify_passthru -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:24.962 08:32:07 nvmf_identify_passthru -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:24.962 08:32:07 nvmf_identify_passthru -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:24.962 08:32:07 nvmf_identify_passthru -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:33:24.962 Found net devices under 0000:86:00.0: cvl_0_0 00:33:24.962 08:32:07 nvmf_identify_passthru -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:24.962 08:32:07 nvmf_identify_passthru -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:24.962 08:32:07 nvmf_identify_passthru -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:24.962 08:32:07 
nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:24.963 08:32:07 nvmf_identify_passthru -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:24.963 08:32:07 nvmf_identify_passthru -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:24.963 08:32:07 nvmf_identify_passthru -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:24.963 08:32:07 nvmf_identify_passthru -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:24.963 08:32:07 nvmf_identify_passthru -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:33:24.963 Found net devices under 0000:86:00.1: cvl_0_1 00:33:24.963 08:32:07 nvmf_identify_passthru -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:24.963 08:32:07 nvmf_identify_passthru -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:33:24.963 08:32:07 nvmf_identify_passthru -- nvmf/common.sh@442 -- # is_hw=yes 00:33:24.963 08:32:07 nvmf_identify_passthru -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:33:24.963 08:32:07 nvmf_identify_passthru -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:33:24.963 08:32:07 nvmf_identify_passthru -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:33:24.963 08:32:07 nvmf_identify_passthru -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:24.963 08:32:07 nvmf_identify_passthru -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:24.963 08:32:07 nvmf_identify_passthru -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:24.963 08:32:07 nvmf_identify_passthru -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:24.963 08:32:07 nvmf_identify_passthru -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:24.963 08:32:07 nvmf_identify_passthru -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:24.963 08:32:07 nvmf_identify_passthru -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:24.963 08:32:07 nvmf_identify_passthru -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:24.963 
08:32:07 nvmf_identify_passthru -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:24.963 08:32:07 nvmf_identify_passthru -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:24.963 08:32:07 nvmf_identify_passthru -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:24.963 08:32:07 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:24.963 08:32:07 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:24.963 08:32:07 nvmf_identify_passthru -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:24.963 08:32:07 nvmf_identify_passthru -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:24.963 08:32:07 nvmf_identify_passthru -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:24.963 08:32:07 nvmf_identify_passthru -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:24.963 08:32:07 nvmf_identify_passthru -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:24.963 08:32:07 nvmf_identify_passthru -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:25.222 08:32:07 nvmf_identify_passthru -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:25.222 08:32:07 nvmf_identify_passthru -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:25.223 08:32:07 nvmf_identify_passthru -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:25.223 08:32:07 nvmf_identify_passthru -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:25.223 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:33:25.223 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.421 ms 00:33:25.223 00:33:25.223 --- 10.0.0.2 ping statistics --- 00:33:25.223 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:25.223 rtt min/avg/max/mdev = 0.421/0.421/0.421/0.000 ms 00:33:25.223 08:32:07 nvmf_identify_passthru -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:25.223 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:33:25.223 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.211 ms 00:33:25.223 00:33:25.223 --- 10.0.0.1 ping statistics --- 00:33:25.223 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:25.223 rtt min/avg/max/mdev = 0.211/0.211/0.211/0.000 ms 00:33:25.223 08:32:07 nvmf_identify_passthru -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:25.223 08:32:07 nvmf_identify_passthru -- nvmf/common.sh@450 -- # return 0 00:33:25.223 08:32:07 nvmf_identify_passthru -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:33:25.223 08:32:07 nvmf_identify_passthru -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:25.223 08:32:07 nvmf_identify_passthru -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:33:25.223 08:32:07 nvmf_identify_passthru -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:33:25.223 08:32:07 nvmf_identify_passthru -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:25.223 08:32:07 nvmf_identify_passthru -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:33:25.223 08:32:07 nvmf_identify_passthru -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:33:25.223 08:32:07 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:33:25.223 08:32:07 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:25.223 08:32:07 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:25.223 08:32:07 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:33:25.223 
08:32:07 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # bdfs=() 00:33:25.223 08:32:07 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # local bdfs 00:33:25.223 08:32:07 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:33:25.223 08:32:07 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:33:25.223 08:32:07 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # bdfs=() 00:33:25.223 08:32:07 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # local bdfs 00:33:25.223 08:32:07 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:33:25.223 08:32:07 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:33:25.223 08:32:07 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:33:25.223 08:32:07 nvmf_identify_passthru -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:33:25.223 08:32:07 nvmf_identify_passthru -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:5e:00.0 00:33:25.223 08:32:07 nvmf_identify_passthru -- common/autotest_common.sh@1512 -- # echo 0000:5e:00.0 00:33:25.223 08:32:07 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:5e:00.0 00:33:25.223 08:32:07 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:5e:00.0 ']' 00:33:25.223 08:32:07 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:5e:00.0' -i 0 00:33:25.223 08:32:07 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:33:25.223 08:32:07 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:33:29.415 08:32:11 nvmf_identify_passthru -- 
target/identify_passthru.sh@23 -- # nvme_serial_number=BTLJ72430F0E1P0FGN 00:33:29.415 08:32:11 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:5e:00.0' -i 0 00:33:29.415 08:32:11 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:33:29.415 08:32:11 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:33:33.609 08:32:15 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=INTEL 00:33:33.609 08:32:15 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:33:33.609 08:32:15 nvmf_identify_passthru -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:33.609 08:32:15 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:33.609 08:32:15 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:33:33.609 08:32:15 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:33.609 08:32:15 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:33.609 08:32:15 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:33:33.609 08:32:15 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=1607622 00:33:33.609 08:32:15 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:33:33.609 08:32:15 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 1607622 00:33:33.609 08:32:15 nvmf_identify_passthru -- common/autotest_common.sh@835 -- # '[' -z 1607622 ']' 00:33:33.609 08:32:15 nvmf_identify_passthru -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:33:33.609 08:32:15 nvmf_identify_passthru -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:33.609 08:32:15 nvmf_identify_passthru -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:33.609 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:33.609 08:32:15 nvmf_identify_passthru -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:33.609 08:32:15 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:33.609 [2024-11-28 08:32:15.733778] Starting SPDK v25.01-pre git sha1 27aaaa748 / DPDK 24.03.0 initialization... 00:33:33.609 [2024-11-28 08:32:15.733824] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:33.610 [2024-11-28 08:32:15.798893] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:33:33.610 [2024-11-28 08:32:15.844428] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:33.610 [2024-11-28 08:32:15.844466] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:33.610 [2024-11-28 08:32:15.844473] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:33.610 [2024-11-28 08:32:15.844480] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:33.610 [2024-11-28 08:32:15.844484] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:33:33.610 [2024-11-28 08:32:15.846021] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:33.610 [2024-11-28 08:32:15.846123] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:33:33.610 [2024-11-28 08:32:15.846210] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:33:33.610 [2024-11-28 08:32:15.846212] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:33.869 08:32:15 nvmf_identify_passthru -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:33.869 08:32:15 nvmf_identify_passthru -- common/autotest_common.sh@868 -- # return 0 00:33:33.869 08:32:15 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:33:33.869 08:32:15 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:33.869 08:32:15 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:33.869 INFO: Log level set to 20 00:33:33.869 INFO: Requests: 00:33:33.869 { 00:33:33.869 "jsonrpc": "2.0", 00:33:33.869 "method": "nvmf_set_config", 00:33:33.869 "id": 1, 00:33:33.869 "params": { 00:33:33.869 "admin_cmd_passthru": { 00:33:33.869 "identify_ctrlr": true 00:33:33.869 } 00:33:33.869 } 00:33:33.869 } 00:33:33.869 00:33:33.869 INFO: response: 00:33:33.869 { 00:33:33.869 "jsonrpc": "2.0", 00:33:33.869 "id": 1, 00:33:33.869 "result": true 00:33:33.869 } 00:33:33.869 00:33:33.869 08:32:15 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:33.869 08:32:15 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:33:33.869 08:32:15 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:33.869 08:32:15 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:33.869 INFO: Setting log level to 20 00:33:33.869 INFO: Setting log level to 20 00:33:33.869 INFO: Log level set to 20 00:33:33.869 INFO: Log level set to 20 00:33:33.869 
INFO: Requests: 00:33:33.869 { 00:33:33.869 "jsonrpc": "2.0", 00:33:33.869 "method": "framework_start_init", 00:33:33.869 "id": 1 00:33:33.869 } 00:33:33.869 00:33:33.869 INFO: Requests: 00:33:33.869 { 00:33:33.869 "jsonrpc": "2.0", 00:33:33.869 "method": "framework_start_init", 00:33:33.869 "id": 1 00:33:33.869 } 00:33:33.869 00:33:33.869 [2024-11-28 08:32:15.983068] nvmf_tgt.c: 462:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:33:33.869 INFO: response: 00:33:33.869 { 00:33:33.869 "jsonrpc": "2.0", 00:33:33.869 "id": 1, 00:33:33.869 "result": true 00:33:33.869 } 00:33:33.869 00:33:33.869 INFO: response: 00:33:33.869 { 00:33:33.869 "jsonrpc": "2.0", 00:33:33.869 "id": 1, 00:33:33.869 "result": true 00:33:33.869 } 00:33:33.869 00:33:33.869 08:32:15 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:33.869 08:32:15 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:33:33.869 08:32:15 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:33.869 08:32:15 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:33.869 INFO: Setting log level to 40 00:33:33.869 INFO: Setting log level to 40 00:33:33.869 INFO: Setting log level to 40 00:33:33.869 [2024-11-28 08:32:15.996425] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:33.869 08:32:16 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:33.869 08:32:16 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:33:33.869 08:32:16 nvmf_identify_passthru -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:33.869 08:32:16 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:33.869 08:32:16 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:5e:00.0 00:33:33.869 08:32:16 
nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:33.869 08:32:16 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:37.159 Nvme0n1 00:33:37.159 08:32:18 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:37.159 08:32:18 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:33:37.159 08:32:18 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:37.159 08:32:18 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:37.159 08:32:18 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:37.159 08:32:18 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:33:37.159 08:32:18 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:37.159 08:32:18 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:37.159 08:32:18 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:37.159 08:32:18 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:37.159 08:32:18 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:37.159 08:32:18 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:37.159 [2024-11-28 08:32:18.906881] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:37.159 08:32:18 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:37.159 08:32:18 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:33:37.159 08:32:18 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:37.159 08:32:18 
nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:37.159 [ 00:33:37.159 { 00:33:37.159 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:33:37.159 "subtype": "Discovery", 00:33:37.159 "listen_addresses": [], 00:33:37.159 "allow_any_host": true, 00:33:37.159 "hosts": [] 00:33:37.159 }, 00:33:37.159 { 00:33:37.159 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:33:37.159 "subtype": "NVMe", 00:33:37.159 "listen_addresses": [ 00:33:37.159 { 00:33:37.159 "trtype": "TCP", 00:33:37.159 "adrfam": "IPv4", 00:33:37.159 "traddr": "10.0.0.2", 00:33:37.159 "trsvcid": "4420" 00:33:37.159 } 00:33:37.159 ], 00:33:37.159 "allow_any_host": true, 00:33:37.159 "hosts": [], 00:33:37.159 "serial_number": "SPDK00000000000001", 00:33:37.159 "model_number": "SPDK bdev Controller", 00:33:37.159 "max_namespaces": 1, 00:33:37.159 "min_cntlid": 1, 00:33:37.159 "max_cntlid": 65519, 00:33:37.159 "namespaces": [ 00:33:37.159 { 00:33:37.159 "nsid": 1, 00:33:37.159 "bdev_name": "Nvme0n1", 00:33:37.159 "name": "Nvme0n1", 00:33:37.159 "nguid": "E4EFBE749D344A5B85865310B9B2D870", 00:33:37.159 "uuid": "e4efbe74-9d34-4a5b-8586-5310b9b2d870" 00:33:37.159 } 00:33:37.159 ] 00:33:37.159 } 00:33:37.159 ] 00:33:37.159 08:32:18 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:37.159 08:32:18 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:33:37.159 08:32:18 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:33:37.159 08:32:18 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:33:37.159 08:32:19 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=BTLJ72430F0E1P0FGN 00:33:37.159 08:32:19 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:33:37.159 08:32:19 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:33:37.159 08:32:19 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:33:37.159 08:32:19 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=INTEL 00:33:37.159 08:32:19 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' BTLJ72430F0E1P0FGN '!=' BTLJ72430F0E1P0FGN ']' 00:33:37.159 08:32:19 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' INTEL '!=' INTEL ']' 00:33:37.159 08:32:19 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:33:37.159 08:32:19 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:37.159 08:32:19 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:37.159 08:32:19 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:37.159 08:32:19 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:33:37.159 08:32:19 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:33:37.159 08:32:19 nvmf_identify_passthru -- nvmf/common.sh@516 -- # nvmfcleanup 00:33:37.159 08:32:19 nvmf_identify_passthru -- nvmf/common.sh@121 -- # sync 00:33:37.159 08:32:19 nvmf_identify_passthru -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:37.159 08:32:19 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set +e 00:33:37.159 08:32:19 nvmf_identify_passthru -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:37.159 08:32:19 nvmf_identify_passthru -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:37.159 rmmod nvme_tcp 00:33:37.159 rmmod nvme_fabrics 00:33:37.159 rmmod nvme_keyring 00:33:37.159 08:32:19 
nvmf_identify_passthru -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:37.159 08:32:19 nvmf_identify_passthru -- nvmf/common.sh@128 -- # set -e 00:33:37.159 08:32:19 nvmf_identify_passthru -- nvmf/common.sh@129 -- # return 0 00:33:37.159 08:32:19 nvmf_identify_passthru -- nvmf/common.sh@517 -- # '[' -n 1607622 ']' 00:33:37.159 08:32:19 nvmf_identify_passthru -- nvmf/common.sh@518 -- # killprocess 1607622 00:33:37.159 08:32:19 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # '[' -z 1607622 ']' 00:33:37.159 08:32:19 nvmf_identify_passthru -- common/autotest_common.sh@958 -- # kill -0 1607622 00:33:37.159 08:32:19 nvmf_identify_passthru -- common/autotest_common.sh@959 -- # uname 00:33:37.159 08:32:19 nvmf_identify_passthru -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:37.159 08:32:19 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1607622 00:33:37.418 08:32:19 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:33:37.418 08:32:19 nvmf_identify_passthru -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:33:37.418 08:32:19 nvmf_identify_passthru -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1607622' 00:33:37.418 killing process with pid 1607622 00:33:37.418 08:32:19 nvmf_identify_passthru -- common/autotest_common.sh@973 -- # kill 1607622 00:33:37.418 08:32:19 nvmf_identify_passthru -- common/autotest_common.sh@978 -- # wait 1607622 00:33:38.793 08:32:20 nvmf_identify_passthru -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:33:38.793 08:32:20 nvmf_identify_passthru -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:33:38.793 08:32:20 nvmf_identify_passthru -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:33:38.793 08:32:20 nvmf_identify_passthru -- nvmf/common.sh@297 -- # iptr 00:33:38.793 08:32:20 nvmf_identify_passthru -- nvmf/common.sh@791 -- # iptables-save 00:33:38.793 08:32:20 nvmf_identify_passthru -- 
nvmf/common.sh@791 -- # iptables-restore 00:33:38.793 08:32:20 nvmf_identify_passthru -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:33:38.793 08:32:20 nvmf_identify_passthru -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:38.793 08:32:20 nvmf_identify_passthru -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:38.793 08:32:20 nvmf_identify_passthru -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:38.793 08:32:20 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:33:38.793 08:32:20 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:40.700 08:32:22 nvmf_identify_passthru -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:40.700 00:33:40.700 real 0m21.224s 00:33:40.700 user 0m26.584s 00:33:40.700 sys 0m5.795s 00:33:40.700 08:32:22 nvmf_identify_passthru -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:40.700 08:32:22 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:40.700 ************************************ 00:33:40.700 END TEST nvmf_identify_passthru 00:33:40.700 ************************************ 00:33:40.958 08:32:22 -- spdk/autotest.sh@289 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:33:40.959 08:32:22 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:33:40.959 08:32:22 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:40.959 08:32:22 -- common/autotest_common.sh@10 -- # set +x 00:33:40.959 ************************************ 00:33:40.959 START TEST nvmf_dif 00:33:40.959 ************************************ 00:33:40.959 08:32:23 nvmf_dif -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:33:40.959 * Looking for test storage... 
00:33:40.959 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:33:40.959 08:32:23 nvmf_dif -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:33:40.959 08:32:23 nvmf_dif -- common/autotest_common.sh@1693 -- # lcov --version 00:33:40.959 08:32:23 nvmf_dif -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:33:40.959 08:32:23 nvmf_dif -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:33:40.959 08:32:23 nvmf_dif -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:40.959 08:32:23 nvmf_dif -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:40.959 08:32:23 nvmf_dif -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:40.959 08:32:23 nvmf_dif -- scripts/common.sh@336 -- # IFS=.-: 00:33:40.959 08:32:23 nvmf_dif -- scripts/common.sh@336 -- # read -ra ver1 00:33:40.959 08:32:23 nvmf_dif -- scripts/common.sh@337 -- # IFS=.-: 00:33:40.959 08:32:23 nvmf_dif -- scripts/common.sh@337 -- # read -ra ver2 00:33:40.959 08:32:23 nvmf_dif -- scripts/common.sh@338 -- # local 'op=<' 00:33:40.959 08:32:23 nvmf_dif -- scripts/common.sh@340 -- # ver1_l=2 00:33:40.959 08:32:23 nvmf_dif -- scripts/common.sh@341 -- # ver2_l=1 00:33:40.959 08:32:23 nvmf_dif -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:40.959 08:32:23 nvmf_dif -- scripts/common.sh@344 -- # case "$op" in 00:33:40.959 08:32:23 nvmf_dif -- scripts/common.sh@345 -- # : 1 00:33:40.959 08:32:23 nvmf_dif -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:40.959 08:32:23 nvmf_dif -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:33:40.959 08:32:23 nvmf_dif -- scripts/common.sh@365 -- # decimal 1 00:33:40.959 08:32:23 nvmf_dif -- scripts/common.sh@353 -- # local d=1 00:33:40.959 08:32:23 nvmf_dif -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:40.959 08:32:23 nvmf_dif -- scripts/common.sh@355 -- # echo 1 00:33:40.959 08:32:23 nvmf_dif -- scripts/common.sh@365 -- # ver1[v]=1 00:33:40.959 08:32:23 nvmf_dif -- scripts/common.sh@366 -- # decimal 2 00:33:40.959 08:32:23 nvmf_dif -- scripts/common.sh@353 -- # local d=2 00:33:40.959 08:32:23 nvmf_dif -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:40.959 08:32:23 nvmf_dif -- scripts/common.sh@355 -- # echo 2 00:33:40.959 08:32:23 nvmf_dif -- scripts/common.sh@366 -- # ver2[v]=2 00:33:40.959 08:32:23 nvmf_dif -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:40.959 08:32:23 nvmf_dif -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:40.959 08:32:23 nvmf_dif -- scripts/common.sh@368 -- # return 0 00:33:40.959 08:32:23 nvmf_dif -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:40.959 08:32:23 nvmf_dif -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:33:40.959 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:40.959 --rc genhtml_branch_coverage=1 00:33:40.959 --rc genhtml_function_coverage=1 00:33:40.959 --rc genhtml_legend=1 00:33:40.959 --rc geninfo_all_blocks=1 00:33:40.959 --rc geninfo_unexecuted_blocks=1 00:33:40.959 00:33:40.959 ' 00:33:40.959 08:32:23 nvmf_dif -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:33:40.959 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:40.959 --rc genhtml_branch_coverage=1 00:33:40.959 --rc genhtml_function_coverage=1 00:33:40.959 --rc genhtml_legend=1 00:33:40.959 --rc geninfo_all_blocks=1 00:33:40.959 --rc geninfo_unexecuted_blocks=1 00:33:40.959 00:33:40.959 ' 00:33:40.959 08:32:23 nvmf_dif -- common/autotest_common.sh@1707 -- # export 
'LCOV=lcov 00:33:40.959 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:40.959 --rc genhtml_branch_coverage=1 00:33:40.959 --rc genhtml_function_coverage=1 00:33:40.959 --rc genhtml_legend=1 00:33:40.959 --rc geninfo_all_blocks=1 00:33:40.959 --rc geninfo_unexecuted_blocks=1 00:33:40.959 00:33:40.959 ' 00:33:40.959 08:32:23 nvmf_dif -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:33:40.959 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:40.959 --rc genhtml_branch_coverage=1 00:33:40.959 --rc genhtml_function_coverage=1 00:33:40.959 --rc genhtml_legend=1 00:33:40.959 --rc geninfo_all_blocks=1 00:33:40.959 --rc geninfo_unexecuted_blocks=1 00:33:40.959 00:33:40.959 ' 00:33:40.959 08:32:23 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:40.959 08:32:23 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:33:40.959 08:32:23 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:40.959 08:32:23 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:40.959 08:32:23 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:40.959 08:32:23 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:40.959 08:32:23 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:40.959 08:32:23 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:40.959 08:32:23 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:40.959 08:32:23 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:40.959 08:32:23 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:40.959 08:32:23 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:40.959 08:32:23 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:33:40.959 08:32:23 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:33:41.218 08:32:23 nvmf_dif -- 
nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:41.218 08:32:23 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:41.218 08:32:23 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:41.218 08:32:23 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:41.218 08:32:23 nvmf_dif -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:41.218 08:32:23 nvmf_dif -- scripts/common.sh@15 -- # shopt -s extglob 00:33:41.218 08:32:23 nvmf_dif -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:41.218 08:32:23 nvmf_dif -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:41.218 08:32:23 nvmf_dif -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:41.218 08:32:23 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:41.218 08:32:23 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:41.218 08:32:23 nvmf_dif -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:41.218 08:32:23 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:33:41.218 08:32:23 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:41.218 08:32:23 nvmf_dif -- nvmf/common.sh@51 -- # : 0 00:33:41.218 08:32:23 nvmf_dif -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:41.219 08:32:23 nvmf_dif -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:41.219 08:32:23 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:41.219 08:32:23 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:41.219 08:32:23 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:41.219 08:32:23 nvmf_dif -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:33:41.219 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:33:41.219 08:32:23 nvmf_dif -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:41.219 08:32:23 nvmf_dif -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:41.219 08:32:23 nvmf_dif -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:41.219 08:32:23 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:33:41.219 08:32:23 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 
00:33:41.219 08:32:23 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:33:41.219 08:32:23 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:33:41.219 08:32:23 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:33:41.219 08:32:23 nvmf_dif -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:33:41.219 08:32:23 nvmf_dif -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:41.219 08:32:23 nvmf_dif -- nvmf/common.sh@476 -- # prepare_net_devs 00:33:41.219 08:32:23 nvmf_dif -- nvmf/common.sh@438 -- # local -g is_hw=no 00:33:41.219 08:32:23 nvmf_dif -- nvmf/common.sh@440 -- # remove_spdk_ns 00:33:41.219 08:32:23 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:41.219 08:32:23 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:33:41.219 08:32:23 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:41.219 08:32:23 nvmf_dif -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:33:41.219 08:32:23 nvmf_dif -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:33:41.219 08:32:23 nvmf_dif -- nvmf/common.sh@309 -- # xtrace_disable 00:33:41.219 08:32:23 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:33:46.488 08:32:27 nvmf_dif -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:46.488 08:32:27 nvmf_dif -- nvmf/common.sh@315 -- # pci_devs=() 00:33:46.488 08:32:27 nvmf_dif -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:46.488 08:32:27 nvmf_dif -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:46.488 08:32:27 nvmf_dif -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:46.488 08:32:27 nvmf_dif -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:46.488 08:32:27 nvmf_dif -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:46.488 08:32:27 nvmf_dif -- nvmf/common.sh@319 -- # net_devs=() 00:33:46.488 08:32:27 nvmf_dif -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:46.488 08:32:27 nvmf_dif -- nvmf/common.sh@320 -- # e810=() 00:33:46.488 08:32:27 nvmf_dif 
-- nvmf/common.sh@320 -- # local -ga e810 00:33:46.488 08:32:27 nvmf_dif -- nvmf/common.sh@321 -- # x722=() 00:33:46.488 08:32:27 nvmf_dif -- nvmf/common.sh@321 -- # local -ga x722 00:33:46.488 08:32:27 nvmf_dif -- nvmf/common.sh@322 -- # mlx=() 00:33:46.488 08:32:27 nvmf_dif -- nvmf/common.sh@322 -- # local -ga mlx 00:33:46.488 08:32:27 nvmf_dif -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:46.488 08:32:27 nvmf_dif -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:46.488 08:32:27 nvmf_dif -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:46.488 08:32:27 nvmf_dif -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:46.488 08:32:27 nvmf_dif -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:46.488 08:32:27 nvmf_dif -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:46.488 08:32:27 nvmf_dif -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:46.488 08:32:27 nvmf_dif -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:46.488 08:32:27 nvmf_dif -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:46.488 08:32:27 nvmf_dif -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:46.488 08:32:27 nvmf_dif -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:46.488 08:32:27 nvmf_dif -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:46.488 08:32:27 nvmf_dif -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:46.488 08:32:27 nvmf_dif -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:46.488 08:32:27 nvmf_dif -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:46.488 08:32:27 nvmf_dif -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:46.488 08:32:27 nvmf_dif -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:46.488 08:32:27 nvmf_dif -- nvmf/common.sh@361 -- # (( 2 == 0 
)) 00:33:46.488 08:32:27 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:46.488 08:32:27 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:33:46.488 Found 0000:86:00.0 (0x8086 - 0x159b) 00:33:46.488 08:32:27 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:46.488 08:32:27 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:46.488 08:32:27 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:46.488 08:32:27 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:46.488 08:32:27 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:46.488 08:32:27 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:46.488 08:32:27 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:33:46.488 Found 0000:86:00.1 (0x8086 - 0x159b) 00:33:46.488 08:32:27 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:46.488 08:32:27 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:46.488 08:32:27 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:46.488 08:32:27 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:46.488 08:32:27 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:46.488 08:32:27 nvmf_dif -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:46.488 08:32:27 nvmf_dif -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:33:46.488 08:32:27 nvmf_dif -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:46.488 08:32:27 nvmf_dif -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:46.488 08:32:27 nvmf_dif -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:46.488 08:32:27 nvmf_dif -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:46.488 08:32:27 nvmf_dif -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:46.488 08:32:27 nvmf_dif -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:46.488 08:32:27 nvmf_dif -- 
nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:46.488 08:32:27 nvmf_dif -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:46.488 08:32:27 nvmf_dif -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:33:46.488 Found net devices under 0000:86:00.0: cvl_0_0 00:33:46.488 08:32:27 nvmf_dif -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:46.488 08:32:27 nvmf_dif -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:46.488 08:32:27 nvmf_dif -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:46.488 08:32:27 nvmf_dif -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:46.488 08:32:27 nvmf_dif -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:46.488 08:32:27 nvmf_dif -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:46.488 08:32:27 nvmf_dif -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:46.488 08:32:27 nvmf_dif -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:46.488 08:32:27 nvmf_dif -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:33:46.488 Found net devices under 0000:86:00.1: cvl_0_1 00:33:46.488 08:32:27 nvmf_dif -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:46.488 08:32:27 nvmf_dif -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:33:46.488 08:32:27 nvmf_dif -- nvmf/common.sh@442 -- # is_hw=yes 00:33:46.488 08:32:27 nvmf_dif -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:33:46.488 08:32:27 nvmf_dif -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:33:46.488 08:32:27 nvmf_dif -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:33:46.488 08:32:27 nvmf_dif -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:46.488 08:32:27 nvmf_dif -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:46.488 08:32:27 nvmf_dif -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:46.488 08:32:27 nvmf_dif -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:46.488 
08:32:27 nvmf_dif -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:46.488 08:32:27 nvmf_dif -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:46.488 08:32:27 nvmf_dif -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:46.488 08:32:27 nvmf_dif -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:46.488 08:32:27 nvmf_dif -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:46.488 08:32:27 nvmf_dif -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:46.488 08:32:27 nvmf_dif -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:46.488 08:32:27 nvmf_dif -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:46.488 08:32:27 nvmf_dif -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:46.488 08:32:27 nvmf_dif -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:46.488 08:32:27 nvmf_dif -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:46.488 08:32:28 nvmf_dif -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:46.488 08:32:28 nvmf_dif -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:46.488 08:32:28 nvmf_dif -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:46.488 08:32:28 nvmf_dif -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:46.488 08:32:28 nvmf_dif -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:46.488 08:32:28 nvmf_dif -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:46.488 08:32:28 nvmf_dif -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:46.488 08:32:28 nvmf_dif -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:46.488 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:33:46.488 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.374 ms 00:33:46.488 00:33:46.488 --- 10.0.0.2 ping statistics --- 00:33:46.488 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:46.488 rtt min/avg/max/mdev = 0.374/0.374/0.374/0.000 ms 00:33:46.488 08:32:28 nvmf_dif -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:46.488 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:33:46.488 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.201 ms 00:33:46.488 00:33:46.488 --- 10.0.0.1 ping statistics --- 00:33:46.488 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:46.488 rtt min/avg/max/mdev = 0.201/0.201/0.201/0.000 ms 00:33:46.488 08:32:28 nvmf_dif -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:46.488 08:32:28 nvmf_dif -- nvmf/common.sh@450 -- # return 0 00:33:46.488 08:32:28 nvmf_dif -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:33:46.488 08:32:28 nvmf_dif -- nvmf/common.sh@479 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:33:48.483 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:33:48.483 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:33:48.483 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:33:48.483 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:33:48.483 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:33:48.483 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:33:48.483 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:33:48.483 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:33:48.483 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:33:48.483 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:33:48.483 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:33:48.483 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:33:48.483 0000:80:04.4 (8086 2021): Already 
using the vfio-pci driver 00:33:48.483 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:33:48.483 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:33:48.483 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:33:48.483 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:33:48.483 08:32:30 nvmf_dif -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:48.483 08:32:30 nvmf_dif -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:33:48.483 08:32:30 nvmf_dif -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:33:48.483 08:32:30 nvmf_dif -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:48.483 08:32:30 nvmf_dif -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:33:48.483 08:32:30 nvmf_dif -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:33:48.483 08:32:30 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:33:48.483 08:32:30 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:33:48.483 08:32:30 nvmf_dif -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:33:48.483 08:32:30 nvmf_dif -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:48.483 08:32:30 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:33:48.483 08:32:30 nvmf_dif -- nvmf/common.sh@509 -- # nvmfpid=1612995 00:33:48.483 08:32:30 nvmf_dif -- nvmf/common.sh@510 -- # waitforlisten 1612995 00:33:48.483 08:32:30 nvmf_dif -- common/autotest_common.sh@835 -- # '[' -z 1612995 ']' 00:33:48.483 08:32:30 nvmf_dif -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:48.483 08:32:30 nvmf_dif -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:48.483 08:32:30 nvmf_dif -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:48.483 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:33:48.483 08:32:30 nvmf_dif -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:48.483 08:32:30 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:33:48.483 08:32:30 nvmf_dif -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:33:48.483 [2024-11-28 08:32:30.546640] Starting SPDK v25.01-pre git sha1 27aaaa748 / DPDK 24.03.0 initialization... 00:33:48.483 [2024-11-28 08:32:30.546687] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:48.483 [2024-11-28 08:32:30.613004] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:48.483 [2024-11-28 08:32:30.656760] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:48.483 [2024-11-28 08:32:30.656793] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:48.483 [2024-11-28 08:32:30.656801] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:48.483 [2024-11-28 08:32:30.656808] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:48.483 [2024-11-28 08:32:30.656814] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:33:48.483 [2024-11-28 08:32:30.657353] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:48.761 08:32:30 nvmf_dif -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:48.761 08:32:30 nvmf_dif -- common/autotest_common.sh@868 -- # return 0 00:33:48.761 08:32:30 nvmf_dif -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:33:48.761 08:32:30 nvmf_dif -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:48.761 08:32:30 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:33:48.761 08:32:30 nvmf_dif -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:48.761 08:32:30 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:33:48.761 08:32:30 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:33:48.761 08:32:30 nvmf_dif -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:48.761 08:32:30 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:33:48.761 [2024-11-28 08:32:30.793662] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:48.761 08:32:30 nvmf_dif -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:48.761 08:32:30 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:33:48.761 08:32:30 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:33:48.761 08:32:30 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:48.761 08:32:30 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:33:48.761 ************************************ 00:33:48.761 START TEST fio_dif_1_default 00:33:48.761 ************************************ 00:33:48.761 08:32:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1129 -- # fio_dif_1 00:33:48.761 08:32:30 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:33:48.761 08:32:30 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:33:48.761 08:32:30 nvmf_dif.fio_dif_1_default -- 
target/dif.sh@30 -- # for sub in "$@" 00:33:48.761 08:32:30 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:33:48.761 08:32:30 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:33:48.761 08:32:30 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:33:48.761 08:32:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:48.761 08:32:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:33:48.761 bdev_null0 00:33:48.761 08:32:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:48.762 08:32:30 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:33:48.762 08:32:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:48.762 08:32:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:33:48.762 08:32:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:48.762 08:32:30 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:33:48.762 08:32:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:48.762 08:32:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:33:48.762 08:32:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:48.762 08:32:30 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:33:48.762 08:32:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:48.762 08:32:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:33:48.762 [2024-11-28 08:32:30.861971] 
tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:48.762 08:32:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:48.762 08:32:30 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:33:48.762 08:32:30 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:48.762 08:32:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:48.762 08:32:30 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:33:48.762 08:32:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:33:48.762 08:32:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:33:48.762 08:32:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local sanitizers 00:33:48.762 08:32:30 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:33:48.762 08:32:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:48.762 08:32:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # shift 00:33:48.762 08:32:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # local asan_lib= 00:33:48.762 08:32:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:33:48.762 08:32:30 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # config=() 00:33:48.762 08:32:30 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:33:48.762 08:32:30 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # local subsystem config 00:33:48.762 08:32:30 
nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:33:48.762 08:32:30 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:33:48.762 08:32:30 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:33:48.762 08:32:30 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:48.762 { 00:33:48.762 "params": { 00:33:48.762 "name": "Nvme$subsystem", 00:33:48.762 "trtype": "$TEST_TRANSPORT", 00:33:48.762 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:48.762 "adrfam": "ipv4", 00:33:48.762 "trsvcid": "$NVMF_PORT", 00:33:48.762 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:48.762 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:48.762 "hdgst": ${hdgst:-false}, 00:33:48.762 "ddgst": ${ddgst:-false} 00:33:48.762 }, 00:33:48.762 "method": "bdev_nvme_attach_controller" 00:33:48.762 } 00:33:48.762 EOF 00:33:48.762 )") 00:33:48.762 08:32:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:48.762 08:32:30 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # cat 00:33:48.762 08:32:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libasan 00:33:48.762 08:32:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:33:48.762 08:32:30 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:33:48.762 08:32:30 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:33:48.762 08:32:30 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@584 -- # jq . 
00:33:48.762 08:32:30 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@585 -- # IFS=, 00:33:48.762 08:32:30 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:33:48.762 "params": { 00:33:48.762 "name": "Nvme0", 00:33:48.762 "trtype": "tcp", 00:33:48.762 "traddr": "10.0.0.2", 00:33:48.762 "adrfam": "ipv4", 00:33:48.762 "trsvcid": "4420", 00:33:48.762 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:48.762 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:33:48.762 "hdgst": false, 00:33:48.762 "ddgst": false 00:33:48.762 }, 00:33:48.762 "method": "bdev_nvme_attach_controller" 00:33:48.762 }' 00:33:48.762 08:32:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib= 00:33:48.762 08:32:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:33:48.762 08:32:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:33:48.762 08:32:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:48.762 08:32:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:33:48.762 08:32:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:33:48.762 08:32:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib= 00:33:48.762 08:32:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:33:48.762 08:32:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:33:48.762 08:32:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:49.061 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:33:49.061 fio-3.35 
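The trace above shows `gen_nvmf_target_json` expanding one heredoc template per subsystem, joining the results with `IFS=,` and normalizing through `jq`, before handing the JSON to fio's `spdk_bdev` plugin on `/dev/fd/62`. A minimal Python sketch of that expansion (structure and values read off the printed output above; the function name and defaults are ours, not SPDK's):

```python
# Sketch of the per-subsystem connection JSON that the trace's
# gen_nvmf_target_json emits for the fio spdk_bdev plugin.
# Field names and values are copied from the printed config above;
# this helper is illustrative, not SPDK's implementation.
def gen_target_json(sub_ids, traddr="10.0.0.2", trsvcid="4420", trtype="tcp"):
    cfg = []
    for sub in sub_ids:
        cfg.append({
            "params": {
                "name": f"Nvme{sub}",
                "trtype": trtype,
                "traddr": traddr,
                "adrfam": "ipv4",
                "trsvcid": trsvcid,
                "subnqn": f"nqn.2016-06.io.spdk:cnode{sub}",
                "hostnqn": f"nqn.2016-06.io.spdk:host{sub}",
                "hdgst": False,
                "ddgst": False,
            },
            # Each entry becomes one bdev_nvme_attach_controller call.
            "method": "bdev_nvme_attach_controller",
        })
    return cfg
```

With `[0]` this reproduces the single `Nvme0` object printed above; the multi-subsystem test later passes two IDs and gets a comma-joined pair.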
00:33:49.061 Starting 1 thread 00:34:01.274 00:34:01.274 filename0: (groupid=0, jobs=1): err= 0: pid=1613271: Thu Nov 28 08:32:41 2024 00:34:01.274 read: IOPS=97, BW=390KiB/s (399kB/s)(3904KiB/10014msec) 00:34:01.274 slat (nsec): min=5747, max=25927, avg=6563.60, stdev=1646.24 00:34:01.274 clat (usec): min=40779, max=46847, avg=41018.88, stdev=392.92 00:34:01.274 lat (usec): min=40786, max=46873, avg=41025.44, stdev=393.39 00:34:01.274 clat percentiles (usec): 00:34:01.274 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:34:01.274 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:34:01.274 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:34:01.274 | 99.00th=[41681], 99.50th=[42206], 99.90th=[46924], 99.95th=[46924], 00:34:01.274 | 99.99th=[46924] 00:34:01.274 bw ( KiB/s): min= 383, max= 416, per=99.52%, avg=388.75, stdev=11.75, samples=20 00:34:01.274 iops : min= 95, max= 104, avg=97.15, stdev= 2.96, samples=20 00:34:01.274 lat (msec) : 50=100.00% 00:34:01.274 cpu : usr=92.72%, sys=7.03%, ctx=7, majf=0, minf=0 00:34:01.274 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:01.274 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:01.274 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:01.274 issued rwts: total=976,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:01.274 latency : target=0, window=0, percentile=100.00%, depth=4 00:34:01.274 00:34:01.274 Run status group 0 (all jobs): 00:34:01.274 READ: bw=390KiB/s (399kB/s), 390KiB/s-390KiB/s (399kB/s-399kB/s), io=3904KiB (3998kB), run=10014-10014msec 00:34:01.274 08:32:42 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:34:01.274 08:32:42 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:34:01.274 08:32:42 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:34:01.274 08:32:42 nvmf_dif.fio_dif_1_default -- 
target/dif.sh@46 -- # destroy_subsystem 0 00:34:01.274 08:32:42 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:34:01.274 08:32:42 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:34:01.274 08:32:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:01.274 08:32:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:34:01.274 08:32:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:01.274 08:32:42 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:34:01.274 08:32:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:01.274 08:32:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:34:01.274 08:32:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:01.274 00:34:01.274 real 0m11.209s 00:34:01.274 user 0m15.747s 00:34:01.274 sys 0m0.976s 00:34:01.274 08:32:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:01.274 08:32:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:34:01.274 ************************************ 00:34:01.274 END TEST fio_dif_1_default 00:34:01.274 ************************************ 00:34:01.274 08:32:42 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:34:01.274 08:32:42 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:34:01.274 08:32:42 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:01.274 08:32:42 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:34:01.274 ************************************ 00:34:01.274 START TEST fio_dif_1_multi_subsystems 00:34:01.274 ************************************ 00:34:01.274 08:32:42 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@1129 -- # fio_dif_1_multi_subsystems 00:34:01.274 08:32:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:34:01.274 08:32:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:34:01.274 08:32:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:34:01.274 08:32:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:34:01.274 08:32:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:34:01.274 08:32:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:34:01.274 08:32:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:34:01.274 08:32:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:01.274 08:32:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:01.274 bdev_null0 00:34:01.274 08:32:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:01.274 08:32:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:34:01.274 08:32:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:01.274 08:32:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:01.274 08:32:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:01.275 08:32:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:34:01.275 08:32:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:01.275 08:32:42 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@10 -- # set +x 00:34:01.275 08:32:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:01.275 08:32:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:34:01.275 08:32:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:01.275 08:32:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:01.275 [2024-11-28 08:32:42.137930] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:01.275 08:32:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:01.275 08:32:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:34:01.275 08:32:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:34:01.275 08:32:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:34:01.275 08:32:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:34:01.275 08:32:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:01.275 08:32:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:01.275 bdev_null1 00:34:01.275 08:32:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:01.275 08:32:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:34:01.275 08:32:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:01.275 08:32:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:01.275 08:32:42 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:01.275 08:32:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:34:01.275 08:32:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:01.275 08:32:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:01.275 08:32:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:01.275 08:32:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:01.275 08:32:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:01.275 08:32:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:01.275 08:32:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:01.275 08:32:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:34:01.275 08:32:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:01.275 08:32:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:01.275 08:32:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:34:01.275 08:32:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:34:01.275 08:32:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:34:01.275 08:32:42 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@1343 -- # local sanitizers 00:34:01.275 08:32:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:01.275 08:32:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # shift 00:34:01.275 08:32:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # local asan_lib= 00:34:01.275 08:32:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:34:01.275 08:32:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:34:01.275 08:32:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:34:01.275 08:32:42 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # config=() 00:34:01.275 08:32:42 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # local subsystem config 00:34:01.275 08:32:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:34:01.275 08:32:42 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:34:01.275 08:32:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:34:01.275 08:32:42 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:34:01.275 { 00:34:01.275 "params": { 00:34:01.275 "name": "Nvme$subsystem", 00:34:01.275 "trtype": "$TEST_TRANSPORT", 00:34:01.275 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:01.275 "adrfam": "ipv4", 00:34:01.275 "trsvcid": "$NVMF_PORT", 00:34:01.275 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:01.275 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:01.275 "hdgst": ${hdgst:-false}, 00:34:01.275 "ddgst": ${ddgst:-false} 00:34:01.275 }, 00:34:01.275 "method": "bdev_nvme_attach_controller" 00:34:01.275 } 00:34:01.275 EOF 00:34:01.275 )") 00:34:01.275 08:32:42 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:01.275 08:32:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libasan 00:34:01.275 08:32:42 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:34:01.275 08:32:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:34:01.275 08:32:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:34:01.275 08:32:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:34:01.275 08:32:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:34:01.275 08:32:42 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:34:01.275 08:32:42 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:34:01.275 { 00:34:01.275 "params": { 00:34:01.275 "name": "Nvme$subsystem", 00:34:01.275 "trtype": "$TEST_TRANSPORT", 00:34:01.275 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:01.275 "adrfam": "ipv4", 00:34:01.275 "trsvcid": "$NVMF_PORT", 00:34:01.275 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:01.275 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:01.275 "hdgst": ${hdgst:-false}, 00:34:01.275 "ddgst": ${ddgst:-false} 00:34:01.275 }, 00:34:01.275 "method": "bdev_nvme_attach_controller" 00:34:01.275 } 00:34:01.275 EOF 00:34:01.275 )") 00:34:01.275 08:32:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:34:01.275 08:32:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:34:01.275 08:32:42 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:34:01.275 08:32:42 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@584 -- # jq . 
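The repeated `ldd ... | grep libasan | awk '{print $3}'` lines in the trace are `fio_plugin`'s sanitizer probe: for each sanitizer runtime it scans the plugin's `ldd` output and, if a hit is found, prepends that library to `LD_PRELOAD` so ASan initializes before the fio plugin loads (here both probes come back empty, so `LD_PRELOAD` ends up as just the plugin path). A sketch of that probe under the same assumptions, with the helper name ours:

```python
# Reconstruction of the LD_PRELOAD probe seen in the xtrace: scan ldd
# output for each sanitizer runtime and take the resolved path (field 3,
# matching awk '{print $3}').  Helper name and signature are illustrative.
def build_ld_preload(ldd_output, plugin, sanitizers=("libasan", "libclang_rt.asan")):
    libs = []
    for name in sanitizers:
        for line in ldd_output.splitlines():
            if name in line:
                parts = line.split()
                if len(parts) >= 3:
                    libs.append(parts[2])  # "libfoo => /path/libfoo (addr)"
    # Sanitizer runtimes (if any) come first, then the plugin itself.
    return " ".join(libs + [plugin])
```

When no sanitizer library is linked in, the result is the bare plugin path, matching the `LD_PRELOAD=' .../spdk/build/fio/spdk_bdev'` assignment in the log.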
00:34:01.275 08:32:42 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@585 -- # IFS=, 00:34:01.275 08:32:42 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:34:01.275 "params": { 00:34:01.275 "name": "Nvme0", 00:34:01.275 "trtype": "tcp", 00:34:01.275 "traddr": "10.0.0.2", 00:34:01.275 "adrfam": "ipv4", 00:34:01.275 "trsvcid": "4420", 00:34:01.275 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:01.275 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:34:01.275 "hdgst": false, 00:34:01.275 "ddgst": false 00:34:01.275 }, 00:34:01.275 "method": "bdev_nvme_attach_controller" 00:34:01.275 },{ 00:34:01.275 "params": { 00:34:01.275 "name": "Nvme1", 00:34:01.275 "trtype": "tcp", 00:34:01.275 "traddr": "10.0.0.2", 00:34:01.275 "adrfam": "ipv4", 00:34:01.275 "trsvcid": "4420", 00:34:01.275 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:34:01.275 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:34:01.275 "hdgst": false, 00:34:01.275 "ddgst": false 00:34:01.275 }, 00:34:01.275 "method": "bdev_nvme_attach_controller" 00:34:01.275 }' 00:34:01.275 08:32:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # asan_lib= 00:34:01.275 08:32:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:34:01.275 08:32:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:34:01.275 08:32:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:01.275 08:32:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:34:01.275 08:32:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:34:01.275 08:32:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # asan_lib= 00:34:01.275 08:32:42 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:34:01.275 08:32:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:34:01.275 08:32:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:01.275 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:34:01.275 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:34:01.275 fio-3.35 00:34:01.275 Starting 2 threads 00:34:11.261 00:34:11.261 filename0: (groupid=0, jobs=1): err= 0: pid=1615204: Thu Nov 28 08:32:53 2024 00:34:11.261 read: IOPS=182, BW=728KiB/s (746kB/s)(7296KiB/10019msec) 00:34:11.261 slat (nsec): min=5914, max=53402, avg=9485.00, stdev=7200.42 00:34:11.261 clat (usec): min=398, max=42563, avg=21940.18, stdev=20544.98 00:34:11.261 lat (usec): min=404, max=42571, avg=21949.67, stdev=20542.60 00:34:11.261 clat percentiles (usec): 00:34:11.261 | 1.00th=[ 437], 5.00th=[ 453], 10.00th=[ 461], 20.00th=[ 474], 00:34:11.261 | 30.00th=[ 482], 40.00th=[ 494], 50.00th=[41157], 60.00th=[41681], 00:34:11.261 | 70.00th=[41681], 80.00th=[41681], 90.00th=[41681], 95.00th=[42206], 00:34:11.261 | 99.00th=[42730], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:34:11.261 | 99.99th=[42730] 00:34:11.261 bw ( KiB/s): min= 384, max= 800, per=49.02%, avg=728.00, stdev=109.32, samples=20 00:34:11.261 iops : min= 96, max= 200, avg=182.00, stdev=27.33, samples=20 00:34:11.261 lat (usec) : 500=42.00%, 750=5.81% 00:34:11.261 lat (msec) : 50=52.19% 00:34:11.261 cpu : usr=98.73%, sys=0.96%, ctx=36, majf=0, minf=157 00:34:11.261 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:11.261 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:34:11.261 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:11.261 issued rwts: total=1824,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:11.261 latency : target=0, window=0, percentile=100.00%, depth=4 00:34:11.261 filename1: (groupid=0, jobs=1): err= 0: pid=1615205: Thu Nov 28 08:32:53 2024 00:34:11.261 read: IOPS=189, BW=757KiB/s (775kB/s)(7584KiB/10016msec) 00:34:11.261 slat (nsec): min=6243, max=33909, avg=8972.62, stdev=5489.01 00:34:11.261 clat (usec): min=412, max=42558, avg=21101.78, stdev=20550.17 00:34:11.261 lat (usec): min=418, max=42565, avg=21110.75, stdev=20548.35 00:34:11.261 clat percentiles (usec): 00:34:11.261 | 1.00th=[ 433], 5.00th=[ 441], 10.00th=[ 445], 20.00th=[ 457], 00:34:11.261 | 30.00th=[ 465], 40.00th=[ 486], 50.00th=[40633], 60.00th=[41681], 00:34:11.261 | 70.00th=[41681], 80.00th=[41681], 90.00th=[41681], 95.00th=[42206], 00:34:11.261 | 99.00th=[42730], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:34:11.261 | 99.99th=[42730] 00:34:11.261 bw ( KiB/s): min= 672, max= 768, per=50.90%, avg=756.80, stdev=28.00, samples=20 00:34:11.261 iops : min= 168, max= 192, avg=189.20, stdev= 7.00, samples=20 00:34:11.261 lat (usec) : 500=44.78%, 750=5.01% 00:34:11.261 lat (msec) : 50=50.21% 00:34:11.261 cpu : usr=97.20%, sys=2.53%, ctx=9, majf=0, minf=43 00:34:11.261 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:11.261 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:11.261 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:11.261 issued rwts: total=1896,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:11.261 latency : target=0, window=0, percentile=100.00%, depth=4 00:34:11.261 00:34:11.261 Run status group 0 (all jobs): 00:34:11.261 READ: bw=1485KiB/s (1521kB/s), 728KiB/s-757KiB/s (746kB/s-775kB/s), io=14.5MiB (15.2MB), run=10016-10019msec 00:34:11.261 08:32:53 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # 
destroy_subsystems 0 1 00:34:11.261 08:32:53 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:34:11.261 08:32:53 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:34:11.261 08:32:53 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:34:11.261 08:32:53 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:34:11.261 08:32:53 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:34:11.261 08:32:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:11.261 08:32:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:11.261 08:32:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:11.261 08:32:53 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:34:11.261 08:32:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:11.261 08:32:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:11.261 08:32:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:11.261 08:32:53 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:34:11.261 08:32:53 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:34:11.261 08:32:53 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:34:11.261 08:32:53 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:34:11.261 08:32:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:11.261 08:32:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:11.261 08:32:53 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:11.261 08:32:53 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:34:11.261 08:32:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:11.261 08:32:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:11.261 08:32:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:11.261 00:34:11.261 real 0m11.299s 00:34:11.261 user 0m26.497s 00:34:11.261 sys 0m0.681s 00:34:11.261 08:32:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:11.261 08:32:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:11.261 ************************************ 00:34:11.261 END TEST fio_dif_1_multi_subsystems 00:34:11.261 ************************************ 00:34:11.261 08:32:53 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:34:11.261 08:32:53 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:34:11.262 08:32:53 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:11.262 08:32:53 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:34:11.262 ************************************ 00:34:11.262 START TEST fio_dif_rand_params 00:34:11.262 ************************************ 00:34:11.262 08:32:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1129 -- # fio_dif_rand_params 00:34:11.262 08:32:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:34:11.262 08:32:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:34:11.262 08:32:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:34:11.262 08:32:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:34:11.262 08:32:53 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:34:11.262 08:32:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:34:11.262 08:32:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:34:11.262 08:32:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:34:11.262 08:32:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:34:11.262 08:32:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:34:11.262 08:32:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:34:11.262 08:32:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:34:11.262 08:32:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:34:11.262 08:32:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:11.262 08:32:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:11.262 bdev_null0 00:34:11.262 08:32:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:11.262 08:32:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:34:11.262 08:32:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:11.262 08:32:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:11.262 08:32:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:11.262 08:32:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:34:11.262 08:32:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:11.262 08:32:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 
00:34:11.262 08:32:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:11.262 08:32:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:34:11.262 08:32:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:11.262 08:32:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:11.262 [2024-11-28 08:32:53.507401] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:11.262 08:32:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:11.262 08:32:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:34:11.262 08:32:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:11.262 08:32:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:11.262 08:32:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:34:11.262 08:32:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:34:11.262 08:32:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:34:11.262 08:32:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:34:11.262 08:32:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:34:11.262 08:32:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:11.262 08:32:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 
00:34:11.262 08:32:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:34:11.262 08:32:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:34:11.262 08:32:53 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:34:11.262 08:32:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:34:11.262 08:32:53 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:34:11.262 08:32:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:34:11.262 08:32:53 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:34:11.262 08:32:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:34:11.262 08:32:53 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:34:11.262 { 00:34:11.262 "params": { 00:34:11.262 "name": "Nvme$subsystem", 00:34:11.262 "trtype": "$TEST_TRANSPORT", 00:34:11.262 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:11.262 "adrfam": "ipv4", 00:34:11.262 "trsvcid": "$NVMF_PORT", 00:34:11.262 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:11.262 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:11.262 "hdgst": ${hdgst:-false}, 00:34:11.262 "ddgst": ${ddgst:-false} 00:34:11.262 }, 00:34:11.262 "method": "bdev_nvme_attach_controller" 00:34:11.262 } 00:34:11.262 EOF 00:34:11.262 )") 00:34:11.262 08:32:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:11.262 08:32:53 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:34:11.262 08:32:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:34:11.262 08:32:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:34:11.262 08:32:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:34:11.262 
08:32:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:34:11.262 08:32:53 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 00:34:11.262 08:32:53 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:34:11.262 08:32:53 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:34:11.262 "params": { 00:34:11.262 "name": "Nvme0", 00:34:11.262 "trtype": "tcp", 00:34:11.262 "traddr": "10.0.0.2", 00:34:11.262 "adrfam": "ipv4", 00:34:11.262 "trsvcid": "4420", 00:34:11.262 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:11.262 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:34:11.262 "hdgst": false, 00:34:11.262 "ddgst": false 00:34:11.262 }, 00:34:11.262 "method": "bdev_nvme_attach_controller" 00:34:11.262 }' 00:34:11.544 08:32:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:34:11.544 08:32:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:34:11.544 08:32:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:34:11.544 08:32:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:11.544 08:32:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:34:11.544 08:32:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:34:11.544 08:32:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:34:11.544 08:32:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:34:11.544 08:32:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:34:11.544 08:32:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev 
--spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:11.809 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:34:11.809 ... 00:34:11.809 fio-3.35 00:34:11.809 Starting 3 threads 00:34:18.375 00:34:18.375 filename0: (groupid=0, jobs=1): err= 0: pid=1617167: Thu Nov 28 08:32:59 2024 00:34:18.375 read: IOPS=311, BW=38.9MiB/s (40.8MB/s)(196MiB/5044msec) 00:34:18.375 slat (nsec): min=6329, max=38109, avg=11036.80, stdev=2391.89 00:34:18.375 clat (usec): min=3565, max=51605, avg=9575.71, stdev=6524.73 00:34:18.375 lat (usec): min=3573, max=51619, avg=9586.74, stdev=6524.71 00:34:18.375 clat percentiles (usec): 00:34:18.375 | 1.00th=[ 4015], 5.00th=[ 5866], 10.00th=[ 6390], 20.00th=[ 7046], 00:34:18.375 | 30.00th=[ 7832], 40.00th=[ 8455], 50.00th=[ 8848], 60.00th=[ 9241], 00:34:18.375 | 70.00th=[ 9634], 80.00th=[10028], 90.00th=[10683], 95.00th=[11469], 00:34:18.375 | 99.00th=[49021], 99.50th=[50070], 99.90th=[51119], 99.95th=[51643], 00:34:18.375 | 99.99th=[51643] 00:34:18.375 bw ( KiB/s): min=34560, max=49920, per=35.60%, avg=40140.80, stdev=4620.31, samples=10 00:34:18.375 iops : min= 270, max= 390, avg=313.60, stdev=36.10, samples=10 00:34:18.375 lat (msec) : 4=0.96%, 10=79.03%, 20=17.46%, 50=2.04%, 100=0.51% 00:34:18.375 cpu : usr=93.44%, sys=6.27%, ctx=14, majf=0, minf=9 00:34:18.375 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:18.375 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:18.375 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:18.375 issued rwts: total=1569,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:18.375 latency : target=0, window=0, percentile=100.00%, depth=3 00:34:18.375 filename0: (groupid=0, jobs=1): err= 0: pid=1617168: Thu Nov 28 08:32:59 2024 00:34:18.375 read: IOPS=296, BW=37.1MiB/s (38.9MB/s)(186MiB/5019msec) 00:34:18.375 slat (nsec): min=6383, max=26994, avg=10981.06, stdev=2312.14 
00:34:18.375 clat (usec): min=3514, max=52626, avg=10090.09, stdev=6372.98 00:34:18.375 lat (usec): min=3523, max=52638, avg=10101.07, stdev=6373.14 00:34:18.375 clat percentiles (usec): 00:34:18.375 | 1.00th=[ 4293], 5.00th=[ 5932], 10.00th=[ 6521], 20.00th=[ 7177], 00:34:18.375 | 30.00th=[ 8160], 40.00th=[ 8848], 50.00th=[ 9503], 60.00th=[ 9896], 00:34:18.375 | 70.00th=[10290], 80.00th=[10945], 90.00th=[11994], 95.00th=[12780], 00:34:18.375 | 99.00th=[49021], 99.50th=[50070], 99.90th=[51119], 99.95th=[52691], 00:34:18.375 | 99.99th=[52691] 00:34:18.375 bw ( KiB/s): min=32512, max=44544, per=33.76%, avg=38067.20, stdev=3670.42, samples=10 00:34:18.375 iops : min= 254, max= 348, avg=297.40, stdev=28.68, samples=10 00:34:18.375 lat (msec) : 4=0.74%, 10=62.42%, 20=34.43%, 50=1.81%, 100=0.60% 00:34:18.375 cpu : usr=93.70%, sys=6.00%, ctx=9, majf=0, minf=12 00:34:18.375 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:18.375 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:18.375 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:18.375 issued rwts: total=1490,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:18.375 latency : target=0, window=0, percentile=100.00%, depth=3 00:34:18.375 filename0: (groupid=0, jobs=1): err= 0: pid=1617169: Thu Nov 28 08:32:59 2024 00:34:18.375 read: IOPS=274, BW=34.3MiB/s (36.0MB/s)(173MiB/5039msec) 00:34:18.375 slat (nsec): min=6400, max=25141, avg=11057.70, stdev=2259.97 00:34:18.375 clat (usec): min=3937, max=52830, avg=10909.85, stdev=8326.11 00:34:18.375 lat (usec): min=3944, max=52844, avg=10920.91, stdev=8326.12 00:34:18.375 clat percentiles (usec): 00:34:18.375 | 1.00th=[ 4113], 5.00th=[ 5997], 10.00th=[ 6652], 20.00th=[ 7701], 00:34:18.375 | 30.00th=[ 8455], 40.00th=[ 9110], 50.00th=[ 9372], 60.00th=[ 9896], 00:34:18.375 | 70.00th=[10290], 80.00th=[10945], 90.00th=[11863], 95.00th=[13435], 00:34:18.375 | 99.00th=[50594], 99.50th=[51119], 
99.90th=[51643], 99.95th=[52691], 00:34:18.375 | 99.99th=[52691] 00:34:18.375 bw ( KiB/s): min=26880, max=41216, per=31.36%, avg=35353.60, stdev=4413.39, samples=10 00:34:18.375 iops : min= 210, max= 322, avg=276.20, stdev=34.48, samples=10 00:34:18.375 lat (msec) : 4=0.14%, 10=62.36%, 20=33.16%, 50=3.03%, 100=1.30% 00:34:18.375 cpu : usr=94.46%, sys=5.26%, ctx=11, majf=0, minf=9 00:34:18.375 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:18.375 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:18.375 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:18.375 issued rwts: total=1384,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:18.375 latency : target=0, window=0, percentile=100.00%, depth=3 00:34:18.375 00:34:18.375 Run status group 0 (all jobs): 00:34:18.375 READ: bw=110MiB/s (115MB/s), 34.3MiB/s-38.9MiB/s (36.0MB/s-40.8MB/s), io=555MiB (582MB), run=5019-5044msec 00:34:18.375 08:32:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:34:18.375 08:32:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:34:18.375 08:32:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:34:18.375 08:32:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:34:18.375 08:32:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:34:18.375 08:32:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:34:18.375 08:32:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:18.375 08:32:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:18.375 08:32:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:18.375 08:32:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:34:18.375 08:32:59 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:18.375 08:32:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:18.375 08:32:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:18.375 08:32:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:34:18.375 08:32:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:34:18.375 08:32:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:34:18.375 08:32:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:34:18.375 08:32:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:34:18.375 08:32:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:34:18.375 08:32:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:34:18.375 08:32:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:34:18.375 08:32:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:34:18.375 08:32:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:34:18.375 08:32:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:34:18.375 08:32:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:34:18.375 08:32:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:18.375 08:32:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:18.375 bdev_null0 00:34:18.375 08:32:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:18.375 08:32:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:34:18.375 08:32:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:34:18.375 08:32:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:18.375 08:32:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:18.375 08:32:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:34:18.375 08:32:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:18.375 08:32:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:18.375 08:32:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:18.375 08:32:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:34:18.375 08:32:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:18.375 08:32:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:18.375 [2024-11-28 08:32:59.696206] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:18.375 08:32:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:18.375 08:32:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:34:18.375 08:32:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:34:18.375 08:32:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:34:18.375 08:32:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:34:18.375 08:32:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:18.375 08:32:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:18.375 bdev_null1 00:34:18.375 08:32:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:34:18.375 08:32:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:34:18.375 08:32:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:18.375 08:32:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:18.375 08:32:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:18.375 08:32:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:34:18.375 08:32:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:18.375 08:32:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:18.375 08:32:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:18.376 08:32:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:18.376 08:32:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:18.376 08:32:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:18.376 08:32:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:18.376 08:32:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:34:18.376 08:32:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:34:18.376 08:32:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:34:18.376 08:32:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:34:18.376 08:32:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:18.376 08:32:59 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@10 -- # set +x 00:34:18.376 bdev_null2 00:34:18.376 08:32:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:18.376 08:32:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:34:18.376 08:32:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:18.376 08:32:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:18.376 08:32:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:18.376 08:32:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:34:18.376 08:32:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:18.376 08:32:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:18.376 08:32:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:18.376 08:32:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:34:18.376 08:32:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:18.376 08:32:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:18.376 08:32:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:18.376 08:32:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:34:18.376 08:32:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:18.376 08:32:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev 
--spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:18.376 08:32:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:34:18.376 08:32:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:34:18.376 08:32:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:34:18.376 08:32:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:34:18.376 08:32:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:34:18.376 08:32:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:18.376 08:32:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:34:18.376 08:32:59 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:34:18.376 08:32:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:34:18.376 08:32:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:34:18.376 08:32:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:34:18.376 08:32:59 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:34:18.376 08:32:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:34:18.376 08:32:59 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:34:18.376 08:32:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:34:18.376 08:32:59 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:34:18.376 { 00:34:18.376 "params": { 00:34:18.376 "name": "Nvme$subsystem", 00:34:18.376 "trtype": "$TEST_TRANSPORT", 00:34:18.376 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:18.376 "adrfam": "ipv4", 00:34:18.376 "trsvcid": "$NVMF_PORT", 
00:34:18.376 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:18.376 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:18.376 "hdgst": ${hdgst:-false}, 00:34:18.376 "ddgst": ${ddgst:-false} 00:34:18.376 }, 00:34:18.376 "method": "bdev_nvme_attach_controller" 00:34:18.376 } 00:34:18.376 EOF 00:34:18.376 )") 00:34:18.376 08:32:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:18.376 08:32:59 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:34:18.376 08:32:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:34:18.376 08:32:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:34:18.376 08:32:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:34:18.376 08:32:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:34:18.376 08:32:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:34:18.376 08:32:59 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:34:18.376 08:32:59 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:34:18.376 { 00:34:18.376 "params": { 00:34:18.376 "name": "Nvme$subsystem", 00:34:18.376 "trtype": "$TEST_TRANSPORT", 00:34:18.376 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:18.376 "adrfam": "ipv4", 00:34:18.376 "trsvcid": "$NVMF_PORT", 00:34:18.376 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:18.376 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:18.376 "hdgst": ${hdgst:-false}, 00:34:18.376 "ddgst": ${ddgst:-false} 00:34:18.376 }, 00:34:18.376 "method": "bdev_nvme_attach_controller" 00:34:18.376 } 00:34:18.376 EOF 00:34:18.376 )") 00:34:18.376 08:32:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:34:18.376 08:32:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 
00:34:18.376 08:32:59 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:34:18.376 08:32:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:34:18.376 08:32:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:34:18.376 08:32:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:34:18.376 08:32:59 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:34:18.376 08:32:59 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:34:18.376 { 00:34:18.376 "params": { 00:34:18.376 "name": "Nvme$subsystem", 00:34:18.376 "trtype": "$TEST_TRANSPORT", 00:34:18.376 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:18.376 "adrfam": "ipv4", 00:34:18.376 "trsvcid": "$NVMF_PORT", 00:34:18.376 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:18.376 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:18.376 "hdgst": ${hdgst:-false}, 00:34:18.376 "ddgst": ${ddgst:-false} 00:34:18.376 }, 00:34:18.376 "method": "bdev_nvme_attach_controller" 00:34:18.376 } 00:34:18.376 EOF 00:34:18.376 )") 00:34:18.376 08:32:59 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:34:18.376 08:32:59 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 
00:34:18.376 08:32:59 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:34:18.376 08:32:59 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:34:18.376 "params": { 00:34:18.376 "name": "Nvme0", 00:34:18.376 "trtype": "tcp", 00:34:18.376 "traddr": "10.0.0.2", 00:34:18.376 "adrfam": "ipv4", 00:34:18.376 "trsvcid": "4420", 00:34:18.376 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:18.376 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:34:18.376 "hdgst": false, 00:34:18.376 "ddgst": false 00:34:18.376 }, 00:34:18.376 "method": "bdev_nvme_attach_controller" 00:34:18.376 },{ 00:34:18.376 "params": { 00:34:18.376 "name": "Nvme1", 00:34:18.376 "trtype": "tcp", 00:34:18.376 "traddr": "10.0.0.2", 00:34:18.376 "adrfam": "ipv4", 00:34:18.376 "trsvcid": "4420", 00:34:18.376 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:34:18.376 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:34:18.376 "hdgst": false, 00:34:18.376 "ddgst": false 00:34:18.376 }, 00:34:18.376 "method": "bdev_nvme_attach_controller" 00:34:18.376 },{ 00:34:18.376 "params": { 00:34:18.376 "name": "Nvme2", 00:34:18.376 "trtype": "tcp", 00:34:18.376 "traddr": "10.0.0.2", 00:34:18.376 "adrfam": "ipv4", 00:34:18.376 "trsvcid": "4420", 00:34:18.376 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:34:18.376 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:34:18.376 "hdgst": false, 00:34:18.376 "ddgst": false 00:34:18.376 }, 00:34:18.376 "method": "bdev_nvme_attach_controller" 00:34:18.376 }' 00:34:18.376 08:32:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:34:18.376 08:32:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:34:18.376 08:32:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:34:18.376 08:32:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:34:18.376 08:32:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 
-- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:18.376 08:32:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:34:18.376 08:32:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:34:18.376 08:32:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:34:18.376 08:32:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:34:18.376 08:32:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:18.376 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:34:18.377 ... 00:34:18.377 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:34:18.377 ... 00:34:18.377 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:34:18.377 ... 
00:34:18.377 fio-3.35 00:34:18.377 Starting 24 threads 00:34:30.576 00:34:30.576 filename0: (groupid=0, jobs=1): err= 0: pid=1618489: Thu Nov 28 08:33:11 2024 00:34:30.576 read: IOPS=559, BW=2239KiB/s (2293kB/s)(21.9MiB/10003msec) 00:34:30.576 slat (usec): min=7, max=100, avg=41.06, stdev=21.99 00:34:30.576 clat (usec): min=11243, max=29761, avg=28237.84, stdev=1241.08 00:34:30.576 lat (usec): min=11266, max=29773, avg=28278.90, stdev=1239.87 00:34:30.576 clat percentiles (usec): 00:34:30.576 | 1.00th=[27395], 5.00th=[27919], 10.00th=[27919], 20.00th=[27919], 00:34:30.576 | 30.00th=[28181], 40.00th=[28181], 50.00th=[28443], 60.00th=[28443], 00:34:30.576 | 70.00th=[28443], 80.00th=[28705], 90.00th=[28705], 95.00th=[28967], 00:34:30.576 | 99.00th=[29230], 99.50th=[29492], 99.90th=[29754], 99.95th=[29754], 00:34:30.576 | 99.99th=[29754] 00:34:30.576 bw ( KiB/s): min= 2176, max= 2304, per=4.16%, avg=2236.37, stdev=65.39, samples=19 00:34:30.576 iops : min= 544, max= 576, avg=559.05, stdev=16.31, samples=19 00:34:30.576 lat (msec) : 20=0.73%, 50=99.27% 00:34:30.576 cpu : usr=98.39%, sys=1.22%, ctx=13, majf=0, minf=9 00:34:30.576 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:34:30.576 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:30.576 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:30.576 issued rwts: total=5600,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:30.576 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:30.576 filename0: (groupid=0, jobs=1): err= 0: pid=1618490: Thu Nov 28 08:33:11 2024 00:34:30.576 read: IOPS=560, BW=2242KiB/s (2296kB/s)(21.9MiB/10017msec) 00:34:30.576 slat (nsec): min=3422, max=89373, avg=28598.11, stdev=20208.36 00:34:30.576 clat (usec): min=13894, max=42689, avg=28249.71, stdev=1947.84 00:34:30.576 lat (usec): min=13902, max=42704, avg=28278.31, stdev=1948.91 00:34:30.576 clat percentiles (usec): 00:34:30.576 | 1.00th=[17957], 
5.00th=[27657], 10.00th=[27919], 20.00th=[28181], 00:34:30.576 | 30.00th=[28181], 40.00th=[28181], 50.00th=[28443], 60.00th=[28443], 00:34:30.576 | 70.00th=[28443], 80.00th=[28705], 90.00th=[28967], 95.00th=[29230], 00:34:30.576 | 99.00th=[34341], 99.50th=[39060], 99.90th=[42730], 99.95th=[42730], 00:34:30.576 | 99.99th=[42730] 00:34:30.576 bw ( KiB/s): min= 2144, max= 2304, per=4.15%, avg=2230.68, stdev=65.47, samples=19 00:34:30.576 iops : min= 536, max= 576, avg=557.63, stdev=16.37, samples=19 00:34:30.576 lat (msec) : 20=1.43%, 50=98.57% 00:34:30.576 cpu : usr=98.63%, sys=0.99%, ctx=13, majf=0, minf=9 00:34:30.576 IO depths : 1=5.3%, 2=10.9%, 4=22.8%, 8=53.5%, 16=7.6%, 32=0.0%, >=64=0.0% 00:34:30.576 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:30.576 complete : 0=0.0%, 4=93.6%, 8=0.9%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:30.576 issued rwts: total=5614,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:30.576 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:30.576 filename0: (groupid=0, jobs=1): err= 0: pid=1618491: Thu Nov 28 08:33:11 2024 00:34:30.576 read: IOPS=559, BW=2239KiB/s (2293kB/s)(21.9MiB/10003msec) 00:34:30.576 slat (usec): min=9, max=100, avg=42.16, stdev=22.44 00:34:30.576 clat (usec): min=11411, max=29618, avg=28154.25, stdev=1231.54 00:34:30.576 lat (usec): min=11425, max=29672, avg=28196.41, stdev=1233.23 00:34:30.576 clat percentiles (usec): 00:34:30.576 | 1.00th=[27395], 5.00th=[27657], 10.00th=[27919], 20.00th=[27919], 00:34:30.576 | 30.00th=[27919], 40.00th=[28181], 50.00th=[28181], 60.00th=[28443], 00:34:30.576 | 70.00th=[28443], 80.00th=[28443], 90.00th=[28705], 95.00th=[28967], 00:34:30.576 | 99.00th=[29230], 99.50th=[29230], 99.90th=[29492], 99.95th=[29492], 00:34:30.576 | 99.99th=[29492] 00:34:30.576 bw ( KiB/s): min= 2176, max= 2304, per=4.16%, avg=2236.37, stdev=65.39, samples=19 00:34:30.576 iops : min= 544, max= 576, avg=559.05, stdev=16.31, samples=19 00:34:30.576 lat (msec) : 
20=0.80%, 50=99.20% 00:34:30.576 cpu : usr=98.75%, sys=0.88%, ctx=12, majf=0, minf=9 00:34:30.576 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:34:30.576 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:30.576 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:30.576 issued rwts: total=5600,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:30.576 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:30.576 filename0: (groupid=0, jobs=1): err= 0: pid=1618492: Thu Nov 28 08:33:11 2024 00:34:30.576 read: IOPS=568, BW=2274KiB/s (2328kB/s)(22.2MiB/10021msec) 00:34:30.576 slat (nsec): min=6787, max=88022, avg=24094.31, stdev=20021.88 00:34:30.576 clat (usec): min=2348, max=29817, avg=27973.67, stdev=3437.59 00:34:30.576 lat (usec): min=2356, max=29831, avg=27997.77, stdev=3437.96 00:34:30.576 clat percentiles (usec): 00:34:30.576 | 1.00th=[ 3589], 5.00th=[27657], 10.00th=[27919], 20.00th=[28181], 00:34:30.576 | 30.00th=[28443], 40.00th=[28443], 50.00th=[28443], 60.00th=[28443], 00:34:30.576 | 70.00th=[28705], 80.00th=[28705], 90.00th=[28967], 95.00th=[28967], 00:34:30.576 | 99.00th=[29492], 99.50th=[29754], 99.90th=[29754], 99.95th=[29754], 00:34:30.576 | 99.99th=[29754] 00:34:30.576 bw ( KiB/s): min= 2171, max= 3072, per=4.23%, avg=2271.50, stdev=198.88, samples=20 00:34:30.577 iops : min= 542, max= 768, avg=567.80, stdev=49.76, samples=20 00:34:30.577 lat (msec) : 4=1.28%, 10=0.40%, 20=0.84%, 50=97.47% 00:34:30.577 cpu : usr=98.62%, sys=0.99%, ctx=33, majf=0, minf=9 00:34:30.577 IO depths : 1=6.2%, 2=12.3%, 4=24.9%, 8=50.3%, 16=6.4%, 32=0.0%, >=64=0.0% 00:34:30.577 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:30.577 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:30.577 issued rwts: total=5696,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:30.577 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:30.577 
filename0: (groupid=0, jobs=1): err= 0: pid=1618493: Thu Nov 28 08:33:11 2024 00:34:30.577 read: IOPS=559, BW=2236KiB/s (2290kB/s)(21.9MiB/10016msec) 00:34:30.577 slat (nsec): min=7015, max=91291, avg=42117.30, stdev=22299.25 00:34:30.577 clat (usec): min=15710, max=44345, avg=28201.05, stdev=1589.65 00:34:30.577 lat (usec): min=15731, max=44359, avg=28243.17, stdev=1592.38 00:34:30.577 clat percentiles (usec): 00:34:30.577 | 1.00th=[19792], 5.00th=[27657], 10.00th=[27919], 20.00th=[27919], 00:34:30.577 | 30.00th=[27919], 40.00th=[28181], 50.00th=[28181], 60.00th=[28443], 00:34:30.577 | 70.00th=[28443], 80.00th=[28443], 90.00th=[28705], 95.00th=[28967], 00:34:30.577 | 99.00th=[34341], 99.50th=[37487], 99.90th=[38536], 99.95th=[39060], 00:34:30.577 | 99.99th=[44303] 00:34:30.577 bw ( KiB/s): min= 2171, max= 2304, per=4.15%, avg=2229.63, stdev=63.62, samples=19 00:34:30.577 iops : min= 542, max= 576, avg=557.37, stdev=15.95, samples=19 00:34:30.577 lat (msec) : 20=1.29%, 50=98.71% 00:34:30.577 cpu : usr=98.71%, sys=0.90%, ctx=7, majf=0, minf=9 00:34:30.577 IO depths : 1=5.6%, 2=11.9%, 4=24.9%, 8=50.7%, 16=6.9%, 32=0.0%, >=64=0.0% 00:34:30.577 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:30.577 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:30.577 issued rwts: total=5600,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:30.577 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:30.577 filename0: (groupid=0, jobs=1): err= 0: pid=1618494: Thu Nov 28 08:33:11 2024 00:34:30.577 read: IOPS=557, BW=2232KiB/s (2285kB/s)(21.8MiB/10009msec) 00:34:30.577 slat (nsec): min=7106, max=91396, avg=29270.29, stdev=18987.82 00:34:30.577 clat (usec): min=14974, max=50334, avg=28474.49, stdev=1236.79 00:34:30.577 lat (usec): min=15051, max=50355, avg=28503.76, stdev=1234.63 00:34:30.577 clat percentiles (usec): 00:34:30.577 | 1.00th=[27657], 5.00th=[27919], 10.00th=[28181], 20.00th=[28181], 00:34:30.577 | 
30.00th=[28443], 40.00th=[28443], 50.00th=[28443], 60.00th=[28443], 00:34:30.577 | 70.00th=[28705], 80.00th=[28705], 90.00th=[28967], 95.00th=[28967], 00:34:30.577 | 99.00th=[29492], 99.50th=[29492], 99.90th=[43779], 99.95th=[43779], 00:34:30.577 | 99.99th=[50594] 00:34:30.577 bw ( KiB/s): min= 2048, max= 2304, per=4.13%, avg=2222.89, stdev=76.63, samples=19 00:34:30.577 iops : min= 512, max= 576, avg=555.68, stdev=19.19, samples=19 00:34:30.577 lat (msec) : 20=0.29%, 50=99.68%, 100=0.04% 00:34:30.577 cpu : usr=98.39%, sys=1.23%, ctx=14, majf=0, minf=9 00:34:30.577 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:34:30.577 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:30.577 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:30.577 issued rwts: total=5584,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:30.577 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:30.577 filename0: (groupid=0, jobs=1): err= 0: pid=1618495: Thu Nov 28 08:33:11 2024 00:34:30.577 read: IOPS=556, BW=2225KiB/s (2278kB/s)(21.7MiB/10003msec) 00:34:30.577 slat (nsec): min=6992, max=85757, avg=25126.40, stdev=17329.59 00:34:30.577 clat (usec): min=14868, max=53743, avg=28527.35, stdev=2398.18 00:34:30.577 lat (usec): min=14882, max=53757, avg=28552.48, stdev=2397.26 00:34:30.577 clat percentiles (usec): 00:34:30.577 | 1.00th=[17957], 5.00th=[27919], 10.00th=[28181], 20.00th=[28181], 00:34:30.577 | 30.00th=[28443], 40.00th=[28443], 50.00th=[28443], 60.00th=[28443], 00:34:30.577 | 70.00th=[28443], 80.00th=[28705], 90.00th=[28967], 95.00th=[28967], 00:34:30.577 | 99.00th=[39584], 99.50th=[40109], 99.90th=[53740], 99.95th=[53740], 00:34:30.577 | 99.99th=[53740] 00:34:30.577 bw ( KiB/s): min= 2048, max= 2304, per=4.13%, avg=2221.21, stdev=72.87, samples=19 00:34:30.577 iops : min= 512, max= 576, avg=555.26, stdev=18.17, samples=19 00:34:30.577 lat (msec) : 20=1.33%, 50=98.38%, 100=0.29% 00:34:30.577 cpu : 
usr=98.55%, sys=1.07%, ctx=13, majf=0, minf=9 00:34:30.577 IO depths : 1=5.2%, 2=10.8%, 4=22.7%, 8=53.5%, 16=7.7%, 32=0.0%, >=64=0.0% 00:34:30.577 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:30.577 complete : 0=0.0%, 4=93.6%, 8=1.0%, 16=5.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:30.577 issued rwts: total=5564,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:30.577 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:30.577 filename0: (groupid=0, jobs=1): err= 0: pid=1618496: Thu Nov 28 08:33:11 2024 00:34:30.577 read: IOPS=557, BW=2231KiB/s (2285kB/s)(21.8MiB/10010msec) 00:34:30.577 slat (nsec): min=6238, max=73440, avg=30118.71, stdev=13626.49 00:34:30.577 clat (usec): min=15014, max=50305, avg=28406.70, stdev=1227.71 00:34:30.577 lat (usec): min=15037, max=50323, avg=28436.82, stdev=1227.14 00:34:30.577 clat percentiles (usec): 00:34:30.577 | 1.00th=[27919], 5.00th=[27919], 10.00th=[28181], 20.00th=[28181], 00:34:30.577 | 30.00th=[28181], 40.00th=[28443], 50.00th=[28443], 60.00th=[28443], 00:34:30.577 | 70.00th=[28443], 80.00th=[28705], 90.00th=[28705], 95.00th=[28967], 00:34:30.577 | 99.00th=[29230], 99.50th=[29492], 99.90th=[43779], 99.95th=[43779], 00:34:30.577 | 99.99th=[50070] 00:34:30.577 bw ( KiB/s): min= 2048, max= 2304, per=4.13%, avg=2222.89, stdev=76.63, samples=19 00:34:30.577 iops : min= 512, max= 576, avg=555.68, stdev=19.19, samples=19 00:34:30.577 lat (msec) : 20=0.29%, 50=99.68%, 100=0.04% 00:34:30.577 cpu : usr=98.28%, sys=1.04%, ctx=107, majf=0, minf=9 00:34:30.577 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:34:30.577 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:30.577 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:30.577 issued rwts: total=5584,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:30.577 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:30.577 filename1: (groupid=0, jobs=1): err= 0: 
pid=1618497: Thu Nov 28 08:33:11 2024 00:34:30.577 read: IOPS=569, BW=2278KiB/s (2333kB/s)(22.3MiB/10021msec) 00:34:30.577 slat (nsec): min=3295, max=74671, avg=10800.53, stdev=4743.81 00:34:30.577 clat (usec): min=1395, max=36929, avg=27994.87, stdev=3645.61 00:34:30.577 lat (usec): min=1402, max=36938, avg=28005.67, stdev=3645.52 00:34:30.577 clat percentiles (usec): 00:34:30.577 | 1.00th=[ 3982], 5.00th=[28181], 10.00th=[28181], 20.00th=[28443], 00:34:30.577 | 30.00th=[28443], 40.00th=[28443], 50.00th=[28443], 60.00th=[28705], 00:34:30.577 | 70.00th=[28705], 80.00th=[28705], 90.00th=[28967], 95.00th=[29230], 00:34:30.577 | 99.00th=[29492], 99.50th=[29492], 99.90th=[29754], 99.95th=[34866], 00:34:30.577 | 99.99th=[36963] 00:34:30.577 bw ( KiB/s): min= 2171, max= 3168, per=4.24%, avg=2276.55, stdev=219.20, samples=20 00:34:30.577 iops : min= 542, max= 792, avg=569.10, stdev=54.82, samples=20 00:34:30.577 lat (msec) : 2=0.28%, 4=0.81%, 10=0.81%, 20=0.84%, 50=97.27% 00:34:30.577 cpu : usr=98.59%, sys=1.03%, ctx=9, majf=0, minf=9 00:34:30.577 IO depths : 1=6.0%, 2=12.2%, 4=24.6%, 8=50.7%, 16=6.5%, 32=0.0%, >=64=0.0% 00:34:30.577 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:30.577 complete : 0=0.0%, 4=94.0%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:30.577 issued rwts: total=5708,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:30.577 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:30.577 filename1: (groupid=0, jobs=1): err= 0: pid=1618498: Thu Nov 28 08:33:11 2024 00:34:30.577 read: IOPS=559, BW=2239KiB/s (2293kB/s)(21.9MiB/10003msec) 00:34:30.577 slat (nsec): min=7245, max=93648, avg=34121.85, stdev=22315.77 00:34:30.577 clat (usec): min=11418, max=29623, avg=28335.88, stdev=1246.49 00:34:30.577 lat (usec): min=11444, max=29639, avg=28370.00, stdev=1242.85 00:34:30.577 clat percentiles (usec): 00:34:30.577 | 1.00th=[27132], 5.00th=[27919], 10.00th=[27919], 20.00th=[28181], 00:34:30.577 | 30.00th=[28181], 
40.00th=[28443], 50.00th=[28443], 60.00th=[28443], 00:34:30.578 | 70.00th=[28705], 80.00th=[28705], 90.00th=[28705], 95.00th=[28967], 00:34:30.578 | 99.00th=[29230], 99.50th=[29492], 99.90th=[29492], 99.95th=[29492], 00:34:30.578 | 99.99th=[29754] 00:34:30.578 bw ( KiB/s): min= 2176, max= 2304, per=4.16%, avg=2236.37, stdev=65.39, samples=19 00:34:30.578 iops : min= 544, max= 576, avg=559.05, stdev=16.31, samples=19 00:34:30.578 lat (msec) : 20=0.82%, 50=99.18% 00:34:30.578 cpu : usr=98.73%, sys=0.88%, ctx=13, majf=0, minf=9 00:34:30.578 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:34:30.578 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:30.578 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:30.578 issued rwts: total=5600,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:30.578 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:30.578 filename1: (groupid=0, jobs=1): err= 0: pid=1618499: Thu Nov 28 08:33:11 2024 00:34:30.578 read: IOPS=557, BW=2231KiB/s (2285kB/s)(21.8MiB/10011msec) 00:34:30.578 slat (nsec): min=6224, max=87599, avg=36172.72, stdev=20847.47 00:34:30.578 clat (usec): min=14851, max=45569, avg=28422.05, stdev=1268.24 00:34:30.578 lat (usec): min=14915, max=45586, avg=28458.22, stdev=1265.23 00:34:30.578 clat percentiles (usec): 00:34:30.578 | 1.00th=[27657], 5.00th=[27919], 10.00th=[27919], 20.00th=[28181], 00:34:30.578 | 30.00th=[28181], 40.00th=[28443], 50.00th=[28443], 60.00th=[28443], 00:34:30.578 | 70.00th=[28443], 80.00th=[28705], 90.00th=[28705], 95.00th=[28967], 00:34:30.578 | 99.00th=[29492], 99.50th=[29492], 99.90th=[45351], 99.95th=[45351], 00:34:30.578 | 99.99th=[45351] 00:34:30.578 bw ( KiB/s): min= 2048, max= 2304, per=4.14%, avg=2223.16, stdev=76.45, samples=19 00:34:30.578 iops : min= 512, max= 576, avg=555.79, stdev=19.11, samples=19 00:34:30.578 lat (msec) : 20=0.29%, 50=99.71% 00:34:30.578 cpu : usr=98.68%, sys=0.95%, ctx=13, majf=0, 
minf=9 00:34:30.578 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:34:30.578 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:30.578 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:30.578 issued rwts: total=5584,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:30.578 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:30.578 filename1: (groupid=0, jobs=1): err= 0: pid=1618500: Thu Nov 28 08:33:11 2024 00:34:30.578 read: IOPS=557, BW=2232KiB/s (2285kB/s)(21.8MiB/10009msec) 00:34:30.578 slat (nsec): min=4182, max=87444, avg=38456.45, stdev=21483.94 00:34:30.578 clat (usec): min=14859, max=43145, avg=28307.30, stdev=1175.49 00:34:30.578 lat (usec): min=14886, max=43159, avg=28345.75, stdev=1174.93 00:34:30.578 clat percentiles (usec): 00:34:30.578 | 1.00th=[27657], 5.00th=[27919], 10.00th=[27919], 20.00th=[27919], 00:34:30.578 | 30.00th=[28181], 40.00th=[28181], 50.00th=[28443], 60.00th=[28443], 00:34:30.578 | 70.00th=[28443], 80.00th=[28443], 90.00th=[28705], 95.00th=[28967], 00:34:30.578 | 99.00th=[29230], 99.50th=[29492], 99.90th=[43254], 99.95th=[43254], 00:34:30.578 | 99.99th=[43254] 00:34:30.578 bw ( KiB/s): min= 2052, max= 2304, per=4.14%, avg=2223.11, stdev=76.13, samples=19 00:34:30.578 iops : min= 513, max= 576, avg=555.74, stdev=19.06, samples=19 00:34:30.578 lat (msec) : 20=0.29%, 50=99.71% 00:34:30.578 cpu : usr=98.75%, sys=0.85%, ctx=12, majf=0, minf=9 00:34:30.578 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:34:30.578 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:30.578 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:30.578 issued rwts: total=5584,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:30.578 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:30.578 filename1: (groupid=0, jobs=1): err= 0: pid=1618501: Thu Nov 28 08:33:11 2024 00:34:30.578 read: 
IOPS=564, BW=2257KiB/s (2312kB/s)(22.1MiB/10004msec) 00:34:30.578 slat (nsec): min=4876, max=87805, avg=21849.27, stdev=13950.81 00:34:30.578 clat (usec): min=11404, max=45532, avg=28155.66, stdev=2544.76 00:34:30.578 lat (usec): min=11414, max=45546, avg=28177.51, stdev=2545.65 00:34:30.578 clat percentiles (usec): 00:34:30.578 | 1.00th=[17957], 5.00th=[23462], 10.00th=[27919], 20.00th=[28181], 00:34:30.578 | 30.00th=[28443], 40.00th=[28443], 50.00th=[28443], 60.00th=[28443], 00:34:30.578 | 70.00th=[28443], 80.00th=[28705], 90.00th=[28705], 95.00th=[28967], 00:34:30.578 | 99.00th=[35390], 99.50th=[42730], 99.90th=[45351], 99.95th=[45351], 00:34:30.578 | 99.99th=[45351] 00:34:30.578 bw ( KiB/s): min= 2052, max= 2704, per=4.18%, avg=2249.21, stdev=131.15, samples=19 00:34:30.578 iops : min= 513, max= 676, avg=562.26, stdev=32.81, samples=19 00:34:30.578 lat (msec) : 20=2.16%, 50=97.84% 00:34:30.578 cpu : usr=98.67%, sys=0.95%, ctx=18, majf=0, minf=11 00:34:30.578 IO depths : 1=5.0%, 2=10.6%, 4=23.0%, 8=53.6%, 16=7.8%, 32=0.0%, >=64=0.0% 00:34:30.578 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:30.578 complete : 0=0.0%, 4=93.6%, 8=0.8%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:30.578 issued rwts: total=5646,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:30.578 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:30.578 filename1: (groupid=0, jobs=1): err= 0: pid=1618502: Thu Nov 28 08:33:11 2024 00:34:30.578 read: IOPS=567, BW=2269KiB/s (2324kB/s)(22.2MiB/10004msec) 00:34:30.578 slat (nsec): min=5879, max=88510, avg=16259.17, stdev=11590.05 00:34:30.578 clat (usec): min=8877, max=53638, avg=28104.15, stdev=3487.73 00:34:30.578 lat (usec): min=8884, max=53658, avg=28120.41, stdev=3487.43 00:34:30.578 clat percentiles (usec): 00:34:30.578 | 1.00th=[17957], 5.00th=[22414], 10.00th=[23462], 20.00th=[27919], 00:34:30.578 | 30.00th=[28443], 40.00th=[28443], 50.00th=[28443], 60.00th=[28443], 00:34:30.578 | 70.00th=[28705], 
80.00th=[28705], 90.00th=[31851], 95.00th=[33817], 00:34:30.578 | 99.00th=[35390], 99.50th=[39584], 99.90th=[53740], 99.95th=[53740], 00:34:30.578 | 99.99th=[53740] 00:34:30.578 bw ( KiB/s): min= 2048, max= 2368, per=4.20%, avg=2258.26, stdev=75.39, samples=19 00:34:30.578 iops : min= 512, max= 592, avg=564.53, stdev=18.81, samples=19 00:34:30.578 lat (msec) : 10=0.18%, 20=2.15%, 50=97.39%, 100=0.28% 00:34:30.578 cpu : usr=98.56%, sys=1.06%, ctx=13, majf=0, minf=9 00:34:30.578 IO depths : 1=1.9%, 2=3.8%, 4=9.0%, 8=72.0%, 16=13.3%, 32=0.0%, >=64=0.0% 00:34:30.578 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:30.578 complete : 0=0.0%, 4=90.3%, 8=6.7%, 16=3.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:30.578 issued rwts: total=5676,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:30.578 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:30.578 filename1: (groupid=0, jobs=1): err= 0: pid=1618503: Thu Nov 28 08:33:11 2024 00:34:30.578 read: IOPS=558, BW=2233KiB/s (2287kB/s)(21.8MiB/10002msec) 00:34:30.578 slat (nsec): min=7064, max=87484, avg=25370.53, stdev=17931.61 00:34:30.578 clat (usec): min=20827, max=36790, avg=28471.79, stdev=592.06 00:34:30.578 lat (usec): min=20834, max=36811, avg=28497.16, stdev=589.15 00:34:30.578 clat percentiles (usec): 00:34:30.578 | 1.00th=[27657], 5.00th=[27919], 10.00th=[28181], 20.00th=[28181], 00:34:30.578 | 30.00th=[28443], 40.00th=[28443], 50.00th=[28443], 60.00th=[28443], 00:34:30.578 | 70.00th=[28705], 80.00th=[28705], 90.00th=[28967], 95.00th=[28967], 00:34:30.578 | 99.00th=[29492], 99.50th=[29492], 99.90th=[29754], 99.95th=[30278], 00:34:30.578 | 99.99th=[36963] 00:34:30.578 bw ( KiB/s): min= 2171, max= 2304, per=4.15%, avg=2229.63, stdev=65.17, samples=19 00:34:30.578 iops : min= 542, max= 576, avg=557.37, stdev=16.33, samples=19 00:34:30.578 lat (msec) : 50=100.00% 00:34:30.578 cpu : usr=98.75%, sys=0.87%, ctx=8, majf=0, minf=9 00:34:30.578 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 
16=6.3%, 32=0.0%, >=64=0.0% 00:34:30.578 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:30.578 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:30.578 issued rwts: total=5584,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:30.578 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:30.578 filename1: (groupid=0, jobs=1): err= 0: pid=1618504: Thu Nov 28 08:33:11 2024 00:34:30.579 read: IOPS=558, BW=2233KiB/s (2286kB/s)(21.8MiB/10004msec) 00:34:30.579 slat (nsec): min=6631, max=46062, avg=18279.30, stdev=5405.68 00:34:30.579 clat (usec): min=4110, max=53782, avg=28495.79, stdev=2078.52 00:34:30.579 lat (usec): min=4116, max=53799, avg=28514.07, stdev=2078.59 00:34:30.579 clat percentiles (usec): 00:34:30.579 | 1.00th=[26870], 5.00th=[28181], 10.00th=[28181], 20.00th=[28443], 00:34:30.579 | 30.00th=[28443], 40.00th=[28443], 50.00th=[28443], 60.00th=[28443], 00:34:30.579 | 70.00th=[28705], 80.00th=[28705], 90.00th=[28967], 95.00th=[28967], 00:34:30.579 | 99.00th=[29492], 99.50th=[29754], 99.90th=[53740], 99.95th=[53740], 00:34:30.579 | 99.99th=[53740] 00:34:30.579 bw ( KiB/s): min= 2048, max= 2304, per=4.13%, avg=2222.89, stdev=76.16, samples=19 00:34:30.579 iops : min= 512, max= 576, avg=555.68, stdev=19.00, samples=19 00:34:30.579 lat (msec) : 10=0.29%, 20=0.45%, 50=98.98%, 100=0.29% 00:34:30.579 cpu : usr=98.66%, sys=0.95%, ctx=12, majf=0, minf=9 00:34:30.579 IO depths : 1=6.2%, 2=12.5%, 4=24.9%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:34:30.579 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:30.579 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:30.579 issued rwts: total=5584,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:30.579 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:30.579 filename2: (groupid=0, jobs=1): err= 0: pid=1618505: Thu Nov 28 08:33:11 2024 00:34:30.579 read: IOPS=558, BW=2232KiB/s 
(2286kB/s)(21.8MiB/10007msec) 00:34:30.579 slat (nsec): min=7327, max=89772, avg=41865.01, stdev=22708.82 00:34:30.579 clat (usec): min=20545, max=31880, avg=28230.62, stdev=551.72 00:34:30.579 lat (usec): min=20557, max=31901, avg=28272.48, stdev=558.18 00:34:30.579 clat percentiles (usec): 00:34:30.579 | 1.00th=[27395], 5.00th=[27919], 10.00th=[27919], 20.00th=[27919], 00:34:30.579 | 30.00th=[27919], 40.00th=[28181], 50.00th=[28181], 60.00th=[28181], 00:34:30.579 | 70.00th=[28443], 80.00th=[28443], 90.00th=[28705], 95.00th=[28967], 00:34:30.579 | 99.00th=[29230], 99.50th=[29492], 99.90th=[31851], 99.95th=[31851], 00:34:30.579 | 99.99th=[31851] 00:34:30.579 bw ( KiB/s): min= 2171, max= 2304, per=4.15%, avg=2229.37, stdev=64.86, samples=19 00:34:30.579 iops : min= 542, max= 576, avg=557.26, stdev=16.21, samples=19 00:34:30.579 lat (msec) : 50=100.00% 00:34:30.579 cpu : usr=98.48%, sys=1.13%, ctx=17, majf=0, minf=9 00:34:30.579 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:34:30.579 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:30.579 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:30.579 issued rwts: total=5584,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:30.579 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:30.579 filename2: (groupid=0, jobs=1): err= 0: pid=1618506: Thu Nov 28 08:33:11 2024 00:34:30.579 read: IOPS=561, BW=2248KiB/s (2302kB/s)(22.0MiB/10018msec) 00:34:30.579 slat (nsec): min=3335, max=77334, avg=17930.14, stdev=5970.26 00:34:30.579 clat (usec): min=15993, max=40111, avg=28313.09, stdev=2142.60 00:34:30.579 lat (usec): min=16002, max=40118, avg=28331.02, stdev=2142.80 00:34:30.579 clat percentiles (usec): 00:34:30.579 | 1.00th=[17695], 5.00th=[27657], 10.00th=[28181], 20.00th=[28443], 00:34:30.579 | 30.00th=[28443], 40.00th=[28443], 50.00th=[28443], 60.00th=[28443], 00:34:30.579 | 70.00th=[28705], 80.00th=[28705], 90.00th=[28967], 
95.00th=[28967], 00:34:30.579 | 99.00th=[36439], 99.50th=[39584], 99.90th=[40109], 99.95th=[40109], 00:34:30.579 | 99.99th=[40109] 00:34:30.579 bw ( KiB/s): min= 2171, max= 2368, per=4.17%, avg=2242.26, stdev=72.86, samples=19 00:34:30.579 iops : min= 542, max= 592, avg=560.53, stdev=18.26, samples=19 00:34:30.579 lat (msec) : 20=2.43%, 50=97.57% 00:34:30.579 cpu : usr=98.56%, sys=1.06%, ctx=13, majf=0, minf=9 00:34:30.579 IO depths : 1=5.2%, 2=11.0%, 4=23.5%, 8=52.8%, 16=7.5%, 32=0.0%, >=64=0.0% 00:34:30.579 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:30.579 complete : 0=0.0%, 4=93.8%, 8=0.6%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:30.579 issued rwts: total=5630,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:30.579 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:30.579 filename2: (groupid=0, jobs=1): err= 0: pid=1618507: Thu Nov 28 08:33:11 2024 00:34:30.579 read: IOPS=557, BW=2231KiB/s (2285kB/s)(21.8MiB/10010msec) 00:34:30.579 slat (nsec): min=7085, max=87494, avg=38376.28, stdev=20697.96 00:34:30.579 clat (usec): min=14793, max=50359, avg=28372.46, stdev=1567.33 00:34:30.579 lat (usec): min=14806, max=50376, avg=28410.84, stdev=1566.09 00:34:30.579 clat percentiles (usec): 00:34:30.579 | 1.00th=[21890], 5.00th=[27919], 10.00th=[27919], 20.00th=[28181], 00:34:30.579 | 30.00th=[28181], 40.00th=[28181], 50.00th=[28443], 60.00th=[28443], 00:34:30.579 | 70.00th=[28443], 80.00th=[28705], 90.00th=[28705], 95.00th=[28967], 00:34:30.579 | 99.00th=[34866], 99.50th=[35390], 99.90th=[43779], 99.95th=[43779], 00:34:30.579 | 99.99th=[50594] 00:34:30.579 bw ( KiB/s): min= 2048, max= 2304, per=4.13%, avg=2222.89, stdev=76.63, samples=19 00:34:30.579 iops : min= 512, max= 576, avg=555.68, stdev=19.19, samples=19 00:34:30.579 lat (msec) : 20=0.29%, 50=99.68%, 100=0.04% 00:34:30.579 cpu : usr=98.61%, sys=1.01%, ctx=10, majf=0, minf=9 00:34:30.579 IO depths : 1=5.6%, 2=11.8%, 4=24.9%, 8=50.8%, 16=6.9%, 32=0.0%, >=64=0.0% 
00:34:30.579 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:30.579 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:30.579 issued rwts: total=5584,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:30.579 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:30.579 filename2: (groupid=0, jobs=1): err= 0: pid=1618508: Thu Nov 28 08:33:11 2024 00:34:30.579 read: IOPS=567, BW=2271KiB/s (2325kB/s)(22.2MiB/10006msec) 00:34:30.579 slat (nsec): min=5853, max=65202, avg=12916.08, stdev=5494.14 00:34:30.579 clat (usec): min=2840, max=29778, avg=28077.63, stdev=3307.63 00:34:30.579 lat (usec): min=2848, max=29792, avg=28090.55, stdev=3307.37 00:34:30.579 clat percentiles (usec): 00:34:30.579 | 1.00th=[ 4621], 5.00th=[28181], 10.00th=[28181], 20.00th=[28443], 00:34:30.579 | 30.00th=[28443], 40.00th=[28443], 50.00th=[28443], 60.00th=[28705], 00:34:30.579 | 70.00th=[28705], 80.00th=[28705], 90.00th=[28967], 95.00th=[28967], 00:34:30.579 | 99.00th=[29492], 99.50th=[29492], 99.90th=[29754], 99.95th=[29754], 00:34:30.579 | 99.99th=[29754] 00:34:30.579 bw ( KiB/s): min= 2176, max= 2944, per=4.22%, avg=2270.05, stdev=175.05, samples=19 00:34:30.579 iops : min= 544, max= 736, avg=567.47, stdev=43.76, samples=19 00:34:30.579 lat (msec) : 4=0.63%, 10=1.11%, 20=0.51%, 50=97.75% 00:34:30.579 cpu : usr=98.48%, sys=1.12%, ctx=13, majf=0, minf=9 00:34:30.579 IO depths : 1=6.1%, 2=12.3%, 4=24.5%, 8=50.6%, 16=6.5%, 32=0.0%, >=64=0.0% 00:34:30.579 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:30.579 complete : 0=0.0%, 4=94.0%, 8=0.2%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:30.579 issued rwts: total=5680,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:30.579 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:30.579 filename2: (groupid=0, jobs=1): err= 0: pid=1618509: Thu Nov 28 08:33:11 2024 00:34:30.579 read: IOPS=556, BW=2227KiB/s (2280kB/s)(21.8MiB/10001msec) 00:34:30.579 slat (nsec): 
min=6995, max=87507, avg=35277.17, stdev=22001.29 00:34:30.579 clat (usec): min=14878, max=64562, avg=28382.70, stdev=2058.47 00:34:30.579 lat (usec): min=14892, max=64605, avg=28417.98, stdev=2058.61 00:34:30.579 clat percentiles (usec): 00:34:30.579 | 1.00th=[21103], 5.00th=[27919], 10.00th=[27919], 20.00th=[27919], 00:34:30.579 | 30.00th=[28181], 40.00th=[28181], 50.00th=[28443], 60.00th=[28443], 00:34:30.579 | 70.00th=[28443], 80.00th=[28705], 90.00th=[28705], 95.00th=[28967], 00:34:30.579 | 99.00th=[38536], 99.50th=[39584], 99.90th=[52691], 99.95th=[53216], 00:34:30.579 | 99.99th=[64750] 00:34:30.580 bw ( KiB/s): min= 2052, max= 2304, per=4.14%, avg=2223.11, stdev=75.66, samples=19 00:34:30.580 iops : min= 513, max= 576, avg=555.74, stdev=18.87, samples=19 00:34:30.580 lat (msec) : 20=0.75%, 50=98.96%, 100=0.29% 00:34:30.580 cpu : usr=98.60%, sys=1.03%, ctx=12, majf=0, minf=9 00:34:30.580 IO depths : 1=5.9%, 2=11.9%, 4=24.2%, 8=51.2%, 16=6.7%, 32=0.0%, >=64=0.0% 00:34:30.580 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:30.580 complete : 0=0.0%, 4=94.0%, 8=0.3%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:30.580 issued rwts: total=5568,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:30.580 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:30.580 filename2: (groupid=0, jobs=1): err= 0: pid=1618510: Thu Nov 28 08:33:11 2024 00:34:30.580 read: IOPS=559, BW=2239KiB/s (2293kB/s)(21.9MiB/10003msec) 00:34:30.580 slat (usec): min=7, max=100, avg=25.77, stdev=20.67 00:34:30.580 clat (usec): min=11228, max=29968, avg=28374.52, stdev=1261.17 00:34:30.580 lat (usec): min=11242, max=29981, avg=28400.28, stdev=1256.71 00:34:30.580 clat percentiles (usec): 00:34:30.580 | 1.00th=[27395], 5.00th=[27657], 10.00th=[27919], 20.00th=[28181], 00:34:30.580 | 30.00th=[28443], 40.00th=[28443], 50.00th=[28443], 60.00th=[28443], 00:34:30.580 | 70.00th=[28705], 80.00th=[28705], 90.00th=[28967], 95.00th=[28967], 00:34:30.580 | 99.00th=[29492], 
99.50th=[29492], 99.90th=[30016], 99.95th=[30016], 00:34:30.580 | 99.99th=[30016] 00:34:30.580 bw ( KiB/s): min= 2176, max= 2304, per=4.16%, avg=2236.37, stdev=65.39, samples=19 00:34:30.580 iops : min= 544, max= 576, avg=559.05, stdev=16.31, samples=19 00:34:30.580 lat (msec) : 20=0.82%, 50=99.18% 00:34:30.580 cpu : usr=98.48%, sys=1.12%, ctx=17, majf=0, minf=9 00:34:30.580 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:34:30.580 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:30.580 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:30.580 issued rwts: total=5600,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:30.580 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:30.580 filename2: (groupid=0, jobs=1): err= 0: pid=1618511: Thu Nov 28 08:33:11 2024 00:34:30.580 read: IOPS=558, BW=2232KiB/s (2286kB/s)(21.8MiB/10006msec) 00:34:30.580 slat (usec): min=3, max=106, avg=42.15, stdev=22.67 00:34:30.580 clat (usec): min=12967, max=46709, avg=28230.09, stdev=1375.13 00:34:30.580 lat (usec): min=12983, max=46719, avg=28272.24, stdev=1377.14 00:34:30.580 clat percentiles (usec): 00:34:30.580 | 1.00th=[27395], 5.00th=[27919], 10.00th=[27919], 20.00th=[27919], 00:34:30.580 | 30.00th=[27919], 40.00th=[28181], 50.00th=[28181], 60.00th=[28181], 00:34:30.580 | 70.00th=[28443], 80.00th=[28443], 90.00th=[28705], 95.00th=[28967], 00:34:30.580 | 99.00th=[29230], 99.50th=[29492], 99.90th=[46924], 99.95th=[46924], 00:34:30.580 | 99.99th=[46924] 00:34:30.580 bw ( KiB/s): min= 2048, max= 2304, per=4.13%, avg=2222.89, stdev=76.63, samples=19 00:34:30.580 iops : min= 512, max= 576, avg=555.68, stdev=19.19, samples=19 00:34:30.580 lat (msec) : 20=0.29%, 50=99.71% 00:34:30.580 cpu : usr=98.55%, sys=1.06%, ctx=15, majf=0, minf=9 00:34:30.580 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:34:30.580 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, 
>=64=0.0% 00:34:30.580 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:30.580 issued rwts: total=5584,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:30.580 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:30.580 filename2: (groupid=0, jobs=1): err= 0: pid=1618512: Thu Nov 28 08:33:11 2024 00:34:30.580 read: IOPS=559, BW=2239KiB/s (2293kB/s)(21.9MiB/10003msec) 00:34:30.580 slat (nsec): min=7588, max=72655, avg=34030.82, stdev=14066.98 00:34:30.580 clat (usec): min=11514, max=29698, avg=28276.94, stdev=1218.42 00:34:30.580 lat (usec): min=11564, max=29727, avg=28310.97, stdev=1218.50 00:34:30.580 clat percentiles (usec): 00:34:30.580 | 1.00th=[27395], 5.00th=[27919], 10.00th=[28181], 20.00th=[28181], 00:34:30.580 | 30.00th=[28181], 40.00th=[28181], 50.00th=[28443], 60.00th=[28443], 00:34:30.580 | 70.00th=[28443], 80.00th=[28705], 90.00th=[28705], 95.00th=[28967], 00:34:30.580 | 99.00th=[29230], 99.50th=[29230], 99.90th=[29492], 99.95th=[29754], 00:34:30.580 | 99.99th=[29754] 00:34:30.580 bw ( KiB/s): min= 2176, max= 2304, per=4.16%, avg=2236.37, stdev=65.39, samples=19 00:34:30.580 iops : min= 544, max= 576, avg=559.05, stdev=16.31, samples=19 00:34:30.580 lat (msec) : 20=0.77%, 50=99.23% 00:34:30.580 cpu : usr=98.16%, sys=1.15%, ctx=61, majf=0, minf=9 00:34:30.580 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:34:30.580 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:30.580 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:30.580 issued rwts: total=5600,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:30.580 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:30.580 00:34:30.580 Run status group 0 (all jobs): 00:34:30.580 READ: bw=52.5MiB/s (55.0MB/s), 2225KiB/s-2278KiB/s (2278kB/s-2333kB/s), io=526MiB (551MB), run=10001-10021msec 00:34:30.580 08:33:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 
00:34:30.580 08:33:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:34:30.580 08:33:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:34:30.580 08:33:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:34:30.580 08:33:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:34:30.580 08:33:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:34:30.580 08:33:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:30.580 08:33:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:30.580 08:33:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:30.580 08:33:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:34:30.580 08:33:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:30.580 08:33:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:30.580 08:33:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:30.580 08:33:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:34:30.580 08:33:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:34:30.580 08:33:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:34:30.580 08:33:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:34:30.580 08:33:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:30.580 08:33:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:30.580 08:33:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:30.580 08:33:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- 
# rpc_cmd bdev_null_delete bdev_null1 00:34:30.580 08:33:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:30.580 08:33:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:30.580 08:33:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:30.580 08:33:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:34:30.580 08:33:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:34:30.580 08:33:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:34:30.580 08:33:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:34:30.580 08:33:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:30.580 08:33:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:30.580 08:33:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:30.580 08:33:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:34:30.580 08:33:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:30.580 08:33:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:30.580 08:33:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:30.580 08:33:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:34:30.580 08:33:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:34:30.580 08:33:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:34:30.580 08:33:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:34:30.581 08:33:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:34:30.581 08:33:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 
00:34:30.581 08:33:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:34:30.581 08:33:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:34:30.581 08:33:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:34:30.581 08:33:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:34:30.581 08:33:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:34:30.581 08:33:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:34:30.581 08:33:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:30.581 08:33:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:30.581 bdev_null0 00:34:30.581 08:33:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:30.581 08:33:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:34:30.581 08:33:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:30.581 08:33:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:30.581 08:33:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:30.581 08:33:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:34:30.581 08:33:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:30.581 08:33:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:30.581 08:33:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:30.581 08:33:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t 
tcp -a 10.0.0.2 -s 4420 00:34:30.581 08:33:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:30.581 08:33:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:30.581 [2024-11-28 08:33:11.375487] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:30.581 08:33:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:30.581 08:33:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:34:30.581 08:33:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:34:30.581 08:33:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:34:30.581 08:33:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:34:30.581 08:33:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:30.581 08:33:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:30.581 bdev_null1 00:34:30.581 08:33:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:30.581 08:33:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:34:30.581 08:33:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:30.581 08:33:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:30.581 08:33:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:30.581 08:33:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:34:30.581 08:33:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:30.581 08:33:11 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@10 -- # set +x 00:34:30.581 08:33:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:30.581 08:33:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:30.581 08:33:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:30.581 08:33:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:30.581 08:33:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:30.581 08:33:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:34:30.581 08:33:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:34:30.581 08:33:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:34:30.581 08:33:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:30.581 08:33:11 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:34:30.581 08:33:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:30.581 08:33:11 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:34:30.581 08:33:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:34:30.581 08:33:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:34:30.581 08:33:11 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:34:30.581 08:33:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:34:30.581 08:33:11 nvmf_dif.fio_dif_rand_params -- 
nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:34:30.581 { 00:34:30.581 "params": { 00:34:30.581 "name": "Nvme$subsystem", 00:34:30.581 "trtype": "$TEST_TRANSPORT", 00:34:30.581 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:30.581 "adrfam": "ipv4", 00:34:30.581 "trsvcid": "$NVMF_PORT", 00:34:30.581 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:30.581 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:30.581 "hdgst": ${hdgst:-false}, 00:34:30.581 "ddgst": ${ddgst:-false} 00:34:30.581 }, 00:34:30.581 "method": "bdev_nvme_attach_controller" 00:34:30.581 } 00:34:30.581 EOF 00:34:30.581 )") 00:34:30.581 08:33:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:30.581 08:33:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:34:30.581 08:33:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:34:30.581 08:33:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:34:30.581 08:33:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:34:30.581 08:33:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:34:30.581 08:33:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:34:30.581 08:33:11 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:34:30.581 08:33:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:30.581 08:33:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:34:30.581 08:33:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:34:30.581 08:33:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:34:30.581 08:33:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= 
files )) 00:34:30.581 08:33:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:34:30.581 08:33:11 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:34:30.581 08:33:11 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:34:30.581 { 00:34:30.581 "params": { 00:34:30.581 "name": "Nvme$subsystem", 00:34:30.581 "trtype": "$TEST_TRANSPORT", 00:34:30.581 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:30.581 "adrfam": "ipv4", 00:34:30.581 "trsvcid": "$NVMF_PORT", 00:34:30.581 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:30.581 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:30.581 "hdgst": ${hdgst:-false}, 00:34:30.581 "ddgst": ${ddgst:-false} 00:34:30.581 }, 00:34:30.581 "method": "bdev_nvme_attach_controller" 00:34:30.581 } 00:34:30.581 EOF 00:34:30.581 )") 00:34:30.582 08:33:11 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:34:30.582 08:33:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:34:30.582 08:33:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:34:30.582 08:33:11 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 
00:34:30.582 08:33:11 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:34:30.582 08:33:11 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:34:30.582 "params": { 00:34:30.582 "name": "Nvme0", 00:34:30.582 "trtype": "tcp", 00:34:30.582 "traddr": "10.0.0.2", 00:34:30.582 "adrfam": "ipv4", 00:34:30.582 "trsvcid": "4420", 00:34:30.582 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:30.582 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:34:30.582 "hdgst": false, 00:34:30.582 "ddgst": false 00:34:30.582 }, 00:34:30.582 "method": "bdev_nvme_attach_controller" 00:34:30.582 },{ 00:34:30.582 "params": { 00:34:30.582 "name": "Nvme1", 00:34:30.582 "trtype": "tcp", 00:34:30.582 "traddr": "10.0.0.2", 00:34:30.582 "adrfam": "ipv4", 00:34:30.582 "trsvcid": "4420", 00:34:30.582 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:34:30.582 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:34:30.582 "hdgst": false, 00:34:30.582 "ddgst": false 00:34:30.582 }, 00:34:30.582 "method": "bdev_nvme_attach_controller" 00:34:30.582 }' 00:34:30.582 08:33:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:34:30.582 08:33:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:34:30.582 08:33:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:34:30.582 08:33:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:30.582 08:33:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:34:30.582 08:33:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:34:30.582 08:33:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:34:30.582 08:33:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:34:30.582 08:33:11 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:34:30.582 08:33:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:30.582 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:34:30.582 ... 00:34:30.582 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:34:30.582 ... 00:34:30.582 fio-3.35 00:34:30.582 Starting 4 threads 00:34:35.857 00:34:35.857 filename0: (groupid=0, jobs=1): err= 0: pid=1620873: Thu Nov 28 08:33:17 2024 00:34:35.857 read: IOPS=2666, BW=20.8MiB/s (21.8MB/s)(104MiB/5003msec) 00:34:35.857 slat (usec): min=6, max=211, avg=12.05, stdev= 6.99 00:34:35.857 clat (usec): min=751, max=5459, avg=2960.81, stdev=433.05 00:34:35.857 lat (usec): min=762, max=5467, avg=2972.86, stdev=433.72 00:34:35.857 clat percentiles (usec): 00:34:35.857 | 1.00th=[ 1893], 5.00th=[ 2311], 10.00th=[ 2474], 20.00th=[ 2638], 00:34:35.857 | 30.00th=[ 2769], 40.00th=[ 2900], 50.00th=[ 2999], 60.00th=[ 3064], 00:34:35.857 | 70.00th=[ 3130], 80.00th=[ 3228], 90.00th=[ 3392], 95.00th=[ 3621], 00:34:35.857 | 99.00th=[ 4359], 99.50th=[ 4752], 99.90th=[ 5145], 99.95th=[ 5211], 00:34:35.857 | 99.99th=[ 5276] 00:34:35.857 bw ( KiB/s): min=20384, max=23312, per=25.76%, avg=21256.89, stdev=889.19, samples=9 00:34:35.857 iops : min= 2548, max= 2914, avg=2657.11, stdev=111.15, samples=9 00:34:35.857 lat (usec) : 1000=0.02% 00:34:35.857 lat (msec) : 2=1.48%, 4=96.23%, 10=2.26% 00:34:35.857 cpu : usr=96.72%, sys=2.90%, ctx=11, majf=0, minf=9 00:34:35.857 IO depths : 1=0.3%, 2=11.8%, 4=60.3%, 8=27.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:35.857 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:35.857 complete : 0=0.0%, 
4=92.5%, 8=7.5%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:35.857 issued rwts: total=13342,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:35.857 latency : target=0, window=0, percentile=100.00%, depth=8 00:34:35.857 filename0: (groupid=0, jobs=1): err= 0: pid=1620874: Thu Nov 28 08:33:17 2024 00:34:35.857 read: IOPS=2508, BW=19.6MiB/s (20.6MB/s)(98.0MiB/5001msec) 00:34:35.857 slat (nsec): min=6243, max=58461, avg=11571.35, stdev=6500.35 00:34:35.858 clat (usec): min=639, max=6216, avg=3154.77, stdev=516.54 00:34:35.858 lat (usec): min=649, max=6223, avg=3166.34, stdev=516.22 00:34:35.858 clat percentiles (usec): 00:34:35.858 | 1.00th=[ 2089], 5.00th=[ 2474], 10.00th=[ 2638], 20.00th=[ 2835], 00:34:35.858 | 30.00th=[ 2966], 40.00th=[ 3032], 50.00th=[ 3097], 60.00th=[ 3163], 00:34:35.858 | 70.00th=[ 3228], 80.00th=[ 3392], 90.00th=[ 3720], 95.00th=[ 4146], 00:34:35.858 | 99.00th=[ 5145], 99.50th=[ 5407], 99.90th=[ 5735], 99.95th=[ 5866], 00:34:35.858 | 99.99th=[ 6194] 00:34:35.858 bw ( KiB/s): min=18688, max=20976, per=24.20%, avg=19968.78, stdev=695.64, samples=9 00:34:35.858 iops : min= 2336, max= 2622, avg=2496.00, stdev=86.95, samples=9 00:34:35.858 lat (usec) : 750=0.01% 00:34:35.858 lat (msec) : 2=0.67%, 4=93.11%, 10=6.21% 00:34:35.858 cpu : usr=96.84%, sys=2.84%, ctx=9, majf=0, minf=9 00:34:35.858 IO depths : 1=0.1%, 2=5.3%, 4=66.6%, 8=28.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:35.858 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:35.858 complete : 0=0.0%, 4=92.5%, 8=7.5%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:35.858 issued rwts: total=12547,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:35.858 latency : target=0, window=0, percentile=100.00%, depth=8 00:34:35.858 filename1: (groupid=0, jobs=1): err= 0: pid=1620875: Thu Nov 28 08:33:17 2024 00:34:35.858 read: IOPS=2529, BW=19.8MiB/s (20.7MB/s)(98.8MiB/5001msec) 00:34:35.858 slat (nsec): min=6254, max=64515, avg=14620.92, stdev=9724.89 00:34:35.858 clat (usec): min=654, max=5792, 
avg=3115.27, stdev=473.54 00:34:35.858 lat (usec): min=681, max=5804, avg=3129.89, stdev=474.00 00:34:35.858 clat percentiles (usec): 00:34:35.858 | 1.00th=[ 2008], 5.00th=[ 2442], 10.00th=[ 2606], 20.00th=[ 2835], 00:34:35.858 | 30.00th=[ 2933], 40.00th=[ 2999], 50.00th=[ 3064], 60.00th=[ 3130], 00:34:35.858 | 70.00th=[ 3228], 80.00th=[ 3359], 90.00th=[ 3654], 95.00th=[ 3982], 00:34:35.858 | 99.00th=[ 4686], 99.50th=[ 5080], 99.90th=[ 5473], 99.95th=[ 5604], 00:34:35.858 | 99.99th=[ 5800] 00:34:35.858 bw ( KiB/s): min=19712, max=21360, per=24.59%, avg=20289.78, stdev=512.78, samples=9 00:34:35.858 iops : min= 2464, max= 2670, avg=2536.22, stdev=64.10, samples=9 00:34:35.858 lat (usec) : 750=0.06%, 1000=0.06% 00:34:35.858 lat (msec) : 2=0.82%, 4=94.39%, 10=4.66% 00:34:35.858 cpu : usr=95.48%, sys=3.44%, ctx=99, majf=0, minf=9 00:34:35.858 IO depths : 1=0.1%, 2=9.6%, 4=62.2%, 8=28.1%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:35.858 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:35.858 complete : 0=0.0%, 4=92.6%, 8=7.4%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:35.858 issued rwts: total=12651,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:35.858 latency : target=0, window=0, percentile=100.00%, depth=8 00:34:35.858 filename1: (groupid=0, jobs=1): err= 0: pid=1620876: Thu Nov 28 08:33:17 2024 00:34:35.858 read: IOPS=2612, BW=20.4MiB/s (21.4MB/s)(102MiB/5002msec) 00:34:35.858 slat (nsec): min=6248, max=60087, avg=13268.67, stdev=7697.42 00:34:35.858 clat (usec): min=609, max=5793, avg=3019.21, stdev=450.18 00:34:35.858 lat (usec): min=621, max=5807, avg=3032.48, stdev=450.62 00:34:35.858 clat percentiles (usec): 00:34:35.858 | 1.00th=[ 1893], 5.00th=[ 2311], 10.00th=[ 2507], 20.00th=[ 2737], 00:34:35.858 | 30.00th=[ 2868], 40.00th=[ 2966], 50.00th=[ 3032], 60.00th=[ 3097], 00:34:35.858 | 70.00th=[ 3163], 80.00th=[ 3261], 90.00th=[ 3458], 95.00th=[ 3720], 00:34:35.858 | 99.00th=[ 4490], 99.50th=[ 4883], 99.90th=[ 5211], 99.95th=[ 5276], 00:34:35.858 
| 99.99th=[ 5800] 00:34:35.858 bw ( KiB/s): min=19424, max=22240, per=25.32%, avg=20891.67, stdev=879.98, samples=9 00:34:35.858 iops : min= 2428, max= 2780, avg=2611.44, stdev=110.00, samples=9 00:34:35.858 lat (usec) : 750=0.03%, 1000=0.02% 00:34:35.858 lat (msec) : 2=1.34%, 4=95.70%, 10=2.92% 00:34:35.858 cpu : usr=92.20%, sys=5.00%, ctx=219, majf=0, minf=9 00:34:35.858 IO depths : 1=0.3%, 2=10.6%, 4=60.8%, 8=28.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:35.858 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:35.858 complete : 0=0.0%, 4=92.8%, 8=7.2%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:35.858 issued rwts: total=13068,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:35.858 latency : target=0, window=0, percentile=100.00%, depth=8 00:34:35.858 00:34:35.858 Run status group 0 (all jobs): 00:34:35.858 READ: bw=80.6MiB/s (84.5MB/s), 19.6MiB/s-20.8MiB/s (20.6MB/s-21.8MB/s), io=403MiB (423MB), run=5001-5003msec 00:34:35.858 08:33:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:34:35.858 08:33:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:34:35.858 08:33:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:34:35.858 08:33:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:34:35.858 08:33:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:34:35.858 08:33:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:34:35.858 08:33:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:35.858 08:33:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:35.858 08:33:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:35.858 08:33:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:34:35.858 08:33:17 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:35.858 08:33:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:35.858 08:33:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:35.858 08:33:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:34:35.858 08:33:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:34:35.858 08:33:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:34:35.858 08:33:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:34:35.858 08:33:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:35.858 08:33:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:35.858 08:33:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:35.858 08:33:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:34:35.858 08:33:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:35.858 08:33:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:35.858 08:33:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:35.858 00:34:35.858 real 0m24.331s 00:34:35.858 user 4m50.835s 00:34:35.858 sys 0m5.032s 00:34:35.858 08:33:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:35.858 08:33:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:35.858 ************************************ 00:34:35.858 END TEST fio_dif_rand_params 00:34:35.858 ************************************ 00:34:35.858 08:33:17 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:34:35.858 08:33:17 nvmf_dif -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:34:35.858 08:33:17 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:35.858 08:33:17 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:34:35.858 ************************************ 00:34:35.858 START TEST fio_dif_digest 00:34:35.858 ************************************ 00:34:35.858 08:33:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1129 -- # fio_dif_digest 00:34:35.858 08:33:17 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:34:35.858 08:33:17 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:34:35.858 08:33:17 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:34:35.858 08:33:17 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:34:35.858 08:33:17 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:34:35.858 08:33:17 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:34:35.858 08:33:17 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:34:35.858 08:33:17 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:34:35.858 08:33:17 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:34:35.858 08:33:17 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:34:35.858 08:33:17 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:34:35.858 08:33:17 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:34:35.858 08:33:17 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:34:35.858 08:33:17 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:34:35.858 08:33:17 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:34:35.858 08:33:17 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:34:35.858 08:33:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 
00:34:35.858 08:33:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:34:35.858 bdev_null0 00:34:35.858 08:33:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:35.858 08:33:17 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:34:35.858 08:33:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:35.858 08:33:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:34:35.858 08:33:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:35.858 08:33:17 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:34:35.858 08:33:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:35.858 08:33:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:34:35.858 08:33:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:35.858 08:33:17 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:34:35.858 08:33:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:35.858 08:33:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:34:35.858 [2024-11-28 08:33:17.909070] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:35.858 08:33:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:35.858 08:33:17 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:34:35.858 08:33:17 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:35.858 08:33:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1360 -- # fio_plugin 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:35.858 08:33:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:34:35.859 08:33:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:34:35.859 08:33:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local sanitizers 00:34:35.859 08:33:17 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:34:35.859 08:33:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:35.859 08:33:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # shift 00:34:35.859 08:33:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # local asan_lib= 00:34:35.859 08:33:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:34:35.859 08:33:17 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:34:35.859 08:33:17 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:34:35.859 08:33:17 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # config=() 00:34:35.859 08:33:17 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:34:35.859 08:33:17 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # local subsystem config 00:34:35.859 08:33:17 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:34:35.859 08:33:17 nvmf_dif.fio_dif_digest -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:34:35.859 08:33:17 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:34:35.859 { 00:34:35.859 "params": { 00:34:35.859 "name": "Nvme$subsystem", 00:34:35.859 "trtype": "$TEST_TRANSPORT", 00:34:35.859 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:35.859 "adrfam": "ipv4", 00:34:35.859 "trsvcid": "$NVMF_PORT", 
00:34:35.859 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:35.859 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:35.859 "hdgst": ${hdgst:-false}, 00:34:35.859 "ddgst": ${ddgst:-false} 00:34:35.859 }, 00:34:35.859 "method": "bdev_nvme_attach_controller" 00:34:35.859 } 00:34:35.859 EOF 00:34:35.859 )") 00:34:35.859 08:33:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:35.859 08:33:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libasan 00:34:35.859 08:33:17 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # cat 00:34:35.859 08:33:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:34:35.859 08:33:17 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:34:35.859 08:33:17 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:34:35.859 08:33:17 nvmf_dif.fio_dif_digest -- nvmf/common.sh@584 -- # jq . 
00:34:35.859 08:33:17 nvmf_dif.fio_dif_digest -- nvmf/common.sh@585 -- # IFS=, 00:34:35.859 08:33:17 nvmf_dif.fio_dif_digest -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:34:35.859 "params": { 00:34:35.859 "name": "Nvme0", 00:34:35.859 "trtype": "tcp", 00:34:35.859 "traddr": "10.0.0.2", 00:34:35.859 "adrfam": "ipv4", 00:34:35.859 "trsvcid": "4420", 00:34:35.859 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:35.859 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:34:35.859 "hdgst": true, 00:34:35.859 "ddgst": true 00:34:35.859 }, 00:34:35.859 "method": "bdev_nvme_attach_controller" 00:34:35.859 }' 00:34:35.859 08:33:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib= 00:34:35.859 08:33:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:34:35.859 08:33:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:34:35.859 08:33:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:35.859 08:33:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:34:35.859 08:33:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:34:35.859 08:33:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib= 00:34:35.859 08:33:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:34:35.859 08:33:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:34:35.859 08:33:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:36.118 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:34:36.118 ... 
00:34:36.118 fio-3.35 00:34:36.118 Starting 3 threads 00:34:48.329 00:34:48.329 filename0: (groupid=0, jobs=1): err= 0: pid=1621978: Thu Nov 28 08:33:28 2024 00:34:48.329 read: IOPS=263, BW=32.9MiB/s (34.5MB/s)(331MiB/10046msec) 00:34:48.329 slat (nsec): min=6632, max=25860, avg=12196.49, stdev=1739.83 00:34:48.329 clat (usec): min=8780, max=48326, avg=11350.95, stdev=1238.57 00:34:48.329 lat (usec): min=8792, max=48339, avg=11363.14, stdev=1238.55 00:34:48.329 clat percentiles (usec): 00:34:48.329 | 1.00th=[ 9503], 5.00th=[10028], 10.00th=[10421], 20.00th=[10683], 00:34:48.329 | 30.00th=[10945], 40.00th=[11076], 50.00th=[11338], 60.00th=[11469], 00:34:48.329 | 70.00th=[11600], 80.00th=[11863], 90.00th=[12256], 95.00th=[12649], 00:34:48.329 | 99.00th=[13304], 99.50th=[13435], 99.90th=[14484], 99.95th=[46400], 00:34:48.329 | 99.99th=[48497] 00:34:48.329 bw ( KiB/s): min=33280, max=34304, per=33.16%, avg=33868.80, stdev=311.88, samples=20 00:34:48.329 iops : min= 260, max= 268, avg=264.60, stdev= 2.44, samples=20 00:34:48.329 lat (msec) : 10=4.46%, 20=95.47%, 50=0.08% 00:34:48.329 cpu : usr=94.02%, sys=5.67%, ctx=25, majf=0, minf=59 00:34:48.329 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:48.329 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:48.329 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:48.329 issued rwts: total=2648,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:48.329 latency : target=0, window=0, percentile=100.00%, depth=3 00:34:48.329 filename0: (groupid=0, jobs=1): err= 0: pid=1621979: Thu Nov 28 08:33:28 2024 00:34:48.329 read: IOPS=274, BW=34.4MiB/s (36.0MB/s)(345MiB/10047msec) 00:34:48.329 slat (nsec): min=6642, max=46083, avg=12105.02, stdev=1980.83 00:34:48.329 clat (usec): min=7810, max=52516, avg=10886.56, stdev=1298.67 00:34:48.329 lat (usec): min=7823, max=52528, avg=10898.67, stdev=1298.61 00:34:48.329 clat percentiles (usec): 00:34:48.329 | 1.00th=[ 
9110], 5.00th=[ 9634], 10.00th=[ 9896], 20.00th=[10290], 00:34:48.329 | 30.00th=[10552], 40.00th=[10683], 50.00th=[10814], 60.00th=[11076], 00:34:48.329 | 70.00th=[11207], 80.00th=[11469], 90.00th=[11863], 95.00th=[12125], 00:34:48.329 | 99.00th=[12780], 99.50th=[13173], 99.90th=[14222], 99.95th=[47973], 00:34:48.329 | 99.99th=[52691] 00:34:48.329 bw ( KiB/s): min=34048, max=36096, per=34.58%, avg=35315.20, stdev=458.51, samples=20 00:34:48.329 iops : min= 266, max= 282, avg=275.90, stdev= 3.58, samples=20 00:34:48.329 lat (msec) : 10=11.99%, 20=87.94%, 50=0.04%, 100=0.04% 00:34:48.329 cpu : usr=93.72%, sys=5.98%, ctx=29, majf=0, minf=88 00:34:48.329 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:48.329 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:48.329 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:48.329 issued rwts: total=2761,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:48.329 latency : target=0, window=0, percentile=100.00%, depth=3 00:34:48.329 filename0: (groupid=0, jobs=1): err= 0: pid=1621980: Thu Nov 28 08:33:28 2024 00:34:48.329 read: IOPS=259, BW=32.4MiB/s (34.0MB/s)(326MiB/10045msec) 00:34:48.329 slat (nsec): min=6557, max=25523, avg=11979.19, stdev=1965.11 00:34:48.329 clat (usec): min=8904, max=49987, avg=11529.02, stdev=1275.97 00:34:48.329 lat (usec): min=8916, max=50000, avg=11541.00, stdev=1276.00 00:34:48.329 clat percentiles (usec): 00:34:48.329 | 1.00th=[ 9634], 5.00th=[10290], 10.00th=[10552], 20.00th=[10814], 00:34:48.329 | 30.00th=[11076], 40.00th=[11207], 50.00th=[11469], 60.00th=[11600], 00:34:48.329 | 70.00th=[11863], 80.00th=[12125], 90.00th=[12518], 95.00th=[12780], 00:34:48.329 | 99.00th=[13566], 99.50th=[13960], 99.90th=[15401], 99.95th=[45351], 00:34:48.329 | 99.99th=[50070] 00:34:48.329 bw ( KiB/s): min=32256, max=34048, per=32.65%, avg=33344.00, stdev=453.98, samples=20 00:34:48.329 iops : min= 252, max= 266, avg=260.50, stdev= 3.55, 
samples=20 00:34:48.329 lat (msec) : 10=2.42%, 20=97.51%, 50=0.08% 00:34:48.329 cpu : usr=94.37%, sys=5.33%, ctx=27, majf=0, minf=91 00:34:48.329 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:48.329 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:48.329 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:48.329 issued rwts: total=2607,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:48.329 latency : target=0, window=0, percentile=100.00%, depth=3 00:34:48.329 00:34:48.329 Run status group 0 (all jobs): 00:34:48.329 READ: bw=99.7MiB/s (105MB/s), 32.4MiB/s-34.4MiB/s (34.0MB/s-36.0MB/s), io=1002MiB (1051MB), run=10045-10047msec 00:34:48.329 08:33:29 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:34:48.329 08:33:29 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:34:48.329 08:33:29 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:34:48.329 08:33:29 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:34:48.329 08:33:29 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:34:48.329 08:33:29 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:34:48.329 08:33:29 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:48.329 08:33:29 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:34:48.329 08:33:29 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:48.329 08:33:29 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:34:48.329 08:33:29 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:48.330 08:33:29 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:34:48.330 08:33:29 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:48.330 00:34:48.330 real 
0m11.218s 00:34:48.330 user 0m34.954s 00:34:48.330 sys 0m1.983s 00:34:48.330 08:33:29 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:48.330 08:33:29 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:34:48.330 ************************************ 00:34:48.330 END TEST fio_dif_digest 00:34:48.330 ************************************ 00:34:48.330 08:33:29 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:34:48.330 08:33:29 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:34:48.330 08:33:29 nvmf_dif -- nvmf/common.sh@516 -- # nvmfcleanup 00:34:48.330 08:33:29 nvmf_dif -- nvmf/common.sh@121 -- # sync 00:34:48.330 08:33:29 nvmf_dif -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:48.330 08:33:29 nvmf_dif -- nvmf/common.sh@124 -- # set +e 00:34:48.330 08:33:29 nvmf_dif -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:48.330 08:33:29 nvmf_dif -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:48.330 rmmod nvme_tcp 00:34:48.330 rmmod nvme_fabrics 00:34:48.330 rmmod nvme_keyring 00:34:48.330 08:33:29 nvmf_dif -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:34:48.330 08:33:29 nvmf_dif -- nvmf/common.sh@128 -- # set -e 00:34:48.330 08:33:29 nvmf_dif -- nvmf/common.sh@129 -- # return 0 00:34:48.330 08:33:29 nvmf_dif -- nvmf/common.sh@517 -- # '[' -n 1612995 ']' 00:34:48.330 08:33:29 nvmf_dif -- nvmf/common.sh@518 -- # killprocess 1612995 00:34:48.330 08:33:29 nvmf_dif -- common/autotest_common.sh@954 -- # '[' -z 1612995 ']' 00:34:48.330 08:33:29 nvmf_dif -- common/autotest_common.sh@958 -- # kill -0 1612995 00:34:48.330 08:33:29 nvmf_dif -- common/autotest_common.sh@959 -- # uname 00:34:48.330 08:33:29 nvmf_dif -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:48.330 08:33:29 nvmf_dif -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1612995 00:34:48.330 08:33:29 nvmf_dif -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:34:48.330 08:33:29 
nvmf_dif -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:34:48.330 08:33:29 nvmf_dif -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1612995' 00:34:48.330 killing process with pid 1612995 00:34:48.330 08:33:29 nvmf_dif -- common/autotest_common.sh@973 -- # kill 1612995 00:34:48.330 08:33:29 nvmf_dif -- common/autotest_common.sh@978 -- # wait 1612995 00:34:48.330 08:33:29 nvmf_dif -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:34:48.330 08:33:29 nvmf_dif -- nvmf/common.sh@521 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:34:49.268 Waiting for block devices as requested 00:34:49.269 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:34:49.528 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:34:49.528 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:34:49.528 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:34:49.788 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:34:49.788 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:34:49.788 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:34:49.788 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:34:50.048 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:34:50.048 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:34:50.048 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:34:50.048 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:34:50.308 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:34:50.308 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:34:50.308 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:34:50.308 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:34:50.567 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:34:50.567 08:33:32 nvmf_dif -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:34:50.567 08:33:32 nvmf_dif -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:34:50.567 08:33:32 nvmf_dif -- nvmf/common.sh@297 -- # iptr 00:34:50.567 08:33:32 nvmf_dif -- nvmf/common.sh@791 -- # iptables-save 00:34:50.567 08:33:32 nvmf_dif -- nvmf/common.sh@791 -- # grep 
-v SPDK_NVMF 00:34:50.567 08:33:32 nvmf_dif -- nvmf/common.sh@791 -- # iptables-restore 00:34:50.567 08:33:32 nvmf_dif -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:50.567 08:33:32 nvmf_dif -- nvmf/common.sh@302 -- # remove_spdk_ns 00:34:50.567 08:33:32 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:50.567 08:33:32 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:34:50.567 08:33:32 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:53.101 08:33:34 nvmf_dif -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:34:53.101 00:34:53.101 real 1m11.750s 00:34:53.101 user 7m6.342s 00:34:53.101 sys 0m18.988s 00:34:53.101 08:33:34 nvmf_dif -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:53.101 08:33:34 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:34:53.101 ************************************ 00:34:53.101 END TEST nvmf_dif 00:34:53.101 ************************************ 00:34:53.101 08:33:34 -- spdk/autotest.sh@290 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:34:53.101 08:33:34 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:34:53.101 08:33:34 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:53.101 08:33:34 -- common/autotest_common.sh@10 -- # set +x 00:34:53.101 ************************************ 00:34:53.101 START TEST nvmf_abort_qd_sizes 00:34:53.101 ************************************ 00:34:53.101 08:33:34 nvmf_abort_qd_sizes -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:34:53.101 * Looking for test storage... 
00:34:53.101 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:34:53.101 08:33:34 nvmf_abort_qd_sizes -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:34:53.101 08:33:34 nvmf_abort_qd_sizes -- common/autotest_common.sh@1693 -- # lcov --version 00:34:53.101 08:33:34 nvmf_abort_qd_sizes -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:34:53.101 08:33:34 nvmf_abort_qd_sizes -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:34:53.101 08:33:34 nvmf_abort_qd_sizes -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:53.101 08:33:34 nvmf_abort_qd_sizes -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:53.101 08:33:34 nvmf_abort_qd_sizes -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:53.101 08:33:34 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # IFS=.-: 00:34:53.101 08:33:34 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # read -ra ver1 00:34:53.101 08:33:34 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # IFS=.-: 00:34:53.101 08:33:34 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # read -ra ver2 00:34:53.101 08:33:34 nvmf_abort_qd_sizes -- scripts/common.sh@338 -- # local 'op=<' 00:34:53.101 08:33:34 nvmf_abort_qd_sizes -- scripts/common.sh@340 -- # ver1_l=2 00:34:53.101 08:33:34 nvmf_abort_qd_sizes -- scripts/common.sh@341 -- # ver2_l=1 00:34:53.101 08:33:34 nvmf_abort_qd_sizes -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:53.101 08:33:34 nvmf_abort_qd_sizes -- scripts/common.sh@344 -- # case "$op" in 00:34:53.101 08:33:34 nvmf_abort_qd_sizes -- scripts/common.sh@345 -- # : 1 00:34:53.101 08:33:34 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:53.101 08:33:34 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:34:53.101 08:33:34 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # decimal 1 00:34:53.101 08:33:34 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=1 00:34:53.101 08:33:34 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:53.101 08:33:34 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 1 00:34:53.101 08:33:34 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # ver1[v]=1 00:34:53.101 08:33:35 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # decimal 2 00:34:53.101 08:33:35 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=2 00:34:53.101 08:33:35 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:53.101 08:33:35 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 2 00:34:53.101 08:33:35 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # ver2[v]=2 00:34:53.101 08:33:35 nvmf_abort_qd_sizes -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:53.101 08:33:35 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:53.101 08:33:35 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # return 0 00:34:53.101 08:33:35 nvmf_abort_qd_sizes -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:53.101 08:33:35 nvmf_abort_qd_sizes -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:34:53.101 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:53.101 --rc genhtml_branch_coverage=1 00:34:53.101 --rc genhtml_function_coverage=1 00:34:53.101 --rc genhtml_legend=1 00:34:53.101 --rc geninfo_all_blocks=1 00:34:53.101 --rc geninfo_unexecuted_blocks=1 00:34:53.101 00:34:53.101 ' 00:34:53.101 08:33:35 nvmf_abort_qd_sizes -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:34:53.101 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:53.101 --rc genhtml_branch_coverage=1 00:34:53.101 --rc genhtml_function_coverage=1 00:34:53.101 --rc genhtml_legend=1 00:34:53.101 --rc 
geninfo_all_blocks=1 00:34:53.101 --rc geninfo_unexecuted_blocks=1 00:34:53.101 00:34:53.101 ' 00:34:53.101 08:33:35 nvmf_abort_qd_sizes -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:34:53.101 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:53.101 --rc genhtml_branch_coverage=1 00:34:53.101 --rc genhtml_function_coverage=1 00:34:53.101 --rc genhtml_legend=1 00:34:53.101 --rc geninfo_all_blocks=1 00:34:53.101 --rc geninfo_unexecuted_blocks=1 00:34:53.101 00:34:53.101 ' 00:34:53.101 08:33:35 nvmf_abort_qd_sizes -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:34:53.101 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:53.101 --rc genhtml_branch_coverage=1 00:34:53.101 --rc genhtml_function_coverage=1 00:34:53.101 --rc genhtml_legend=1 00:34:53.101 --rc geninfo_all_blocks=1 00:34:53.101 --rc geninfo_unexecuted_blocks=1 00:34:53.101 00:34:53.101 ' 00:34:53.101 08:33:35 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:53.101 08:33:35 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:34:53.101 08:33:35 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:53.101 08:33:35 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:53.101 08:33:35 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:53.101 08:33:35 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:53.101 08:33:35 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:53.101 08:33:35 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:53.101 08:33:35 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:53.101 08:33:35 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:53.101 08:33:35 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:53.101 08:33:35 nvmf_abort_qd_sizes 
-- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:53.101 08:33:35 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:34:53.101 08:33:35 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:34:53.101 08:33:35 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:53.101 08:33:35 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:53.101 08:33:35 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:53.101 08:33:35 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:53.101 08:33:35 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:53.101 08:33:35 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # shopt -s extglob 00:34:53.101 08:33:35 nvmf_abort_qd_sizes -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:53.101 08:33:35 nvmf_abort_qd_sizes -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:53.101 08:33:35 nvmf_abort_qd_sizes -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:53.101 08:33:35 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:53.101 08:33:35 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:53.101 08:33:35 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:53.101 08:33:35 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:34:53.101 08:33:35 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:53.101 08:33:35 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # : 0 00:34:53.101 08:33:35 nvmf_abort_qd_sizes -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:53.101 08:33:35 nvmf_abort_qd_sizes -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:53.101 08:33:35 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:53.101 08:33:35 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:53.101 08:33:35 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:53.101 08:33:35 
nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:34:53.101 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:34:53.101 08:33:35 nvmf_abort_qd_sizes -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:53.101 08:33:35 nvmf_abort_qd_sizes -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:53.101 08:33:35 nvmf_abort_qd_sizes -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:53.101 08:33:35 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:34:53.101 08:33:35 nvmf_abort_qd_sizes -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:34:53.101 08:33:35 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:53.101 08:33:35 nvmf_abort_qd_sizes -- nvmf/common.sh@476 -- # prepare_net_devs 00:34:53.101 08:33:35 nvmf_abort_qd_sizes -- nvmf/common.sh@438 -- # local -g is_hw=no 00:34:53.101 08:33:35 nvmf_abort_qd_sizes -- nvmf/common.sh@440 -- # remove_spdk_ns 00:34:53.101 08:33:35 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:53.101 08:33:35 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:34:53.101 08:33:35 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:53.101 08:33:35 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:34:53.101 08:33:35 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:34:53.101 08:33:35 nvmf_abort_qd_sizes -- nvmf/common.sh@309 -- # xtrace_disable 00:34:53.101 08:33:35 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:34:58.375 08:33:40 nvmf_abort_qd_sizes -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:58.375 08:33:40 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # pci_devs=() 00:34:58.375 08:33:40 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # local -a pci_devs 00:34:58.375 08:33:40 nvmf_abort_qd_sizes -- 
nvmf/common.sh@316 -- # pci_net_devs=() 00:34:58.375 08:33:40 nvmf_abort_qd_sizes -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:58.375 08:33:40 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # pci_drivers=() 00:34:58.375 08:33:40 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # local -A pci_drivers 00:34:58.375 08:33:40 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # net_devs=() 00:34:58.375 08:33:40 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # local -ga net_devs 00:34:58.375 08:33:40 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # e810=() 00:34:58.375 08:33:40 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # local -ga e810 00:34:58.375 08:33:40 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # x722=() 00:34:58.375 08:33:40 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # local -ga x722 00:34:58.375 08:33:40 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # mlx=() 00:34:58.375 08:33:40 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # local -ga mlx 00:34:58.375 08:33:40 nvmf_abort_qd_sizes -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:58.375 08:33:40 nvmf_abort_qd_sizes -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:58.375 08:33:40 nvmf_abort_qd_sizes -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:58.375 08:33:40 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:58.375 08:33:40 nvmf_abort_qd_sizes -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:58.375 08:33:40 nvmf_abort_qd_sizes -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:58.375 08:33:40 nvmf_abort_qd_sizes -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:58.375 08:33:40 nvmf_abort_qd_sizes -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:58.375 08:33:40 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:58.375 08:33:40 
nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:58.375 08:33:40 nvmf_abort_qd_sizes -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:58.375 08:33:40 nvmf_abort_qd_sizes -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:58.375 08:33:40 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:34:58.375 08:33:40 nvmf_abort_qd_sizes -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:34:58.375 08:33:40 nvmf_abort_qd_sizes -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:34:58.375 08:33:40 nvmf_abort_qd_sizes -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:34:58.375 08:33:40 nvmf_abort_qd_sizes -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:34:58.375 08:33:40 nvmf_abort_qd_sizes -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:34:58.375 08:33:40 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:58.375 08:33:40 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:34:58.375 Found 0000:86:00.0 (0x8086 - 0x159b) 00:34:58.375 08:33:40 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:58.375 08:33:40 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:58.375 08:33:40 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:58.375 08:33:40 nvmf_abort_qd_sizes -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:58.375 08:33:40 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:58.375 08:33:40 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:58.375 08:33:40 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:34:58.375 Found 0000:86:00.1 (0x8086 - 0x159b) 00:34:58.375 08:33:40 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:58.375 08:33:40 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice 
== unbound ]] 00:34:58.375 08:33:40 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:58.375 08:33:40 nvmf_abort_qd_sizes -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:58.375 08:33:40 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:58.375 08:33:40 nvmf_abort_qd_sizes -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:34:58.375 08:33:40 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:34:58.375 08:33:40 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:34:58.375 08:33:40 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:58.375 08:33:40 nvmf_abort_qd_sizes -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:58.375 08:33:40 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:58.375 08:33:40 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:58.375 08:33:40 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:58.375 08:33:40 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:58.375 08:33:40 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:58.375 08:33:40 nvmf_abort_qd_sizes -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:34:58.375 Found net devices under 0000:86:00.0: cvl_0_0 00:34:58.375 08:33:40 nvmf_abort_qd_sizes -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:58.375 08:33:40 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:58.376 08:33:40 nvmf_abort_qd_sizes -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:58.376 08:33:40 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:58.376 08:33:40 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:58.376 08:33:40 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # [[ up 
== up ]] 00:34:58.376 08:33:40 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:58.376 08:33:40 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:58.376 08:33:40 nvmf_abort_qd_sizes -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:34:58.376 Found net devices under 0000:86:00.1: cvl_0_1 00:34:58.376 08:33:40 nvmf_abort_qd_sizes -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:58.376 08:33:40 nvmf_abort_qd_sizes -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:34:58.376 08:33:40 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # is_hw=yes 00:34:58.376 08:33:40 nvmf_abort_qd_sizes -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:34:58.376 08:33:40 nvmf_abort_qd_sizes -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:34:58.376 08:33:40 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:34:58.376 08:33:40 nvmf_abort_qd_sizes -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:58.376 08:33:40 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:58.376 08:33:40 nvmf_abort_qd_sizes -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:58.376 08:33:40 nvmf_abort_qd_sizes -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:58.376 08:33:40 nvmf_abort_qd_sizes -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:34:58.376 08:33:40 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:58.376 08:33:40 nvmf_abort_qd_sizes -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:58.376 08:33:40 nvmf_abort_qd_sizes -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:34:58.376 08:33:40 nvmf_abort_qd_sizes -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:34:58.376 08:33:40 nvmf_abort_qd_sizes -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:58.376 08:33:40 nvmf_abort_qd_sizes -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:34:58.376 08:33:40 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:34:58.376 08:33:40 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:34:58.376 08:33:40 nvmf_abort_qd_sizes -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:34:58.376 08:33:40 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:58.635 08:33:40 nvmf_abort_qd_sizes -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:58.635 08:33:40 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:58.635 08:33:40 nvmf_abort_qd_sizes -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:34:58.635 08:33:40 nvmf_abort_qd_sizes -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:58.635 08:33:40 nvmf_abort_qd_sizes -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:58.635 08:33:40 nvmf_abort_qd_sizes -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:58.635 08:33:40 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:34:58.635 08:33:40 nvmf_abort_qd_sizes -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:34:58.635 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:58.635 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.400 ms 00:34:58.635 00:34:58.635 --- 10.0.0.2 ping statistics --- 00:34:58.635 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:58.635 rtt min/avg/max/mdev = 0.400/0.400/0.400/0.000 ms 00:34:58.635 08:33:40 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:58.635 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:34:58.635 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.239 ms 00:34:58.635 00:34:58.635 --- 10.0.0.1 ping statistics --- 00:34:58.635 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:58.635 rtt min/avg/max/mdev = 0.239/0.239/0.239/0.000 ms 00:34:58.635 08:33:40 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:58.635 08:33:40 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # return 0 00:34:58.635 08:33:40 nvmf_abort_qd_sizes -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:34:58.635 08:33:40 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:35:01.171 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:35:01.171 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:35:01.171 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:35:01.171 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:35:01.171 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:35:01.171 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:35:01.171 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:35:01.171 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:35:01.171 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:35:01.171 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:35:01.171 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:35:01.171 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:35:01.171 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:35:01.171 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:35:01.171 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:35:01.171 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:35:01.739 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:35:01.998 08:33:44 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:01.998 08:33:44 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:35:01.998 08:33:44 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:35:01.998 08:33:44 
nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:01.998 08:33:44 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:35:01.998 08:33:44 nvmf_abort_qd_sizes -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:35:01.998 08:33:44 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:35:01.998 08:33:44 nvmf_abort_qd_sizes -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:35:01.998 08:33:44 nvmf_abort_qd_sizes -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:01.998 08:33:44 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:35:01.998 08:33:44 nvmf_abort_qd_sizes -- nvmf/common.sh@509 -- # nvmfpid=1629720 00:35:01.998 08:33:44 nvmf_abort_qd_sizes -- nvmf/common.sh@510 -- # waitforlisten 1629720 00:35:01.998 08:33:44 nvmf_abort_qd_sizes -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:35:01.998 08:33:44 nvmf_abort_qd_sizes -- common/autotest_common.sh@835 -- # '[' -z 1629720 ']' 00:35:01.998 08:33:44 nvmf_abort_qd_sizes -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:01.998 08:33:44 nvmf_abort_qd_sizes -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:01.998 08:33:44 nvmf_abort_qd_sizes -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:01.998 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:01.998 08:33:44 nvmf_abort_qd_sizes -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:01.998 08:33:44 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:35:01.998 [2024-11-28 08:33:44.134341] Starting SPDK v25.01-pre git sha1 27aaaa748 / DPDK 24.03.0 initialization... 
00:35:01.998 [2024-11-28 08:33:44.134388] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:01.998 [2024-11-28 08:33:44.201580] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:35:01.998 [2024-11-28 08:33:44.245360] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:01.998 [2024-11-28 08:33:44.245398] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:01.998 [2024-11-28 08:33:44.245405] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:01.998 [2024-11-28 08:33:44.245411] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:01.998 [2024-11-28 08:33:44.245416] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:35:01.998 [2024-11-28 08:33:44.247019] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:01.998 [2024-11-28 08:33:44.247113] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:35:01.998 [2024-11-28 08:33:44.247203] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:35:01.998 [2024-11-28 08:33:44.247204] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:02.257 08:33:44 nvmf_abort_qd_sizes -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:02.257 08:33:44 nvmf_abort_qd_sizes -- common/autotest_common.sh@868 -- # return 0 00:35:02.257 08:33:44 nvmf_abort_qd_sizes -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:35:02.257 08:33:44 nvmf_abort_qd_sizes -- common/autotest_common.sh@732 -- # xtrace_disable 00:35:02.257 08:33:44 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:35:02.257 08:33:44 nvmf_abort_qd_sizes -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:02.257 08:33:44 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:35:02.257 08:33:44 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:35:02.257 08:33:44 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:35:02.257 08:33:44 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # local bdf bdfs 00:35:02.257 08:33:44 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # local nvmes 00:35:02.257 08:33:44 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # [[ -n 0000:5e:00.0 ]] 00:35:02.257 08:33:44 nvmf_abort_qd_sizes -- scripts/common.sh@316 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:35:02.257 08:33:44 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:35:02.257 08:33:44 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:5e:00.0 ]] 
00:35:02.257 08:33:44 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:35:02.257 08:33:44 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:35:02.257 08:33:44 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:35:02.257 08:33:44 nvmf_abort_qd_sizes -- scripts/common.sh@328 -- # (( 1 )) 00:35:02.257 08:33:44 nvmf_abort_qd_sizes -- scripts/common.sh@329 -- # printf '%s\n' 0000:5e:00.0 00:35:02.257 08:33:44 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:35:02.257 08:33:44 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:5e:00.0 00:35:02.257 08:33:44 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:35:02.257 08:33:44 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:35:02.257 08:33:44 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:02.257 08:33:44 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:35:02.257 ************************************ 00:35:02.257 START TEST spdk_target_abort 00:35:02.257 ************************************ 00:35:02.257 08:33:44 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1129 -- # spdk_target 00:35:02.257 08:33:44 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:35:02.257 08:33:44 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:5e:00.0 -b spdk_target 00:35:02.257 08:33:44 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:02.257 08:33:44 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:35:05.541 spdk_targetn1 00:35:05.541 08:33:47 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:05.541 08:33:47 nvmf_abort_qd_sizes.spdk_target_abort -- 
target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:35:05.541 08:33:47 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:05.541 08:33:47 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:35:05.541 [2024-11-28 08:33:47.253658] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:05.541 08:33:47 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:05.541 08:33:47 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:35:05.541 08:33:47 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:05.541 08:33:47 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:35:05.541 08:33:47 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:05.541 08:33:47 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:35:05.541 08:33:47 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:05.541 08:33:47 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:35:05.541 08:33:47 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:05.541 08:33:47 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:35:05.541 08:33:47 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:05.541 08:33:47 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:35:05.541 [2024-11-28 08:33:47.306140] tcp.c:1081:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:05.541 08:33:47 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:05.541 08:33:47 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:35:05.541 08:33:47 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:35:05.541 08:33:47 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:35:05.541 08:33:47 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:35:05.541 08:33:47 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:35:05.541 08:33:47 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:35:05.541 08:33:47 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:35:05.541 08:33:47 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:35:05.541 08:33:47 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:35:05.542 08:33:47 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:05.542 08:33:47 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:35:05.542 08:33:47 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:05.542 08:33:47 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:35:05.542 08:33:47 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:05.542 08:33:47 nvmf_abort_qd_sizes.spdk_target_abort -- 
target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:35:05.542 08:33:47 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:05.542 08:33:47 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:35:05.542 08:33:47 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:05.542 08:33:47 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:35:05.542 08:33:47 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:35:05.542 08:33:47 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:35:08.825 Initializing NVMe Controllers 00:35:08.825 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:35:08.825 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:35:08.825 Initialization complete. Launching workers. 
00:35:08.825 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 17298, failed: 0 00:35:08.825 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1353, failed to submit 15945 00:35:08.825 success 747, unsuccessful 606, failed 0 00:35:08.825 08:33:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:35:08.825 08:33:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:35:12.112 Initializing NVMe Controllers 00:35:12.112 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:35:12.112 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:35:12.112 Initialization complete. Launching workers. 00:35:12.112 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8430, failed: 0 00:35:12.112 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1247, failed to submit 7183 00:35:12.112 success 335, unsuccessful 912, failed 0 00:35:12.112 08:33:53 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:35:12.112 08:33:53 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:35:15.398 Initializing NVMe Controllers 00:35:15.398 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:35:15.398 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:35:15.398 Initialization complete. Launching workers. 
00:35:15.398 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 37860, failed: 0 00:35:15.398 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2756, failed to submit 35104 00:35:15.398 success 560, unsuccessful 2196, failed 0 00:35:15.398 08:33:57 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:35:15.398 08:33:57 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:15.398 08:33:57 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:35:15.398 08:33:57 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:15.398 08:33:57 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:35:15.398 08:33:57 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:15.398 08:33:57 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:35:16.333 08:33:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:16.333 08:33:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 1629720 00:35:16.333 08:33:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # '[' -z 1629720 ']' 00:35:16.333 08:33:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # kill -0 1629720 00:35:16.333 08:33:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # uname 00:35:16.333 08:33:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:16.333 08:33:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1629720 00:35:16.333 08:33:58 
nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:35:16.333 08:33:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:35:16.333 08:33:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1629720' 00:35:16.333 killing process with pid 1629720 00:35:16.333 08:33:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@973 -- # kill 1629720 00:35:16.333 08:33:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@978 -- # wait 1629720 00:35:16.592 00:35:16.592 real 0m14.276s 00:35:16.592 user 0m54.377s 00:35:16.592 sys 0m2.619s 00:35:16.592 08:33:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:16.592 08:33:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:35:16.592 ************************************ 00:35:16.592 END TEST spdk_target_abort 00:35:16.592 ************************************ 00:35:16.592 08:33:58 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:35:16.592 08:33:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:35:16.592 08:33:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:16.592 08:33:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:35:16.592 ************************************ 00:35:16.592 START TEST kernel_target_abort 00:35:16.592 ************************************ 00:35:16.592 08:33:58 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1129 -- # kernel_target 00:35:16.592 08:33:58 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:35:16.592 08:33:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@769 -- # local ip 00:35:16.592 08:33:58 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:16.592 08:33:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:16.592 08:33:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:16.592 08:33:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:16.592 08:33:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:16.592 08:33:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:16.592 08:33:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:16.592 08:33:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:16.592 08:33:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:16.592 08:33:58 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:35:16.592 08:33:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:35:16.592 08:33:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:35:16.592 08:33:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:35:16.592 08:33:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:35:16.592 08:33:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:35:16.592 08:33:58 nvmf_abort_qd_sizes.kernel_target_abort -- 
nvmf/common.sh@667 -- # local block nvme 00:35:16.592 08:33:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]] 00:35:16.592 08:33:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@670 -- # modprobe nvmet 00:35:16.592 08:33:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:35:16.592 08:33:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:35:19.127 Waiting for block devices as requested 00:35:19.127 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:35:19.127 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:35:19.386 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:35:19.386 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:35:19.386 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:35:19.386 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:35:19.645 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:35:19.645 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:35:19.645 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:35:19.645 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:35:19.905 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:35:19.905 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:35:19.905 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:35:20.164 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:35:20.164 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:35:20.164 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:35:20.164 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:35:20.423 08:34:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:35:20.423 08:34:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:35:20.423 08:34:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:35:20.423 08:34:02 
nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:35:20.423 08:34:02 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:35:20.423 08:34:02 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:35:20.423 08:34:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:35:20.423 08:34:02 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:35:20.423 08:34:02 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:35:20.423 No valid GPT data, bailing 00:35:20.423 08:34:02 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:35:20.423 08:34:02 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:35:20.423 08:34:02 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:35:20.423 08:34:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:35:20.423 08:34:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:35:20.423 08:34:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:35:20.423 08:34:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:35:20.423 08:34:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:35:20.423 08:34:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:35:20.423 08:34:02 nvmf_abort_qd_sizes.kernel_target_abort -- 
nvmf/common.sh@695 -- # echo 1 00:35:20.423 08:34:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:35:20.423 08:34:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@697 -- # echo 1 00:35:20.423 08:34:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:35:20.423 08:34:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@700 -- # echo tcp 00:35:20.423 08:34:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@701 -- # echo 4420 00:35:20.423 08:34:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@702 -- # echo ipv4 00:35:20.423 08:34:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:35:20.423 08:34:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420 00:35:20.423 00:35:20.423 Discovery Log Number of Records 2, Generation counter 2 00:35:20.423 =====Discovery Log Entry 0====== 00:35:20.423 trtype: tcp 00:35:20.423 adrfam: ipv4 00:35:20.423 subtype: current discovery subsystem 00:35:20.423 treq: not specified, sq flow control disable supported 00:35:20.423 portid: 1 00:35:20.423 trsvcid: 4420 00:35:20.423 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:35:20.423 traddr: 10.0.0.1 00:35:20.423 eflags: none 00:35:20.423 sectype: none 00:35:20.423 =====Discovery Log Entry 1====== 00:35:20.423 trtype: tcp 00:35:20.423 adrfam: ipv4 00:35:20.423 subtype: nvme subsystem 00:35:20.423 treq: not specified, sq flow control disable supported 00:35:20.423 portid: 1 00:35:20.423 trsvcid: 4420 00:35:20.423 subnqn: nqn.2016-06.io.spdk:testnqn 00:35:20.423 traddr: 10.0.0.1 00:35:20.423 eflags: none 00:35:20.423 sectype: none 00:35:20.423 08:34:02 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:35:20.423 08:34:02 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:35:20.423 08:34:02 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:35:20.423 08:34:02 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:35:20.423 08:34:02 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:35:20.423 08:34:02 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:35:20.423 08:34:02 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:35:20.423 08:34:02 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:35:20.423 08:34:02 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:35:20.423 08:34:02 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:20.423 08:34:02 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:35:20.423 08:34:02 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:20.423 08:34:02 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:35:20.423 08:34:02 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:20.423 08:34:02 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:35:20.423 08:34:02 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for 
r in trtype adrfam traddr trsvcid subnqn 00:35:20.423 08:34:02 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:35:20.423 08:34:02 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:20.423 08:34:02 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:35:20.423 08:34:02 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:35:20.423 08:34:02 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:35:23.719 Initializing NVMe Controllers 00:35:23.719 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:35:23.719 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:35:23.719 Initialization complete. Launching workers. 
00:35:23.719 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 90185, failed: 0 00:35:23.719 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 90185, failed to submit 0 00:35:23.719 success 0, unsuccessful 90185, failed 0 00:35:23.719 08:34:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:35:23.719 08:34:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:35:27.006 Initializing NVMe Controllers 00:35:27.006 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:35:27.006 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:35:27.006 Initialization complete. Launching workers. 00:35:27.006 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 143041, failed: 0 00:35:27.006 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 35886, failed to submit 107155 00:35:27.006 success 0, unsuccessful 35886, failed 0 00:35:27.006 08:34:08 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:35:27.006 08:34:08 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:35:30.293 Initializing NVMe Controllers 00:35:30.293 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:35:30.293 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:35:30.293 Initialization complete. Launching workers. 
00:35:30.293 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 132958, failed: 0 00:35:30.293 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 33278, failed to submit 99680 00:35:30.293 success 0, unsuccessful 33278, failed 0 00:35:30.293 08:34:11 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:35:30.293 08:34:11 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:35:30.293 08:34:11 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@714 -- # echo 0 00:35:30.293 08:34:11 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:35:30.293 08:34:11 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:35:30.293 08:34:11 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:35:30.293 08:34:11 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:35:30.293 08:34:11 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:35:30.293 08:34:11 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:35:30.293 08:34:11 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:35:32.829 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:35:32.829 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:35:32.829 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:35:32.829 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:35:32.829 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:35:32.829 
0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:35:32.829 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:35:32.829 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:35:32.829 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:35:32.829 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:35:32.829 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:35:32.829 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:35:32.829 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:35:32.829 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:35:32.829 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:35:32.829 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:35:33.397 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:35:33.397 00:35:33.397 real 0m16.823s 00:35:33.397 user 0m8.805s 00:35:33.397 sys 0m4.551s 00:35:33.397 08:34:15 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:33.397 08:34:15 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:35:33.397 ************************************ 00:35:33.397 END TEST kernel_target_abort 00:35:33.397 ************************************ 00:35:33.397 08:34:15 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:35:33.397 08:34:15 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:35:33.397 08:34:15 nvmf_abort_qd_sizes -- nvmf/common.sh@516 -- # nvmfcleanup 00:35:33.397 08:34:15 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # sync 00:35:33.397 08:34:15 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:35:33.397 08:34:15 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set +e 00:35:33.397 08:34:15 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # for i in {1..20} 00:35:33.397 08:34:15 nvmf_abort_qd_sizes -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:35:33.397 rmmod nvme_tcp 00:35:33.397 rmmod nvme_fabrics 00:35:33.656 rmmod nvme_keyring 00:35:33.656 08:34:15 nvmf_abort_qd_sizes -- nvmf/common.sh@127 
-- # modprobe -v -r nvme-fabrics 00:35:33.656 08:34:15 nvmf_abort_qd_sizes -- nvmf/common.sh@128 -- # set -e 00:35:33.656 08:34:15 nvmf_abort_qd_sizes -- nvmf/common.sh@129 -- # return 0 00:35:33.656 08:34:15 nvmf_abort_qd_sizes -- nvmf/common.sh@517 -- # '[' -n 1629720 ']' 00:35:33.656 08:34:15 nvmf_abort_qd_sizes -- nvmf/common.sh@518 -- # killprocess 1629720 00:35:33.656 08:34:15 nvmf_abort_qd_sizes -- common/autotest_common.sh@954 -- # '[' -z 1629720 ']' 00:35:33.656 08:34:15 nvmf_abort_qd_sizes -- common/autotest_common.sh@958 -- # kill -0 1629720 00:35:33.656 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (1629720) - No such process 00:35:33.656 08:34:15 nvmf_abort_qd_sizes -- common/autotest_common.sh@981 -- # echo 'Process with pid 1629720 is not found' 00:35:33.656 Process with pid 1629720 is not found 00:35:33.656 08:34:15 nvmf_abort_qd_sizes -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:35:33.656 08:34:15 nvmf_abort_qd_sizes -- nvmf/common.sh@521 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:35:36.194 Waiting for block devices as requested 00:35:36.194 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:35:36.194 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:35:36.194 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:35:36.194 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:35:36.194 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:35:36.194 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:35:36.454 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:35:36.454 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:35:36.454 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:35:36.454 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:35:36.713 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:35:36.713 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:35:36.713 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:35:36.973 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:35:36.973 
0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:35:36.973 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:35:36.973 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:35:37.233 08:34:19 nvmf_abort_qd_sizes -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:35:37.233 08:34:19 nvmf_abort_qd_sizes -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:35:37.233 08:34:19 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # iptr 00:35:37.233 08:34:19 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-restore 00:35:37.233 08:34:19 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-save 00:35:37.233 08:34:19 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:35:37.233 08:34:19 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:35:37.233 08:34:19 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # remove_spdk_ns 00:35:37.233 08:34:19 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:37.233 08:34:19 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:35:37.233 08:34:19 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:39.139 08:34:21 nvmf_abort_qd_sizes -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:35:39.139 00:35:39.139 real 0m46.535s 00:35:39.139 user 1m6.979s 00:35:39.139 sys 0m15.168s 00:35:39.139 08:34:21 nvmf_abort_qd_sizes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:39.139 08:34:21 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:35:39.139 ************************************ 00:35:39.139 END TEST nvmf_abort_qd_sizes 00:35:39.139 ************************************ 00:35:39.398 08:34:21 -- spdk/autotest.sh@292 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:35:39.398 08:34:21 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:35:39.398 08:34:21 -- common/autotest_common.sh@1111 -- # 
xtrace_disable 00:35:39.398 08:34:21 -- common/autotest_common.sh@10 -- # set +x 00:35:39.398 ************************************ 00:35:39.398 START TEST keyring_file 00:35:39.398 ************************************ 00:35:39.398 08:34:21 keyring_file -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:35:39.398 * Looking for test storage... 00:35:39.398 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:35:39.398 08:34:21 keyring_file -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:35:39.398 08:34:21 keyring_file -- common/autotest_common.sh@1693 -- # lcov --version 00:35:39.398 08:34:21 keyring_file -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:35:39.398 08:34:21 keyring_file -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:35:39.398 08:34:21 keyring_file -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:39.398 08:34:21 keyring_file -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:39.398 08:34:21 keyring_file -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:39.398 08:34:21 keyring_file -- scripts/common.sh@336 -- # IFS=.-: 00:35:39.398 08:34:21 keyring_file -- scripts/common.sh@336 -- # read -ra ver1 00:35:39.398 08:34:21 keyring_file -- scripts/common.sh@337 -- # IFS=.-: 00:35:39.398 08:34:21 keyring_file -- scripts/common.sh@337 -- # read -ra ver2 00:35:39.399 08:34:21 keyring_file -- scripts/common.sh@338 -- # local 'op=<' 00:35:39.399 08:34:21 keyring_file -- scripts/common.sh@340 -- # ver1_l=2 00:35:39.399 08:34:21 keyring_file -- scripts/common.sh@341 -- # ver2_l=1 00:35:39.399 08:34:21 keyring_file -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:39.399 08:34:21 keyring_file -- scripts/common.sh@344 -- # case "$op" in 00:35:39.399 08:34:21 keyring_file -- scripts/common.sh@345 -- # : 1 00:35:39.399 08:34:21 keyring_file -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:39.399 08:34:21 keyring_file -- 
scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:35:39.399 08:34:21 keyring_file -- scripts/common.sh@365 -- # decimal 1 00:35:39.399 08:34:21 keyring_file -- scripts/common.sh@353 -- # local d=1 00:35:39.399 08:34:21 keyring_file -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:39.399 08:34:21 keyring_file -- scripts/common.sh@355 -- # echo 1 00:35:39.399 08:34:21 keyring_file -- scripts/common.sh@365 -- # ver1[v]=1 00:35:39.399 08:34:21 keyring_file -- scripts/common.sh@366 -- # decimal 2 00:35:39.399 08:34:21 keyring_file -- scripts/common.sh@353 -- # local d=2 00:35:39.399 08:34:21 keyring_file -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:39.399 08:34:21 keyring_file -- scripts/common.sh@355 -- # echo 2 00:35:39.399 08:34:21 keyring_file -- scripts/common.sh@366 -- # ver2[v]=2 00:35:39.399 08:34:21 keyring_file -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:39.399 08:34:21 keyring_file -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:39.399 08:34:21 keyring_file -- scripts/common.sh@368 -- # return 0 00:35:39.399 08:34:21 keyring_file -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:39.399 08:34:21 keyring_file -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:35:39.399 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:39.399 --rc genhtml_branch_coverage=1 00:35:39.399 --rc genhtml_function_coverage=1 00:35:39.399 --rc genhtml_legend=1 00:35:39.399 --rc geninfo_all_blocks=1 00:35:39.399 --rc geninfo_unexecuted_blocks=1 00:35:39.399 00:35:39.399 ' 00:35:39.399 08:34:21 keyring_file -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:35:39.399 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:39.399 --rc genhtml_branch_coverage=1 00:35:39.399 --rc genhtml_function_coverage=1 00:35:39.399 --rc genhtml_legend=1 00:35:39.399 --rc geninfo_all_blocks=1 00:35:39.399 --rc 
geninfo_unexecuted_blocks=1 00:35:39.399 00:35:39.399 ' 00:35:39.399 08:34:21 keyring_file -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:35:39.399 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:39.399 --rc genhtml_branch_coverage=1 00:35:39.399 --rc genhtml_function_coverage=1 00:35:39.399 --rc genhtml_legend=1 00:35:39.399 --rc geninfo_all_blocks=1 00:35:39.399 --rc geninfo_unexecuted_blocks=1 00:35:39.399 00:35:39.399 ' 00:35:39.399 08:34:21 keyring_file -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:35:39.399 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:39.399 --rc genhtml_branch_coverage=1 00:35:39.399 --rc genhtml_function_coverage=1 00:35:39.399 --rc genhtml_legend=1 00:35:39.399 --rc geninfo_all_blocks=1 00:35:39.399 --rc geninfo_unexecuted_blocks=1 00:35:39.399 00:35:39.399 ' 00:35:39.399 08:34:21 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:35:39.399 08:34:21 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:39.399 08:34:21 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:35:39.399 08:34:21 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:39.399 08:34:21 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:39.399 08:34:21 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:39.399 08:34:21 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:39.399 08:34:21 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:39.399 08:34:21 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:39.399 08:34:21 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:39.399 08:34:21 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:39.399 08:34:21 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:39.399 08:34:21 keyring_file -- 
nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:39.399 08:34:21 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:35:39.399 08:34:21 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:35:39.399 08:34:21 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:39.399 08:34:21 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:39.399 08:34:21 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:39.399 08:34:21 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:39.399 08:34:21 keyring_file -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:39.399 08:34:21 keyring_file -- scripts/common.sh@15 -- # shopt -s extglob 00:35:39.399 08:34:21 keyring_file -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:39.399 08:34:21 keyring_file -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:39.399 08:34:21 keyring_file -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:39.399 08:34:21 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:39.399 08:34:21 keyring_file -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:39.399 08:34:21 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:39.399 08:34:21 keyring_file -- paths/export.sh@5 -- # export PATH 00:35:39.399 08:34:21 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:39.399 08:34:21 keyring_file -- nvmf/common.sh@51 -- # : 0 00:35:39.399 08:34:21 keyring_file -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:39.399 08:34:21 keyring_file -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:39.399 08:34:21 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:39.399 08:34:21 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:39.399 08:34:21 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:39.399 08:34:21 keyring_file -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 
00:35:39.399 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:35:39.399 08:34:21 keyring_file -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:39.399 08:34:21 keyring_file -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:39.399 08:34:21 keyring_file -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:39.399 08:34:21 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:35:39.399 08:34:21 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:35:39.399 08:34:21 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:35:39.399 08:34:21 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:35:39.399 08:34:21 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:35:39.399 08:34:21 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:35:39.399 08:34:21 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:35:39.399 08:34:21 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:35:39.399 08:34:21 keyring_file -- keyring/common.sh@17 -- # name=key0 00:35:39.399 08:34:21 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:35:39.399 08:34:21 keyring_file -- keyring/common.sh@17 -- # digest=0 00:35:39.399 08:34:21 keyring_file -- keyring/common.sh@18 -- # mktemp 00:35:39.399 08:34:21 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.ut7AX6bDt1 00:35:39.399 08:34:21 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:35:39.399 08:34:21 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:35:39.399 08:34:21 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:35:39.399 08:34:21 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:35:39.399 08:34:21 keyring_file -- nvmf/common.sh@732 
-- # key=00112233445566778899aabbccddeeff 00:35:39.399 08:34:21 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:35:39.399 08:34:21 keyring_file -- nvmf/common.sh@733 -- # python - 00:35:39.658 08:34:21 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.ut7AX6bDt1 00:35:39.658 08:34:21 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.ut7AX6bDt1 00:35:39.658 08:34:21 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.ut7AX6bDt1 00:35:39.658 08:34:21 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:35:39.658 08:34:21 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:35:39.658 08:34:21 keyring_file -- keyring/common.sh@17 -- # name=key1 00:35:39.658 08:34:21 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:35:39.658 08:34:21 keyring_file -- keyring/common.sh@17 -- # digest=0 00:35:39.658 08:34:21 keyring_file -- keyring/common.sh@18 -- # mktemp 00:35:39.658 08:34:21 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.J46ZQ6aYUX 00:35:39.658 08:34:21 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:35:39.658 08:34:21 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:35:39.658 08:34:21 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:35:39.658 08:34:21 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:35:39.658 08:34:21 keyring_file -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:35:39.659 08:34:21 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:35:39.659 08:34:21 keyring_file -- nvmf/common.sh@733 -- # python - 00:35:39.659 08:34:21 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.J46ZQ6aYUX 00:35:39.659 08:34:21 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.J46ZQ6aYUX 00:35:39.659 08:34:21 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.J46ZQ6aYUX 
00:35:39.659 08:34:21 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:35:39.659 08:34:21 keyring_file -- keyring/file.sh@30 -- # tgtpid=1638319 00:35:39.659 08:34:21 keyring_file -- keyring/file.sh@32 -- # waitforlisten 1638319 00:35:39.659 08:34:21 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 1638319 ']' 00:35:39.659 08:34:21 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:39.659 08:34:21 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:39.659 08:34:21 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:39.659 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:39.659 08:34:21 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:39.659 08:34:21 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:35:39.659 [2024-11-28 08:34:21.795850] Starting SPDK v25.01-pre git sha1 27aaaa748 / DPDK 24.03.0 initialization... 
00:35:39.659 [2024-11-28 08:34:21.795900] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1638319 ] 00:35:39.659 [2024-11-28 08:34:21.856793] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:39.659 [2024-11-28 08:34:21.900340] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:39.917 08:34:22 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:39.917 08:34:22 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:35:39.917 08:34:22 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:35:39.917 08:34:22 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:39.917 08:34:22 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:35:39.917 [2024-11-28 08:34:22.122344] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:39.917 null0 00:35:39.917 [2024-11-28 08:34:22.154406] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:35:39.917 [2024-11-28 08:34:22.154749] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:35:39.917 08:34:22 keyring_file -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:39.917 08:34:22 keyring_file -- keyring/file.sh@44 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:35:39.917 08:34:22 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:35:39.917 08:34:22 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:35:39.917 08:34:22 keyring_file -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:35:39.917 08:34:22 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 
00:35:39.917 08:34:22 keyring_file -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:35:39.917 08:34:22 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:39.917 08:34:22 keyring_file -- common/autotest_common.sh@655 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:35:39.917 08:34:22 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:39.917 08:34:22 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:35:40.176 [2024-11-28 08:34:22.186477] nvmf_rpc.c: 762:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:35:40.176 request: 00:35:40.176 { 00:35:40.176 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:35:40.176 "secure_channel": false, 00:35:40.176 "listen_address": { 00:35:40.176 "trtype": "tcp", 00:35:40.176 "traddr": "127.0.0.1", 00:35:40.176 "trsvcid": "4420" 00:35:40.176 }, 00:35:40.176 "method": "nvmf_subsystem_add_listener", 00:35:40.176 "req_id": 1 00:35:40.176 } 00:35:40.176 Got JSON-RPC error response 00:35:40.176 response: 00:35:40.176 { 00:35:40.176 "code": -32602, 00:35:40.176 "message": "Invalid parameters" 00:35:40.176 } 00:35:40.176 08:34:22 keyring_file -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:35:40.176 08:34:22 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:35:40.176 08:34:22 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:35:40.176 08:34:22 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:35:40.176 08:34:22 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:35:40.176 08:34:22 keyring_file -- keyring/file.sh@47 -- # bperfpid=1638323 00:35:40.176 08:34:22 keyring_file -- keyring/file.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:35:40.176 08:34:22 keyring_file -- keyring/file.sh@49 -- # waitforlisten 1638323 /var/tmp/bperf.sock 00:35:40.176 08:34:22 
keyring_file -- common/autotest_common.sh@835 -- # '[' -z 1638323 ']' 00:35:40.176 08:34:22 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:35:40.176 08:34:22 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:40.176 08:34:22 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:35:40.176 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:35:40.176 08:34:22 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:40.176 08:34:22 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:35:40.176 [2024-11-28 08:34:22.239555] Starting SPDK v25.01-pre git sha1 27aaaa748 / DPDK 24.03.0 initialization... 00:35:40.177 [2024-11-28 08:34:22.239599] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1638323 ] 00:35:40.177 [2024-11-28 08:34:22.301143] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:40.177 [2024-11-28 08:34:22.344117] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:40.177 08:34:22 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:40.177 08:34:22 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:35:40.177 08:34:22 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.ut7AX6bDt1 00:35:40.177 08:34:22 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.ut7AX6bDt1 00:35:40.436 08:34:22 keyring_file -- keyring/file.sh@51 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.J46ZQ6aYUX 00:35:40.436 08:34:22 keyring_file -- keyring/common.sh@8 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.J46ZQ6aYUX 00:35:40.695 08:34:22 keyring_file -- keyring/file.sh@52 -- # get_key key0 00:35:40.695 08:34:22 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:35:40.695 08:34:22 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:40.695 08:34:22 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:40.695 08:34:22 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:40.953 08:34:23 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.ut7AX6bDt1 == \/\t\m\p\/\t\m\p\.\u\t\7\A\X\6\b\D\t\1 ]] 00:35:40.953 08:34:23 keyring_file -- keyring/file.sh@53 -- # get_key key1 00:35:40.953 08:34:23 keyring_file -- keyring/file.sh@53 -- # jq -r .path 00:35:40.953 08:34:23 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:40.953 08:34:23 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:35:40.953 08:34:23 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:40.953 08:34:23 keyring_file -- keyring/file.sh@53 -- # [[ /tmp/tmp.J46ZQ6aYUX == \/\t\m\p\/\t\m\p\.\J\4\6\Z\Q\6\a\Y\U\X ]] 00:35:40.953 08:34:23 keyring_file -- keyring/file.sh@54 -- # get_refcnt key0 00:35:40.953 08:34:23 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:35:40.953 08:34:23 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:40.953 08:34:23 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:40.953 08:34:23 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:40.953 08:34:23 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 
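The `get_key`/`get_refcnt` helpers traced above shell out to `rpc.py -s /var/tmp/bperf.sock keyring_get_keys` and filter the JSON with `jq '.[] | select(.name == "key0")'` and `jq -r .refcnt`. The same filtering step can be sketched in Python; the sample payload below is illustrative only, shaped after the `name`/`path`/`refcnt` fields the test asserts on, not captured from a live bdevperf socket:

```python
import json

# Hypothetical keyring_get_keys response, mirroring the fields the
# keyring_file tests inspect (the real RPC is served over the UNIX
# socket /var/tmp/bperf.sock by bdevperf).
payload = json.loads("""
[
  {"name": "key0", "path": "/tmp/tmp.ut7AX6bDt1", "refcnt": 1, "removed": false},
  {"name": "key1", "path": "/tmp/tmp.J46ZQ6aYUX", "refcnt": 1, "removed": false}
]
""")

def get_key(keys, name):
    # Equivalent of: jq '.[] | select(.name == "<name>")'
    return next(k for k in keys if k["name"] == name)

def get_refcnt(keys, name):
    # Equivalent of: get_key <name> | jq -r .refcnt
    return get_key(keys, name)["refcnt"]

print(get_refcnt(payload, "key0"))
```

The `(( 1 == 1 ))` checks in the trace are the shell-side equivalent of comparing this extracted refcount against the expected value.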
00:35:41.212 08:34:23 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:35:41.212 08:34:23 keyring_file -- keyring/file.sh@55 -- # get_refcnt key1 00:35:41.212 08:34:23 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:35:41.212 08:34:23 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:41.212 08:34:23 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:41.212 08:34:23 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:35:41.212 08:34:23 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:41.471 08:34:23 keyring_file -- keyring/file.sh@55 -- # (( 1 == 1 )) 00:35:41.471 08:34:23 keyring_file -- keyring/file.sh@58 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:41.471 08:34:23 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:41.729 [2024-11-28 08:34:23.767857] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:35:41.729 nvme0n1 00:35:41.729 08:34:23 keyring_file -- keyring/file.sh@60 -- # get_refcnt key0 00:35:41.729 08:34:23 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:35:41.730 08:34:23 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:41.730 08:34:23 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:41.730 08:34:23 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:41.730 08:34:23 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock 
keyring_get_keys 00:35:41.988 08:34:24 keyring_file -- keyring/file.sh@60 -- # (( 2 == 2 )) 00:35:41.988 08:34:24 keyring_file -- keyring/file.sh@61 -- # get_refcnt key1 00:35:41.988 08:34:24 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:35:41.988 08:34:24 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:41.988 08:34:24 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:41.988 08:34:24 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:35:41.988 08:34:24 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:42.246 08:34:24 keyring_file -- keyring/file.sh@61 -- # (( 1 == 1 )) 00:35:42.246 08:34:24 keyring_file -- keyring/file.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:35:42.246 Running I/O for 1 seconds... 00:35:43.181 17684.00 IOPS, 69.08 MiB/s 00:35:43.181 Latency(us) 00:35:43.181 [2024-11-28T07:34:25.450Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:43.181 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:35:43.181 nvme0n1 : 1.00 17733.26 69.27 0.00 0.00 7205.26 3162.82 14474.91 00:35:43.181 [2024-11-28T07:34:25.450Z] =================================================================================================================== 00:35:43.181 [2024-11-28T07:34:25.450Z] Total : 17733.26 69.27 0.00 0.00 7205.26 3162.82 14474.91 00:35:43.181 { 00:35:43.181 "results": [ 00:35:43.181 { 00:35:43.181 "job": "nvme0n1", 00:35:43.181 "core_mask": "0x2", 00:35:43.181 "workload": "randrw", 00:35:43.181 "percentage": 50, 00:35:43.181 "status": "finished", 00:35:43.181 "queue_depth": 128, 00:35:43.181 "io_size": 4096, 00:35:43.181 "runtime": 1.004553, 00:35:43.181 "iops": 17733.26046510239, 00:35:43.181 "mibps": 69.27054869180621, 
00:35:43.181 "io_failed": 0, 00:35:43.181 "io_timeout": 0, 00:35:43.181 "avg_latency_us": 7205.2571029136825, 00:35:43.181 "min_latency_us": 3162.824347826087, 00:35:43.181 "max_latency_us": 14474.907826086957 00:35:43.181 } 00:35:43.181 ], 00:35:43.181 "core_count": 1 00:35:43.181 } 00:35:43.181 08:34:25 keyring_file -- keyring/file.sh@65 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:35:43.181 08:34:25 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:35:43.439 08:34:25 keyring_file -- keyring/file.sh@66 -- # get_refcnt key0 00:35:43.439 08:34:25 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:35:43.439 08:34:25 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:43.439 08:34:25 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:43.439 08:34:25 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:43.439 08:34:25 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:43.696 08:34:25 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:35:43.696 08:34:25 keyring_file -- keyring/file.sh@67 -- # get_refcnt key1 00:35:43.696 08:34:25 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:35:43.696 08:34:25 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:43.696 08:34:25 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:43.696 08:34:25 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:43.696 08:34:25 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:35:43.954 08:34:25 keyring_file -- keyring/file.sh@67 -- # (( 1 == 1 )) 00:35:43.954 08:34:25 keyring_file -- keyring/file.sh@70 -- # NOT bperf_cmd 
bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:35:43.954 08:34:25 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:35:43.954 08:34:25 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:35:43.954 08:34:25 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:35:43.954 08:34:25 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:43.954 08:34:25 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:35:43.954 08:34:25 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:43.954 08:34:25 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:35:43.954 08:34:25 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:35:43.954 [2024-11-28 08:34:26.167797] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:35:43.954 [2024-11-28 08:34:26.168530] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2342c80 (107): Transport endpoint is not connected 00:35:43.954 [2024-11-28 08:34:26.169524] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2342c80 (9): Bad file descriptor 00:35:43.954 [2024-11-28 08:34:26.170526] 
nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:35:43.954 [2024-11-28 08:34:26.170536] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:35:43.954 [2024-11-28 08:34:26.170548] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:35:43.954 [2024-11-28 08:34:26.170557] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 00:35:43.954 request: 00:35:43.954 { 00:35:43.954 "name": "nvme0", 00:35:43.954 "trtype": "tcp", 00:35:43.954 "traddr": "127.0.0.1", 00:35:43.954 "adrfam": "ipv4", 00:35:43.954 "trsvcid": "4420", 00:35:43.954 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:43.954 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:43.954 "prchk_reftag": false, 00:35:43.954 "prchk_guard": false, 00:35:43.954 "hdgst": false, 00:35:43.954 "ddgst": false, 00:35:43.954 "psk": "key1", 00:35:43.954 "allow_unrecognized_csi": false, 00:35:43.955 "method": "bdev_nvme_attach_controller", 00:35:43.955 "req_id": 1 00:35:43.955 } 00:35:43.955 Got JSON-RPC error response 00:35:43.955 response: 00:35:43.955 { 00:35:43.955 "code": -5, 00:35:43.955 "message": "Input/output error" 00:35:43.955 } 00:35:43.955 08:34:26 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:35:43.955 08:34:26 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:35:43.955 08:34:26 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:35:43.955 08:34:26 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:35:43.955 08:34:26 keyring_file -- keyring/file.sh@72 -- # get_refcnt key0 00:35:43.955 08:34:26 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:35:43.955 08:34:26 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:43.955 08:34:26 keyring_file -- keyring/common.sh@10 -- # 
bperf_cmd keyring_get_keys 00:35:43.955 08:34:26 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:43.955 08:34:26 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:44.213 08:34:26 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:35:44.213 08:34:26 keyring_file -- keyring/file.sh@73 -- # get_refcnt key1 00:35:44.213 08:34:26 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:35:44.213 08:34:26 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:44.213 08:34:26 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:44.213 08:34:26 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:35:44.213 08:34:26 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:44.471 08:34:26 keyring_file -- keyring/file.sh@73 -- # (( 1 == 1 )) 00:35:44.471 08:34:26 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key0 00:35:44.471 08:34:26 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:35:44.729 08:34:26 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_file_remove_key key1 00:35:44.729 08:34:26 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:35:44.729 08:34:26 keyring_file -- keyring/file.sh@78 -- # bperf_cmd keyring_get_keys 00:35:44.729 08:34:26 keyring_file -- keyring/file.sh@78 -- # jq length 00:35:44.729 08:34:26 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:44.988 08:34:27 keyring_file -- keyring/file.sh@78 -- # (( 0 == 0 
)) 00:35:44.988 08:34:27 keyring_file -- keyring/file.sh@81 -- # chmod 0660 /tmp/tmp.ut7AX6bDt1 00:35:44.988 08:34:27 keyring_file -- keyring/file.sh@82 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.ut7AX6bDt1 00:35:44.988 08:34:27 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:35:44.988 08:34:27 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.ut7AX6bDt1 00:35:44.988 08:34:27 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:35:44.988 08:34:27 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:44.988 08:34:27 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:35:44.988 08:34:27 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:44.988 08:34:27 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.ut7AX6bDt1 00:35:44.988 08:34:27 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.ut7AX6bDt1 00:35:45.246 [2024-11-28 08:34:27.356718] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.ut7AX6bDt1': 0100660 00:35:45.246 [2024-11-28 08:34:27.356745] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:35:45.246 request: 00:35:45.246 { 00:35:45.246 "name": "key0", 00:35:45.246 "path": "/tmp/tmp.ut7AX6bDt1", 00:35:45.246 "method": "keyring_file_add_key", 00:35:45.246 "req_id": 1 00:35:45.246 } 00:35:45.246 Got JSON-RPC error response 00:35:45.246 response: 00:35:45.246 { 00:35:45.246 "code": -1, 00:35:45.246 "message": "Operation not permitted" 00:35:45.246 } 00:35:45.246 08:34:27 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:35:45.246 08:34:27 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:35:45.246 08:34:27 
keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:35:45.246 08:34:27 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:35:45.246 08:34:27 keyring_file -- keyring/file.sh@85 -- # chmod 0600 /tmp/tmp.ut7AX6bDt1 00:35:45.246 08:34:27 keyring_file -- keyring/file.sh@86 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.ut7AX6bDt1 00:35:45.246 08:34:27 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.ut7AX6bDt1 00:35:45.505 08:34:27 keyring_file -- keyring/file.sh@87 -- # rm -f /tmp/tmp.ut7AX6bDt1 00:35:45.505 08:34:27 keyring_file -- keyring/file.sh@89 -- # get_refcnt key0 00:35:45.505 08:34:27 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:35:45.505 08:34:27 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:45.505 08:34:27 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:45.505 08:34:27 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:45.505 08:34:27 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:45.505 08:34:27 keyring_file -- keyring/file.sh@89 -- # (( 1 == 1 )) 00:35:45.505 08:34:27 keyring_file -- keyring/file.sh@91 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:45.505 08:34:27 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:35:45.505 08:34:27 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:45.505 08:34:27 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:35:45.505 08:34:27 keyring_file -- 
common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:45.505 08:34:27 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:35:45.764 08:34:27 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:45.764 08:34:27 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:45.764 08:34:27 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:45.764 [2024-11-28 08:34:27.942289] keyring.c: 31:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.ut7AX6bDt1': No such file or directory 00:35:45.764 [2024-11-28 08:34:27.942308] nvme_tcp.c:2498:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:35:45.764 [2024-11-28 08:34:27.942323] nvme.c: 682:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:35:45.764 [2024-11-28 08:34:27.942331] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, No such device 00:35:45.764 [2024-11-28 08:34:27.942337] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:35:45.764 [2024-11-28 08:34:27.942344] bdev_nvme.c:6769:spdk_bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:35:45.764 request: 00:35:45.764 { 00:35:45.764 "name": "nvme0", 00:35:45.764 "trtype": "tcp", 00:35:45.764 "traddr": "127.0.0.1", 00:35:45.764 "adrfam": "ipv4", 00:35:45.764 "trsvcid": "4420", 00:35:45.764 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:45.764 "hostnqn": 
"nqn.2016-06.io.spdk:host0", 00:35:45.764 "prchk_reftag": false, 00:35:45.764 "prchk_guard": false, 00:35:45.764 "hdgst": false, 00:35:45.764 "ddgst": false, 00:35:45.764 "psk": "key0", 00:35:45.764 "allow_unrecognized_csi": false, 00:35:45.764 "method": "bdev_nvme_attach_controller", 00:35:45.764 "req_id": 1 00:35:45.764 } 00:35:45.764 Got JSON-RPC error response 00:35:45.764 response: 00:35:45.764 { 00:35:45.764 "code": -19, 00:35:45.764 "message": "No such device" 00:35:45.764 } 00:35:45.764 08:34:27 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:35:45.764 08:34:27 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:35:45.764 08:34:27 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:35:45.764 08:34:27 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:35:45.764 08:34:27 keyring_file -- keyring/file.sh@93 -- # bperf_cmd keyring_file_remove_key key0 00:35:45.764 08:34:27 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:35:46.023 08:34:28 keyring_file -- keyring/file.sh@96 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:35:46.023 08:34:28 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:35:46.023 08:34:28 keyring_file -- keyring/common.sh@17 -- # name=key0 00:35:46.023 08:34:28 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:35:46.023 08:34:28 keyring_file -- keyring/common.sh@17 -- # digest=0 00:35:46.023 08:34:28 keyring_file -- keyring/common.sh@18 -- # mktemp 00:35:46.023 08:34:28 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.lehqgmxJCP 00:35:46.023 08:34:28 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:35:46.023 08:34:28 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:35:46.023 08:34:28 keyring_file -- 
nvmf/common.sh@730 -- # local prefix key digest 00:35:46.023 08:34:28 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:35:46.023 08:34:28 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:35:46.023 08:34:28 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:35:46.023 08:34:28 keyring_file -- nvmf/common.sh@733 -- # python - 00:35:46.023 08:34:28 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.lehqgmxJCP 00:35:46.023 08:34:28 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.lehqgmxJCP 00:35:46.023 08:34:28 keyring_file -- keyring/file.sh@96 -- # key0path=/tmp/tmp.lehqgmxJCP 00:35:46.023 08:34:28 keyring_file -- keyring/file.sh@97 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.lehqgmxJCP 00:35:46.023 08:34:28 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.lehqgmxJCP 00:35:46.281 08:34:28 keyring_file -- keyring/file.sh@98 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:46.281 08:34:28 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:46.541 nvme0n1 00:35:46.541 08:34:28 keyring_file -- keyring/file.sh@100 -- # get_refcnt key0 00:35:46.541 08:34:28 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:35:46.541 08:34:28 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:46.541 08:34:28 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:46.541 08:34:28 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:46.541 08:34:28 keyring_file -- keyring/common.sh@8 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:46.799 08:34:28 keyring_file -- keyring/file.sh@100 -- # (( 2 == 2 )) 00:35:46.799 08:34:28 keyring_file -- keyring/file.sh@101 -- # bperf_cmd keyring_file_remove_key key0 00:35:46.799 08:34:28 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:35:46.799 08:34:29 keyring_file -- keyring/file.sh@102 -- # get_key key0 00:35:46.799 08:34:29 keyring_file -- keyring/file.sh@102 -- # jq -r .removed 00:35:46.799 08:34:29 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:46.799 08:34:29 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:46.799 08:34:29 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:47.058 08:34:29 keyring_file -- keyring/file.sh@102 -- # [[ true == \t\r\u\e ]] 00:35:47.058 08:34:29 keyring_file -- keyring/file.sh@103 -- # get_refcnt key0 00:35:47.058 08:34:29 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:35:47.058 08:34:29 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:47.058 08:34:29 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:47.058 08:34:29 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:47.058 08:34:29 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:47.316 08:34:29 keyring_file -- keyring/file.sh@103 -- # (( 1 == 1 )) 00:35:47.317 08:34:29 keyring_file -- keyring/file.sh@104 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:35:47.317 08:34:29 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock 
bdev_nvme_detach_controller nvme0 00:35:47.575 08:34:29 keyring_file -- keyring/file.sh@105 -- # jq length 00:35:47.575 08:34:29 keyring_file -- keyring/file.sh@105 -- # bperf_cmd keyring_get_keys 00:35:47.575 08:34:29 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:47.575 08:34:29 keyring_file -- keyring/file.sh@105 -- # (( 0 == 0 )) 00:35:47.575 08:34:29 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.lehqgmxJCP 00:35:47.575 08:34:29 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.lehqgmxJCP 00:35:47.834 08:34:30 keyring_file -- keyring/file.sh@109 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.J46ZQ6aYUX 00:35:47.834 08:34:30 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.J46ZQ6aYUX 00:35:48.093 08:34:30 keyring_file -- keyring/file.sh@110 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:48.093 08:34:30 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:48.352 nvme0n1 00:35:48.352 08:34:30 keyring_file -- keyring/file.sh@113 -- # bperf_cmd save_config 00:35:48.352 08:34:30 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:35:48.612 08:34:30 keyring_file -- keyring/file.sh@113 -- # config='{ 00:35:48.612 "subsystems": [ 00:35:48.612 { 00:35:48.612 "subsystem": 
"keyring", 00:35:48.612 "config": [ 00:35:48.612 { 00:35:48.612 "method": "keyring_file_add_key", 00:35:48.612 "params": { 00:35:48.612 "name": "key0", 00:35:48.612 "path": "/tmp/tmp.lehqgmxJCP" 00:35:48.612 } 00:35:48.612 }, 00:35:48.612 { 00:35:48.612 "method": "keyring_file_add_key", 00:35:48.612 "params": { 00:35:48.612 "name": "key1", 00:35:48.612 "path": "/tmp/tmp.J46ZQ6aYUX" 00:35:48.612 } 00:35:48.612 } 00:35:48.612 ] 00:35:48.612 }, 00:35:48.612 { 00:35:48.612 "subsystem": "iobuf", 00:35:48.612 "config": [ 00:35:48.612 { 00:35:48.612 "method": "iobuf_set_options", 00:35:48.612 "params": { 00:35:48.612 "small_pool_count": 8192, 00:35:48.612 "large_pool_count": 1024, 00:35:48.612 "small_bufsize": 8192, 00:35:48.612 "large_bufsize": 135168, 00:35:48.612 "enable_numa": false 00:35:48.612 } 00:35:48.612 } 00:35:48.612 ] 00:35:48.612 }, 00:35:48.612 { 00:35:48.612 "subsystem": "sock", 00:35:48.612 "config": [ 00:35:48.612 { 00:35:48.612 "method": "sock_set_default_impl", 00:35:48.612 "params": { 00:35:48.612 "impl_name": "posix" 00:35:48.612 } 00:35:48.612 }, 00:35:48.612 { 00:35:48.612 "method": "sock_impl_set_options", 00:35:48.612 "params": { 00:35:48.612 "impl_name": "ssl", 00:35:48.612 "recv_buf_size": 4096, 00:35:48.612 "send_buf_size": 4096, 00:35:48.612 "enable_recv_pipe": true, 00:35:48.612 "enable_quickack": false, 00:35:48.612 "enable_placement_id": 0, 00:35:48.612 "enable_zerocopy_send_server": true, 00:35:48.612 "enable_zerocopy_send_client": false, 00:35:48.612 "zerocopy_threshold": 0, 00:35:48.612 "tls_version": 0, 00:35:48.612 "enable_ktls": false 00:35:48.612 } 00:35:48.612 }, 00:35:48.612 { 00:35:48.612 "method": "sock_impl_set_options", 00:35:48.612 "params": { 00:35:48.612 "impl_name": "posix", 00:35:48.612 "recv_buf_size": 2097152, 00:35:48.612 "send_buf_size": 2097152, 00:35:48.612 "enable_recv_pipe": true, 00:35:48.612 "enable_quickack": false, 00:35:48.612 "enable_placement_id": 0, 00:35:48.612 "enable_zerocopy_send_server": true, 
00:35:48.612 "enable_zerocopy_send_client": false, 00:35:48.612 "zerocopy_threshold": 0, 00:35:48.612 "tls_version": 0, 00:35:48.612 "enable_ktls": false 00:35:48.612 } 00:35:48.612 } 00:35:48.612 ] 00:35:48.612 }, 00:35:48.612 { 00:35:48.612 "subsystem": "vmd", 00:35:48.612 "config": [] 00:35:48.612 }, 00:35:48.612 { 00:35:48.612 "subsystem": "accel", 00:35:48.612 "config": [ 00:35:48.612 { 00:35:48.612 "method": "accel_set_options", 00:35:48.612 "params": { 00:35:48.612 "small_cache_size": 128, 00:35:48.612 "large_cache_size": 16, 00:35:48.612 "task_count": 2048, 00:35:48.612 "sequence_count": 2048, 00:35:48.612 "buf_count": 2048 00:35:48.612 } 00:35:48.612 } 00:35:48.612 ] 00:35:48.612 }, 00:35:48.612 { 00:35:48.612 "subsystem": "bdev", 00:35:48.612 "config": [ 00:35:48.612 { 00:35:48.612 "method": "bdev_set_options", 00:35:48.612 "params": { 00:35:48.612 "bdev_io_pool_size": 65535, 00:35:48.612 "bdev_io_cache_size": 256, 00:35:48.612 "bdev_auto_examine": true, 00:35:48.612 "iobuf_small_cache_size": 128, 00:35:48.612 "iobuf_large_cache_size": 16 00:35:48.612 } 00:35:48.612 }, 00:35:48.612 { 00:35:48.612 "method": "bdev_raid_set_options", 00:35:48.612 "params": { 00:35:48.612 "process_window_size_kb": 1024, 00:35:48.612 "process_max_bandwidth_mb_sec": 0 00:35:48.612 } 00:35:48.612 }, 00:35:48.612 { 00:35:48.613 "method": "bdev_iscsi_set_options", 00:35:48.613 "params": { 00:35:48.613 "timeout_sec": 30 00:35:48.613 } 00:35:48.613 }, 00:35:48.613 { 00:35:48.613 "method": "bdev_nvme_set_options", 00:35:48.613 "params": { 00:35:48.613 "action_on_timeout": "none", 00:35:48.613 "timeout_us": 0, 00:35:48.613 "timeout_admin_us": 0, 00:35:48.613 "keep_alive_timeout_ms": 10000, 00:35:48.613 "arbitration_burst": 0, 00:35:48.613 "low_priority_weight": 0, 00:35:48.613 "medium_priority_weight": 0, 00:35:48.613 "high_priority_weight": 0, 00:35:48.613 "nvme_adminq_poll_period_us": 10000, 00:35:48.613 "nvme_ioq_poll_period_us": 0, 00:35:48.613 "io_queue_requests": 512, 
00:35:48.613 "delay_cmd_submit": true, 00:35:48.613 "transport_retry_count": 4, 00:35:48.613 "bdev_retry_count": 3, 00:35:48.613 "transport_ack_timeout": 0, 00:35:48.613 "ctrlr_loss_timeout_sec": 0, 00:35:48.613 "reconnect_delay_sec": 0, 00:35:48.613 "fast_io_fail_timeout_sec": 0, 00:35:48.613 "disable_auto_failback": false, 00:35:48.613 "generate_uuids": false, 00:35:48.613 "transport_tos": 0, 00:35:48.613 "nvme_error_stat": false, 00:35:48.613 "rdma_srq_size": 0, 00:35:48.613 "io_path_stat": false, 00:35:48.613 "allow_accel_sequence": false, 00:35:48.613 "rdma_max_cq_size": 0, 00:35:48.613 "rdma_cm_event_timeout_ms": 0, 00:35:48.613 "dhchap_digests": [ 00:35:48.613 "sha256", 00:35:48.613 "sha384", 00:35:48.613 "sha512" 00:35:48.613 ], 00:35:48.613 "dhchap_dhgroups": [ 00:35:48.613 "null", 00:35:48.613 "ffdhe2048", 00:35:48.613 "ffdhe3072", 00:35:48.613 "ffdhe4096", 00:35:48.613 "ffdhe6144", 00:35:48.613 "ffdhe8192" 00:35:48.613 ] 00:35:48.613 } 00:35:48.613 }, 00:35:48.613 { 00:35:48.613 "method": "bdev_nvme_attach_controller", 00:35:48.613 "params": { 00:35:48.613 "name": "nvme0", 00:35:48.613 "trtype": "TCP", 00:35:48.613 "adrfam": "IPv4", 00:35:48.613 "traddr": "127.0.0.1", 00:35:48.613 "trsvcid": "4420", 00:35:48.613 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:48.613 "prchk_reftag": false, 00:35:48.613 "prchk_guard": false, 00:35:48.613 "ctrlr_loss_timeout_sec": 0, 00:35:48.613 "reconnect_delay_sec": 0, 00:35:48.613 "fast_io_fail_timeout_sec": 0, 00:35:48.613 "psk": "key0", 00:35:48.613 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:48.613 "hdgst": false, 00:35:48.613 "ddgst": false, 00:35:48.613 "multipath": "multipath" 00:35:48.613 } 00:35:48.613 }, 00:35:48.613 { 00:35:48.613 "method": "bdev_nvme_set_hotplug", 00:35:48.613 "params": { 00:35:48.613 "period_us": 100000, 00:35:48.613 "enable": false 00:35:48.613 } 00:35:48.613 }, 00:35:48.613 { 00:35:48.613 "method": "bdev_wait_for_examine" 00:35:48.613 } 00:35:48.613 ] 00:35:48.613 }, 00:35:48.613 { 
00:35:48.613 "subsystem": "nbd", 00:35:48.613 "config": [] 00:35:48.613 } 00:35:48.613 ] 00:35:48.613 }' 00:35:48.613 08:34:30 keyring_file -- keyring/file.sh@115 -- # killprocess 1638323 00:35:48.613 08:34:30 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 1638323 ']' 00:35:48.613 08:34:30 keyring_file -- common/autotest_common.sh@958 -- # kill -0 1638323 00:35:48.613 08:34:30 keyring_file -- common/autotest_common.sh@959 -- # uname 00:35:48.613 08:34:30 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:48.613 08:34:30 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1638323 00:35:48.613 08:34:30 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:35:48.613 08:34:30 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:35:48.613 08:34:30 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1638323' 00:35:48.613 killing process with pid 1638323 00:35:48.613 08:34:30 keyring_file -- common/autotest_common.sh@973 -- # kill 1638323 00:35:48.613 Received shutdown signal, test time was about 1.000000 seconds 00:35:48.613 00:35:48.613 Latency(us) 00:35:48.613 [2024-11-28T07:34:30.882Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:48.613 [2024-11-28T07:34:30.882Z] =================================================================================================================== 00:35:48.613 [2024-11-28T07:34:30.882Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:35:48.613 08:34:30 keyring_file -- common/autotest_common.sh@978 -- # wait 1638323 00:35:48.873 08:34:30 keyring_file -- keyring/file.sh@118 -- # bperfpid=1639842 00:35:48.874 08:34:30 keyring_file -- keyring/file.sh@120 -- # waitforlisten 1639842 /var/tmp/bperf.sock 00:35:48.874 08:34:30 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 1639842 ']' 00:35:48.874 08:34:30 keyring_file -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/bperf.sock 00:35:48.874 08:34:30 keyring_file -- keyring/file.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:35:48.874 08:34:30 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:48.874 08:34:30 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:35:48.874 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:35:48.874 08:34:30 keyring_file -- keyring/file.sh@116 -- # echo '{ 00:35:48.874 "subsystems": [ 00:35:48.874 { 00:35:48.874 "subsystem": "keyring", 00:35:48.874 "config": [ 00:35:48.874 { 00:35:48.874 "method": "keyring_file_add_key", 00:35:48.874 "params": { 00:35:48.874 "name": "key0", 00:35:48.874 "path": "/tmp/tmp.lehqgmxJCP" 00:35:48.874 } 00:35:48.874 }, 00:35:48.874 { 00:35:48.874 "method": "keyring_file_add_key", 00:35:48.874 "params": { 00:35:48.874 "name": "key1", 00:35:48.874 "path": "/tmp/tmp.J46ZQ6aYUX" 00:35:48.874 } 00:35:48.874 } 00:35:48.874 ] 00:35:48.874 }, 00:35:48.874 { 00:35:48.874 "subsystem": "iobuf", 00:35:48.874 "config": [ 00:35:48.874 { 00:35:48.874 "method": "iobuf_set_options", 00:35:48.874 "params": { 00:35:48.874 "small_pool_count": 8192, 00:35:48.874 "large_pool_count": 1024, 00:35:48.874 "small_bufsize": 8192, 00:35:48.874 "large_bufsize": 135168, 00:35:48.874 "enable_numa": false 00:35:48.874 } 00:35:48.874 } 00:35:48.874 ] 00:35:48.874 }, 00:35:48.874 { 00:35:48.874 "subsystem": "sock", 00:35:48.874 "config": [ 00:35:48.874 { 00:35:48.874 "method": "sock_set_default_impl", 00:35:48.874 "params": { 00:35:48.874 "impl_name": "posix" 00:35:48.874 } 00:35:48.874 }, 00:35:48.874 { 00:35:48.874 "method": "sock_impl_set_options", 00:35:48.874 "params": { 00:35:48.874 "impl_name": "ssl", 00:35:48.874 "recv_buf_size": 4096, 00:35:48.874 
"send_buf_size": 4096, 00:35:48.874 "enable_recv_pipe": true, 00:35:48.874 "enable_quickack": false, 00:35:48.874 "enable_placement_id": 0, 00:35:48.874 "enable_zerocopy_send_server": true, 00:35:48.874 "enable_zerocopy_send_client": false, 00:35:48.874 "zerocopy_threshold": 0, 00:35:48.874 "tls_version": 0, 00:35:48.874 "enable_ktls": false 00:35:48.874 } 00:35:48.874 }, 00:35:48.874 { 00:35:48.874 "method": "sock_impl_set_options", 00:35:48.874 "params": { 00:35:48.874 "impl_name": "posix", 00:35:48.874 "recv_buf_size": 2097152, 00:35:48.874 "send_buf_size": 2097152, 00:35:48.874 "enable_recv_pipe": true, 00:35:48.874 "enable_quickack": false, 00:35:48.874 "enable_placement_id": 0, 00:35:48.874 "enable_zerocopy_send_server": true, 00:35:48.874 "enable_zerocopy_send_client": false, 00:35:48.874 "zerocopy_threshold": 0, 00:35:48.874 "tls_version": 0, 00:35:48.874 "enable_ktls": false 00:35:48.874 } 00:35:48.874 } 00:35:48.874 ] 00:35:48.874 }, 00:35:48.874 { 00:35:48.874 "subsystem": "vmd", 00:35:48.874 "config": [] 00:35:48.874 }, 00:35:48.874 { 00:35:48.874 "subsystem": "accel", 00:35:48.874 "config": [ 00:35:48.874 { 00:35:48.874 "method": "accel_set_options", 00:35:48.874 "params": { 00:35:48.874 "small_cache_size": 128, 00:35:48.874 "large_cache_size": 16, 00:35:48.874 "task_count": 2048, 00:35:48.874 "sequence_count": 2048, 00:35:48.874 "buf_count": 2048 00:35:48.874 } 00:35:48.874 } 00:35:48.874 ] 00:35:48.874 }, 00:35:48.874 { 00:35:48.874 "subsystem": "bdev", 00:35:48.874 "config": [ 00:35:48.874 { 00:35:48.874 "method": "bdev_set_options", 00:35:48.874 "params": { 00:35:48.874 "bdev_io_pool_size": 65535, 00:35:48.874 "bdev_io_cache_size": 256, 00:35:48.874 "bdev_auto_examine": true, 00:35:48.874 "iobuf_small_cache_size": 128, 00:35:48.874 "iobuf_large_cache_size": 16 00:35:48.874 } 00:35:48.874 }, 00:35:48.874 { 00:35:48.874 "method": "bdev_raid_set_options", 00:35:48.874 "params": { 00:35:48.874 "process_window_size_kb": 1024, 00:35:48.874 
"process_max_bandwidth_mb_sec": 0 00:35:48.874 } 00:35:48.874 }, 00:35:48.874 { 00:35:48.874 "method": "bdev_iscsi_set_options", 00:35:48.874 "params": { 00:35:48.874 "timeout_sec": 30 00:35:48.874 } 00:35:48.874 }, 00:35:48.874 { 00:35:48.874 "method": "bdev_nvme_set_options", 00:35:48.874 "params": { 00:35:48.874 "action_on_timeout": "none", 00:35:48.874 "timeout_us": 0, 00:35:48.874 "timeout_admin_us": 0, 00:35:48.874 "keep_alive_timeout_ms": 10000, 00:35:48.874 "arbitration_burst": 0, 00:35:48.874 "low_priority_weight": 0, 00:35:48.874 "medium_priority_weight": 0, 00:35:48.874 "high_priority_weight": 0, 00:35:48.874 "nvme_adminq_poll_period_us": 10000, 00:35:48.874 "nvme_ioq_poll_period_us": 0, 00:35:48.874 "io_queue_requests": 512, 00:35:48.874 "delay_cmd_submit": true, 00:35:48.874 "transport_retry_count": 4, 00:35:48.874 "bdev_retry_count": 3, 00:35:48.874 "transport_ack_timeout": 0, 00:35:48.874 "ctrlr_loss_timeout_sec": 0, 00:35:48.874 "reconnect_delay_sec": 0, 00:35:48.874 "fast_io_fail_timeout_sec": 0, 00:35:48.874 "disable_auto_failback": false, 00:35:48.874 "generate_uuids": false, 00:35:48.874 "transport_tos": 0, 00:35:48.874 "nvme_error_stat": false, 00:35:48.874 "rdma_srq_size": 0, 00:35:48.874 "io_path_stat": false, 00:35:48.874 "allow_accel_sequence": false, 00:35:48.874 "rdma_max_cq_size": 0, 00:35:48.874 "rdma_cm_event_timeout_ms": 0, 00:35:48.874 "dhchap_digests": [ 00:35:48.874 "sha256", 00:35:48.874 "sha384", 00:35:48.874 "sha512" 00:35:48.874 ], 00:35:48.874 "dhchap_dhgroups": [ 00:35:48.874 "null", 00:35:48.874 "ffdhe2048", 00:35:48.874 "ffdhe3072", 00:35:48.874 "ffdhe4096", 00:35:48.874 "ffdhe6144", 00:35:48.874 "ffdhe8192" 00:35:48.875 ] 00:35:48.875 } 00:35:48.875 }, 00:35:48.875 { 00:35:48.875 "method": "bdev_nvme_attach_controller", 00:35:48.875 "params": { 00:35:48.875 "name": "nvme0", 00:35:48.875 "trtype": "TCP", 00:35:48.875 "adrfam": "IPv4", 00:35:48.875 "traddr": "127.0.0.1", 00:35:48.875 "trsvcid": "4420", 00:35:48.875 "subnqn": 
"nqn.2016-06.io.spdk:cnode0", 00:35:48.875 "prchk_reftag": false, 00:35:48.875 "prchk_guard": false, 00:35:48.875 "ctrlr_loss_timeout_sec": 0, 00:35:48.875 "reconnect_delay_sec": 0, 00:35:48.875 "fast_io_fail_timeout_sec": 0, 00:35:48.875 "psk": "key0", 00:35:48.875 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:48.875 "hdgst": false, 00:35:48.875 "ddgst": false, 00:35:48.875 "multipath": "multipath" 00:35:48.875 } 00:35:48.875 }, 00:35:48.875 { 00:35:48.875 "method": "bdev_nvme_set_hotplug", 00:35:48.875 "params": { 00:35:48.875 "period_us": 100000, 00:35:48.875 "enable": false 00:35:48.875 } 00:35:48.875 }, 00:35:48.875 { 00:35:48.875 "method": "bdev_wait_for_examine" 00:35:48.875 } 00:35:48.875 ] 00:35:48.875 }, 00:35:48.875 { 00:35:48.875 "subsystem": "nbd", 00:35:48.875 "config": [] 00:35:48.875 } 00:35:48.875 ] 00:35:48.875 }' 00:35:48.875 08:34:30 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:48.875 08:34:30 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:35:48.875 [2024-11-28 08:34:30.972954] Starting SPDK v25.01-pre git sha1 27aaaa748 / DPDK 24.03.0 initialization... 
00:35:48.875 [2024-11-28 08:34:30.973005] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1639842 ] 00:35:48.875 [2024-11-28 08:34:31.035206] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:48.875 [2024-11-28 08:34:31.078800] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:49.134 [2024-11-28 08:34:31.240555] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:35:49.701 08:34:31 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:49.701 08:34:31 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:35:49.701 08:34:31 keyring_file -- keyring/file.sh@121 -- # bperf_cmd keyring_get_keys 00:35:49.701 08:34:31 keyring_file -- keyring/file.sh@121 -- # jq length 00:35:49.701 08:34:31 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:49.960 08:34:32 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:35:49.961 08:34:32 keyring_file -- keyring/file.sh@122 -- # get_refcnt key0 00:35:49.961 08:34:32 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:35:49.961 08:34:32 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:49.961 08:34:32 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:49.961 08:34:32 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:49.961 08:34:32 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:49.961 08:34:32 keyring_file -- keyring/file.sh@122 -- # (( 2 == 2 )) 00:35:49.961 08:34:32 keyring_file -- keyring/file.sh@123 -- # get_refcnt key1 00:35:49.961 08:34:32 
keyring_file -- keyring/common.sh@12 -- # get_key key1 00:35:49.961 08:34:32 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:49.961 08:34:32 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:49.961 08:34:32 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:35:49.961 08:34:32 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:50.219 08:34:32 keyring_file -- keyring/file.sh@123 -- # (( 1 == 1 )) 00:35:50.219 08:34:32 keyring_file -- keyring/file.sh@124 -- # bperf_cmd bdev_nvme_get_controllers 00:35:50.219 08:34:32 keyring_file -- keyring/file.sh@124 -- # jq -r '.[].name' 00:35:50.219 08:34:32 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:35:50.478 08:34:32 keyring_file -- keyring/file.sh@124 -- # [[ nvme0 == nvme0 ]] 00:35:50.478 08:34:32 keyring_file -- keyring/file.sh@1 -- # cleanup 00:35:50.478 08:34:32 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.lehqgmxJCP /tmp/tmp.J46ZQ6aYUX 00:35:50.478 08:34:32 keyring_file -- keyring/file.sh@20 -- # killprocess 1639842 00:35:50.478 08:34:32 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 1639842 ']' 00:35:50.478 08:34:32 keyring_file -- common/autotest_common.sh@958 -- # kill -0 1639842 00:35:50.478 08:34:32 keyring_file -- common/autotest_common.sh@959 -- # uname 00:35:50.478 08:34:32 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:50.478 08:34:32 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1639842 00:35:50.478 08:34:32 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:35:50.478 08:34:32 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:35:50.478 08:34:32 keyring_file -- common/autotest_common.sh@972 -- # echo 
'killing process with pid 1639842' 00:35:50.478 killing process with pid 1639842 00:35:50.478 08:34:32 keyring_file -- common/autotest_common.sh@973 -- # kill 1639842 00:35:50.478 Received shutdown signal, test time was about 1.000000 seconds 00:35:50.478 00:35:50.478 Latency(us) 00:35:50.478 [2024-11-28T07:34:32.747Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:50.478 [2024-11-28T07:34:32.747Z] =================================================================================================================== 00:35:50.478 [2024-11-28T07:34:32.747Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:35:50.478 08:34:32 keyring_file -- common/autotest_common.sh@978 -- # wait 1639842 00:35:50.736 08:34:32 keyring_file -- keyring/file.sh@21 -- # killprocess 1638319 00:35:50.736 08:34:32 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 1638319 ']' 00:35:50.736 08:34:32 keyring_file -- common/autotest_common.sh@958 -- # kill -0 1638319 00:35:50.736 08:34:32 keyring_file -- common/autotest_common.sh@959 -- # uname 00:35:50.736 08:34:32 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:50.736 08:34:32 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1638319 00:35:50.736 08:34:32 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:35:50.736 08:34:32 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:35:50.736 08:34:32 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1638319' 00:35:50.736 killing process with pid 1638319 00:35:50.736 08:34:32 keyring_file -- common/autotest_common.sh@973 -- # kill 1638319 00:35:50.736 08:34:32 keyring_file -- common/autotest_common.sh@978 -- # wait 1638319 00:35:50.996 00:35:50.996 real 0m11.722s 00:35:50.996 user 0m29.115s 00:35:50.996 sys 0m2.653s 00:35:50.996 08:34:33 keyring_file -- common/autotest_common.sh@1130 -- # xtrace_disable 
00:35:50.996 08:34:33 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:35:50.996 ************************************ 00:35:50.996 END TEST keyring_file 00:35:50.996 ************************************ 00:35:50.996 08:34:33 -- spdk/autotest.sh@293 -- # [[ y == y ]] 00:35:50.996 08:34:33 -- spdk/autotest.sh@294 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:35:50.996 08:34:33 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:35:50.996 08:34:33 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:50.996 08:34:33 -- common/autotest_common.sh@10 -- # set +x 00:35:50.996 ************************************ 00:35:50.996 START TEST keyring_linux 00:35:50.996 ************************************ 00:35:50.996 08:34:33 keyring_linux -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:35:50.996 Joined session keyring: 679240029 00:35:51.255 * Looking for test storage... 
00:35:51.255 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:35:51.255 08:34:33 keyring_linux -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:35:51.255 08:34:33 keyring_linux -- common/autotest_common.sh@1693 -- # lcov --version 00:35:51.255 08:34:33 keyring_linux -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:35:51.255 08:34:33 keyring_linux -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:35:51.255 08:34:33 keyring_linux -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:51.255 08:34:33 keyring_linux -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:51.255 08:34:33 keyring_linux -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:51.255 08:34:33 keyring_linux -- scripts/common.sh@336 -- # IFS=.-: 00:35:51.255 08:34:33 keyring_linux -- scripts/common.sh@336 -- # read -ra ver1 00:35:51.255 08:34:33 keyring_linux -- scripts/common.sh@337 -- # IFS=.-: 00:35:51.255 08:34:33 keyring_linux -- scripts/common.sh@337 -- # read -ra ver2 00:35:51.255 08:34:33 keyring_linux -- scripts/common.sh@338 -- # local 'op=<' 00:35:51.255 08:34:33 keyring_linux -- scripts/common.sh@340 -- # ver1_l=2 00:35:51.255 08:34:33 keyring_linux -- scripts/common.sh@341 -- # ver2_l=1 00:35:51.255 08:34:33 keyring_linux -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:51.255 08:34:33 keyring_linux -- scripts/common.sh@344 -- # case "$op" in 00:35:51.255 08:34:33 keyring_linux -- scripts/common.sh@345 -- # : 1 00:35:51.255 08:34:33 keyring_linux -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:51.255 08:34:33 keyring_linux -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:35:51.255 08:34:33 keyring_linux -- scripts/common.sh@365 -- # decimal 1 00:35:51.255 08:34:33 keyring_linux -- scripts/common.sh@353 -- # local d=1 00:35:51.255 08:34:33 keyring_linux -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:51.255 08:34:33 keyring_linux -- scripts/common.sh@355 -- # echo 1 00:35:51.255 08:34:33 keyring_linux -- scripts/common.sh@365 -- # ver1[v]=1 00:35:51.255 08:34:33 keyring_linux -- scripts/common.sh@366 -- # decimal 2 00:35:51.255 08:34:33 keyring_linux -- scripts/common.sh@353 -- # local d=2 00:35:51.255 08:34:33 keyring_linux -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:51.255 08:34:33 keyring_linux -- scripts/common.sh@355 -- # echo 2 00:35:51.255 08:34:33 keyring_linux -- scripts/common.sh@366 -- # ver2[v]=2 00:35:51.255 08:34:33 keyring_linux -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:51.255 08:34:33 keyring_linux -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:51.255 08:34:33 keyring_linux -- scripts/common.sh@368 -- # return 0 00:35:51.256 08:34:33 keyring_linux -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:51.256 08:34:33 keyring_linux -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:35:51.256 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:51.256 --rc genhtml_branch_coverage=1 00:35:51.256 --rc genhtml_function_coverage=1 00:35:51.256 --rc genhtml_legend=1 00:35:51.256 --rc geninfo_all_blocks=1 00:35:51.256 --rc geninfo_unexecuted_blocks=1 00:35:51.256 00:35:51.256 ' 00:35:51.256 08:34:33 keyring_linux -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:35:51.256 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:51.256 --rc genhtml_branch_coverage=1 00:35:51.256 --rc genhtml_function_coverage=1 00:35:51.256 --rc genhtml_legend=1 00:35:51.256 --rc geninfo_all_blocks=1 00:35:51.256 --rc geninfo_unexecuted_blocks=1 00:35:51.256 00:35:51.256 ' 
00:35:51.256 08:34:33 keyring_linux -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:35:51.256 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:51.256 --rc genhtml_branch_coverage=1 00:35:51.256 --rc genhtml_function_coverage=1 00:35:51.256 --rc genhtml_legend=1 00:35:51.256 --rc geninfo_all_blocks=1 00:35:51.256 --rc geninfo_unexecuted_blocks=1 00:35:51.256 00:35:51.256 ' 00:35:51.256 08:34:33 keyring_linux -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:35:51.256 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:51.256 --rc genhtml_branch_coverage=1 00:35:51.256 --rc genhtml_function_coverage=1 00:35:51.256 --rc genhtml_legend=1 00:35:51.256 --rc geninfo_all_blocks=1 00:35:51.256 --rc geninfo_unexecuted_blocks=1 00:35:51.256 00:35:51.256 ' 00:35:51.256 08:34:33 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:35:51.256 08:34:33 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:51.256 08:34:33 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:35:51.256 08:34:33 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:51.256 08:34:33 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:51.256 08:34:33 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:51.256 08:34:33 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:51.256 08:34:33 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:51.256 08:34:33 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:51.256 08:34:33 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:51.256 08:34:33 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:51.256 08:34:33 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:51.256 08:34:33 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 
00:35:51.256 08:34:33 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:35:51.256 08:34:33 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:35:51.256 08:34:33 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:51.256 08:34:33 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:51.256 08:34:33 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:51.256 08:34:33 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:51.256 08:34:33 keyring_linux -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:51.256 08:34:33 keyring_linux -- scripts/common.sh@15 -- # shopt -s extglob 00:35:51.256 08:34:33 keyring_linux -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:51.256 08:34:33 keyring_linux -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:51.256 08:34:33 keyring_linux -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:51.256 08:34:33 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:51.256 08:34:33 keyring_linux -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:51.256 08:34:33 keyring_linux -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:51.256 08:34:33 keyring_linux -- paths/export.sh@5 -- # export PATH 00:35:51.256 08:34:33 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:51.256 08:34:33 keyring_linux -- nvmf/common.sh@51 -- # : 0 00:35:51.256 08:34:33 keyring_linux -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:51.256 08:34:33 keyring_linux -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:51.256 08:34:33 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:51.256 08:34:33 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:51.256 08:34:33 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:51.256 08:34:33 keyring_linux -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 
00:35:51.256 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:35:51.256 08:34:33 keyring_linux -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:51.256 08:34:33 keyring_linux -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:51.256 08:34:33 keyring_linux -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:51.256 08:34:33 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:35:51.256 08:34:33 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:35:51.256 08:34:33 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:35:51.256 08:34:33 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:35:51.256 08:34:33 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:35:51.256 08:34:33 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:35:51.256 08:34:33 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:35:51.256 08:34:33 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:35:51.256 08:34:33 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:35:51.256 08:34:33 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:35:51.256 08:34:33 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:35:51.256 08:34:33 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:35:51.256 08:34:33 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:35:51.256 08:34:33 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:35:51.256 08:34:33 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:35:51.256 08:34:33 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:35:51.256 08:34:33 keyring_linux -- nvmf/common.sh@732 -- # 
key=00112233445566778899aabbccddeeff 00:35:51.256 08:34:33 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:35:51.256 08:34:33 keyring_linux -- nvmf/common.sh@733 -- # python - 00:35:51.256 08:34:33 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:35:51.256 08:34:33 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:35:51.256 /tmp/:spdk-test:key0 00:35:51.256 08:34:33 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:35:51.256 08:34:33 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:35:51.256 08:34:33 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:35:51.256 08:34:33 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:35:51.256 08:34:33 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:35:51.256 08:34:33 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:35:51.256 08:34:33 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:35:51.256 08:34:33 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:35:51.256 08:34:33 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:35:51.256 08:34:33 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:35:51.256 08:34:33 keyring_linux -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:35:51.256 08:34:33 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:35:51.256 08:34:33 keyring_linux -- nvmf/common.sh@733 -- # python - 00:35:51.516 08:34:33 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:35:51.516 08:34:33 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:35:51.516 /tmp/:spdk-test:key1 00:35:51.516 08:34:33 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=1640394 00:35:51.516 08:34:33 keyring_linux -- keyring/linux.sh@53 -- # 
waitforlisten 1640394 00:35:51.516 08:34:33 keyring_linux -- keyring/linux.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:35:51.516 08:34:33 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 1640394 ']' 00:35:51.516 08:34:33 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:51.516 08:34:33 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:51.516 08:34:33 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:51.516 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:51.516 08:34:33 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:51.516 08:34:33 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:35:51.516 [2024-11-28 08:34:33.583566] Starting SPDK v25.01-pre git sha1 27aaaa748 / DPDK 24.03.0 initialization... 
00:35:51.516 [2024-11-28 08:34:33.583616] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1640394 ] 00:35:51.516 [2024-11-28 08:34:33.646058] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:51.516 [2024-11-28 08:34:33.688458] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:51.775 08:34:33 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:51.775 08:34:33 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:35:51.775 08:34:33 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:35:51.775 08:34:33 keyring_linux -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:51.775 08:34:33 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:35:51.775 [2024-11-28 08:34:33.906436] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:51.775 null0 00:35:51.775 [2024-11-28 08:34:33.938500] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:35:51.775 [2024-11-28 08:34:33.938850] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:35:51.775 08:34:33 keyring_linux -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:51.775 08:34:33 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:35:51.775 584254561 00:35:51.775 08:34:33 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:35:51.775 202752759 00:35:51.775 08:34:33 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=1640405 00:35:51.775 08:34:33 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 1640405 /var/tmp/bperf.sock 00:35:51.775 08:34:33 keyring_linux -- 
keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:35:51.775 08:34:33 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 1640405 ']' 00:35:51.775 08:34:33 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:35:51.775 08:34:33 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:51.775 08:34:33 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:35:51.775 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:35:51.775 08:34:33 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:51.775 08:34:33 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:35:51.775 [2024-11-28 08:34:34.009666] Starting SPDK v25.01-pre git sha1 27aaaa748 / DPDK 24.03.0 initialization... 
00:35:51.775 [2024-11-28 08:34:34.009709] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1640405 ] 00:35:52.034 [2024-11-28 08:34:34.070583] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:52.034 [2024-11-28 08:34:34.113307] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:52.034 08:34:34 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:52.034 08:34:34 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:35:52.034 08:34:34 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:35:52.034 08:34:34 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:35:52.293 08:34:34 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:35:52.293 08:34:34 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:35:52.553 08:34:34 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:35:52.553 08:34:34 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:35:52.553 [2024-11-28 08:34:34.796329] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:35:52.812 nvme0n1 00:35:52.812 08:34:34 keyring_linux -- keyring/linux.sh@77 
-- # check_keys 1 :spdk-test:key0 00:35:52.812 08:34:34 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:35:52.812 08:34:34 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:35:52.812 08:34:34 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:35:52.812 08:34:34 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:35:52.812 08:34:34 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:52.812 08:34:35 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:35:52.812 08:34:35 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:35:52.812 08:34:35 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:35:52.812 08:34:35 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:35:52.812 08:34:35 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:52.812 08:34:35 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:52.812 08:34:35 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:35:53.070 08:34:35 keyring_linux -- keyring/linux.sh@25 -- # sn=584254561 00:35:53.070 08:34:35 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:35:53.070 08:34:35 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:35:53.070 08:34:35 keyring_linux -- keyring/linux.sh@26 -- # [[ 584254561 == \5\8\4\2\5\4\5\6\1 ]] 00:35:53.070 08:34:35 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 584254561 00:35:53.070 08:34:35 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:35:53.070 08:34:35 keyring_linux 
-- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:35:53.328 Running I/O for 1 seconds... 00:35:54.265 18799.00 IOPS, 73.43 MiB/s 00:35:54.265 Latency(us) 00:35:54.265 [2024-11-28T07:34:36.534Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:54.265 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:35:54.265 nvme0n1 : 1.01 18798.38 73.43 0.00 0.00 6783.36 4872.46 10314.80 00:35:54.265 [2024-11-28T07:34:36.534Z] =================================================================================================================== 00:35:54.265 [2024-11-28T07:34:36.534Z] Total : 18798.38 73.43 0.00 0.00 6783.36 4872.46 10314.80 00:35:54.265 { 00:35:54.265 "results": [ 00:35:54.265 { 00:35:54.265 "job": "nvme0n1", 00:35:54.265 "core_mask": "0x2", 00:35:54.265 "workload": "randread", 00:35:54.265 "status": "finished", 00:35:54.265 "queue_depth": 128, 00:35:54.265 "io_size": 4096, 00:35:54.265 "runtime": 1.006842, 00:35:54.265 "iops": 18798.3814739552, 00:35:54.265 "mibps": 73.4311776326375, 00:35:54.265 "io_failed": 0, 00:35:54.265 "io_timeout": 0, 00:35:54.265 "avg_latency_us": 6783.363882743998, 00:35:54.265 "min_latency_us": 4872.459130434782, 00:35:54.265 "max_latency_us": 10314.79652173913 00:35:54.265 } 00:35:54.265 ], 00:35:54.265 "core_count": 1 00:35:54.265 } 00:35:54.265 08:34:36 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:35:54.265 08:34:36 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:35:54.523 08:34:36 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:35:54.523 08:34:36 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:35:54.523 08:34:36 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:35:54.523 08:34:36 
keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:35:54.523 08:34:36 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:35:54.523 08:34:36 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:54.783 08:34:36 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:35:54.783 08:34:36 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:35:54.783 08:34:36 keyring_linux -- keyring/linux.sh@23 -- # return 00:35:54.783 08:34:36 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:35:54.783 08:34:36 keyring_linux -- common/autotest_common.sh@652 -- # local es=0 00:35:54.783 08:34:36 keyring_linux -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:35:54.783 08:34:36 keyring_linux -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:35:54.783 08:34:36 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:54.783 08:34:36 keyring_linux -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:35:54.783 08:34:36 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:54.783 08:34:36 keyring_linux -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:35:54.783 08:34:36 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 
-q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:35:54.783 [2024-11-28 08:34:36.967809] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:35:54.783 [2024-11-28 08:34:36.967865] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1546fa0 (107): Transport endpoint is not connected 00:35:54.783 [2024-11-28 08:34:36.968860] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1546fa0 (9): Bad file descriptor 00:35:54.783 [2024-11-28 08:34:36.969862] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:35:54.783 [2024-11-28 08:34:36.969873] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:35:54.783 [2024-11-28 08:34:36.969880] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:35:54.783 [2024-11-28 08:34:36.969887] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
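The failed attach with `:spdk-test:key1` above is an expected-failure check driven by the `NOT` wrapper from autotest_common.sh. A minimal sketch of that exit-status inversion pattern (illustrative only; SPDK's real helper also validates the command and inspects the error code):

```shell
#!/usr/bin/env bash
# NOT runs a command and inverts its exit status: a failing command
# makes the negative test pass, a succeeding one makes it fail.
NOT() {
    if "$@"; then
        return 1  # unexpected success
    fi
    return 0      # expected failure, which is what the test wants
}

NOT false && echo "expected failure observed"
```

This is why the trace sets `es=1` after the attach fails and still proceeds to cleanup: the non-zero exit from `bperf_cmd` is the passing outcome.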
00:35:54.783 request: 00:35:54.783 { 00:35:54.783 "name": "nvme0", 00:35:54.783 "trtype": "tcp", 00:35:54.783 "traddr": "127.0.0.1", 00:35:54.783 "adrfam": "ipv4", 00:35:54.783 "trsvcid": "4420", 00:35:54.783 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:54.783 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:54.783 "prchk_reftag": false, 00:35:54.783 "prchk_guard": false, 00:35:54.783 "hdgst": false, 00:35:54.783 "ddgst": false, 00:35:54.783 "psk": ":spdk-test:key1", 00:35:54.783 "allow_unrecognized_csi": false, 00:35:54.783 "method": "bdev_nvme_attach_controller", 00:35:54.783 "req_id": 1 00:35:54.783 } 00:35:54.783 Got JSON-RPC error response 00:35:54.783 response: 00:35:54.783 { 00:35:54.783 "code": -5, 00:35:54.783 "message": "Input/output error" 00:35:54.783 } 00:35:54.783 08:34:36 keyring_linux -- common/autotest_common.sh@655 -- # es=1 00:35:54.783 08:34:36 keyring_linux -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:35:54.783 08:34:36 keyring_linux -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:35:54.783 08:34:36 keyring_linux -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:35:54.783 08:34:36 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:35:54.783 08:34:36 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:35:54.783 08:34:36 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:35:54.783 08:34:36 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:35:54.783 08:34:36 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:35:54.783 08:34:36 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:35:54.783 08:34:36 keyring_linux -- keyring/linux.sh@33 -- # sn=584254561 00:35:54.783 08:34:36 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 584254561 00:35:54.783 1 links removed 00:35:54.783 08:34:36 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:35:54.783 08:34:36 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:35:54.783 
08:34:36 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:35:54.783 08:34:36 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:35:54.783 08:34:36 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:35:54.783 08:34:36 keyring_linux -- keyring/linux.sh@33 -- # sn=202752759 00:35:54.783 08:34:36 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 202752759 00:35:54.783 1 links removed 00:35:54.783 08:34:37 keyring_linux -- keyring/linux.sh@41 -- # killprocess 1640405 00:35:54.783 08:34:37 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 1640405 ']' 00:35:54.783 08:34:37 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 1640405 00:35:54.783 08:34:37 keyring_linux -- common/autotest_common.sh@959 -- # uname 00:35:54.783 08:34:37 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:54.783 08:34:37 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1640405 00:35:55.043 08:34:37 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:35:55.043 08:34:37 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:35:55.043 08:34:37 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1640405' 00:35:55.043 killing process with pid 1640405 00:35:55.043 08:34:37 keyring_linux -- common/autotest_common.sh@973 -- # kill 1640405 00:35:55.043 Received shutdown signal, test time was about 1.000000 seconds 00:35:55.043 00:35:55.043 Latency(us) 00:35:55.043 [2024-11-28T07:34:37.312Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:55.043 [2024-11-28T07:34:37.312Z] =================================================================================================================== 00:35:55.043 [2024-11-28T07:34:37.312Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:35:55.043 08:34:37 keyring_linux -- common/autotest_common.sh@978 -- # wait 1640405 
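`killprocess` above tears down the bdevperf and spdk_tgt daemons by pid. A reduced kill-and-wait sketch of the same teardown pattern (`killproc` is an illustrative name; SPDK's real helper additionally checks the process name via `ps` and refuses to kill `sudo`):

```shell
#!/usr/bin/env bash
# Signal a background process only if it is still alive, then reap it
# so no zombie is left behind; ignore the exit status of the dead child.
killproc() {
    local pid=$1
    if kill -0 "$pid" 2>/dev/null; then
        kill "$pid"
    fi
    wait "$pid" 2>/dev/null || true
}

sleep 60 &
bgpid=$!
killproc "$bgpid"
```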
00:35:55.043 08:34:37 keyring_linux -- keyring/linux.sh@42 -- # killprocess 1640394 00:35:55.043 08:34:37 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 1640394 ']' 00:35:55.043 08:34:37 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 1640394 00:35:55.043 08:34:37 keyring_linux -- common/autotest_common.sh@959 -- # uname 00:35:55.043 08:34:37 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:55.043 08:34:37 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1640394 00:35:55.043 08:34:37 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:35:55.043 08:34:37 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:35:55.043 08:34:37 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1640394' 00:35:55.043 killing process with pid 1640394 00:35:55.043 08:34:37 keyring_linux -- common/autotest_common.sh@973 -- # kill 1640394 00:35:55.043 08:34:37 keyring_linux -- common/autotest_common.sh@978 -- # wait 1640394 00:35:55.611 00:35:55.611 real 0m4.329s 00:35:55.611 user 0m8.059s 00:35:55.611 sys 0m1.441s 00:35:55.611 08:34:37 keyring_linux -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:55.611 08:34:37 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:35:55.611 ************************************ 00:35:55.611 END TEST keyring_linux 00:35:55.611 ************************************ 00:35:55.611 08:34:37 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:35:55.611 08:34:37 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:35:55.611 08:34:37 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:35:55.611 08:34:37 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:35:55.611 08:34:37 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:35:55.611 08:34:37 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:35:55.611 08:34:37 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:35:55.611 08:34:37 -- spdk/autotest.sh@346 -- # 
'[' 0 -eq 1 ']' 00:35:55.611 08:34:37 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:35:55.611 08:34:37 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:35:55.611 08:34:37 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:35:55.611 08:34:37 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:35:55.611 08:34:37 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:35:55.611 08:34:37 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:35:55.611 08:34:37 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]] 00:35:55.611 08:34:37 -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT 00:35:55.611 08:34:37 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup 00:35:55.611 08:34:37 -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:55.611 08:34:37 -- common/autotest_common.sh@10 -- # set +x 00:35:55.611 08:34:37 -- spdk/autotest.sh@388 -- # autotest_cleanup 00:35:55.611 08:34:37 -- common/autotest_common.sh@1396 -- # local autotest_es=0 00:35:55.611 08:34:37 -- common/autotest_common.sh@1397 -- # xtrace_disable 00:35:55.611 08:34:37 -- common/autotest_common.sh@10 -- # set +x 00:35:59.804 INFO: APP EXITING 00:35:59.804 INFO: killing all VMs 00:35:59.804 INFO: killing vhost app 00:35:59.804 INFO: EXIT DONE 00:36:02.343 0000:5e:00.0 (8086 0a54): Already using the nvme driver 00:36:02.343 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:36:02.343 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:36:02.343 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:36:02.343 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:36:02.343 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:36:02.343 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:36:02.343 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:36:02.343 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:36:02.343 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:36:02.704 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:36:02.704 
0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:36:02.704 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:36:02.704 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:36:02.704 0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:36:02.704 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:36:02.704 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:36:04.761 Cleaning 00:36:04.761 Removing: /var/run/dpdk/spdk0/config 00:36:04.761 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:36:04.761 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:36:05.020 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:36:05.020 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:36:05.020 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:36:05.020 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:36:05.020 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:36:05.020 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:36:05.020 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:36:05.020 Removing: /var/run/dpdk/spdk0/hugepage_info 00:36:05.020 Removing: /var/run/dpdk/spdk1/config 00:36:05.020 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:36:05.020 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:36:05.020 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:36:05.020 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:36:05.020 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:36:05.020 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:36:05.020 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:36:05.020 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:36:05.020 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:36:05.020 Removing: /var/run/dpdk/spdk1/hugepage_info 00:36:05.020 Removing: /var/run/dpdk/spdk2/config 00:36:05.020 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:36:05.020 
Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:36:05.020 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:36:05.020 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:36:05.020 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:36:05.020 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:36:05.020 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:36:05.020 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:36:05.020 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:36:05.020 Removing: /var/run/dpdk/spdk2/hugepage_info 00:36:05.020 Removing: /var/run/dpdk/spdk3/config 00:36:05.020 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:36:05.020 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:36:05.020 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:36:05.020 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:36:05.020 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:36:05.020 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:36:05.020 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:36:05.020 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:36:05.020 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:36:05.020 Removing: /var/run/dpdk/spdk3/hugepage_info 00:36:05.020 Removing: /var/run/dpdk/spdk4/config 00:36:05.020 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:36:05.020 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:36:05.020 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:36:05.020 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:36:05.020 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:36:05.020 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:36:05.020 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:36:05.020 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:36:05.020 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:36:05.020 Removing: /var/run/dpdk/spdk4/hugepage_info 
00:36:05.020 Removing: /dev/shm/bdev_svc_trace.1 00:36:05.020 Removing: /dev/shm/nvmf_trace.0 00:36:05.020 Removing: /dev/shm/spdk_tgt_trace.pid1167515 00:36:05.020 Removing: /var/run/dpdk/spdk0 00:36:05.020 Removing: /var/run/dpdk/spdk1 00:36:05.020 Removing: /var/run/dpdk/spdk2 00:36:05.020 Removing: /var/run/dpdk/spdk3 00:36:05.020 Removing: /var/run/dpdk/spdk4 00:36:05.020 Removing: /var/run/dpdk/spdk_pid1037177 00:36:05.020 Removing: /var/run/dpdk/spdk_pid1165358 00:36:05.020 Removing: /var/run/dpdk/spdk_pid1166437 00:36:05.020 Removing: /var/run/dpdk/spdk_pid1167515 00:36:05.020 Removing: /var/run/dpdk/spdk_pid1168150 00:36:05.020 Removing: /var/run/dpdk/spdk_pid1169094 00:36:05.279 Removing: /var/run/dpdk/spdk_pid1169119 00:36:05.279 Removing: /var/run/dpdk/spdk_pid1170096 00:36:05.279 Removing: /var/run/dpdk/spdk_pid1170309 00:36:05.279 Removing: /var/run/dpdk/spdk_pid1170489 00:36:05.280 Removing: /var/run/dpdk/spdk_pid1172182 00:36:05.280 Removing: /var/run/dpdk/spdk_pid1173253 00:36:05.280 Removing: /var/run/dpdk/spdk_pid1173694 00:36:05.280 Removing: /var/run/dpdk/spdk_pid1173856 00:36:05.280 Removing: /var/run/dpdk/spdk_pid1174124 00:36:05.280 Removing: /var/run/dpdk/spdk_pid1174416 00:36:05.280 Removing: /var/run/dpdk/spdk_pid1174670 00:36:05.280 Removing: /var/run/dpdk/spdk_pid1174918 00:36:05.280 Removing: /var/run/dpdk/spdk_pid1175198 00:36:05.280 Removing: /var/run/dpdk/spdk_pid1175943 00:36:05.280 Removing: /var/run/dpdk/spdk_pid1178938 00:36:05.280 Removing: /var/run/dpdk/spdk_pid1179201 00:36:05.280 Removing: /var/run/dpdk/spdk_pid1179418 00:36:05.280 Removing: /var/run/dpdk/spdk_pid1179478 00:36:05.280 Removing: /var/run/dpdk/spdk_pid1179930 00:36:05.280 Removing: /var/run/dpdk/spdk_pid1179990 00:36:05.280 Removing: /var/run/dpdk/spdk_pid1180481 00:36:05.280 Removing: /var/run/dpdk/spdk_pid1180484 00:36:05.280 Removing: /var/run/dpdk/spdk_pid1180754 00:36:05.280 Removing: /var/run/dpdk/spdk_pid1180809 00:36:05.280 Removing: 
/var/run/dpdk/spdk_pid1181017 00:36:05.280 Removing: /var/run/dpdk/spdk_pid1181151 00:36:05.280 Removing: /var/run/dpdk/spdk_pid1181593 00:36:05.280 Removing: /var/run/dpdk/spdk_pid1181839 00:36:05.280 Removing: /var/run/dpdk/spdk_pid1182144 00:36:05.280 Removing: /var/run/dpdk/spdk_pid1185859 00:36:05.280 Removing: /var/run/dpdk/spdk_pid1190114 00:36:05.280 Removing: /var/run/dpdk/spdk_pid1200663 00:36:05.280 Removing: /var/run/dpdk/spdk_pid1201402 00:36:05.280 Removing: /var/run/dpdk/spdk_pid1205672 00:36:05.280 Removing: /var/run/dpdk/spdk_pid1206005 00:36:05.280 Removing: /var/run/dpdk/spdk_pid1210191 00:36:05.280 Removing: /var/run/dpdk/spdk_pid1216068 00:36:05.280 Removing: /var/run/dpdk/spdk_pid1218670 00:36:05.280 Removing: /var/run/dpdk/spdk_pid1228870 00:36:05.280 Removing: /var/run/dpdk/spdk_pid1237770 00:36:05.280 Removing: /var/run/dpdk/spdk_pid1239500 00:36:05.280 Removing: /var/run/dpdk/spdk_pid1240519 00:36:05.280 Removing: /var/run/dpdk/spdk_pid1257685 00:36:05.280 Removing: /var/run/dpdk/spdk_pid1261536 00:36:05.280 Removing: /var/run/dpdk/spdk_pid1307376 00:36:05.280 Removing: /var/run/dpdk/spdk_pid1312713 00:36:05.280 Removing: /var/run/dpdk/spdk_pid1318471 00:36:05.280 Removing: /var/run/dpdk/spdk_pid1324747 00:36:05.280 Removing: /var/run/dpdk/spdk_pid1324749 00:36:05.280 Removing: /var/run/dpdk/spdk_pid1325670 00:36:05.280 Removing: /var/run/dpdk/spdk_pid1326580 00:36:05.280 Removing: /var/run/dpdk/spdk_pid1327493 00:36:05.280 Removing: /var/run/dpdk/spdk_pid1327963 00:36:05.280 Removing: /var/run/dpdk/spdk_pid1327965 00:36:05.280 Removing: /var/run/dpdk/spdk_pid1328204 00:36:05.280 Removing: /var/run/dpdk/spdk_pid1328423 00:36:05.280 Removing: /var/run/dpdk/spdk_pid1328430 00:36:05.280 Removing: /var/run/dpdk/spdk_pid1329346 00:36:05.280 Removing: /var/run/dpdk/spdk_pid1330185 00:36:05.280 Removing: /var/run/dpdk/spdk_pid1330962 00:36:05.280 Removing: /var/run/dpdk/spdk_pid1331643 00:36:05.280 Removing: /var/run/dpdk/spdk_pid1331652 
00:36:05.280 Removing: /var/run/dpdk/spdk_pid1331880 00:36:05.280 Removing: /var/run/dpdk/spdk_pid1332899 00:36:05.280 Removing: /var/run/dpdk/spdk_pid1333902 00:36:05.280 Removing: /var/run/dpdk/spdk_pid1342209 00:36:05.280 Removing: /var/run/dpdk/spdk_pid1371128 00:36:05.280 Removing: /var/run/dpdk/spdk_pid1375622 00:36:05.538 Removing: /var/run/dpdk/spdk_pid1377228 00:36:05.538 Removing: /var/run/dpdk/spdk_pid1379186 00:36:05.538 Removing: /var/run/dpdk/spdk_pid1379223 00:36:05.539 Removing: /var/run/dpdk/spdk_pid1379535 00:36:05.539 Removing: /var/run/dpdk/spdk_pid1379864 00:36:05.539 Removing: /var/run/dpdk/spdk_pid1380466 00:36:05.539 Removing: /var/run/dpdk/spdk_pid1382181 00:36:05.539 Removing: /var/run/dpdk/spdk_pid1383070 00:36:05.539 Removing: /var/run/dpdk/spdk_pid1383450 00:36:05.539 Removing: /var/run/dpdk/spdk_pid1385756 00:36:05.539 Removing: /var/run/dpdk/spdk_pid1386081 00:36:05.539 Removing: /var/run/dpdk/spdk_pid1386749 00:36:05.539 Removing: /var/run/dpdk/spdk_pid1390804 00:36:05.539 Removing: /var/run/dpdk/spdk_pid1396192 00:36:05.539 Removing: /var/run/dpdk/spdk_pid1396194 00:36:05.539 Removing: /var/run/dpdk/spdk_pid1396195 00:36:05.539 Removing: /var/run/dpdk/spdk_pid1399953 00:36:05.539 Removing: /var/run/dpdk/spdk_pid1408244 00:36:05.539 Removing: /var/run/dpdk/spdk_pid1412128 00:36:05.539 Removing: /var/run/dpdk/spdk_pid1418134 00:36:05.539 Removing: /var/run/dpdk/spdk_pid1419427 00:36:05.539 Removing: /var/run/dpdk/spdk_pid1420753 00:36:05.539 Removing: /var/run/dpdk/spdk_pid1422087 00:36:05.539 Removing: /var/run/dpdk/spdk_pid1427276 00:36:05.539 Removing: /var/run/dpdk/spdk_pid1431468 00:36:05.539 Removing: /var/run/dpdk/spdk_pid1435399 00:36:05.539 Removing: /var/run/dpdk/spdk_pid1442615 00:36:05.539 Removing: /var/run/dpdk/spdk_pid1442725 00:36:05.539 Removing: /var/run/dpdk/spdk_pid1447257 00:36:05.539 Removing: /var/run/dpdk/spdk_pid1447485 00:36:05.539 Removing: /var/run/dpdk/spdk_pid1447711 00:36:05.539 Removing: 
/var/run/dpdk/spdk_pid1448093 00:36:05.539 Removing: /var/run/dpdk/spdk_pid1448181 00:36:05.539 Removing: /var/run/dpdk/spdk_pid1452468 00:36:05.539 Removing: /var/run/dpdk/spdk_pid1453013 00:36:05.539 Removing: /var/run/dpdk/spdk_pid1457339 00:36:05.539 Removing: /var/run/dpdk/spdk_pid1459952 00:36:05.539 Removing: /var/run/dpdk/spdk_pid1465252 00:36:05.539 Removing: /var/run/dpdk/spdk_pid1470388 00:36:05.539 Removing: /var/run/dpdk/spdk_pid1479671 00:36:05.539 Removing: /var/run/dpdk/spdk_pid1486427 00:36:05.539 Removing: /var/run/dpdk/spdk_pid1486429 00:36:05.539 Removing: /var/run/dpdk/spdk_pid1505024 00:36:05.539 Removing: /var/run/dpdk/spdk_pid1505683 00:36:05.539 Removing: /var/run/dpdk/spdk_pid1506158 00:36:05.539 Removing: /var/run/dpdk/spdk_pid1506657 00:36:05.539 Removing: /var/run/dpdk/spdk_pid1507383 00:36:05.539 Removing: /var/run/dpdk/spdk_pid1507929 00:36:05.539 Removing: /var/run/dpdk/spdk_pid1508541 00:36:05.539 Removing: /var/run/dpdk/spdk_pid1509021 00:36:05.539 Removing: /var/run/dpdk/spdk_pid1513161 00:36:05.539 Removing: /var/run/dpdk/spdk_pid1513458 00:36:05.539 Removing: /var/run/dpdk/spdk_pid1519491 00:36:05.539 Removing: /var/run/dpdk/spdk_pid1519639 00:36:05.539 Removing: /var/run/dpdk/spdk_pid1525219 00:36:05.539 Removing: /var/run/dpdk/spdk_pid1529690 00:36:05.539 Removing: /var/run/dpdk/spdk_pid1539432 00:36:05.539 Removing: /var/run/dpdk/spdk_pid1540036 00:36:05.539 Removing: /var/run/dpdk/spdk_pid1544074 00:36:05.539 Removing: /var/run/dpdk/spdk_pid1544466 00:36:05.539 Removing: /var/run/dpdk/spdk_pid1548552 00:36:05.539 Removing: /var/run/dpdk/spdk_pid1554195 00:36:05.539 Removing: /var/run/dpdk/spdk_pid1556779 00:36:05.539 Removing: /var/run/dpdk/spdk_pid1566629 00:36:05.539 Removing: /var/run/dpdk/spdk_pid1575749 00:36:05.539 Removing: /var/run/dpdk/spdk_pid1577494 00:36:05.539 Removing: /var/run/dpdk/spdk_pid1578426 00:36:05.539 Removing: /var/run/dpdk/spdk_pid1594092 00:36:05.798 Removing: /var/run/dpdk/spdk_pid1597919 
00:36:05.798 Removing: /var/run/dpdk/spdk_pid1600763
00:36:05.798 Removing: /var/run/dpdk/spdk_pid1608144
00:36:05.798 Removing: /var/run/dpdk/spdk_pid1608284
00:36:05.798 Removing: /var/run/dpdk/spdk_pid1613132
00:36:05.798 Removing: /var/run/dpdk/spdk_pid1615100
00:36:05.798 Removing: /var/run/dpdk/spdk_pid1617010
00:36:05.798 Removing: /var/run/dpdk/spdk_pid1618109
00:36:05.798 Removing: /var/run/dpdk/spdk_pid1620592
00:36:05.798 Removing: /var/run/dpdk/spdk_pid1621756
00:36:05.798 Removing: /var/run/dpdk/spdk_pid1630376
00:36:05.798 Removing: /var/run/dpdk/spdk_pid1630838
00:36:05.798 Removing: /var/run/dpdk/spdk_pid1631308
00:36:05.798 Removing: /var/run/dpdk/spdk_pid1633566
00:36:05.798 Removing: /var/run/dpdk/spdk_pid1634030
00:36:05.798 Removing: /var/run/dpdk/spdk_pid1634524
00:36:05.798 Removing: /var/run/dpdk/spdk_pid1638319
00:36:05.798 Removing: /var/run/dpdk/spdk_pid1638323
00:36:05.798 Removing: /var/run/dpdk/spdk_pid1639842
00:36:05.798 Removing: /var/run/dpdk/spdk_pid1640394
00:36:05.798 Removing: /var/run/dpdk/spdk_pid1640405
00:36:05.798 Clean
00:36:05.798 08:34:47 -- common/autotest_common.sh@1453 -- # return 0
00:36:05.798 08:34:47 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup
00:36:05.798 08:34:47 -- common/autotest_common.sh@732 -- # xtrace_disable
00:36:05.798 08:34:47 -- common/autotest_common.sh@10 -- # set +x
00:36:05.798 08:34:47 -- spdk/autotest.sh@391 -- # timing_exit autotest
00:36:05.798 08:34:47 -- common/autotest_common.sh@732 -- # xtrace_disable
00:36:05.798 08:34:47 -- common/autotest_common.sh@10 -- # set +x
00:36:05.798 08:34:48 -- spdk/autotest.sh@392 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt
00:36:05.798 08:34:48 -- spdk/autotest.sh@394 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]]
00:36:05.798 08:34:48 -- spdk/autotest.sh@394 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log
00:36:05.798 08:34:48 -- spdk/autotest.sh@396 -- # [[ y == y ]]
00:36:05.798 08:34:48 -- spdk/autotest.sh@398 -- # hostname
00:36:05.798 08:34:48 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-wfp-08 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info
00:36:06.056 geninfo: WARNING: invalid characters removed from testname!
00:36:27.982 08:35:08 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:36:29.884 08:35:11 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:36:31.786 08:35:13 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:36:33.692 08:35:15 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:36:35.596 08:35:17 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:36:37.499 08:35:19 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:36:39.403 08:35:21 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
00:36:39.403 08:35:21 -- spdk/autorun.sh@1 -- $ timing_finish
00:36:39.403 08:35:21 -- common/autotest_common.sh@738 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt ]]
00:36:39.403 08:35:21 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:36:39.403 08:35:21 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]]
00:36:39.403 08:35:21 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt
00:36:39.403 + [[ -n 1088450 ]]
00:36:39.403 + sudo kill 1088450
00:36:39.412 [Pipeline] }
00:36:39.428 [Pipeline] // stage
00:36:39.433 [Pipeline] }
00:36:39.447 [Pipeline] // timeout
00:36:39.454 [Pipeline] }
00:36:39.470 [Pipeline] // catchError
00:36:39.477 [Pipeline] }
00:36:39.493 [Pipeline] // wrap
00:36:39.500 [Pipeline] }
00:36:39.513 [Pipeline] // catchError
00:36:39.524 [Pipeline] stage
00:36:39.526 [Pipeline] { (Epilogue)
00:36:39.539 [Pipeline] catchError
00:36:39.542 [Pipeline] {
00:36:39.555 [Pipeline] echo
00:36:39.557 Cleanup processes
00:36:39.563 [Pipeline] sh
00:36:39.849 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:36:39.849 1650739 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:36:39.863 [Pipeline] sh
00:36:40.145 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:36:40.145 ++ grep -v 'sudo pgrep'
00:36:40.145 ++ awk '{print $1}'
00:36:40.145 + sudo kill -9
00:36:40.145 + true
00:36:40.158 [Pipeline] sh
00:36:40.445 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:36:52.673 [Pipeline] sh
00:36:52.955 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:36:52.955 Artifacts sizes are good
00:36:52.970 [Pipeline] archiveArtifacts
00:36:52.977 Archiving artifacts
00:36:53.107 [Pipeline] sh
00:36:53.384 + sudo chown -R sys_sgci: /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:36:53.397 [Pipeline] cleanWs
00:36:53.411 [WS-CLEANUP] Deleting project workspace...
00:36:53.411 [WS-CLEANUP] Deferred wipeout is used...
00:36:53.417 [WS-CLEANUP] done
00:36:53.419 [Pipeline] }
00:36:53.434 [Pipeline] // catchError
00:36:53.444 [Pipeline] sh
00:36:53.726 + logger -p user.info -t JENKINS-CI
00:36:53.734 [Pipeline] }
00:36:53.746 [Pipeline] // stage
00:36:53.750 [Pipeline] }
00:36:53.761 [Pipeline] // node
00:36:53.766 [Pipeline] End of Pipeline
00:36:53.886 Finished: SUCCESS